Template estimation in computational anatomy:
Fréchet means in top and quotient spaces are
not consistent
Loïc Devilliers∗, Stéphanie Allassonnière†, Alain Trouvé‡,
and Xavier Pennec§
April 27, 2017
Abstract
In this article, we study the consistency of the template estimation
with the Fréchet mean in quotient spaces. The Fréchet mean in quotient
spaces is often used when the observations are deformed or transformed
by a group action. We show that in most cases this estimator is actually
inconsistent. We exhibit a sufficient condition for this inconsistency, which
amounts to the folding of the distribution of the noisy template when it
is projected to the quotient space. This condition appears to be fulfilled
as soon as the support of the noise is large enough. To quantify this
inconsistency we provide lower and upper bounds of the bias as a function
of the variability (the noise level). This shows that the consistency bias
cannot be neglected when the variability increases.
Keywords: Template, Fréchet mean, group action, quotient space, inconsistency, consistency bias, empirical Fréchet mean, Hilbert space, manifold
∗ Université Côte d’Azur, Inria, France, loic.devilliers@inria.fr
† CMAP, Ecole polytechnique, CNRS, Université Paris-Saclay, 91128 Palaiseau, France
‡ CMLA, ENS Cachan, CNRS, Université Paris-Saclay, 94235 Cachan, France
§ Université Côte d’Azur, Inria, France
Contents

1 Introduction
2 Definitions, notations and generative model
3 Inconsistency for a finite group when the template is a regular point
   3.1 Presence of inconsistency
   3.2 Upper bound of the consistency bias
   3.3 Study of the consistency bias in a simple example
4 Inconsistency for any group when the template is not a fixed point
   4.1 Presence of an inconsistency
   4.2 Analysis of the condition in theorem 4.1
   4.3 Lower bound of the consistency bias
   4.4 Upper bound of the consistency bias
   4.5 Empirical Fréchet mean
   4.6 Examples
      4.6.1 Action of translation on L²(R/Z)
      4.6.2 Action of discrete translation on R^(Z/NZ)
      4.6.3 Action of rotations on Rⁿ
5 Fréchet means in top and quotient spaces are not consistent when the template is a fixed point
   5.1 Result
   5.2 Proofs of these theorems
      5.2.1 Proof of theorem 5.1
      5.2.2 Proof of theorem 5.2
6 Conclusion and discussion
A Proof of theorems for finite groups’ setting
   A.1 Proof of theorem 3.2: differentiation of the variance in the quotient space
   A.2 Proof of theorem 3.1: the gradient is not zero at the template
   A.3 Proof of theorem 3.3: upper bound of the consistency bias
   A.4 Proof of proposition 3.2: inconsistency in R² for the action of translation
B Proof of lemma 5.1: differentiation of the variance in the top space
1 Introduction
In Kendall’s shape space theory [Ken89], in computational anatomy [GM98],
in statistics on signals, or in image analysis, one often aims at estimating a
template. A template stands for a prototype of the data. The data can be the
shape of an organ studied in a population [DPC+ 14] or an aircraft [LAJ+ 12],
an electrical signal of the human body, an MR image, etc. To analyse the observations, one assumes that these data follow a statistical model. One often
models observations as random deformations of the template with additional
noise. This deformable template model proposed in [GM98] is commonly used
in computational anatomy. The concept of deformation introduces the notion of
group action: the deformations we consider are elements of a group which acts
on the space of observations, called here the top space. Since the deformations
are unknown, one usually considers equivalence classes of observations under the
group action. In other words, one considers the quotient space of the top space
(or ambient space) by the group. In this particular setting, the template estimation is most of the time based on the minimisation of the empirical variance
in the quotient space (see for instance [KSW11, JDJG04, SBG08] among many others). The points that minimise the empirical variance are called empirical Fréchet means. The Fréchet mean, introduced in [Fré48], is the set of elements minimising the variance. This generalises the notion of expected value to non-linear spaces. Note that neither the existence nor the uniqueness of the Fréchet mean is ensured, but sufficient conditions can be given to guarantee both (for instance [Kar77] and [Ken90]).
Several group actions are used in practice: some signals can be shifted in
time compared to other signals (action of translations [HCG+ 13]), landmarks
can be transformed rigidly [Ken89], shapes can be deformed by diffeomorphisms [DPC+ 14], etc. In this paper we restrict ourselves to transformations which leave the norm unchanged. Rotations, for instance, leave the norm unchanged, but this assumption may seem restrictive. In fact, the square root trick detailed in section 5 allows one to build norms which are invariant, for instance under the reparametrization of curves by diffeomorphisms, so that our work also applies to that setting.
We raise several issues concerning the estimation of the template.
1. Is the Fréchet mean in the quotient space equal to the original template
projected in the quotient space? In other words, is the template estimation
with the Fréchet mean in quotient space consistent?
2. If there is an inconsistency, how large is the consistency bias? Indeed, we may expect the consistency bias to be negligible in many practical cases.
3. If one gets only a finite sample, one can only estimate the empirical Fréchet
mean. How far is the empirical Fréchet mean from the original template?
These issues originated from an example exhibited by Allassonnière, Amit and Trouvé [AAT07]: they took a step function as a template, added some noise and shifted this function in time. By repeating this process they created a data sample from this template. With this data sample, they tried to estimate the template with the empirical Fréchet mean in the quotient space. In this example, minimising the empirical variance did not succeed in estimating the template well when the noise added to the template increased, even with a large sample size.
One solution to ensure convergence to the template is to replace this estimation method with a Bayesian paradigm ([AKT10, BG14] or [ZSF13]). But there
is a need for a better understanding of the failure of the template estimation with the Fréchet mean. One can study the inconsistency of the template estimation. Bigot and Charlier [BC11] first studied the question of the template estimation with a finite sample in the case of translated signals or images by providing a lower bound of the consistency bias. This lower bound was unfortunately not very informative since it converges to zero when the dimension of the space tends to infinity. Miolane et al. [MP15, MHP16] later
provided a more general explanation of why the template is badly estimated
for a general group action thanks to a geometric interpretation. They showed
that the external curvature of the orbits is responsible for the inconsistency.
This result was further quantified with Gaussian noise. In this article, we provide sufficient conditions on the noise for which inconsistency appears and we
quantify the consistency bias in the general (not necessarily Gaussian) case.
Moreover, we mostly consider a vector space (possibly infinite dimensional) as
the top space while the article of Miolane et al. is restricted to finite dimensional manifolds. In a preliminary unpublished version of this work [ADP15],
we proved the inconsistency when the transformations come from a finite group
acting by translation. The current article extends these results by generalizing
to any isometric action of finite and non-finite groups.
This article is organised as follows. Section 2 details the mathematical terms
that we use and the generative model. In sections 3 and 4, we exhibit a sufficient condition that leads to an inconsistency when the template is not a fixed point under the group action. This sufficient condition can be roughly understood as follows: with a non-zero probability, the projection of the random variable on the orbit of the template is different from the template itself. This condition is actually quite general. In particular, it is always fulfilled with Gaussian noise or with any noise whose support is the whole space. Moreover
we quantify the consistency bias with lower and upper bounds. We restrict
our study to Hilbert spaces and isometric actions. This means that the space
is linear, the group acts linearly and leaves the norm (or the dot product)
unchanged. Section 3 is dedicated to finite groups. Then we generalise our
result in section 4 to non-finite groups. To complete this study, we extend in
section 5 the result when the template is a fixed point under the group action
and when the top space is a manifold. As a result, we show that the inconsistency exists for almost all noises. Although the bias can be neglected when the noise level is sufficiently small, its linear asymptotic behaviour with respect to the noise level shows that it becomes unavoidable for large noises.
2 Definitions, notations and generative model
We denote by M the top space, which is the image/shape space, and G the
group acting on M . The action is a map:
G × M → M
(g, m) ↦ g · m
satisfying the following properties: for all g, g′ ∈ G and m ∈ M, (gg′) · m = g · (g′ · m) and eG · m = m, where eG is the neutral element of G. For m ∈ M we denote by [m] the orbit of m (or the class of m). This is the set of points reachable from m under the group action: [m] = {g · m, g ∈ G}. Note that if we take two orbits
[m] and [n] there are two possibilities:
1. The orbits are equal: [m] = [n] i.e. ∃g ∈ G s.t. n = g · m.
2. The orbits have an empty intersection: [m] ∩ [n] = ∅.
We call the quotient of M by the group G the set of all orbits. This quotient is denoted by:
Q = M/G = {[m], m ∈ M}.
The orbit of an element m ∈ M can be seen as the subset of M of all elements
g · m for g ∈ G, or as a point in the quotient space. In this article we use both points of view. We project an element m of the top space M into the quotient by taking [m].
Now we are interested in adding a structure on the quotient from an existing
structure in the top space: take M a metric space, with dM its distance. Suppose that dM is invariant under the group action, which means that ∀g ∈ G, ∀a, b ∈ M, dM(a, b) = dM(g · a, g · b). Then we obtain a pseudo-distance on Q defined by:
dQ([a], [b]) = inf_{g∈G} dM(g · a, b).   (1)
We recall that a distance on M is a map dM : M × M → R+ such that for all m, n, p ∈ M:
1. dM(m, n) = dM(n, m) (symmetry).
2. dM(m, n) ≤ dM(m, p) + dM(p, n) (triangle inequality).
3. dM(m, m) = 0.
4. dM(m, n) = 0 ⇐⇒ m = n.
A pseudo-distance satisfies only the first three conditions. If we suppose that
all the orbits are closed sets of M , then one can show that dQ is a distance. In
this article, we assume that dQ is always a distance, even if a pseudo-distance
would be sufficient. dQ ([a], [b]) can be interpreted as the distance between the
shapes a and b, once one has removed the parametrisation by the group G. In
other words, a and b have been registered. In this article, except in section 5, we suppose that the group acts isometrically on an Hilbert space: this means that the map x ↦ g · x is linear and that the norm associated to the dot product is conserved: ‖g · x‖ = ‖x‖. Then dM(a, b) = ‖a − b‖ is a particular case of invariant distance.
We now introduce the generative model used in this article for M a vector space. Let us take a template t0 ∈ M to which we add an unbiased noise ε: X = t0 + ε. Finally we transform X with a random shift S in G. We assume that this variable S is independent of X, and the only observed variable is:
Y = S · X = S · (t0 + ε), with E(ε) = 0,   (2)
while S, X and ε are hidden variables.
Note that this is not the generative model defined by Grenander and often used in computational anatomy, where the observed variable is rather Y′ = S · t0 + ε′. But when the noise is isotropic and the action is isometric, one can show that the two models have the same law, since S · ε and ε have the same probability distribution. As a consequence, the inconsistency of the template estimation with the Fréchet mean in quotient space for one model implies the inconsistency for the other model. Because the former model (2) leads to simpler computations, we consider only this model.
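As a concrete illustration of model (2), here is a minimal simulation sketch using the cyclic-shift action of example 3.1 below; the template, noise level and sample size are arbitrary choices made for this illustration only.

import numpy as np

rng = np.random.default_rng(0)

N = 100                                          # length of the discretised signal
t0 = np.sin(2 * np.pi * np.arange(N) / N)        # an arbitrary template in R^N
sigma = 0.5                                      # arbitrary noise level

def sample_observation():
    """Draw one observation Y = S.(t0 + eps) from model (2),
    where S is a uniformly random cyclic shift (the hidden transformation)."""
    eps = sigma * rng.standard_normal(N)         # unbiased isotropic noise
    shift = rng.integers(N)                      # hidden shift S
    return np.roll(t0 + eps, shift)

sample = np.array([sample_observation() for _ in range(10)])
print(sample.shape)   # (10, 100): ten noisy, randomly shifted copies of t0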
We can now set the inverse problem: given the observation Y, how can we estimate the template t0 in M? This is an ill-posed problem. Indeed, for some group element g ∈ G, the template t0 can be replaced by the transformed template g · t0, the shift S by Sg⁻¹ and the noise ε by g · ε, which leads to the same observation Y. So instead of estimating the template t0, we estimate its orbit [t0]. By projecting the observation Y in the quotient space we obtain [Y]. Although the observation Y = S · X and the noisy template X are different random variables in the top space, their projections on the quotient space lead to the same random orbit [Y] = [X]. That is why we consider the generative model (2): the projection in the quotient space removes the transformation of the group G. From now on, we use the random orbit [X] in lieu of the random orbit of the observation [Y].
The variance of the random orbit [X] (sometimes called the Fréchet functional or the energy function) at the quotient point [m] ∈ Q is the expected value of the squared distance between [m] and the random orbit [X], namely:
Q ∋ [m] ↦ E(dQ([m], [X])²).   (3)
An orbit [m] ∈ Q which minimises this map is called a Fréchet mean of [X].
If we have an i.i.d. sample of observations Y1, . . . , Yn, we can write the empirical quotient variance:
Q ∋ [m] ↦ (1/n) Σ_{i=1}^n dQ([m], [Yi])² = (1/n) Σ_{i=1}^n inf_{gi∈G} ‖m − gi · Yi‖².   (4)
Thanks to the equality of the quotient variables [X] and [Y], an element which minimises this map is an empirical Fréchet mean of [X].
In order to minimise the empirical quotient variance (4), the max-max algorithm¹ alternately minimises the function J(m, (gi)i) = (1/n) Σ_{i=1}^n ‖m − gi · Yi‖² over a point m of the orbit [m] and over the hidden transformations (gi)_{1≤i≤n} ∈ Gⁿ.
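Below is a minimal sketch of such an alternating scheme for the cyclic-shift action of example 3.1; the initialisation and the fixed number of iterations are arbitrary choices, not prescribed by the paper.

import numpy as np

def register(m, y):
    """Return the cyclic shift of y closest to m (the registration / 'max' step)."""
    shifts = np.array([np.roll(y, k) for k in range(len(y))])
    return shifts[np.argmin(((shifts - m) ** 2).sum(axis=1))]

def max_max(sample, n_iter=20):
    """Alternately register the observations and average them (minimisation of J)."""
    m = sample[0].copy()                    # arbitrary initialisation with the first observation
    for _ in range(n_iter):
        registered = np.array([register(m, y) for y in sample])
        m = registered.mean(axis=0)         # minimisation in m for fixed (g_i)_i
    return m

# usage with the `sample` array drawn in the earlier sketch:
# m_hat = max_max(sample)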
With these notations we can reformulate our questions as:
1. Is the orbit of the template [t0 ] a minimiser of the quotient variance defined
in (3)? If not, the Fréchet mean in quotient space is an inconsistent
estimator of [t0 ].
2. In this last case, can we quantify the quotient distance between [t0 ] and a
Fréchet mean of [X]?
3. Can we quantify the distance between [t0] and an empirical Fréchet mean of an n-sample?
This article shows that the answer to the first question is usually "no" in the
framework of an Hilbert space M on which a group G acts linearly and isometrically. The only exception is theorem 5.1 where the top space M is a manifold.
In order to prove inconsistency, an important notion in this framework is the
isotropy group of a point m in the top space. This is the subgroup which leaves
this point unchanged:
Iso(m) = {g ∈ G, g · m = m}.
We start in section 3 with the simple example where the group is finite and the
isotropy group of the template is reduced to the identity element (Iso(t0 ) =
{eG }, in this case t0 is called a regular point). We turn in section 4 to the case
of a general group and an isotropy group of the template which does not cover
the whole group (Iso(t0) ≠ G), i.e. t0 is not a fixed point under the group action.
To complete the analysis, we assume in section 5 that the template t0 is a fixed
point which means that Iso(t0 ) = G.
In sections 3 and 4 we show lower and upper bounds of the consistency bias
which we define as the quotient distance between the template orbit and the
Fréchet mean in quotient space. These results give an answer to the second
question. In section 4, we show a lower bound for the case of the empirical
Fréchet mean, which answers the third question.
As we deal with different notions whose name or definition may seem similar,
we use the following vocabulary:
1. The variance of the noisy template X in the top space is the function E : m ∈ M ↦ E(‖m − X‖²). The unique element which minimises this function is the Fréchet mean of X in the top space. With our assumptions it is the template t0 itself.
2. We call variability (or noise level) of the template the value of the variance at this minimum: σ² = E(‖t0 − X‖²) = E(t0).
¹ The term max-max algorithm is used for instance in [AAT07], and we prefer to keep the same name, even if it is a minimisation.
3. The variance of the random orbit [X] in the quotient space is the function F : m ↦ E(dQ([m], [X])²). Notice that we define this function from the top space and not from the quotient space. With this definition, an orbit [m⋆] is a Fréchet mean of [X] if the point m⋆ is a global minimiser of F.
In sections 3 and 4, we exhibit a sufficient condition for the inconsistency,
which is: the noisy template X takes values with a non-zero probability in the set of points which are strictly closer to g · t0 for some g ∈ G than to the
template t0 itself. This is linked to the folding of the distribution of the noisy
template when it is projected to the quotient space. The points for which the
distance to the template orbit in the quotient space is equal to the distance to
the template in the top space are projected without being folded. If the support
of the distribution of the noisy template contains folded points (we only assume
that the probability measure of X, noted P, is a regular measure), then there
is inconsistency. The support of the noisy template X is defined by the set of
points x such that P(X ∈ B(x, r)) > 0 for all r > 0. For different geometries of
the orbit of the template, we show that this condition is fulfilled as soon as the
support of the noise is large enough.
The recent article of Cleveland et al. [CWS16] may seem to contradict our current work. Indeed, the consistency of the template estimation with the
Fréchet mean in quotient space is proved under hypotheses which seem to satisfy
our framework: the norm is unchanged under their group action (isometric
action) and a noise is present in their generative model. However we believe
that the noise they consider might actually not be measurable. Indeed, their
top space is:
L²([0, 1]) = { f : [0, 1] → R such that f is measurable and ∫₀¹ f²(t) dt < +∞ }.
The noise e is supposed to be in L²([0, 1]) such that for all t, s ∈ [0, 1], E(e(t)) = 0 and E(e(t)e(s)) = σ² 1_{s=t}, for σ > 0. This means that e(t) and e(s) are chosen
without correlation as soon as s 6= t. In this case, it is not clear for us that the
resulting function e is measurable, and thus that its Lebesgue integration makes
sense. Thus, the existence of such a random process should be established before
we can fairly compare the results of both works.
3 Inconsistency for a finite group when the template is a regular point
In this Section, we consider a finite group G acting isometrically and effectively on M = Rⁿ, a finite dimensional space equipped with the Euclidean norm ‖ ‖ associated to the dot product ⟨ , ⟩.
We say that the action is effective if x ↦ g · x is the identity map if and only if g = eG. Note that if the action is not effective, we can define a new effective action by simply quotienting G by the subgroup of elements g ∈ G such that x ↦ g · x is the identity map.
The template is assumed to be a regular point which means that the isotropy
group of the template is reduced to the neutral element of G. Note that the
set of singular points (the points which are not regular) is a null set for
the Lebesgue measure (see item 1 in appendix A.1).
Example 3.1. The action of translation on coordinates: this action is a simplified setting for image registration, where one image can be obtained from another by translation due to different poses during the scans. More precisely, we take the vector space M = R^T where G = T = (Z/NZ)^D is the finite torus in dimension D. An element of R^T is seen as a function m : T → R, where m(τ) is the grey value at pixel τ. When D = 1, m can be seen as a discretised signal with N points; when D = 2, m can be seen as an image with N × N pixels, etc. We then define the group action of T on R^T by:
for τ ∈ T and m ∈ R^T,   τ · m : σ ↦ m(σ + τ).
This group acts isometrically and effectively on M = R^T.
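As a quick illustration of the quotient pseudo-distance (1) for this action (with D = 1), the following sketch computes dQ([a], [b]) by brute force over all cyclic shifts; the signals used are arbitrary.

import numpy as np

def d_Q(a, b):
    """Quotient distance (1) for the cyclic-shift action on R^N (D = 1):
    minimise the Euclidean distance over all N shifts of a."""
    return min(np.linalg.norm(np.roll(a, k) - b) for k in range(len(a)))

a = np.array([0., 1., 2., 3.])
b = np.roll(a, 2) + 0.1             # a shifted copy of a with a small perturbation
print(np.linalg.norm(a - b))        # distance in the top space
print(d_Q(a, b))                    # much smaller: the shift has been quotiented out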
In this setting, if E(‖X‖²) < +∞ then the variance of [X] is well defined:
F : m ∈ M ↦ E(dQ([X], [m])²).   (5)
In this framework, F is non-negative and continuous. Thanks to the Cauchy-Schwarz inequality we have:
lim_{‖m‖→∞} F(m) ≥ lim_{‖m‖→∞} ‖m‖² − 2‖m‖E(‖X‖) + E(‖X‖²) = +∞.
Thus for some R > 0 we have: for all m ∈ M, if ‖m‖ > R then F(m) ≥ F(0) + 1. The closed ball B(0, R) is a compact set (because M is a finite dimensional vector space), so F restricted to this ball reaches its minimum at some point m⋆. Then for all m ∈ M: if m ∈ B(0, R), F(m⋆) ≤ F(m); if ‖m‖ > R then F(m) ≥ F(0) + 1 > F(0) ≥ F(m⋆). Therefore [m⋆] is a Fréchet mean of [X] in the quotient Q = M/G. Note that this ensures the existence but not the uniqueness.
In this Section, we show that as soon as the support of the distribution of X is large enough, the orbit of the template is not a Fréchet mean of [X]. We provide an upper bound of the consistency bias depending on the variability of X, and an example of computation of this consistency bias.
3.1 Presence of inconsistency
The following theorem gives a sufficient condition on the random variable X for
an inconsistency:
Theorem 3.1. Let G be a finite group acting on M = Rn isometrically and
effectively. Assume that the random variable X is absolutely continuous with respect to the Lebesgue measure, with E(‖X‖²) < +∞. We assume that t0 =
E(X) is a regular point.
Figure 1: Planar representation of a part of the orbit of the template t0. The lines are the hyperplanes whose points are equally distant from two distinct elements of the orbit of t0; Cone(t0), represented by the dotted region, is the set of points closer to t0 than to any other point in the orbit of t0. Theorem 3.1 states that if the support (the dotted disk) of the random variable X is not included in this cone, then there is an inconsistency.
We define Cone(t0) as the set of points closer to t0 than to any other point of the orbit [t0], see fig. 1 or item 6 in appendix A.1 for a formal definition. In other words, Cone(t0) is defined as the set of points already registered with t0. Suppose that:
P(X ∉ Cone(t0)) > 0,   (6)
then [t0] is not a Fréchet mean of [X].
The proof of theorem 3.1 is based on two steps: first, differentiating the
variance F of [X]. Second, showing that the gradient at the template is not
zero, therefore the template cannot be a minimum of F. Theorem 3.2 provides the first step.
Theorem 3.2. The variance F of [X] is differentiable at any regular point. For m0 a regular point, we define g(x, m0) as the (almost surely unique) g ∈ G minimising ‖m0 − g · x‖ (in other words, g(x, m0) · x ∈ Cone(m0)). This allows us to compute the gradient of F at m0:
∇F(m0) = 2(m0 − E(g(X, m0) · X)).   (7)
This Theorem is proved in appendix A.1. Then we show that the gradient
of F at t0 is not zero. To ensure that F is differentiable at t0 we suppose in
the assumptions of theorem 3.1 that t0 = E(X) is a regular point. Thanks
to theorem 3.2 we have:
∇F (t0 ) = 2(t0 − E(g(X, t0 ) · X)).
(a) Graphic representation of the template t0 = E(X), mean of the points of the support of X. (b) Graphic representation of Z = E(g(X, t0) · X): the points X which were outside Cone(t0) are now in Cone(t0) (thanks to g(X, t0)); this part, in grid-line, represents the points which have been folded.

Figure 2: Z is the mean of points in Cone(t0), where Cone(t0) is the set of points closer to t0 than to g · t0 for any g ∈ G \ {eG}. Therefore it seems that Z is further from the origin than t0, hence ∇F(t0) = 2(t0 − Z) ≠ 0.

Therefore ∇F(t0)/2 is the difference between two terms, which are represented on fig. 2: on fig. 2a there is a mass beyond the two hyperplanes, outside Cone(t0), so this mass is nearer to g · t0 for some g ∈ G than to t0. In the expression Z = E(g(X, t0) · X), for X ∉ Cone(t0) we have g(X, t0) · X ∈ Cone(t0); such points are represented in grid-line on fig. 2. This suggests that the point Z = E(g(X, t0) · X), which is the mean of points in Cone(t0), is further away from 0 than t0. Then ∇F(t0)/2 = t0 − Z should not be zero, and t0 = E(X) is not a critical point of the variance of [X]. As a conclusion, [t0] is not a Fréchet mean of [X]. This is turned into a rigorous proof in appendix A.2.
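The following minimal sketch illustrates this argument numerically for the two-element group acting on R² by swapping coordinates (the setting of section 3.3); the template, noise level and sample size are arbitrary, and the registration step plays the role of g(X, t0).

import numpy as np

rng = np.random.default_rng(1)

t0 = np.array([1.0, 2.0])        # an arbitrary regular template (not on the diagonal)
sigma = 2.0                      # large noise, so some mass escapes Cone(t0)
X = t0 + sigma * rng.standard_normal((100_000, 2))

swapped = X[:, ::-1]             # action of the non-trivial group element
# g(X, t0).X: register each sample to t0 by picking the closest element of its orbit
closer_swapped = ((swapped - t0) ** 2).sum(1) < ((X - t0) ** 2).sum(1)
registered = np.where(closer_swapped[:, None], swapped, X)

Z = registered.mean(axis=0)
grad_F_at_t0 = 2 * (t0 - Z)      # Monte Carlo estimate of the gradient (7) at t0
print(grad_F_at_t0)              # clearly non-zero: t0 is not a critical point of F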
In the proof of theorem 3.1, we took M to be a Euclidean space and we worked with the Lebesgue measure in order to have P(X ∈ H) = 0 for every hyperplane
H. Therefore the proof of theorem 3.1 can be extended immediately to any
Hilbert space M , if we make now the assumption that P(X ∈ H) = 0 for
every hyperplane H, as long as we keep a finite group acting isometrically and
effectively on M .
Figure 2 illustrates the condition of theorem 3.1: if there is no mass beyond the hyperplanes, then the two terms in ∇F(t0) are equal (because almost surely g(X, t0) · X = X). Therefore in this case we have ∇F(t0) = 0. This does not necessarily prove that there is no inconsistency, just that the template t0 is a critical point of F. Moreover, this figure can give us an intuition about what the consistency bias (the distance between [t0] and the set of all Fréchet means in the quotient space) depends on: for t0 a fixed regular point, when the variability of X (defined by E(‖X − t0‖²)) increases, the mass beyond the hyperplanes on fig. 2 also increases, and the distance between E(g(X, t0) · X) and t0 (i.e. the norm of ∇F(t0)) increases. Therefore the Fréchet mean q should be further from t0 (because at this point one should have ∇F(q) = 0, or q is a singular point). Therefore the consistency bias appears to increase with the variability of X. By establishing lower and upper bounds of the consistency bias and by computing the consistency bias in a very simple case, sections 3.2, 3.3, 4.3 and 4.4 investigate how far this hypothesis is true.
We can also wonder if the converse of theorem 3.1 is true: if the support is
included in Cone(t0 ), is there consistency? We do not have a general answer
to that. In the simple example of section 3.3 it happens that condition (6) is
necessary and sufficient. More generally the following proposition provides a
partial converse:
Figure 3: y ↦ Cone(y) is continuous. The support of X is bounded and included in the interior of Cone(t0) (the hatched cone). For y sufficiently close to the template t0, the support of X (the ball in red) is still included in Cone(y) (in grey), so that F(y) = E(‖X − y‖²). Therefore in this case, [t0] is at least a Karcher mean of [X].
Proposition 3.1. If the support of X is a compact set included in the interior
of Cone(t0 ), then the orbit of the template [t0 ] is at least a Karcher mean of [X]
(a Karcher mean is a local minimum of the variance).
Proof. If the support of X is a compact set included in the interior of Cone(t0 )
then we know that X-almost surely: dQ ([X], [t0 ]) = kX −t0 k. Thus the variance
at t0 in the quotient space is equal to the variance at t0 in the top space. Now
by continuity of the distance map (see fig. 3) for y in a small neighbourhood
of t0 , the support of X is still included in the interior of Cone(y). We still
have dQ ([X], [y]) = kX − yk X-almost surely. In other words, locally around
t0 , the variance in the quotient space is equal to the variance in the top space.
Moreover we know that t0 = E(X) is the only global minimiser of the variance
of X: m 7→ E(km − Xk2 ) = E(m). Therefore t0 is a local minimum of F
the variance in the quotient space (since the two variances are locally equal).
Therefore [t0 ] is at least a Karcher mean of [X] in this case.
3.2 Upper bound of the consistency bias
In this Subsection we show an explicit upper bound of the consistency bias.
Theorem 3.3. When G is a finite group acting isometrically on M = Rⁿ, we denote by |G| the cardinal of the group G. If X is a Gaussian vector, X ∼ N(t0, s²Id_{Rⁿ}), and m⋆ ∈ argmin F, then we have the following upper bound of the consistency bias:
dQ([t0], [m⋆]) ≤ s √(8 log |G|).   (8)
The proof is postponed to appendix A.3. When X ∼ N(t0, s²Id_n), the variability of X is σ² = E(‖X − t0‖²) = ns², and we can write the upper bound of the bias as: dQ([t0], [m⋆]) ≤ (σ/√n) √(8 log |G|). This Theorem shows that the consistency bias is low when the variability of X is small, which tends to confirm our hypothesis in section 3.1. It is important to notice that this upper bound explodes when the cardinal of the group tends to infinity.
3.3 Study of the consistency bias in a simple example
In this Subsection, we take a particular case of example 3.1: the action of translation with T = Z/2Z. We identify R^T with R² and we denote by (u, v)^T an element of R^T. In this setting, one can completely describe the action of T on R^T: 0 · (u, v)^T = (u, v)^T and 1 · (u, v)^T = (v, u)^T. The set of singularities is the line L = {(u, u)^T, u ∈ R}. We denote by HPA = {(u, v)^T, v > u} the half-plane above L and by HPB the half-plane below L. This simple example will allow us to provide necessary and sufficient conditions for an inconsistency at regular and singular points. Moreover, we can compute the consistency bias exactly and exhibit which parameters govern the bias. We can then find an asymptotic equivalent of the consistency bias when the noise tends to zero or to infinity. More precisely, we have the following proposition, proved in appendix A.4:
Proposition 3.2. Let X be a random variable such that E(‖X‖²) < +∞ and t0 = E(X).
1. If t0 ∈ L, there is no inconsistency if and only if the support of X is included in the line L = {(u, u), u ∈ R}. If t0 ∈ HPA (respectively in HPB), there is no inconsistency if and only if the support of X is included in HPA ∪ L (respectively in HPB ∪ L).
2. If X is Gaussian, X ∼ N(t0, s²Id₂), then the Fréchet mean of [X] exists and is unique. This Fréchet mean [m⋆] is on the line passing through E(X) and perpendicular to L, and the consistency bias ρ̃ = dQ([t0], [m⋆]) is the following function of s and d = dist(t0, L):
ρ̃(d, s) = (2/π) s ∫_{d/s}^{+∞} r² exp(−r²/2) g(d/(rs)) dr,   (9)
where g is a non-negative function on [0, 1] defined by g(x) = sin(arccos(x)) − x arccos(x).
(a) If d > 0 then s ↦ ρ̃(d, s) has an asymptotic linear expansion:
ρ̃(d, s) ∼_{s→∞} (2/π) s ∫_0^{+∞} r² exp(−r²/2) dr.   (10)
(b) If d > 0, then ρ̃(d, s) = o(s^k) when s → 0, for all k ∈ N.
(c) s ↦ ρ̃(0, s) is linear with respect to s (for d = 0 the template is a fixed point).
Remark 3.1. Here, contrary to the case of the action of rotations in [MHP16], it is not the ratio of ‖E(X)‖ over the noise which matters to estimate the consistency bias, but rather the ratio of dist(E(X), L) over the noise. However, in both cases we measure the distance between the signal and the singularities, which was {0} in [MHP16] for the action of rotations, and is L in this case.
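As a numerical sanity check on this example, the sketch below estimates the consistency bias by Monte Carlo: it minimises an empirical version of the quotient variance along the line through t0 perpendicular to L, where proposition 3.2 locates the Fréchet mean; the template, noise level, sample size and search grid are arbitrary.

import numpy as np

rng = np.random.default_rng(2)

t0 = np.array([0.5, 1.5])                  # arbitrary template, dist(t0, L) = 1/sqrt(2)
s = 2.0                                    # noise standard deviation
X = t0 + s * rng.standard_normal((100_000, 2))

def F_hat(m):
    """Empirical quotient variance for the swap action on R^2."""
    d2_id = ((X - m) ** 2).sum(1)
    d2_sw = ((X[:, ::-1] - m) ** 2).sum(1)
    return np.minimum(d2_id, d2_sw).mean()

# search along the line through t0 perpendicular to L (direction (-1, 1)/sqrt(2))
u = np.array([-1.0, 1.0]) / np.sqrt(2)
lams = np.linspace(0.0, 3.0, 151)
values = [F_hat(t0 + lam * u) for lam in lams]
bias_hat = lams[int(np.argmin(values))]    # quotient distance between [t0] and [m*]
print(bias_hat)                            # positive: the estimator is biased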
4 Inconsistency for any group when the template is not a fixed point
In section 3 we exhibited a sufficient condition to have an inconsistency, restricted to the case of a finite group acting on a Euclidean space. We now generalize this analysis to Hilbert spaces of any dimension, including infinite. Let M be such an Hilbert space with its dot product denoted by ⟨ , ⟩ and its associated norm ‖ ‖. In this section, we no longer suppose that the group G is finite. In the following, we prove that there is an inconsistency in a large number of situations, and we quantify the consistency bias with lower and upper bounds.
Example 4.1. The action of continuous translation: we take G = (R/Z)^D acting on M = L²((R/Z)^D, R) with:
∀τ ∈ G, ∀f ∈ M,   (τ · f) : t ↦ f(t + τ).
This isometric action is the continuous version of example 3.1: the elements of M are now continuous images in dimension D.
4.1 Presence of an inconsistency
We state here a generalization of theorem 3.1:
Theorem 4.1. Let G be a group acting isometrically on M an Hilbert space, and X a random variable in M with E(‖X‖²) < +∞ and E(X) = t0 ≠ 0. If:
P(dQ([t0], [X]) < ‖t0 − X‖) > 0,   (11)
or equivalently:
P(sup_{g∈G} ⟨g · X, t0⟩ > ⟨X, t0⟩) > 0,   (12)
then [t0] is not a Fréchet mean of [X] in Q = M/G.
The condition of this Theorem is the same as the condition of theorem 3.1: the support of the law of X contains points closer to g · t0 for some g than to t0. Thus condition (12) is equivalent to E(dQ([X], [t0])²) < E(‖X − t0‖²). In other words, the variance in the quotient space at t0 is strictly smaller than the variance in the top space at t0.
Proof. First, the two conditions are equivalent by definition of the quotient distance and by expansion of the squared norms ‖t0 − X‖² and ‖t0 − g · X‖² for g ∈ G.
As above, we define the variance of [X] by:
F(m) = E(inf_{g∈G} ‖g · X − m‖²).
In order to prove this Theorem, we find a point m such that F(m) < F(t0), which directly implies that [t0] is not a Fréchet mean of [X].
In the proof of theorem 3.1, we showed that under condition (6) we had ⟨∇F(t0), t0⟩ < 0. This leads us to study F restricted to R+t0: we define, for a ∈ R+, f(a) = F(a t0) = E(inf_{g∈G} ‖g · X − a t0‖²). Thanks to the isometric action we can expand f(a) as:
f(a) = a²‖t0‖² − 2a E(sup_{g∈G} ⟨g · X, t0⟩) + E(‖X‖²),   (13)
and exhibit the unique element of R+ which minimises f:
a⋆ = E(sup_{g∈G} ⟨g · X, t0⟩) / ‖t0‖².   (14)
For all x ∈ M, we have sup_{g∈G} ⟨g · x, t0⟩ ≥ ⟨x, t0⟩ and thanks to condition (12) we get:
E(sup_{g∈G} ⟨g · X, t0⟩) > E(⟨X, t0⟩) = ⟨E(X), t0⟩ = ‖t0‖²,   (15)
which implies a⋆ > 1. Then F(a⋆ t0) < F(t0).
Note that ‖t0‖²(a⋆ − 1) = E(sup_{g∈G} ⟨g · X, t0⟩) − E(⟨X, t0⟩) (which is positive) is exactly −⟨∇F(t0), t0⟩/2 in the case of a finite group, see Equation (44). Here we find the same expression without having to differentiate the variance F, which may not be possible in the current setting.
4.2 Analysis of the condition in theorem 4.1
We now look for general cases where we are sure that condition (12) holds, which implies the presence of an inconsistency. We saw in section 3 that when the group is finite, it is possible to have no inconsistency only if the support of the law is included in a cone delimited by some hyperplanes. The hyperplanes were defined as the sets of points equally distant from the template t0 and from g · t0 for g ∈ G. Therefore, if the cardinal of the group becomes larger and larger, one could think that, in order to have no inconsistency, the space where X takes its values becomes smaller and smaller; in the limit, at most a hyperplane is left. In the following, we formalise this idea to make it rigorous. We show that the cases where theorem 4.1 cannot be applied are not generic cases.
First we can notice that it is not possible to have condition (12) if t0 is a fixed point under the action of G. Indeed, in this case ⟨g · X, t0⟩ = ⟨X, g⁻¹ · t0⟩ = ⟨X, t0⟩. So from now on, we suppose that t0 is not a fixed point. Now let us see some settings in which we have condition (11), and thus condition (12).
Proposition 4.1. Let G be a group acting isometrically on an Hilbert space M, and X a random variable in M, with E(‖X‖²) < +∞ and E(X) = t0 ≠ 0. If:
1. [t0] \ {t0} is a dense set in [t0].
2. There exists η > 0 such that the support of X contains a ball B(t0, η).
Then condition (12) holds, and the estimator is inconsistent according to theorem 4.1.
Figure 4: The smallest disk is included in the support of X, and the points in that disk are closer to g · t0 than to t0. According to theorem 4.1 there is an inconsistency.
Proof. By density, one takes g · t0 ∈ B(t0, η) \ {t0} for some g ∈ G; now if we take r < min(‖g · t0 − t0‖/2, η − ‖g · t0 − t0‖) then B(g · t0, r) ⊂ B(t0, η). Therefore, by the assumption we made on the support, one has P(X ∈ B(g · t0, r)) > 0. For y ∈ B(g · t0, r) we have ‖g · t0 − y‖ < ‖t0 − y‖ (see fig. 4). Then we have: P(dQ([X], [t0]) < ‖X − t0‖) ≥ P(X ∈ B(g · t0, r)) > 0. Then condition (12) is verified, and we can apply theorem 4.1.
Proposition 4.1 proves that there is a large number of cases where we can
ensure the presence of an inconsistency. For instance, when M is a finite dimensional vector space and the random variable X has a continuous positive density (for the Lebesgue measure) at t0, condition 2 of Proposition 4.1 is fulfilled. Unfortunately, this proposition does not cover the case where there is no
mass at the expected value t0 = E(X). This situation could appear if X has
two modes for instance. The following proposition deals with this situation:
Proposition 4.2. Let G be a group acting isometrically on M. Let X be a random variable in M, such that E(‖X‖²) < +∞ and E(X) = t0 ≠ 0. If:
1. ∃ϕ s.t. ϕ : (−a, a) → [t0] is C¹ with ϕ(0) = t0, ϕ′(0) = v ≠ 0.
2. The support of X is not included in the hyperplane v⊥: P(X ∉ v⊥) > 0.
Then condition (12) is fulfilled, which leads to an inconsistency thanks to Theorem 4.1.
Proof. Thanks to the isometric action, ⟨t0, v⟩ = 0. We choose y ∉ v⊥ in the support of X and make a Taylor expansion of the following squared distance (see also Figure 5) at 0:
‖ϕ(x) − y‖² = ‖t0 + xv + o(x) − y‖² = ‖t0 − y‖² − 2x⟨y, v⟩ + o(x).
Then: ∃x⋆ ∈ (−a, a) s.t. |x⋆| < a, x⋆⟨y, v⟩ > 0 and ‖ϕ(x⋆) − y‖ < ‖t0 − y‖. For some g ∈ G, ϕ(x⋆) = g · t0. By continuity of the norm we have:
∃r > 0 s.t. ∀z ∈ B(y, r), ‖g · t0 − z‖ < ‖t0 − z‖.
Then P(‖g · t0 − X‖ < ‖t0 − X‖) ≥ P(X ∈ B(y, r)) > 0. Theorem 4.1 applies.
Proposition 4.2 gave a sufficient condition for inconsistency in the case of an orbit which contains a curve. This leads us to extend this result to orbits which are manifolds:
Proposition 4.3. Let G be a group acting isometrically on an Hilbert space M, X a random variable in M, with E(‖X‖²) < +∞. Assume X = t0 + σε, where t0 ≠ 0, E(ε) = 0 and E(‖ε‖) = 1. We suppose that [t0] is a sub-manifold of M and write T_{t0}[t0] for the linear tangent space of [t0] at t0. If:
P(X ∉ T_{t0}[t0]⊥) > 0,   (16)
which is equivalent to:
P(ε ∉ T_{t0}[t0]⊥) > 0,   (17)
then there is an inconsistency.
Proof. First, t0 ⊥ T_{t0}[t0] (because the action is isometric), so t0 + T_{t0}[t0]⊥ = T_{t0}[t0]⊥, and the event {X ∈ T_{t0}[t0]⊥} is equal to {ε ∈ T_{t0}[t0]⊥}. This proves that conditions (16) and (17) are equivalent. Thanks to assumption (16), we can choose y in the support of X such that y ∉ T_{t0}[t0]⊥. Let us take v ∈ T_{t0}[t0] such that ⟨y, v⟩ ≠ 0, and choose ϕ a C¹ curve in [t0] such that ϕ(0) = t0 and ϕ′(0) = v. Applying proposition 4.2 we get the inconsistency.
Note that Condition (16) is very weak, because T_{t0}[t0]⊥ is a strict linear subspace of M.
Figure 5: y ∉ T_{t0}[t0]⊥, therefore y is closer to g · t0 for some g ∈ G than to t0 itself. In conclusion, if y is in the support of X, there is an inconsistency.
4.3 Lower bound of the consistency bias
Under the assumptions of Theorem 4.1, we have an element a⋆t0 such that F(a⋆t0) < F(t0), where F is the variance of [X]. From this element, we deduce lower bounds of the consistency bias:
Theorem 4.2. Let δ be the unique positive solution of the following equation:
δ² + 2δ(‖t0‖ + E‖X‖) − ‖t0‖²(a⋆ − 1)² = 0.   (18)
Let δ⋆ be the unique positive solution of the following equation:
δ² + 2δ‖t0‖(1 + √(1 + σ²/‖t0‖²)) − ‖t0‖²(a⋆ − 1)² = 0,   (19)
where σ² = E(‖X − t0‖²) is the variability of X. Then δ and δ⋆ are two lower bounds of the consistency bias.
Proof. In order to prove this Theorem, we exhibit a ball around t0 such that the points of this ball have a variance bigger than the variance at the point a⋆t0, where a⋆ was defined in Equation (14). Thanks to the expansion of the function f done in (13), we get:
F(t0) − F(a⋆t0) = ‖t0‖²(a⋆ − 1)² > 0.   (20)
Moreover we can show (exactly like equation (43)) that for all x ∈ M:
|F(t0) − F(x)| ≤ E(|inf_{g∈G} ‖g · X − t0‖² − inf_{g∈G} ‖g · X − x‖²|) ≤ ‖x − t0‖ (2‖t0‖ + ‖x − t0‖ + E(‖2X‖)).   (21)
With Equations (20) and (21), for all x ∈ B(t0, δ) we have F(x) > F(a⋆t0). No point in that ball mapped in the quotient space is a Fréchet mean of [X]. So δ is a lower bound of the consistency bias. Now, by using the fact that E(‖X‖) ≤ √(‖t0‖² + σ²), we get: 2|F(t0) − F(x)| ≤ 2‖x − t0‖ × ‖t0‖(1 + √(1 + σ²/‖t0‖²)) + ‖x − t0‖². This proves that δ⋆ is also a lower bound of the consistency bias.
δ⋆ is smaller than δ, but the variability of X intervenes in δ⋆. Therefore we propose to study the asymptotic behaviour of δ⋆ when the variability tends to infinity. We have the following proposition:
Proposition 4.4. Under the hypotheses of Theorem 4.2, we write X = t0 + σε, with E(ε) = 0 and E(‖ε‖²) = 1, and denote ν = E(sup_{g∈G} ⟨g · ε, t0/‖t0‖⟩) ∈ (0, 1]. We have that:
δ⋆ ∼_{σ→+∞} σ(√(1 + ν²) − 1).
In particular, the consistency bias explodes when the variability of X tends to infinity.
Proof. First, let us prove that ν ∈ (0, 1] under condition (12). We have ν ≥ E(⟨ε, t0/‖t0‖⟩) = 0. By a reductio ad absurdum: if ν = 0, then sup_{g∈G} ⟨g · ε, t0⟩ = ⟨ε, t0⟩ almost surely. We then have almost surely: ⟨X, t0⟩ ≤ sup_{g∈G} ⟨g · X, t0⟩ ≤ ‖t0‖² + sup_{g∈G} σ⟨g · ε, t0⟩ = ‖t0‖² + σ⟨ε, t0⟩ ≤ ⟨X, t0⟩, which is in contradiction with (12). Besides, ν ≤ E(‖ε‖) ≤ √(E(‖ε‖²)) = 1.
Second, we exhibit equivalents of the terms in equation (19) when σ → +∞:
2‖t0‖(1 + √(1 + σ²/‖t0‖²)) ∼ 2σ.   (22)
Now, by definition of a⋆ in Equation (14) and the decomposition X = t0 + σε, we get:
‖t0‖(a⋆ − 1) = (1/‖t0‖) E(sup_{g∈G} (⟨g · t0, t0⟩ + ⟨g · σε, t0⟩)) − ‖t0‖,
‖t0‖(a⋆ − 1) ≤ (1/‖t0‖) E(sup_{g∈G} ⟨g · σε, t0⟩) = σν,   (23)
‖t0‖(a⋆ − 1) ≥ (1/‖t0‖) E(sup_{g∈G} ⟨g · σε, t0⟩) − 2‖t0‖ = σν − 2‖t0‖.   (24)
The lower bound and the upper bound of ‖t0‖(a⋆ − 1) found in (23) and (24) are both equivalent to σν when σ → +∞. Then the constant term of the quadratic Equation (19) has an equivalent:
−‖t0‖²(a⋆ − 1)² ∼ −σ²ν².   (25)
Finally, if we solve the quadratic Equation (19), we can write δ⋆ as a function of the coefficients of the quadratic equation (19). Using the equivalent of each of these terms thanks to equations (22) and (25), this proves proposition 4.4.
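As a minimal numerical illustration of this asymptotic behaviour, the sketch below solves the quadratic equation (19) for δ⋆ and compares it with σ(√(1 + ν²) − 1); the values of ‖t0‖ and ν are arbitrary, and a⋆ is replaced by its upper bound (23), which is only a stand-in for the true value.

import numpy as np

def delta_star(norm_t0, sigma, a_star):
    """Positive root of (19): x^2 + b x + c = 0 with
    b = 2*norm_t0*(1 + sqrt(1 + sigma^2/norm_t0^2)) and c = -norm_t0^2*(a_star - 1)^2."""
    b = 2 * norm_t0 * (1 + np.sqrt(1 + sigma**2 / norm_t0**2))
    c = -norm_t0**2 * (a_star - 1) ** 2
    return (-b + np.sqrt(b**2 - 4 * c)) / 2

norm_t0, nu = 1.0, 0.3                      # arbitrary illustration values
for sigma in [1.0, 10.0, 100.0, 1000.0]:
    a_star = 1 + sigma * nu / norm_t0       # upper bound (23) used as a stand-in for a*
    print(sigma, delta_star(norm_t0, sigma, a_star),
          sigma * (np.sqrt(1 + nu**2) - 1))  # the two values agree for large sigma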
Remark 4.1. Thanks to inequality (24), if ‖t0‖/σ < ν/2, then ‖t0‖²(1 − a⋆)² ≥ (σν − 2‖t0‖)². Then, writing δ⋆ as a function of the coefficients of Equation (19), we obtain a lower bound of the consistency bias as a function of ‖t0‖, σ and ν, for σ > 2‖t0‖/ν:
δ⋆/‖t0‖ ≥ −(1 + √(1 + σ²/‖t0‖²)) + √((1 + √(1 + σ²/‖t0‖²))² + (σν/‖t0‖ − 2)²).
Although the constant ν intervenes in this lower bound, it is not an explicit term. We now make its behaviour explicit depending on t0. We recall that:
ν = (1/‖t0‖) E(sup_{g∈G} ⟨g · ε, t0⟩).
To this end, we first note that the set of fixed points under the action of G is a closed linear space (because we can write it as the intersection of the kernels of the continuous linear functions x ↦ g · x − x for all g ∈ G). We denote by p the orthogonal projection on the set of fixed points Fix(M). Then for x ∈ M, we have dist(x, Fix(M)) = ‖x − p(x)‖, which yields:
⟨g · ε, t0⟩ = ⟨g · ε, t0 − p(t0)⟩ + ⟨ε, p(t0)⟩.   (26)
The second term of the right hand side of Equation (26) does not depend on g, since p(t0) ∈ Fix(M). Then:
‖t0‖ν = E(sup_{g∈G} ⟨g · ε, t0 − p(t0)⟩) + ⟨E(ε), p(t0)⟩.
Applying the Cauchy-Schwarz inequality and using E(ε) = 0, we can conclude that:
ν ≤ (1/‖t0‖) dist(t0, Fix(M)) E(‖ε‖) = dist(t0/‖t0‖, Fix(M)) E(‖ε‖).   (27)
This leads to the following comment: our lower bound of the consistency bias is smaller when the normalized template t0/‖t0‖ is closer to the set of fixed points.
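For the cyclic-shift action of example 3.1 (D = 1), the fixed points are the constant signals, so p(t0) is the constant signal equal to the mean of t0. The following sketch, with an arbitrary template and Gaussian noise, estimates ν by Monte Carlo and compares it with the bound (27).

import numpy as np

rng = np.random.default_rng(3)
N = 50
t0 = np.sin(2 * np.pi * np.arange(N) / N) + 0.3        # arbitrary non-constant template

def nu_mc(t0, n_samples=2000):
    """Monte Carlo estimate of nu = E(sup_g <g.eps, t0/||t0||>) for cyclic shifts,
    with Gaussian eps normalised so that E(||eps||^2) = 1."""
    t0u = t0 / np.linalg.norm(t0)
    total = 0.0
    for _ in range(n_samples):
        eps = rng.standard_normal(N) / np.sqrt(N)
        total += max(np.dot(np.roll(eps, k), t0u) for k in range(N))
    return total / n_samples

p_t0 = np.full(N, t0.mean())                           # projection on Fix(M): constant signals
bound = np.linalg.norm(t0 - p_t0) / np.linalg.norm(t0) # dist(t0/||t0||, Fix(M)); times E(||eps||) <= 1
print(nu_mc(t0), bound)                                # the estimate stays below the bound (27)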
4.4 Upper bound of the consistency bias
In this Section, we find an upper bound of the consistency bias. More precisely, we have the following proposition:
Proposition 4.5. Let X be a random variable in M such that X = t0 + σε, where σ > 0, E(ε) = 0 and E(‖ε‖²) = 1. We suppose that [m⋆] is a Fréchet mean of [X]. Then we have the following upper bound of the quotient distance between the orbit of the template t0 and the Fréchet mean of [X]:
dQ([m⋆], [t0]) ≤ σν(m⋆ − m0) + √(σ²ν(m⋆ − m0)² + 2 dist(t0, Fix(M)) σν(m⋆ − m0)),   (28)
where we have noted ν(m) = E(sup_g ⟨g · ε, m/‖m‖⟩) ∈ [0, 1] if m ≠ 0 and ν(0) = 0, and m0 is the orthogonal projection of t0 on Fix(M).
Note that we made no hypothesis on the template in this proposition. We deduce from Equation (28) that dQ([m⋆], [t0]) ≤ σ + √(σ² + 2σ dist(t0, Fix(M))), which is a O(σ) when σ → ∞, but a O(√σ) when σ → 0; in particular, the consistency bias can be neglected when σ is small.
Proof. First we have:
F(m⋆) ≤ F(t0) = E(inf_g ‖t0 − g · (t0 + σε)‖²) ≤ E(‖σε‖²) = σ².   (29)
Secondly, we have for all m ∈ M (in particular for m⋆):
F(m) = E(inf_g (‖m − g · t0‖² + σ²‖ε‖² − 2⟨g · σε, m − g · t0⟩)) ≥ dQ([m], [t0])² + σ² − 2E(sup_g ⟨σε, g · m⟩).   (30)
With Inequalities (29) and (30) one gets:
dQ([m⋆], [t0])² ≤ 2E(sup_g ⟨σε, g · m⋆⟩) = 2σν(m⋆)‖m⋆‖;
note that at this point, if m⋆ = 0 then E(sup_g ⟨σε, g · m⋆⟩) = 0 and ν(m⋆) = 0, and this inequality is still true even if m⋆ = 0. Moreover, with the triangle inequality applied to [m⋆], [0] and [t0], one gets ‖m⋆‖ ≤ ‖t0‖ + dQ([m⋆], [t0]) and then:
dQ([m⋆], [t0])² ≤ 2σν(m⋆)(dQ([m⋆], [t0]) + ‖t0‖).   (31)
We can solve inequality (31) and we get:
dQ([m⋆], [t0]) ≤ σν(m⋆) + √(σ²ν(m⋆)² + 2‖t0‖σν(m⋆)).   (32)
We now write FX instead of F for the variance in the quotient space of [X], and we want to apply inequality (32) to X − m0. As m0 is a fixed point:
FX(m) = E(inf_{g∈G} ‖X − m0 − g · (m − m0)‖²) = F_{X−m0}(m − m0).
Then m⋆ minimises FX if and only if m⋆ − m0 minimises F_{X−m0}. We apply Equation (32) to X − m0, with E(X − m0) = t0 − m0 and [m⋆ − m0] a Fréchet mean of [X − m0]. We get:
dQ([m⋆ − m0], [t0 − m0]) ≤ σν(m⋆ − m0) + √(σ²ν(m⋆ − m0)² + 2‖t0 − m0‖σν(m⋆ − m0)).
Moreover dQ([m⋆], [t0]) = dQ([m⋆ − m0], [t0 − m0]), which concludes the proof.
4.5 Empirical Fréchet mean
In practice, we never compute the Fréchet mean in the quotient space, only the empirical Fréchet mean in the quotient space, when the size of the sample is supposed to be large enough. If the empirical Fréchet mean in the quotient space converges to the Fréchet mean in the quotient space, then we cannot use these empirical Fréchet means in order to estimate the template. In [BB08], it has been proved that the empirical Fréchet mean converges to the Fréchet mean with a 1/√n convergence speed; however, the law of the random variable is supposed to be included in a ball whose radius depends on the geometry of the manifold. Here we are not in a manifold, since the quotient space contains singularities; moreover, we do not suppose that the law is necessarily bounded. However, in [Zie77] the empirical Fréchet mean is proved to converge to the Fréchet mean, but no convergence rate is provided.
We now propose to prove that the quotient distance between the template and the empirical Fréchet mean in the quotient space has a lower bound which converges to the lower bound of the consistency bias found in (18). Take X, X1, . . . , Xn independent and identically distributed (with t0 = E(X) not a fixed point). We define the empirical variance of [X] by:
m ∈ M ↦ Fn(m) = (1/n) Σ_{i=1}^n dQ([m], [Xi])² = (1/n) Σ_{i=1}^n inf_{g∈G} ‖m − g · Xi‖²,
and we say that [m⋆ⁿ] is an empirical Fréchet mean of [X] if m⋆ⁿ is a global minimiser of Fn.
Proposition 4.6. Let X, X1, . . . , Xn be independent and identically distributed random variables, with t0 = E(X). Let [m⋆ⁿ] be an empirical Fréchet mean of [X]. Then δn is a lower bound of the quotient distance between the orbit of the template and [m⋆ⁿ], where δn is the unique positive solution of:
δ² + 2(‖t0‖ + (1/n) Σ_{i=1}^n ‖Xi‖) δ − ‖t0‖²(a⋆ⁿ − 1)² = 0.
Here a⋆ⁿ is defined like a⋆ in section 4.1 by:
a⋆ⁿ = (1/n) Σ_{i=1}^n sup_{g∈G} ⟨g · Xi, t0⟩ / ‖t0‖².
We have that δn → δ by the law of large numbers.
The proof is a direct application of theorem 4.2, applied to the empirical law of X given by the realization of X1, . . . , Xn.
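A minimal sketch of this empirical lower bound for the cyclic-shift action of example 3.1 (the template, noise level and sample size are arbitrary); it computes a⋆ⁿ and then δn as the positive root of the quadratic equation above.

import numpy as np

rng = np.random.default_rng(4)
N, n, sigma = 50, 500, 3.0
t0 = np.sin(2 * np.pi * np.arange(N) / N)         # arbitrary template
X = t0 + sigma * rng.standard_normal((n, N))      # applying random shifts to the sample would
                                                  # change neither a*_n nor delta_n (isometric action)

def sup_dot(x, t0):
    """sup over cyclic shifts g of <g.x, t0>."""
    return max(np.dot(np.roll(x, k), t0) for k in range(N))

norm_t0 = np.linalg.norm(t0)
a_star_n = np.mean([sup_dot(x, t0) for x in X]) / norm_t0**2
b = 2 * (norm_t0 + np.mean(np.linalg.norm(X, axis=1)))
c = -norm_t0**2 * (a_star_n - 1) ** 2
delta_n = (-b + np.sqrt(b**2 - 4 * c)) / 2        # lower bound of proposition 4.6
print(a_star_n, delta_n)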
4.6 Examples
In this Subsection, we discuss, in some examples, the application of theorem 4.1 and the behaviour of the constant ν. This constant intervenes in the lower bound of the consistency bias.
4.6.1 Action of translation on L²(R/Z)
We take an orbit O = [f0], where f0 ∈ C²(R/Z) is non-constant. We show easily that O is a manifold of dimension 1 and that the tangent space at f0 is² Rf0′. Therefore, a sufficient condition on X, with E(X) = f0, to have an inconsistency is P(X ∉ (f0′)⊥) > 0, according to proposition 4.3. Now let us denote by 1 the constant function on R/Z equal to 1. In this setting, the set of fixed points under the action of G is the set of constant functions, Fix(M) = R1, and:
dist(f0, Fix(M)) = ‖f0 − ⟨f0, 1⟩1‖ = √(∫₀¹ (f0(t) − ∫₀¹ f0(s)ds)² dt).
This distance to the fixed points is used in the upper bound of the constant ν in Equation (27). Note that if f0 is not differentiable, then [f0] is not necessarily a manifold, and proposition 4.3 does not apply. However, proposition 4.1 does: if f0 is not a constant function, then [f0] \ {f0} is dense in [f0]. Therefore, as soon as the support of X contains a ball around f0, there is an inconsistency.
4.6.2 Action of discrete translation on R^(Z/NZ)
We come back to example 3.1, with D = 1 (discretised signals). For some signal t0, the constant ν previously defined is:
ν = (1/‖t0‖) E(max_{τ∈Z/NZ} ⟨ε, τ · t0⟩).
Therefore, if we have an i.i.d. sample ε1, . . . , εI of size I, then:
ν = (1/‖t0‖) lim_{I→+∞} (1/I) Σ_{i=1}^I max_{τi∈Z/NZ} ⟨εi, τi · t0⟩.
By an exhaustive search, we can find the τi's which maximise the dot product; then with this sample and t0 we can approximate ν. We have done this approximation for several signals t0 in fig. 6. According to the previous results, the bigger ν is, the larger the lower bound of the consistency bias is. We remark that the estimated ν is small, ν ≪ 1, for the different signals.
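A minimal sketch of this approximation (essentially the computation behind the values reported in fig. 6 below, although the template used here is an arbitrary stand-in, not one of the signals of the figure):

import numpy as np

rng = np.random.default_rng(5)
N, I = 100, 1000
t0 = rng.standard_normal(N)                 # an arbitrary signal in R^(Z/NZ)

def approx_nu(t0, I):
    """Monte Carlo approximation of nu for the discrete translation action,
    with Gaussian eps normalised so that E(||eps||^2) = 1."""
    t0n = t0 / np.linalg.norm(t0)
    acc = 0.0
    for _ in range(I):
        eps = rng.standard_normal(N) / np.sqrt(N)
        # exhaustive search over the N shifts of t0
        acc += max(np.dot(eps, np.roll(t0n, tau)) for tau in range(N))
    return acc / I

print(approx_nu(t0, I))   # typically well below 1, in line with fig. 6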
Figure 6: Different signals and their ν approximated with a sample of size 10³ in R^(Z/100Z); the estimated values for the three plotted signals are 0.14456, 0.082143 and 0.24981. Here ε is a Gaussian noise in R^(Z/100Z) such that E(ε) = 0 and E(‖ε‖²) = 1. For instance, the blue signal is a signal defined randomly, and when we approximate the ν which corresponds to this t0 we find ν ≈ 0.25.

² Indeed, ϕ : t ∈ ]−1/2, 1/2[ ↦ f0(· − t) ∈ O is a local parametrisation of O: f0 = ϕ(0), and we check that (1/x)‖ϕ(x) − ϕ(0) − x f0′‖_{L²} → 0 when x → 0 with the Taylor-Lagrange inequality at order 2. As a conclusion, ϕ is differentiable at 0 and is an immersion (since f0′ ≠ 0), with D0ϕ : x ↦ x f0′; then O is a manifold of dimension 1 and the tangent space of O at f0 is T_{f0}O = D0ϕ(R) = Rf0′.

4.6.3 Action of rotations on Rⁿ

Now we consider the action of rotations on Rⁿ with a Gaussian noise. Take X ∼ N(t0, s²Id_n); then the variability of X is ns², and X has the decomposition:
X = t0 + √n s ε, with E(ε) = 0 and E(‖ε‖²) = 1.
According to proposition 4.4
we have, denoting by δ⋆ the lower bound of the consistency bias, when s → ∞:
δ⋆/s → √n (−1 + √(1 + ν²)).
Now ν = E(sup_{g∈G} ⟨g · ε, t0⟩)/‖t0‖ = E(‖ε‖) → 1 when n tends to infinity (expected value of the normalised Chi distribution), so that for n large enough:
lim_{s→∞} δ⋆/s ≈ √n (√2 − 1).
We compare this result with the exact computation of the consistency bias (denoted here CB) made by Miolane et al. [MHP16], which writes with our current notations:
lim_{s→∞} CB/s = √2 Γ((n + 1)/2)/Γ(n/2).
Using a standard Taylor expansion of the Gamma function, we have that for n large enough:
lim_{s→∞} CB/s ≈ √n.
As a conclusion, when the dimension of the space is large enough, our lower bound and the exact computation of the bias have the same asymptotic behaviour. They differ only by the constant: √2 − 1 ≈ 0.4 in our lower bound, 1 in the work of Miolane et al. [MP15].
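A short numerical check of this comparison, using only the Gamma function; the dimensions are arbitrary.

import numpy as np
from scipy.special import gammaln

for n in [2, 10, 100, 1000]:
    nu = np.sqrt(2.0 / n) * np.exp(gammaln((n + 1) / 2) - gammaln(n / 2))   # E(||eps||): chi mean / sqrt(n)
    ours = np.sqrt(n) * (np.sqrt(1 + nu**2) - 1)                            # asymptotic lower bound delta*/s
    exact = np.sqrt(2.0) * np.exp(gammaln((n + 1) / 2) - gammaln(n / 2))    # CB/s from [MHP16]
    print(n, round(ours, 3), round(exact, 3), round(ours / exact, 3))
# the ratio tends to sqrt(2) - 1 ~ 0.414 as n grows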
5 Fréchet means in top and quotient spaces are not consistent when the template is a fixed point
In this Section, we do not assume that the top space M is a vector space, but rather a manifold. We then need to rewrite the generative model accordingly: let t0 ∈ M, and let X be any random variable on M such that t0 is a Fréchet mean of X. Then Y = S · X is the observed variable, where S is a random variable whose values are in G. In this Section we make the assumption that the template t0 is a fixed point under the action of G.
5.1 Result
Let X be a random variable on M and define the variance of X as:
E(m) = E(dM (m, X)2 ).
We say that t0 is a Fréchet mean of X if t0 is a global minimiser of the variance
E. We prove the following result:
Theorem 5.1. Assume that M is a complete finite dimensional Riemannian
manifold and that dM is the geodesic distance on M . Let X be a random variable
on M , with E(d(x, X)2 ) < +∞ for some x ∈ M . We assume that t0 is a fixed
point and a Fréchet mean of X and that P(X ∈ C(t0 )) = 0 where C(t0 ) is the
cut locus of t0. Suppose that there exists a point in the support of X which is neither a fixed point nor in the cut locus of t0. Then [t0] is not a Fréchet mean of [X].
The previous result is finite dimensional and does not cover interesting infinite dimensional settings concerning curves, for instance. However, a simple extension of the previous result can be stated when M is a Hilbert vector space, since then the space is flat and some technical problems, like the presence of cut locus points, do not occur.
Theorem 5.2. Assume that M is a Hilbert space and that dM is given by the
Hilbert norm on M . Let X be a random variable on M , with E(kXk2 ) < +∞.
We assume that t0 = E(X). Suppose that there exists a point in the support of
the law of X that is not a fixed point for the action of G. Then [t0 ] is not a
Fréchet mean of [X].
Note that the converse is true: if all the points in the support of the law
of X are fixed points, then almost surely, for all m ∈ M and for all g ∈ G we
have:
dM (X, m) = dM (g · X, m) = dQ ([X], [m]).
Up to the projection on the quotient, we have that the variance of X is equal to
the variance of [X] in M/G, therefore [t0 ] is a Fréchet mean of [X] if and only
if t0 is a Fréchet mean of X. There is no inconsistency in that case.
Example 5.1. Theorem 5.2 covers the interesting case of the Fisher-Rao metric on functions:
F = {f : [0, 1] → R | f is absolutely continuous}.
Then, considering for G the group of smooth diffeomorphisms γ of [0, 1] such that γ(0) = 0 and γ(1) = 1, we have a right group action G × F → F given by γ · f = f ∘ γ. The Fisher-Rao metric is built as a pull-back metric of the space L²([0, 1], R) through the map Q : F → L² given by Q(f) = ḟ/√|ḟ|. This square root trick is often used, see for instance [KSW11]. Note that in this case Q is a bijective mapping with inverse given by q ↦ f with f(t) = ∫₀ᵗ q(s)|q(s)| ds. We can define a group action on M = L² as γ · q = (q ∘ γ)√γ̇, for which one can check easily, by a change of variables, that:
‖γ · q − γ · q′‖² = ‖(q ∘ γ)√γ̇ − (q′ ∘ γ)√γ̇‖² = ‖q − q′‖².
So, up to the mapping Q, the Fisher-Rao metric on curves corresponds to the situation where theorem 5.2 applies. Note that in this case the set of fixed points under the action of G corresponds, in the space F, to the constant functions.
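A minimal numerical sketch of this square root trick: a discretised check that the action leaves the L² distance (approximately, on a grid) unchanged. The functions f, the warp γ and the grid are arbitrary choices.

import numpy as np

t = np.linspace(0.0, 1.0, 2001)

def srvf(f):
    """Square-root map Q(f) = f' / sqrt(|f'|) on a uniform grid."""
    df = np.gradient(f, t)
    return df / np.sqrt(np.abs(df) + 1e-12)    # small epsilon to avoid division by zero

def act(gamma, q):
    """Group action gamma.q = (q o gamma) * sqrt(gamma')."""
    dgamma = np.gradient(gamma, t)
    return np.interp(gamma, t, q) * np.sqrt(dgamma)

def l2norm(q):
    # L2([0,1]) norm approximated by a Riemann sum on the uniform grid
    return np.sqrt(np.mean(q**2))

f1 = np.sin(2 * np.pi * t)
f2 = t**2
q1, q2 = srvf(f1), srvf(f2)
gamma = t**2                                    # a diffeomorphism of [0,1] fixing 0 and 1
print(l2norm(q1 - q2))                          # ||q1 - q2||
print(l2norm(act(gamma, q1) - act(gamma, q2)))  # approximately equal: the action is isometric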
We can also provide an computation of the consistency bias in this setting:
Proposition 5.1. Under the assumptions of theorem 5.2, we write X = t0 + σ
where t0 is a fixed point, σ > 0, E() = 0 and E(kk2 ) = 1, if there is a Fréchet
mean of [X], then the consistency bias is linear with respect to σ and it is equal
to:
σ sup E(sup hv, g · i).
kvk=1
g∈G
Proof. For λ > 0 and ‖v‖ = 1, we compute the variance F in the quotient space of [X] at the point t0 + λv. Since t0 is a fixed point we get:
F(t0 + λv) = E(inf_{g∈G} ‖t0 + λv − g·X‖²) = E(‖X‖²) − ‖t0‖² − 2λ E(sup_{g∈G} ⟨v, g·(X − t0)⟩) + λ².
Then we minimise F with respect to λ, and then with respect to v (with ‖v‖ = 1), which concludes the proof.
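To make this formula concrete, the following sketch (our own illustration, not part of the paper) estimates σ sup_{‖v‖=1} E(sup_{g∈G} ⟨v, g·ε⟩) by Monte Carlo for the finite group of cyclic coordinate shifts acting isometrically on Rⁿ; in this case the fixed points are the constant vectors, so only the noise ε matters. The supremum over the unit sphere is approximated by a simple alternating maximisation, which is a heuristic, not a guaranteed global optimum.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_samples, sigma = 8, 5000, 1.0

# Noise samples eps with E(eps) = 0; G = Z/nZ acts on R^n by cyclic shift of coordinates.
eps = rng.standard_normal((n_samples, n))
orbits = np.stack([np.roll(eps, k, axis=1) for k in range(n)], axis=1)  # (n_samples, |G|, n)

# Alternating maximisation of v -> E(max_g <v, g.eps>) over the unit sphere:
# for a fixed v pick the best group element per sample, then set v to the
# normalised mean of the optimally shifted samples (first-order optimality).
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
for _ in range(30):
    scores = orbits @ v                                   # (n_samples, |G|)
    best = orbits[np.arange(n_samples), scores.argmax(axis=1)]
    v = best.mean(axis=0)
    v /= np.linalg.norm(v)

bias = sigma * (orbits @ v).max(axis=1).mean()
print("Monte Carlo estimate of the consistency bias:", bias)
```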
5.2
5.2.1
Proofs of these theorems
Proof of theorem 5.1
We start with the following simple result, which differentiates the variance of X. This classical result (see [Pen06] for instance) is proved in appendix B in order to be as self-contained as possible:
Lemma 5.1. Let X be a random variable on M such that E(d(x, X)2 ) < +∞ for
some x ∈ M . Then the variance m 7→ E(m) = E(dM (m, X)2 ) is a continuous
function which is differentiable at any point m ∈ M such that P(X ∈ C(m)) = 0
where C(m) is the cut locus of m. Moreover, at such a point one has:
∇E(m) = −2E(logm (X)),
where logm : M \ C(m) → Tm M is defined for any x ∈ M \ C(m) as the unique
u ∈ Tm M such that expm (u) = x and kukm = dM (x, m).
We are now ready to prove theorem 5.1.
Proof. (of theorem 5.1) Let m0 be a point in the support of the law of X which is not a fixed point and not in the cut locus of t0. Then there exists g0 ∈ G such that m1 = g0 m0 ≠ m0. Note that since x ↦ g0 x is an isometry (the distance is equivariant under the action of G), we have that m1 = g0 m0 ∉ C(g0 t0) = C(t0) (t0 is a fixed point under the action of G). Let v0 = log_{t0}(m0) and v1 = log_{t0}(m1). We have v0 ≠ v1, and since C(t0) is closed and log_{t0} is a continuous map on M \ C(t0), we have:
lim_{ε→0} (1 / P(X ∈ B(m0, ε))) E(1_{X∈B(m0,ε)} log_{t0}(X)) = v0
(we use here the fact that, since m0 is in the support of the law of X, P(X ∈ B(m0, ε)) > 0 for any ε > 0 so that the denominator does not vanish, and the fact that, since M is a complete manifold, it is a locally compact space (the closed balls are compact) and log_{t0} is locally bounded). Similarly:
lim_{ε→0} (1 / P(X ∈ B(m0, ε))) E(1_{X∈B(m0,ε)} log_{t0}(g0 X)) = v1.
Thus for sufficiently small ε > 0 we have (since v0 ≠ v1):
E(log_{t0}(X) 1_{X∈B(m0,ε)}) ≠ E(log_{t0}(g0 X) 1_{X∈B(m0,ε)}).    (33)
By reductio ad absurdum, we suppose that [t0] is a Fréchet mean of [X] and we want to find a contradiction with (33). In order to do that, we introduce simple functions such as the function x ↦ 1_{x∈B(m0,ε)} which appears in Equation (33). Let s : M → G be a simple function (i.e. a measurable function with a finite number of values in G). Then x ↦ h(x) = s(x)x is a measurable function³. Now, let Es(m) = E(d(m, s(X)X)²) be the variance of the variable s(X)X. Note that (and this is the main point):
∀g ∈ G,   dM(t0, x) = dM(g·t0, g·x) = dM(t0, g·x) = dQ([t0], [x]),

³ Indeed if s = Σ_{i=1}^{n} g_i 1_{A_i} where (A_i)_{1≤i≤n} is a partition of M (such that the sum is always defined), then for any Borel set B ⊂ M we have: h⁻¹(B) = ∪_{i=1}^{n} g_i⁻¹(B) ∩ A_i, which is a measurable set since x ↦ g_i·x is a measurable function.
we have Es(t0) = E(t0). Assume now that [t0] is a Fréchet mean for [X] on the quotient space and let us show that Es has a global minimum at t0. Indeed, for any m we have:
Es(m) = E(dM(m, s(X)X)²) ≥ E(dQ([m], [X])²) ≥ E(dQ([t0], [X])²) = Es(t0).
Now, we want to apply lemma 5.1 to the random variables s(X)X and X at the point t0. Since we assume that X ∉ C(t0) almost surely, and X ∉ C(t0) implies s(X)X ∉ C(t0), we get P(s(X)X ∈ C(t0)) = 0 and lemma 5.1 applies. As t0 is a minimum, we already know that the differential of Es (respectively E) at t0 should be zero. We get:
E(log_{t0}(X)) = E(log_{t0}(s(X)X)) = 0.    (34)
Now we apply Equation (34) to a particular simple function defined by s(x) = g0 1_{x∈B(m0,ε)} + eG 1_{x∉B(m0,ε)}. We split the two expected values in (34) into two parts:
E(log_{t0}(X) 1_{X∈B(m0,ε)}) + E(log_{t0}(X) 1_{X∉B(m0,ε)}) = 0,    (35)
E(log_{t0}(g0 X) 1_{X∈B(m0,ε)}) + E(log_{t0}(X) 1_{X∉B(m0,ε)}) = 0.    (36)
By subtracting (35) from (36), one gets:
E(log_{t0}(X) 1_{X∈B(m0,ε)}) = E(log_{t0}(g0 X) 1_{X∈B(m0,ε)}),
which is a contradiction with (33). This concludes the proof.
5.2.2
Proof of theorem 5.2
Proof. The extension to theorem 5.2 is quite straightforward. In this setting many things are now explicit, since d(x, y) = ‖x − y‖, ∇_x d(x, y)² = 2(x − y), log_x(y) = y − x and the cut locus is always empty. It is then sufficient to go along the previous proof and to change the quantities accordingly. Note that the local compactness of the space does not hold in infinite dimension. However, this was only used to prove that the log is locally bounded, and this last result is trivial in this setting.
6
Conclusion and discussion
In this article, we exhibit conditions which imply that the template estimation
with the Fréchet mean in quotient space is inconsistent. These conditions are
rather generic. As a result, without any further information, one should a priori expect an inconsistency. The behaviour of the consistency bias is summarized in table 1. Future work could certainly improve these lower and upper bounds.
In a more general case, when we take an infinite-dimensional vector space quotiented by a non-isometric group action, is there always an inconsistency? An important example of such an action is the action of diffeomorphisms. Can we
estimate the consistency bias? In this setting, one estimates the template (or
Table 1: Behaviour of the consistency bias with respect to σ², the variability of X = t0 + σε. The constants Ki depend on the kind of noise, on the template t0 and on the group action.

Consistency bias: CB | G is any group | Supplementary properties for G a finite group
Upper bound of CB | CB ≤ σ + 2√(σ² + K1 σ) (proposition 4.5) | CB ≤ K2 σ (theorem 3.3)
Lower bound of CB for σ → ∞ when the template is not a fixed point | CB ≥ L ∼ K3 σ as σ → ∞ (proposition 4.4) |
Behaviour of CB for σ → 0 when the template is not a fixed point | CB ≤ U ∼ K4 √σ as σ → 0 | CB = o(σᵏ), ∀k ∈ N in the example of section 3.3; can we extend this result to finite groups?
CB when the template is a fixed point | CB = σ sup_{‖v‖=1} E(sup_{g∈G} ⟨v, g·ε⟩) (proposition 5.1) |
an atlas), but does not exactly compute the Fréchet mean in quotient space,
because a regularization term is added. In this setting, can we ensure that the
consistency bias will be small enough to estimate the original template? Otherwise, one has to reconsider the template estimation with stochastic algorithms
as in [AKT10] or develop new methods.
A
Proof of theorems for finite groups’ setting
A.1
Proof of theorem 3.2: differentiation of the variance
in the quotient space
In order to show theorem 3.2 we proceed in three steps. First we state some properties and definitions which will be used; most of these properties are consequences of the fact that the group G is finite. Then we show that the integrand of F is differentiable. Finally we show that we can permute the gradient and integral signs.
1. The set of singular points in Rⁿ is a null set (for the Lebesgue measure), since it is equal to:
∪_{g≠eG} ker(x ↦ g · x − x),
a finite union of strict linear subspaces of Rⁿ, thanks to the linearity and effectivity of the action and to the finiteness of the group.
2. If m is regular, then for g, g′ two different elements of G, we set:
H(g · m, g′ · m) = {x ∈ Rⁿ, ‖x − g · m‖ = ‖x − g′ · m‖}.
Moreover H(g · m, g′ · m) = (g · m − g′ · m)⊥ is a hyperplane.
3. For m a regular point we define the set of points which are equally distant from two different points of the orbit of m:
Am = ∪_{g≠g′} H(g · m, g′ · m).
Then Am is a null set. For m regular and x ∉ Am, the minimum in the definition of the quotient distance:
dQ([m], [x]) = min_{g∈G} ‖m − g · x‖,    (37)
is reached at a unique g ∈ G; we call g(x, m) this unique element (a small numerical illustration of this registration is given after this list).
4. By expansion of the squared norm: g minimises km − g · xk if and only if
g maximises hm, g · xi.
5. If m is regular and x ∉ Am then:
∀g ∈ G \ {g(x, m)}, ‖m − g(x, m) · x‖ < ‖m − g · x‖;
by continuity of the norm and by the fact that G is a finite group, we can
find α > 0, such that for µ ∈ B(m, α) and y ∈ B(x, α):
∀g ∈ G \ {g(x, m)} kµ − g(x, m) · yk < kµ − g · yk.
(38)
Therefore for such y and µ we have:
g(x, m) = g(y, µ).
6. For m a regular point, we define Cone(m), the convex cone of Rⁿ:
Cone(m) = {x ∈ Rⁿ : ∀g ∈ G, ‖x − m‖ ≤ ‖x − g · m‖}    (39)
        = {x ∈ Rⁿ : ∀g ∈ G, ⟨m, x⟩ ≥ ⟨g·m, x⟩}.
This is the intersection of |G| − 1 half-spaces: each half-space is delimited by H(m, g·m) for g ≠ eG (see fig. 1). Cone(m) is the set of points whose projection on [m] is m (where the projection of one point p on [m] is one point g · m which minimises the set {‖p − g · m‖, g ∈ G}).
7. Taking a regular point m allows us to see the quotient. For every point x ∈ Rⁿ we have [x] ∩ Cone(m) ≠ ∅, and card([x] ∩ Cone(m)) ≥ 2 if and only if x ∈ Am. The border of the cone is Cone(m) \ Int(Cone(m)) = Cone(m) ∩ Am (we denote by Int(A) the interior of a part A). Therefore Q = Rⁿ/G can be seen as Cone(m) whose borders have been glued together.
The proof of theorem 3.2 is the consequence of the following lemmas. The
first lemma studies the differentiability of the integrand, and the second allows
us to permute gradient and integral sign. Let us denote by f the integrand of
F:
∀ m, x ∈ M,   f(x, m) = min_{g∈G} ‖m − g · x‖².    (40)
Thus we have: F (m) = E(f (X, m)). The min of differentiable functions is not
necessarily differentiable, however we prove the following result:
Lemma A.1. Let m0 be a regular point. If x ∉ A_{m0} then m ↦ f(x, m) is differentiable at m0; besides we have:
∂f/∂m (x, m0) = 2(m0 − g(x, m0) · x).    (41)
Proof. If m0 is regular and x ∉ A_{m0} then we know from item 5 of appendix A.1 that g(x, m0) is locally constant. Therefore, around m0 we have:
f(x, m) = ‖m − g(x, m0) · x‖²,
which can be differentiated with respect to m at m0. This proves lemma A.1.
Now we want to prove that we can permute the integral and the gradient
sign. The following lemma provides us a sufficient condition to permute integral
and differentiation signs thanks to the dominated convergence theorem:
Lemma A.2. For every m0 ∈ M we have the existence of an integrable function
Φ : M → R+ such that:
∀m ∈ B(m0 , 1), ∀x ∈ M
|f (x, m0 ) − f (x, m)| ≤ km − m0 kΦ(x).
(42)
Proof. For all g ∈ G, m ∈ M we have:
‖g·x − m0‖² − ‖g·x − m‖² = ⟨m − m0, 2g·x − (m0 + m)⟩ ≤ ‖m − m0‖ × (‖m0 + m‖ + ‖2x‖),
min_{g∈G} ‖g·x − m0‖² ≤ ‖m − m0‖ (‖m0 + m‖ + ‖2x‖) + ‖g·x − m‖²,
min_{g∈G} ‖g·x − m0‖² ≤ ‖m − m0‖ (‖m0 + m‖ + ‖2x‖) + min_{g∈G} ‖g·x − m‖²,
min_{g∈G} ‖g·x − m0‖² − min_{g∈G} ‖g·x − m‖² ≤ ‖m − m0‖ (2‖m0‖ + ‖m − m0‖ + ‖2x‖).
By symmetry we also get the same control of f(x, m) − f(x, m0), hence:
|f(x, m0) − f(x, m)| ≤ ‖m0 − m‖ (2‖m0‖ + ‖m − m0‖ + ‖2x‖).    (43)
The function Φ may depend on x and m0, but not on m. That is why we take only m ∈ B(m0, 1) and replace ‖m − m0‖ by 1 in (43), which concludes the proof.
A.2
Proof of theorem 3.1: the gradient is not zero at the
template
To prove it, we suppose that ∇F(t0) = 0, and we take the dot product with t0:
⟨∇F(t0), t0⟩ = 2E(⟨X, t0⟩ − ⟨g(X, t0) · X, t0⟩) = 0.    (44)
Item 4 of appendix A.1, applied to g(X, t0), leads to:
⟨X, t0⟩ − ⟨g(X, t0) · X, t0⟩ ≤ 0 almost surely.
So the expected value of a non-positive random variable is null. Then
⟨X, t0⟩ − ⟨g(X, t0) · X, t0⟩ = 0 almost surely, i.e. ⟨X, t0⟩ = ⟨g(X, t0) · X, t0⟩ almost surely.
Then g = eG maximizes the dot product almost surely. Therefore (as we know that g(X, t0) is unique almost surely, since t0 is regular):
g(X, t0) = eG almost surely,
which is a contradiction with Equation (6).
A.3
Proof of theorem 3.3: upper bound of the consistency
bias
In order to show this Theorem, we use the following lemma:
Lemma A.3. We write X = t0 + ε where E(ε) = 0, and we make the assumption that the noise ε is a subgaussian random variable. This means that there exists c > 0 such that:
∀m ∈ M = Rⁿ,   E(exp(⟨ε, m⟩)) ≤ c exp(s²‖m‖²/2).    (45)
If for m ∈ M we have:
ρ̃ := dQ([m], [t0]) ≥ s√(2 log(c|G|)),    (46)
then we have:
ρ̃² − ρ̃ s√(8 log(c|G|)) ≤ F(m) − E(‖ε‖²).    (47)
Proof. (of lemma A.3) First we expand the right member of the inequality (47):
E(‖ε‖²) − F(m) = E( max_{g∈G} (‖X − t0‖² − ‖X − g·m‖²) ).
We use the formula ‖A‖² − ‖A + B‖² = −2⟨A, B⟩ − ‖B‖² with A = X − t0 and B = t0 − g·m:
E(‖ε‖²) − F(m) = E( max_{g∈G} ( −2⟨X − t0, t0 − g·m⟩ − ‖t0 − g·m‖² ) ) = E(max_{g∈G} η_g),    (48)
with η_g = −‖t0 − g·m‖² + 2⟨ε, g·m − t0⟩. Our goal is to find a lower bound of F(m) − E(‖ε‖²); that is why we search for an upper bound of E(max_{g∈G} η_g), using Jensen's inequality. We take x > 0 and we get, by using the assumption (45):
exp(x E(max_{g∈G} η_g)) ≤ E(exp(max_{g∈G} x η_g)) ≤ E( Σ_{g∈G} exp(x η_g) )
    ≤ Σ_{g∈G} exp(−x‖t0 − g·m‖²) E(exp(⟨ε, 2x(g·m − t0)⟩))
    ≤ c Σ_{g∈G} exp(−x‖t0 − g·m‖²) exp(2s²x²‖g·m − t0‖²)
    ≤ c Σ_{g∈G} exp(‖g·m − t0‖²(−x + 2x²s²)).    (49)
Now if (−x + 2s²x²) < 0, we can bound the sum in (49) by the number of summed elements multiplied by its largest term, which is obtained for a g that minimises ‖g·m − t0‖, i.e. for ‖g·m − t0‖ = ρ̃. Moreover (−x + 2x²s²) < 0 ⟺ 0 < x < 1/(2s²). Then we have:
exp(x E(max_{g∈G} η_g)) ≤ c|G| exp(ρ̃²(−x + 2x²s²))   as soon as 0 < x < 1/(2s²).
Then by taking the log:
E(max_{g∈G} η_g) ≤ log(c|G|)/x + (2xs² − 1)ρ̃².    (50)
Now we find the x which optimizes inequality (50). By differentiation, the right member of inequality (50) is minimal for x⋆ = √(log(c|G|)/2)/(s ρ̃), which is a valid choice because x⋆ ∈ (0, 1/(2s²)) by the assumption (46). With the equations (48) and (50) and x⋆ we get the result.
Proof. (of theorem 3.3) We take m⋆ ∈ argmin F, ρ̃ = dQ([m⋆], [t0]), and ε = X − t0. We have F(m⋆) ≤ F(t0) ≤ E(‖ε‖²), hence F(m⋆) − E(‖ε‖²) ≤ 0. If ρ̃ > s√(2 log(|G|)) then we can apply lemma A.3 with c = 1. Thus:
ρ̃² − ρ̃ s√(8 log(|G|)) ≤ F(m⋆) − E(‖ε‖²) ≤ 0,
which yields ρ̃ ≤ s√(8 log(|G|)). If ρ̃ ≤ s√(2 log(|G|)), there is nothing to prove.
Note that the proof of this upper bound does not use the fact that the action
is isometric, therefore this upper bound is true for every finite group action.
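As a crude sanity check of this bound (our own simulation; the dimension, template, noise level and the max–max iteration used to approximate the minimiser m⋆ are all arbitrary or heuristic choices), one can estimate the consistency bias ρ̃ for Gaussian noise and the cyclic-shift group, and compare it to s√(8 log |G|).

```python
import numpy as np

rng = np.random.default_rng(2)
n, s, n_samples = 8, 1.0, 20000
t0 = np.zeros(n); t0[0] = 3.0                      # a regular template (not a fixed point)
X = t0 + s * rng.standard_normal((n_samples, n))   # isotropic Gaussian noise (subgaussian, c = 1)

# G = Z/nZ acting by cyclic shifts; precompute all registered versions of every sample.
orbits = np.stack([np.roll(X, k, axis=1) for k in range(n)], axis=1)   # (n_samples, |G|, n)

# "Max-max" iterations: register every sample to the current estimate, then average.
# The fixed point is a heuristic approximation of a minimiser m* of the quotient variance F.
m = X[0].copy()
for _ in range(100):
    best = orbits[np.arange(n_samples), (orbits @ m).argmax(axis=1)]
    m = best.mean(axis=0)

rho = min(np.linalg.norm(t0 - np.roll(m, k)) for k in range(n))   # d_Q([m*],[t0])
bound = s * np.sqrt(8 * np.log(n))                                # theorem 3.3 with c = 1, |G| = n
print("consistency bias estimate:", rho, " upper bound:", bound)
```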
A.4
Proof of proposition 3.2: inconsistency in R2 for the
action of translation
Proof. We suppose that E(X) ∈ HPA ∪ L. In this setting we call τ(x, m) (instead of g(x, m)) an element of the group G = T which minimises ‖τ · x − m‖, see (37). The variance in the quotient space at the point m is:
F(m) = E( min_{τ∈Z/2Z} ‖τ · X − m‖² ) = E(‖τ(X, m) · X − m‖²).
As we want to minimize F and F(1 · m) = F(m), we can suppose that m ∈ HPA ∪ L. We can write explicitly the value of τ(x, m) for x ∈ M:
• If x ∈ HPA ∪ L we can set τ (x, m) = 0 (because in this case x, m are on
the same half plane delimited by L the perpendicular bisector of m and
−m).
• If x ∈ HPB then we can set τ (x, m) = 1 (because in this case x, m are
not on the same half plane delimited by L the perpendicular bisector of
m and −m).
This allows us to write the variance at a point m ∈ HPA:
F(m) = E(‖X − m‖² 1_{X∈HPA∪L}) + E(‖1 · X − m‖² 1_{X∈HPB}).
Then we define the random variable Z by Z = X 1_{X∈HPA∪L} + 1 · X 1_{X∈HPB}, such that for m ∈ HPA we have F(m) = E(‖Z − m‖²) and F(m) = F(1 · m).
Thus if m? is a global minimiser of F , then m? = E(Z) or m? = 1 · E(Z). So the
Fréchet mean of [X] is [E(Z)]. Here instead of using theorem 3.1, we can work
explicitly: Indeed there is no inconsistency if and only if E(Z) = E(X), (E(Z) =
1 · E(X) would be another possibility, but by assumption E(Z), E(X) ∈ HPA ),
by writing X = X1X∈HPA + X1X∈HPB ∪L , we have:
E(Z) = E(X) ⇐⇒ E(1 · X1X∈HPB ∪L ) = E(X1X∈HPB ∪L )
⇐⇒ 1 · E(X1X∈HPB ∪L ) = E(X1X∈HPB ∪L )
⇐⇒ E(X1X∈HPB ∪L ) ∈ L
⇐⇒ P(X ∈ HPB ) = 0,
Therefore there is an inconsistency if and only if P(X ∈ HPB) > 0 (we recall that we made the assumption that E(X) ∈ HPA ∪ L). If E(X) is regular (i.e. E(X) ∉ L), then there is an inconsistency if and only if X takes values in HPB (this is exactly the condition of theorem 3.1, but in this particular case it is a necessary and sufficient condition). This proves point 1. Now we make the assumption that X follows a Gaussian noise in order to compute E(Z) (note that we could take another noise, as long as we are able to compute E(Z)). For that
we convert to polar coordinates: (u, v)ᵀ = E(X) + (r cos θ, r sin θ)ᵀ where r > 0 and θ ∈ [0, 2π]. We also define d = dist(E(X), L); E(X) is a regular point if
and only if d > 0. We still suppose that E(X) = (α, β)ᵀ ∈ HPA ∪ L. First we parametrise, as a function of (r, θ), the points which are in HPB:
v < u ⟺ β + r sin θ < α + r cos θ ⟺ (β − α)/r < √2 cos(θ + π/4)
      ⟺ d/r < cos(θ + π/4)
      ⟺ θ ∈ [−π/4 − arccos(d/r), −π/4 + arccos(d/r)] and d < r.
Then we compute E(Z):
E(Z) = E(X 1_{X∈HPA}) + E(1 · X 1_{X∈HPB})
     = ∫₀^d ∫₀^{2π} (α + r cos θ, β + r sin θ)ᵀ · exp(−r²/(2s²))/(2πs²) · r dθ dr
     + ∫_d^{+∞} ∫_{arccos(d/r) − π/4}^{2π − π/4 − arccos(d/r)} (α + r cos θ, β + r sin θ)ᵀ · exp(−r²/(2s²))/(2πs²) · r dθ dr
     + ∫_d^{+∞} ∫_{−π/4 − arccos(d/r)}^{−π/4 + arccos(d/r)} (β + r sin θ, α + r cos θ)ᵀ · exp(−r²/(2s²))/(2πs²) · r dθ dr
     = E(X) + ∫_d^{+∞} (√2 r exp(−r²/(2s²)) / (πs²)) (√(r² − d²) − d arccos(d/r)) dr × (−1, 1)ᵀ.
We compute ρ̃ = dQ([E(X)], [E(Z)]) where dQ is the distance in the quotient space defined in (1). As we know that E(X) and E(Z) are in the same half-plane delimited by L, we have ρ̃ = dQ([E(Z)], [E(X)]) = ‖E(Z) − E(X)‖. This proves eq. (9); note that items 2a to 2c are direct consequences of eq. (9) and basic analysis.
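The bias in this two-dimensional example is simple enough to check by direct simulation. The sketch below is our own illustrative check (the values of α, β, s and the sample size are arbitrary): it draws Gaussian samples around E(X) = (α, β) with β > α, folds the samples of HP_B onto HP_A by swapping coordinates, and compares E(Z) to E(X); the gap is along the direction (−1, 1).

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta, s, n_samples = 0.0, 1.0, 1.0, 2_000_000   # E(X) = (alpha, beta) with beta > alpha
X = np.array([alpha, beta]) + s * rng.standard_normal((n_samples, 2))

# The non-trivial element of G = Z/2Z swaps the two coordinates; HP_B = {v < u}.
swapped = X[:, ::-1]
in_B = X[:, 1] < X[:, 0]
Z = np.where(in_B[:, None], swapped, X)                # fold HP_B onto HP_A

bias_vector = Z.mean(axis=0) - np.array([alpha, beta])
print("E(Z) - E(X) ≈", bias_vector)                    # nonzero, proportional to (-1, 1): inconsistency
print("consistency bias ≈", np.linalg.norm(bias_vector))
```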
B
Proof of lemma 5.1: differentiation of the variance in the top space
Proof. By the triangle inequality it is easy to show that E is finite and continuous everywhere. Moreover, it is a well-known fact that x ↦ dM(x, z)² is differentiable at any m ∈ M \ C(z) (i.e. z ∉ C(m)) with derivative −2 log_m(z). Now since:
|dM(x, z)² − dM(y, z)²| = |dM(x, z) − dM(y, z)| · |dM(x, z) + dM(y, z)| ≤ dM(x, y)(2 dM(x, z) + dM(y, x)),
we get, in a local chart φ : U → V ⊂ Rⁿ at t = φ(m), that locally around t:
h ↦ dM(φ⁻¹(t), φ⁻¹(t + h))
is smooth and |dM (φ−1 (t), φ−1 (t+h))| ≤ C|h| for a C > 0. Hence for sufficiently
small h, |dM (φ−1 (t), z)2 − dM (φ−1 (t + h), z)2 | ≤ C|h|(2dM (m, z) + 1). We get
the result from Lebesgue's dominated convergence theorem, with E(dM(m, X)) ≤ E(dM(m, X)² + 1) < +∞.
References
[AAT07]
Stéphanie Allassonnière, Yali Amit, and Alain Trouvé. Towards a
coherent statistical framework for dense deformable template estimation. Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 69(1):3–29, 2007.
[ADP15]
Stéphanie Allassonnière, Loïc Devilliers, and Xavier Pennec. Estimating the template in the total space with the fréchet mean on
quotient spaces may have a bias: a case study on vector spaces quotiented by the group of translations. In Mathematical Foundations
of Computational Anatomy (MFCA’15), 2015.
[AKT10]
Stéphanie Allassonnière, Estelle Kuhn, and Alain Trouvé. Construction of Bayesian deformable models via a stochastic approximation
algorithm: a convergence study. Bernoulli, 16(3):641–678, 2010.
[BB08]
Abhishek Bhattacharya and Rabi Bhattacharya. Statistics on Riemannian manifolds: asymptotic distribution and curvature. Proceedings of the American Mathematical Society, 136(8):2959–2967,
2008.
[BC11]
Jérémie Bigot and Benjamin Charlier. On the consistency of Fréchet
means in deformable models for curve and image analysis. Electronic
Journal of Statistics, 5:1054–1089, 2011.
[BG14]
Dominique Bontemps and Sébastien Gadat. Bayesian methods
for the shape invariant model. Electronic Journal of Statistics,
8(1):1522–1568, 2014.
[CWS16]
Jason Cleveland, Wei Wu, and Anuj Srivastava. Norm-preserving
constraint in the fisher–rao registration and its application in signal estimation. Journal of Nonparametric Statistics, 28(2):338–359,
2016.
[DPC+ 14] Stanley Durrleman, Marcel Prastawa, Nicolas Charon, Julie R Korenberg, Sarang Joshi, Guido Gerig, and Alain Trouvé. Morphometry of anatomical shape complexes with dense deformations and
sparse parameters. NeuroImage, 101:35–49, 2014.
[Fré48]
Maurice Fréchet. Les elements aléatoires de nature quelconque dans
un espace distancié. In Annales de l’institut Henri Poincaré, volume 10, pages 215–310, 1948.
[GM98]
Ulf Grenander and Michael I. Miller. Computational anatomy: An
emerging discipline. Q. Appl. Math., LVI(4):617–694, December
1998.
[HCG+ 13] Sebastian Hitziger, Maureen Clerc, Alexandre Gramfort, Sandrine
Saillet, Christian Bénar, and Théodore Papadopoulo. Jitter-adaptive
dictionary learning-application to multi-trial neuroelectric signals.
arXiv preprint arXiv:1301.3611, 2013.
[JDJG04] Sarang Joshi, Brad Davis, Mathieu Jomier, and Guido Gerig. Unbiased diffeomorphic atlas construction for computational anatomy.
Neuroimage, 23:S151–S160, 2004.
[Kar77]
Hermann Karcher. Riemannian center of mass and mollifier smoothing. Communications on pure and applied mathematics, 30(5):509–
541, 1977.
[Ken89]
David G Kendall. A survey of the statistical theory of shape. Statistical Science, pages 87–99, 1989.
[Ken90]
Wilfrid S Kendall. Probability, convexity, and harmonic maps with
small image i: uniqueness and fine existence. Proceedings of the
London Mathematical Society, 3(2):371–406, 1990.
[KSW11]
Sebastian A. Kurtek, Anuj Srivastava, and Wei Wu. Signal estimation under random time-warpings and nonlinear signal alignment. In J. Shawe-Taylor, R.S. Zemel, P.L. Bartlett, F. Pereira, and
K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 675–683. Curran Associates, Inc., 2011.
[LAJ+ 12] Sidonie Lefebvre, Stéphanie Allassonnière, Jérémie Jakubowicz,
Thomas Lasne, and Eric Moulines. Aircraft classification with a
low resolution infrared sensor. Machine Vision and Applications,
24(1):175–186, 2012.
[MHP16]
Nina Miolane, Susan Holmes, and Xavier Pennec. Template
shape estimation: correcting an asymptotic bias. arXiv preprint
arXiv:1610.01502, 2016.
[MP15]
Nina Miolane and Xavier Pennec. Biased estimators on quotient
spaces. In Geometric Science of Information. Second International
Conference, GSI 2015, Palaiseau, France, October 28-30, 2015, Proceedings, volume 9389. Springer, 2015.
[Pen06]
Xavier Pennec. Intrinsic statistics on Riemannian manifolds: Basic
tools for geometric measurements. Journal of Mathematical Imaging
and Vision, 25(1):127–154, 2006.
[SBG08]
Mert Sabuncu, Serdar K. Balci, and Polina Golland. Discovering
modes of an image population through mixture modeling. Proceeding
of the MICCAI conference, LNCS(5242):381–389, 2008.
[Zie77]
Herbert Ziezold. On expected figures and a strong law of large numbers for random elements in quasi-metric spaces. In Transactions of
the Seventh Prague Conference on Information Theory, Statistical
Decision Functions, Random Processes and of the 1974 European
Meeting of Statisticians, pages 591–602. Springer, 1977.
[ZSF13]
Miaomiao Zhang, Nikhil Singh, and P.Thomas Fletcher. Bayesian
estimation of regularization and atlas building in diffeomorphic image registration. In JamesC. Gee, Sarang Joshi, KilianM. Pohl,
WilliamM. Wells, and Lilla Zöllei, editors, Information Processing
in Medical Imaging, volume 7917 of Lecture Notes in Computer Science, pages 37–48. Springer Berlin Heidelberg, 2013.
| 10 |
Tiny SSD: A Tiny Single-shot Detection Deep Convolutional Neural Network for
Real-time Embedded Object Detection
arXiv:1802.06488v1 [] 19 Feb 2018
Alexander Wong, Mohammad Javad Shafiee, Francis Li, Brendan Chwyl
Dept. of Systems Design Engineering
University of Waterloo, DarwinAI
{a28wong, mjshafiee}@uwaterloo.ca, {francis, brendan}@darwinai.ca
Abstract—Object detection is a major challenge in computer
vision, involving both object classification and object localization within a scene. While deep neural networks have been
shown in recent years to yield very powerful techniques for
tackling the challenge of object detection, one of the biggest
challenges with enabling such object detection networks for
widespread deployment on embedded devices is high computational and memory requirements. Recently, there has been
an increasing focus in exploring small deep neural network
architectures for object detection that are more suitable for embedded devices, such as Tiny YOLO and SqueezeDet. Inspired
by the efficiency of the Fire microarchitecture introduced in
SqueezeNet and the object detection performance of the singleshot detection macroarchitecture introduced in SSD, this paper
introduces Tiny SSD, a single-shot detection deep convolutional
neural network for real-time embedded object detection that
is composed of a highly optimized, non-uniform Fire subnetwork stack and a non-uniform sub-network stack of highly
optimized SSD-based auxiliary convolutional feature layers
designed specifically to minimize model size while maintaining
object detection performance. The resulting Tiny SSD possess
a model size of 2.3MB (∼26X smaller than Tiny YOLO) while
still achieving an mAP of 61.3% on VOC 2007 (∼4.2% higher
than Tiny YOLO). These experimental results show that very
small deep neural network architectures can be designed for
real-time object detection that are well-suited for embedded
scenarios.
Keywords-object detection; deep neural network; embedded;
real-time; single-shot
I. I NTRODUCTION
Object detection can be considered a major challenge in
computer vision, as it involves a combination of object classification and object localization within a scene (see Figure 1).
The advent of modern advances in deep learning [7], [6]
has led to significant advances in object detection, with the
majority of research focuses on designing increasingly more
complex object detection networks for improved accuracy
such as SSD [9], R-CNN [1], Mask R-CNN [2], and other
extended variants of these networks [4], [8], [15]. Despite the
fact that such object detection networks have shown state-of-the-art object detection accuracies beyond what can be
achieved by previous state-of-the-art methods, such networks
are often intractable for use for embedded applications due
to computational and memory constraints. In fact, even faster
variants of these networks such as Faster R-CNN [13] are only
Figure 1. Tiny SSD results on the VOC test set. The bounding boxes,
categories, and confidences are shown.
capable of single-digit frame rates on a high-end graphics
processing unit (GPU). As such, more efficient deep neural
networks for real-time embedded object detection is highly
desired given the large number of operational scenarios that
such networks would enable, ranging from smartphones to
aerial drones.
Recently, there has been an increasing focus in exploring
small deep neural network architectures for object detection
that are more suitable for embedded devices. For example,
Redmon et al. introduced YOLO [11] and YOLOv2 [12],
which were designed with speed in mind and were able to
achieve real-time object detection performance on a high-end
Nvidia Titan X desktop GPU. However, the model size of
YOLO and YOLOv2 remains very large in size (753 MB and
193 MB, respectively), making them too large from a memory
perspective for most embedded devices. Furthermore, their
object detection speed drops considerably when running on
embedded chips [14]. To address this issue, Tiny YOLO [10]
was introduced where the network architecture was reduced
considerably to greatly reduce model size (60.5 MB) as well
as greatly reduce the number of floating point operations
required (just 6.97 billion operations) at a cost of object
detection accuracy (57.1% on the twenty-category VOC 2007
test set). Similarly, Wu et al. introduced SqueezeDet [16], a
fully convolutional neural network that leveraged the efficient
Fire microarchitecture introduced in SqueezeNet [5] within
an end-to-end object detection network architecture. Given
that the Fire microarchitecture is highly efficient, the resulting
SqueezeDet had a reduced model size specifically for the
purpose of autonomous driving. However, SqueezeDet has
only been demonstrated for object detection with a limited number of object categories (only three), and thus its ability to handle a larger number of categories has not been demonstrated.
As such, the design of highly efficient deep neural network
architectures that are well-suited for real-time embedded
object detection while achieving improved object detection
accuracy on a variety of object categories is still a challenge
worth tackling.
In an effort to achieve a fine balance between object
detection accuracy and real-time embedded requirements
(i.e., small model size and real-time embedded inference
speed), we take inspiration from both the incredible efficiency
of the Fire microarchitecture introduced in SqueezeNet [5]
and the powerful object detection performance demonstrated
by the single-shot detection macroarchitecture introduced
in SSD [9]. The resulting network architecture achieved
in this paper is Tiny SSD, a single-shot detection deep
convolutional neural network designed specifically for realtime embedded object detection. Tiny SSD is composed
of a non-uniform highly optimized Fire sub-network stack,
which feeds into a non-uniform sub-network stack of highly
optimized SSD-based auxiliary convolutional feature layers,
designed specifically to minimize model size while retaining
object detection performance.
This paper is organized as follows. Section 2 describes the
highly optimized Fire sub-network stack leveraged in the Tiny
SSD network architecture. Section 3 describes the highly
optimized sub-network stack of SSD-based convolutional
feature layers used in the Tiny SSD network architecture.
Section 4 presents experimental results that evaluate the
efficacy of Tiny SSD for real-time embedded object detection.
Finally, conclusions are drawn in Section 5.
II. O PTIMIZED F IRE S UB - NETWORK S TACK
The overall network architecture of the Tiny SSD network
for real-time embedded object detection is composed of two
main sub-network stacks: i) a non-uniform Fire sub-network
stack, and ii) a non-uniform sub-network stack of highly
optimized SSD-based auxiliary convolutional feature layers,
with the first sub-network stack feeding into the second subnetwork stack. In this section, let us first discuss in detail
the design philosophy behind the first sub-network stack
of the Tiny SSD network architecture: the optimized fire
sub-network stack.
A powerful approach to designing smaller deep neural
network architectures for embedded inference is to take a
more principled approach and leverage architectural design
strategies to achieve more efficient deep neural network
microarchitectures [3], [5]. A very illustrative example of
such a principled approach is the SqueezeNet [5] network architecture, where three key design strategies were leveraged:
1) reduce the number of 3 × 3 filters as much as possible,
2) reduce the number of input channels to 3 × 3 filters where possible, and
3) perform downsampling at a later stage in the network.
Figure 2. An illustration of the Fire microarchitecture. The output of the previous layer is squeezed by a squeeze convolutional layer of 1 × 1 filters, which reduces the number of input channels to 3 × 3 filters. The result of the squeeze convolutional layer is passed into the expand convolutional layer which consists of both 1 × 1 and 3 × 3 filters.
This principled design strategy led to the design of what
the authors referred to as the Fire module, which consists of
a squeeze convolutional layer of 1x1 filters (which realizes
the second design strategy of effectively reduces the number
of input channels to 3 × 3 filters) that feeds into an expand
convolutional layer comprised of both 1 × 1 filters and 3 × 3
filters (which realizes the first design strategy of effectively
reducing the number of 3 × 3 filters). An illustration of the
Fire microarchitecture is shown in Figure 2.
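For readers who want to experiment, a Fire module of this form can be written in a few lines of PyTorch. The sketch below is a generic re-implementation of the microarchitecture described above, not the authors' released code; the example filter counts are taken from the first Fire row of Table I, and the activation choice (ReLU) is our assumption.

```python
import torch
import torch.nn as nn

class Fire(nn.Module):
    """Fire microarchitecture: a 1x1 squeeze conv, then parallel 1x1 and 3x3 expand convs."""
    def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        # Concatenate the two expand branches along the channel dimension.
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# Example: Fire1 of Table I (15 squeeze filters, 49 + 53 expand filters) on a 74x74 map.
fire1 = Fire(in_ch=57, squeeze_ch=15, expand1x1_ch=49, expand3x3_ch=53)
print(fire1(torch.randn(1, 57, 74, 74)).shape)   # torch.Size([1, 102, 74, 74])
```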
Inspired by the elegance and simplicity of the Fire
microarchitecture design, we design the first sub-network
stack of the Tiny SSD network architecture as a standard
convolutional layer followed by a set of highly optimized
Fire modules. One of the key challenges to designing this
sub-network stack is to determine the ideal number of Fire
modules as well as the ideal microarchitecture of each of
the Fire modules to achieve a fine balance between object
detection performance and model size as well as inference
speed. First, it was determined empirically that 10 Fire
modules in the optimized Fire sub-network stack provided
strong object detection performance. In terms of the ideal
microarchitecture, the key design parameters of the Fire
microarchitecture are the number of filters of each size
(1 × 1 or 3 × 3) that form this microarchitecture. In the
SqueezeNet network architecture that first introduced the
Fire microarchitecture [5], the microarchitectures of the Fire
modules are largely uniform, with many of the modules
sharing the same microarchitecture configuration. In an effort
to achieve more optimized Fire microarchitectures on a permodule basis, the number of filters of each size in each Fire
module is optimized to have as few parameters as possible while still maintaining the overall object detection accuracy. As a result, the optimized Fire sub-network stack in the Tiny SSD network architecture is highly non-uniform in nature for an optimal sub-network architecture configuration. Table I shows the overall architecture of the highly optimized Fire sub-network stack in Tiny SSD, and the number of parameters in each layer of the sub-network stack.
Figure 3. An illustration of the network architecture of the second sub-network stack of Tiny SSD. The output of three Fire modules and two auxiliary convolutional feature layers, all with highly optimized microarchitecture configurations, are combined together for object detection.
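A sketch of how such a non-uniform stack can be assembled is given below. It is an illustrative re-implementation built from the Fire module sketched earlier and the first rows of Table I, not the released model; pooling kernel sizes and activations are our assumptions.

```python
import torch.nn as nn

# Illustrative front of the optimized Fire sub-network stack, following Table I.
# The Fire class is the one sketched in the previous snippet.
def tiny_ssd_front():
    return nn.Sequential(
        nn.Conv2d(3, 57, kernel_size=3, stride=2), nn.ReLU(inplace=True),  # Conv1 / s2
        nn.MaxPool2d(kernel_size=3, stride=2),                             # Pool1 / s2
        Fire(57, 15, 49, 53),                                              # Fire1 -> 102 channels
        Fire(102, 15, 54, 52),                                             # Fire2 -> 106 channels
        nn.MaxPool2d(kernel_size=3, stride=2),                             # Pool3 / s2
        Fire(106, 29, 92, 94),                                             # Fire3 -> 186 channels
        Fire(186, 29, 90, 83),                                             # Fire4 -> 173 channels
    )
```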
III. O PTIMIZED S UB - NETWORK S TACK OF SSD- BASED
C ONVOLUTIONAL F EATURE L AYERS
In this section, let us first discuss in detail the design
philosophy behind the second sub-network stack of the Tiny
SSD network architecture: the sub-network stack of highly
optimized SSD-based auxiliary convolutional feature layers.
One of the most widely-used and effective object detection
network macroarchitectures in recent years has been the
single-shot multibox detection (SSD) macroarchitecture [9].
The SSD macroarchitecture augments a base feature extraction network architecture with a set of auxiliary convolutional
feature layers and convolutional predictors. The auxiliary
convolutional feature layers are designed such that they
decrease in size in a progressive manner, thus enabling the
flexibility of detecting objects within a scene across different
scales. Each of the auxiliary convolutional feature layers
can then be leveraged to obtain either: i) a confidence score
for a object category, or ii) a shape offset relative to default
bounding box coordinates [9]. As a result, a number of object
detections can be obtained per object category in this manner
in a powerful, end-to-end single-shot manner.
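The convolutional predictors attached to each source feature map can be sketched as follows. This is a generic SSD-style head written by us for illustration, not the released Tiny SSD code; the channel counts simply follow the pattern visible in Table II (4 offsets per default box, 21 classes for VOC including background), and the number of default boxes per location is an assumption.

```python
import torch
import torch.nn as nn

def multibox_head(in_channels, num_boxes, num_classes=21):
    """One predictor pair for a single source feature map: a localisation conv
    (4 box offsets per default box) and a confidence conv (num_classes scores per box)."""
    loc = nn.Conv2d(in_channels, num_boxes * 4, kernel_size=3, padding=1)
    conf = nn.Conv2d(in_channels, num_boxes * num_classes, kernel_size=3, padding=1)
    return loc, conf

# e.g. a 37x37 source map with 4 default boxes per location, giving 16 and 84 output
# channels as in the Fire5-mbox-loc / Fire5-mbox-conf rows of Table II
# (the input channel count 106 here is an assumed placeholder).
loc, conf = multibox_head(in_channels=106, num_boxes=4)
feat = torch.randn(1, 106, 37, 37)
print(loc(feat).shape, conf(feat).shape)   # (1, 16, 37, 37) and (1, 84, 37, 37)
```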
Inspired by the powerful object detection performance
and multi-scale flexibility of the SSD macroarchitecture [9],
the second sub-network stack of Tiny SSD is comprised of
a set of auxiliary convolutional feature layers and convo-
Table I. The optimized Fire sub-network stack of the Tiny SSD network architecture. The number of filters and input size to each layer are reported for the convolutional layers and Fire modules. Each Fire module is reported in one row for a better representation. "x@S – y@E1 – z@E3" stands for x numbers of 1 × 1 filters in the squeeze convolutional layer, y numbers of 1 × 1 filters and z numbers of 3 × 3 filters in the expand convolutional layer.

Type / Stride | Filter Shapes | Input Size
Conv1 / s2 | 3 × 3 × 57 | 300 × 300
Pool1 / s2 | 3 × 3 | 149 × 149
Fire1 | 15@S – 49@E1 – 53@E3 (Concat1) | 74 × 74
Fire2 | 15@S – 54@E1 – 52@E3 (Concat2) | 74 × 74
Pool3 / s2 | 3 × 3 | 74 × 74
Fire3 | 29@S – 92@E1 – 94@E3 (Concat3) | 37 × 37
Fire4 | 29@S – 90@E1 – 83@E3 (Concat4) | 37 × 37
Pool5 / s2 | 3 × 3 | 37 × 37
Fire5 | 44@S – 166@E1 – 161@E3 (Concat5) | 18 × 18
Fire6 | 45@S – 155@E1 – 146@E3 (Concat6) | 18 × 18
Fire7 | 49@S – 163@E1 – 171@E3 (Concat7) | 18 × 18
Fire8 | 25@S – 29@E1 – 54@E3 (Concat8) | 18 × 18
Pool9 / s2 | 3 × 3 | 18 × 18
Fire9 | 37@S – 45@E1 – 56@E3 (Concat9) | 9 × 9
Pool10 / s2 | 3 × 3 | 9 × 9
Fire10 | 38@S – 41@E1 – 44@E3 (Concat10) | 4 × 4
lutional predictors with highly optimized microarchitecture
configurations (see Figure 3).
As with the Fire microarchitecture, a key challenge to
designing this sub-network stack is to determine the ideal
microarchitecture of each of the auxiliary convolutional
feature layers and convolutional predictors to achieve a fine
balance between object detection performance and model
size as well as inference speed. The key design parameters
of the auxiliary convolutional feature layer microarchitecture
are the number of filters that form this microarchitecture.
As such, similar to the strategy taken for constructing
the highly optimized Fire sub-network stack, the number
of filters in each auxiliary convolutional feature layer is
optimized to minimize the number of parameters while
preserving overall object detection accuracy of the full Tiny
SSD network. As a result, the optimized sub-network stack
of auxiliary convolutional feature layers in the Tiny SSD
network architecture is highly non-uniform in nature for
an optimal sub-network architecture configuration. Table II
shows the overall architecture of the optimized sub-network
stack of the auxiliary convolutional feature layers within the
Tiny SSD network architecture, along with the number of
Table II. The optimized sub-network stack of the auxiliary convolutional feature layers within the Tiny SSD network architecture. The input sizes to each convolutional layer and kernel sizes are reported.

Type / Stride | Filter Shape | Input Size
Conv12-1 / s2 | 3 × 3 × 51 | 4 × 4
Conv12-2 | 3 × 3 × 46 | 4 × 4
Conv13-1 | 3 × 3 × 55 | 2 × 2
Conv13-2 | 3 × 3 × 85 | 2 × 2
Fire5-mbox-loc | 3 × 3 × 16 | 37 × 37
Fire5-mbox-conf | 3 × 3 × 84 | 37 × 37
Fire9-mbox-loc | 3 × 3 × 24 | 18 × 18
Fire9-mbox-conf | 3 × 3 × 126 | 18 × 18
Fire10-mbox-loc | 3 × 3 × 24 | 9 × 9
Fire10-mbox-conf | 3 × 3 × 126 | 9 × 9
Fire11-mbox-loc | 3 × 3 × 24 | 4 × 4
Fire11-mbox-conf | 3 × 3 × 126 | 4 × 4
Conv12-2-mbox-loc | 3 × 3 × 24 | 2 × 2
Conv12-2-mbox-conf | 3 × 3 × 126 | 2 × 2
Conv13-2-mbox-loc | 3 × 3 × 16 | 1 × 1
Conv13-2-mbox-conf | 3 × 3 × 84 | 1 × 1
parameters in each layer.
IV. PARAMETER PRECISION OPTIMIZATION
In this section, let us discuss the parameter precision optimization strategy for Tiny SSD. For embedded scenarios where the computational and memory requirements are more strict, an effective strategy for reducing the computational and memory footprint of deep neural networks is reducing the data precision of the parameters in a deep neural network. In particular, modern CPUs and GPUs have moved towards accelerated mixed precision operations as well as better handling of reduced parameter precision, and thus the ability to take advantage of these factors can yield noticeable improvements for embedded scenarios. For Tiny SSD, the parameters are represented in half precision floating-point, thus leading to further deep neural network model size reductions while having a negligible effect on object detection accuracy.
V. EXPERIMENTAL RESULTS AND DISCUSSION
To study the utility of Tiny SSD for real-time embedded object detection, we examine the model size, object detection accuracies, and computational operations on the VOC2007/2012 datasets. For evaluation purposes, the Tiny YOLO network [10] was used as a baseline reference comparison given its popularity for embedded object detection; it was also demonstrated to possess one of the smallest model sizes in the literature for object detection on the VOC 2007/2012 datasets (only 60.5MB in size and requiring just 6.97 billion operations). The VOC2007/2012 datasets consist of natural images that have been annotated with 20 different types of objects, with illustrative examples shown in Figure 4. The tested deep neural networks were trained using the VOC2007/2012 training datasets, and the mean average precision (mAP) was computed on the VOC2007 test dataset to evaluate the object detection accuracy of the deep neural networks.
A. Training Setup
The proposed Tiny SSD network was trained for 220,000 iterations in the Caffe framework with a training batch size of 24. RMSProp was utilized as the training policy with the base learning rate set to 0.00001 and γ = 0.5.
Table III. Object detection accuracy results of Tiny SSD on the VOC 2007 test set. Tiny YOLO results are provided as a baseline comparison.
Model Name | Model size | mAP (VOC 2007)
Tiny YOLO [10] | 60.5MB | 57.1%
Tiny SSD | 2.3MB | 61.3%
Table IV. Resource usage of Tiny SSD.
Model Name | Total number of Parameters | Total number of MACs
Tiny SSD | 1.13M | 571.09M
B. Discussion
Table III shows the model size and the object detection
accuracy of the proposed Tiny SSD network on the VOC
2007 test dataset, along with the model size and the object
detection accuracy of Tiny YOLO. A number of interesting
observations can be made. First, the resulting Tiny SSD
possesses a model size of 2.3MB, which is ∼26X smaller
than Tiny YOLO. The significantly smaller model size of
Tiny SSD compared to Tiny YOLO illustrates its efficacy
for greatly reducing the memory requirements for leveraging
Tiny SSD for real-time embedded object detection purposes.
Second, it can be observed that the resulting Tiny SSD
was still able to achieve an mAP of 61.3% on the VOC
2007 test dataset, which is ∼4.2% higher than that achieved
using Tiny YOLO. Figure 5 demonstrates several example
object detection results produced by the proposed Tiny SSD
compared to Tiny YOLO. It can be observed that Tiny SSD
has comparable object detection results as Tiny YOLO in
some cases, while in some cases outperforms Tiny YOLO in
assigning more accurate category labels to detected objects.
For example, in the first image case, Tiny SSD is able to
detect the chair in the scene, while Tiny YOLO misses the
chair. In the third image case, Tiny SSD is able to identify
the dog in the scene while Tiny YOLO detects two bounding
boxes around the dog, with one of the bounding boxes
incorrectly labeling it as a cat. This significant improvement
Figure 4.
Example images from the Pascal VOC dataset. The ground-truth bounding boxes and object categories are shown for each image.
in object detection accuracy when compared to Tiny YOLO
illustrates the efficacy of Tiny SSD for providing more
reliable embedded object detection performance. Furthermore,
as seen in Table IV, Tiny SSD requires just 571.09 million
MAC operations to perform inference, making it well-suited
for real-time embedded object detection. These experimental
results show that very small deep neural network architectures
can be designed for real-time object detection that are well-suited for embedded scenarios.
VI. C ONCLUSIONS
In this paper, a single-shot detection deep convolutional
neural network called Tiny SSD is introduced for real-time
embedded object detection. Composed of a highly optimized,
non-uniform Fire sub-network stack and a non-uniform subnetwork stack of highly optimized SSD-based auxiliary
convolutional feature layers designed specifically to minimize
model size while maintaining object detection performance,
Tiny SSD possesses a model size that is ∼26X smaller than
Tiny YOLO, requires just 571.09 million MAC operations,
while still achieving an mAP that is ∼4.2% higher than Tiny YOLO on the VOC 2007 test dataset. These results demonstrate the efficacy of designing very small deep neural
network architectures such as Tiny SSD for real-time object
detection in embedded scenarios.
ACKNOWLEDGMENT
The authors thank Natural Sciences and Engineering Research Council of Canada, Canada Research Chairs Program,
DarwinAI, and Nvidia for hardware support.
R EFERENCES
[1] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra
Malik. Rich feature hierarchies for accurate object detection
and semantic segmentation. In Proceedings of the IEEE
conference on computer vision and pattern recognition, pages
580–587, 2014.
[2] K. He, G. Gkioxari, P. Dollar, and R. Girshick. Mask r-cnn.
ICCV, 2017.
[3] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry
Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv
preprint arXiv:1704.04861, 2017.
[4] Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu,
Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna,
Yang Song, Sergio Guadarrama, et al. Speed/accuracy tradeoffs for modern convolutional object detectors. In IEEE CVPR,
2017.
[5] Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid
Ashraf, William J Dally, and Kurt Keutzer. Squeezenet:
Alexnet-level accuracy with 50x fewer parameters and< 0.5
mb model size. arXiv preprint arXiv:1602.07360, 2016.
[6] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks.
In NIPS, 2012.
[7] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep
learning. Nature, 2015.
[8] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He,
Bharath Hariharan, and Serge Belongie. Feature pyramid
networks for object detection. In CVPR, volume 1, page 4,
2017.
[9] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian
Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg.
SSD: Single shot multibox detector. In European conference
on computer vision, pages 21–37. Springer, 2016.
[10] J. Redmon.
YOLO: Real-time object
https://pjreddie.com/darknet/yolo/, 2016.
detection.
[11] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali
Farhadi. You only look once: Unified, real-time object
detection. In Proceedings of the IEEE conference on computer
vision and pattern recognition, pages 779–788, 2016.
[12] Joseph Redmon and Ali Farhadi. YOLO9000: better, faster,
stronger. arXiv preprint, 1612, 2016.
[13] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun.
Faster R-CNN: Towards real-time object detection with region
proposal networks. In Advances in neural information
processing systems, pages 91–99, 2015.
[14] Mohammad Javad Shafiee, Brendan Chywl, Francis Li, and
Alexander Wong. Fast YOLO: A fast you only look once
system for real-time embedded object detection in video. arXiv
preprint arXiv:1709.05943, 2017.
Figure 5 (columns: Input Image, Tiny YOLO, Tiny SSD). Example object detection results produced by the proposed Tiny SSD compared to Tiny YOLO. It can be observed that Tiny SSD has comparable object detection results as Tiny YOLO in some cases, while in some cases outperforms Tiny YOLO in assigning more accurate category labels to detected objects. This significant improvement in object detection accuracy when compared to Tiny YOLO illustrates the efficacy of Tiny SSD for providing more reliable embedded object detection performance.
[15] Abhinav Shrivastava, Abhinav Gupta, and Ross Girshick.
Training region-based object detectors with online hard
example mining. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, pages 761–769,
2016.
[16] Bichen Wu, Forrest Iandola, Peter H Jin, and Kurt Keutzer.
Squeezedet: Unified, small, low power fully convolutional
neural networks for real-time object detection for autonomous
driving. arXiv preprint arXiv:1612.01051, 2016.
| 1 |
An overview of deep learning based methods for
unsupervised and semi-supervised anomaly
detection in videos
arXiv:1801.03149v2 [] 30 Jan 2018
B Ravi Kiran, Dilip Mathew Thomas, Ranjith Parakkal
Abstract—Videos represent the primary source of information for surveillance applications and are available in large amounts but in
most cases contain little or no annotation for supervised learning. This article reviews the state-of-the-art deep learning based methods
for video anomaly detection and categorizes them based on the type of model and criteria of detection. We also perform simple studies
to understand the different approaches and provide the criteria of evaluation for spatio-temporal anomaly detection.
Index Terms—Unsupervised methods, Anomaly detection, Representation learning, Autoencoders, LSTMs, Generative adversarial
networks, Variational Autoencoders, Predictive models.
F
1
I NTRODUCTION
Unsupervised representation learning has become an important domain with the advent of deep generative models which
include the variational autoencoder (VAE) [1], generative adversarial networks (GANs) [2], Long Short-Term Memory networks (LSTMs) [3], and others. Anomaly detection is
a well-known sub-domain of unsupervised learning in the
machine learning and data mining community. Anomaly detection for images and videos is challenging due to the high-dimensional structure of the images, combined with the non-local temporal variations across frames.
We focus on reviewing, firstly, deep convolutional architectures for features or representations learnt “end-to-end” and, secondly, predictive and generative models specifically for
the task of video anomaly detection. Anomaly detection is
an unsupervised learning task where the goal is to identify
abnormal patterns or motions in data that are by definition
infrequent or rare events. Furthermore, anomalies are rarely
annotated and labeled data rarely available to train a deep
convolutional network to separate normal class from the
anomalous class. This is a fairly complex task since the class of
normal points includes frequently occurring objects and regular foreground movements while the anomalous class include
various types of rare events and unseen objects that could
be summarized as a consistent class. Long streams of videos
containing no anomalies are made available using which one
is required to build a representation for a moving window over
the video stream that estimates the normal behavior class
while detecting anomalous movements and appearance, such
as unusual objects in the scene.
•
•
•
Preprint of an article under review at MDPI Journal of Imaging
Bangalore Ravi Kiran is with Uncanny Vision Solutions, Bangalore
and CRIStAL Lab, UMR 9189, Université Charles de Gaulle, Lille 3.
Email: beedotkiran@gmail.com
Dilip Mathew Thomas and Ranjith Parakkal, are with Uncanny
Vision Solutions, Bangalore. Email: dilip@uncannyvision.com, ranjith@uncannyvision.com
Given a set of training samples containing no anomalies, the goal of anomaly detection is to design or learn a feature representation that captures “normal” motion and spatial appearance patterns. Any deviation from this normal behavior can then be identified, either by measuring the approximation error geometrically in a vector space, by evaluating the posterior probability of a model fitted to the representation vectors of the training samples, or by training a predictive model of the conditional probability of future samples given their past values and measuring the prediction error on test samples, thus accounting for the temporal structure of videos.
1.1
Anomaly detection
Anomaly detection is an unsupervised pattern recognition
task that can be defined under different statistical models.
In this study we will explore models that perform linear
approximations by PCA, non-linear approximation by various
types of autoencoders and finally deep generative models.
Intuitively, a complex system under the action of various
transformations is observed, the normal behavior is described
through a few samples and a statistical model is built using
the said normal behavior samples that is capable of generalizing well on unseen samples. The normal class distribution
D is estimated using the training samples x_i ∈ X_train, by building a representation f_θ : X_train → R which minimizes the model prediction loss, i.e. the reconstruction error summed over all training samples:
θ* = arg min_θ Σ_{x_i ∈ X_train} L_D(θ; x_i) = arg min_θ Σ_{x_i ∈ X_train} ‖f_θ(x_i) − x_i‖².    (1)
Now the deviation of a test sample x_j ∈ X_test under this representation f_θ* is evaluated with the anomaly score a(x_j) = ‖f_θ*(x_j) − x_j‖², which is used as a measure of deviation.
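As a concrete illustration of this criterion (our own minimal sketch, not tied to any particular method reviewed here), the snippet below fits a linear reconstruction model (PCA) to vectorised "normal" frames and scores test frames by their squared reconstruction error; replacing the PCA by a convolutional autoencoder gives the deep variants discussed later. The dimensions and the 0.99 quantile threshold are arbitrary choices.

```python
import numpy as np

def fit_pca(X_train, d=16):
    """Fit a d-dimensional linear reconstruction model f_theta on normal samples."""
    mean = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mean, full_matrices=False)
    return mean, Vt[:d]                                  # top-d principal directions

def anomaly_score(x, mean, components):
    """a(x) = ||f_theta(x) - x||^2, the squared reconstruction error."""
    recon = mean + (x - mean) @ components.T @ components
    return np.sum((recon - x) ** 2, axis=-1)

rng = np.random.default_rng(0)
X_train = rng.standard_normal((500, 64))                 # stand-ins for vectorised normal frames
mean, comps = fit_pca(X_train, d=16)
threshold = np.quantile(anomaly_score(X_train, mean, comps), 0.99)
scores = anomaly_score(rng.standard_normal((10, 64)), mean, comps)
print(scores > threshold)                                # frames flagged as anomalous
```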
For said models, the anomalous points are samples that are
poorly approximated by the estimated model f θ∗ . Detection
is achieved by evaluating a threshold on the anomaly score
a j > Tthresh . The threshold is a parameter of the detection
2
algorithm and the variation of the threshold w.r.t detection
performance is discussed under the Area under ROC section.
For probabilistic models, anomalous points can be defined as
samples that lie in low density or concentration regions of the
domain of an input training distribution P (x|θ ).
Representation learning automates feature extraction for
video data for tasks such as action recognition, action similarity, scene classification, object recognition, semantic video
segmentation [4], human pose estimation, human behavior
recognition and various other tasks. Unsupervised learning
tasks in video include anomaly detection [5], [6], unsupervised
representation learning [7], generative models for video [8],
and video prediction [9].
1.2
Datasets
We now define the video anomaly detection problem setup.
The videos considered come from a surveillance camera where
the background remains static, while the foreground constitutes of moving objects such as pedestrians, traffic and so on.
The anomalous events are the change in appearance and motion patterns that deviate from the normal patterns observed
in the training set. We see a few examples demonstrated in
figure 1 :
Here we list the frequently evaluated datasets, though this
is not exhaustive. The UCSD dataset [5] consists of pedestrian
videos where anomalous time instances correspond to the
appearance of objects like a cyclist, a wheelchair, and a car in
the scene that is usually populated with pedestrians walking
along the roads. People walking in unusual locations are also
considered anomalous. In CUHK Avenue Dataset [10] anomalies correspond to strange actions such as a person throwing
papers or bag, moving in unusual directions, and appearance
of unusual objects like bags and bicycle. In the Subway
entry and exit datasets people moving in the wrong direction,
loitering and so on are considered as anomalies. UMN dataset
[11] consists of videos showing unusual crowd activity, and
is a particular case of the video anomaly detection problem.
The Train dataset [12] contains moving people in a train. The
anomalous events are mainly due to unusual movements of
people in the train. And finally the Queen Mary University
of London U-turn dataset [13] contains normal traffic with
anomalous events such as jaywalking and movement of a
fire engine. More recently, a controlled environment based
LV dataset has been introduced by [14], with challenging
examples for the task of online video anomaly detection.
2 REPRESENTATION LEARNING FOR VIDEO ANOMALY DETECTION (VAD)
Videos are high dimensional signals with both spatialstructure, as well as local temporal variations. An important
problem of anomaly detection in videos is to learn a representation of input sample space f θ : X → Rd , to d -dimensional
vectors. The idea of feature learning is to automate the process
of finding a good representation of the input space, that takes
into account important prior information about the problem
[15]. This follows from the No-Free-Lunch-Theorem which
states that no universal learner exists for every training
distribution D . Following work already established for video
anomaly detection, the task concretely consists in detecting
deviations from models of static background, normal crowd
appearance and motion from optical flow, change in trajectory
and other priors. Representation learning consists of building
a parameterized model f θ : X → Z → X , and in this study we
focus on representations that reconstruct the input, while the
latent space Z is constrained to be invariant to changes in the
input, such as change in luminance, translations of objects in
the scene that don’t deviate normal movement patterns, and
others. This provides a way to introduce prior information to
reconstruct normal samples.
2.1
Taxonomy
The goal of this survey is to provide a compact review of the
state of the art in video anomaly detection based on unsupervised and semi-supervised deep learning architectures. The
survey characterizes the underlying video representation or
model as one of the following:
1) Representation learning for reconstruction: Methods such as Principal Component Analysis (PCA) and Autoencoders (AEs) are used to represent the different linear and non-linear transformations of the appearance (image) or motion (flow) that model the normal behavior in surveillance videos. Anomalies represent any deviations that are poorly reconstructed.
2) Predictive modeling: Video frames are viewed as temporal patterns or time series, and the goal is to model the conditional distribution P(x_t | x_{t−1}, x_{t−2}, ..., x_{t−p}). In contrast to reconstruction, where the goal is to learn a generative model that can successfully reconstruct frames of a video, the goal here is to predict the current frame or its encoded representation using the past frames. Examples include autoregressive models and convolutional Long Short-Term Memory models.
3) Generative models: Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs) and Adversarially trained AutoEncoders (AAEs) are used for the purpose of modeling the likelihood of normal video samples in an end-to-end deep learning framework.
An important common aspect in all these models is the
problem of representation learning, which refers to the feature
extraction or transformation of input training data for the
task of anomaly detection. We shall also remark on the other
secondary feature transformations performed in each of these
different models and their purposes.
2.2 Context of the review
A short review on the subject of video anomaly detection
is provided in [16]. To the best of our knowledge, there
has not been a systematic study of deep architectures for
video anomaly detection, which is characterized by abnormal
appearance and motion features, that occur rarely. We cite
below the other domains which do not fall under this study.
• A detailed review of abnormal human behavior and
crowd motion analysis is provided in [17] and [18]. This
includes deep architectures such as Social-LSTM [19]
based on the social force model [20] where the goal is
to predict pedestrian motion taking into account the
movement of neighboring pedestrians.
Fig. 1. UCSD dataset (top two rows; columns show training samples, test samples, and the anomaly mask): Unlike normal training video streams, anomalies consist of a person on a bicycle and a skating board; the ground truth detection is shown in the anomaly mask. Avenue dataset (bottom row): Unlike normal training video streams, anomalies consist of a person throwing papers. Other examples include walking with an abnormal object (bicycle), a person running (strange action), and a person walking in the wrong direction.

• Action recognition is an important domain in computer
vision which requires supervised learning of efficient
representations of appearance and motion for the purpose of classification [21]. Convolutional networks were
employed to classify various actions in video quite
early [22]. Recent work involves fusing feature maps
evaluated on different frames (over time) of video [23]
yielding state of the art results. Finally, convolutional
networks using 3-D filters (C3D) have become a recent
base-line for action recognition [24].
• Unsupervised representation learning is a well-established domain and readers are directed towards a complete review of the topic in [25], as well as
the deep learning book [26].
In this review, we shall mainly focus on the taxonomy provided and restrict our review to deep convolutional networks
and deep generative models that enable end-to-end spatiotemporal representation learning for the task of anomaly
detection in videos. We also aim to provide an understanding
of which aspects of detection these different models target.
Apart from the taxonomy being addressed in this study, there
have been many other approaches. One could cite work on
anomaly detection based on K-Nearest Neighbors [27], unsupervised clustering [28], and object speed and size [29].
We briefly review the set of hand-engineered features used
for the task of video anomaly detection, though our focus still
remains deep learning based architectures. The mixture of dynamic textures (MDT) is a generative mixture model defined for each spatio-temporal window or cube of the raw training video [5], [6]. It models appearance and motion features and thus detects both spatial and temporal anomalies. Histograms of oriented optical flow and oriented gradients [30] are a baseline used in anomaly detection and crowd analysis [31], [32], [33]. Tracklets are representations of movements in videos and have been applied to abnormal crowd motion analysis [34],
[35]. More recently, there has been work on developing optical
flow acceleration features for motion description [36].
Problem setup: Given a training sequence of images from a video, X_train ∈ R^{N_train × r × c}, which contains only “normal motion patterns” and no anomalies, and given a test sequence X_test ∈ R^{N_test × r × c}, which may contain anomalies, the task consists in associating each frame with an anomaly score for the temporal variation, as well as a spatial score to localize the anomaly in space. This is demonstrated in figure 2.
The anomaly detection task is usually considered unsupervised when there is no direct information or labels available about the positive rare class. However, samples with no anomalies are available in this study, and thus it is a semi-supervised learning problem.
For Z = {x_i, y_i}, i ∈ [1, N], we have samples only with y_i = 0. The goal of anomaly detection is thus two-fold: first, find the representation of the input feature, f_θ(x_i), for example using convolutional neural networks (CNNs), and then the decision rule s(f_θ(x_i)) ∈ {0, 1} that detects anomalies, whose detection rate can be parameterized as per the application.
Fig. 2. Visualizing anomalous regions and temporal anomaly score.

3 RECONSTRUCTION MODELS
We begin with an input training video X_train ∈ R^{N×d}, with
N frames and d = r × c pixels per frame, which represents
the dimensionality of each vector. In this section, we shall
focus on reducing the expected reconstruction error by different methods. We shall describe the Principal Component
Analysis (PCA), Convolutional AutoEncoder (ConvAE), and
Contractive AutoEncoders (CtractAE), and their setup for
dimensionality reduction and reconstruction.
3.1 Principal Component Analysis
PCA finds the directions of maximal variance in the training
data. In the case of videos, we are aiming to model the spatial
correlation between pixel values which are components of the
vector representing a frame at a particular time instant.
With input training matrix X , which has zero mean, we
are looking for a set of orthogonal projections that whiten/decorrelate the features in the training set :
min_{W^T W = I} ‖X − (XW)W^T‖²_F = ‖X − X̂‖²_F        (2)
where W ∈ R^{d×k}, with the constraint W^T W = I representing an orthonormal reconstruction of the input X. The projection
X W is a vector in a lower dimensional subspace, with fewer
components than the vectors from X . This reduction in dimensionality is used to capture the anomalous behavior, as
samples that are not well reconstructed. The anomaly score is
given by the Mahalanobis distance between the input and the
reconstruction, or the variance scaled reconstruction error :
A = (X − X̂) Σ^{−1} (X − X̂)^T        (3)
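As a rough illustration of equations (2) and (3), the following numpy sketch fits the top-k principal components and scores frames by a variance-scaled reconstruction error; for tractability it assumes a diagonal approximation of Σ estimated from the training residuals.

```python
import numpy as np

def fit_pca(X_train, k=150):
    # X_train: (N, d) matrix whose rows are flattened frames or optical-flow
    # magnitude maps. Returns the mean and the top-k projection W (d, k).
    mean = X_train.mean(axis=0)
    Xc = X_train - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return mean, Vt[:k].T

def pca_anomaly_score(X, mean, W, residual_var):
    # Eq. (2): X_hat = (X W) W^T ; eq. (3) with a diagonal approximation of
    # Sigma (per-pixel variance of the training residuals).
    Xc = X - mean
    X_hat = (Xc @ W) @ W.T
    R = Xc - X_hat
    return np.sum(R * R / residual_var, axis=1)

# residual_var is estimated on the training set, e.g.:
# R_train = (X_train - mean) - ((X_train - mean) @ W) @ W.T
# residual_var = R_train.var(axis=0) + 1e-6
```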
We associate each frame with its optical flow magnitude, learn atomic motion patterns with standard PCA on the training set, and evaluate the reconstruction error on the test optical flow magnitude. This serves as a baseline for our study. A refined version was implemented and evaluated in [37], which evaluated the atomic movement patterns using probabilistic PCA [38] over rectangular regions of the image domain. Optical flow estimation is a costly step of this algorithm, and there has been large progress in improving its evaluation speed. The authors of [39] propose to trade off accuracy for a fast approximation of optical flow, using PCA to interpolate flow fields.

Fig. 3. Autoencoder with single hidden layer.
3.2 Autoencoders
An Autoencoder is a neural network trained by backpropagation and provides an alternative to PCA to perform
dimensionality reduction by reducing the reconstruction error
on the training set, shown in figure 3. It takes an input x ∈ Rd
and maps it to the latent space representation z ∈ Rk , by a
deterministic application, z = σ(W x + b).
Unlike the PCA the autoencoder (AE) performs a nonlinear point-wise transform of the input σ : R → R, which
is required to be a differentiable function. It is usually a
rectified linear unit (ReLU) (σ( x) = max(0, x)) or Sigmoid
(σ( x) = (1 + e− x )−1 ). Thus we can write a similar reconstruction
of the input matrix given by :
min_{U,V} ‖X − σ(XU)V‖²_F        (4)
The low-dimensional representation is given by σ( XU ∗ ),
where U ∗ represents the optimal linear encoding that minimizes the reconstruction loss above. There are multiple ways
of regularizing the parameters U, V. One option is to constrain the average value of the activations in the hidden layer, which enforces sparsity.
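A minimal Keras sketch of the single-hidden-layer autoencoder of equation (4) and figure 3 is given below; the layer sizes and optimizer are illustrative choices, not the settings of any particular paper reviewed here.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

d, k = 2500, 256                                       # input and latent sizes (illustrative)
inputs = keras.Input(shape=(d,))
z = layers.Dense(k, activation="relu")(inputs)         # sigma(X U)
outputs = layers.Dense(d, activation="linear")(z)      # sigma(X U) V
autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")      # reconstruction loss of eq. (4)

# autoencoder.fit(X_train, X_train, epochs=20, batch_size=64)
# scores = np.linalg.norm(X_test - autoencoder.predict(X_test), axis=1)
```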
3.3 Convolutional AutoEncoders (CAEs)
Autoencoders in their original form do view the input as a
signal decomposed as the sum of other signals. Convolutional
AutoEncoders (CAEs) [40] make this decomposition explicit
by weighting the result of the convolution operator. For a
single channel input x (for example a gray-scale image), the
latent representation of the kth filter would be :
h^k = σ(x ∗ W^k + b^k)        (5)
The reconstruction is obtained by mapping back to the
original image domain with the latent maps H and the
decoding convolutional filter W̃^k:

x̂ = σ( Σ_{k∈H} h^k ∗ W̃^k + c )        (6)
where σ : R → R is a point-wise non-linearity like the
sigmoid or hyperbolic tangent function. A single bias value is
broadcast to each component of a latent map. These k-output
maps can be used as an input to the next layer of the CAE.
Several CAEs can be stacked into a deep hierarchy, which we
again refer as a CAE to simplify the naming convention. We
represent the stack of such operations as a single function
f W : Rr× c× p → Rr× c× p where the convolutional weights and
biases are together represented by the weights W .
In retrospect, the PCA, and traditional AE, ignore the
spatial structure and location of pixels in the image. This is
also termed as being permutation invariant. It is important to
note that when working with image frames of few 100 × 100
pixels, these methods introduce large redundancy in network
parameters W , and furthermore span the entire visual receptive field. CAEs have fewer parameters on account of their
weights being shared across many input locations/pixels.
3.4 CAEs for Video anomaly detection
In the recent work by [41], a deep convolutional autoencoder
was trained to reconstruct an input sequence of frames from
a training video set. We call this a Spatio-Temporal Stacked
frame AutoEncoder (STSAE), to avoid confusion with similar
names in the rest of the article. The STSAE in [41] stacks p
frames x i = [ X i , X i−1 , ..., X i− p+1 ] with each time slice treated
as a different channel in the input tensor to a convolutional
autoencoder. The model is regularized by augmenting the loss
function with L2-norm of the model weights :
L(W) = (1/2N) Σ_i ‖x_i − f_W(x_i)‖²₂ + ν‖W‖²₂        (7)
where the tensor x_i ∈ R^{r×c×p} is a cuboid in which r, c are the spatial dimensions and p is the number of frames temporally back into the past, ν is a hyper-parameter which balances the reconstruction error and the norm of the parameters, and N is the mini-batch size. The architecture of
the convolutional autoencoder is reproduced in figure 4.
The image or tensor x̂_i reconstructed by the autoencoder
enforces temporal regularity since the convolutional (weights)
representation along with the bottleneck architecture of the
autoencoder compresses information. The spatio-temporal autoencoder in [42] is shown in the right panel of figure 4. The
reconstruction error map at frame t is given by E t = | X t − X̂ t |,
while the temporal regularity score is given by the inverted,
normalized reconstruction error :
s(t) = 1 − ( Σ_{(x,y)} E_t − min_{(x,y)}(E_t) ) / max_{(x,y)}(E_t)        (8)

where the Σ, min and max operators are taken across the spatial indices (x, y). In other models, the normalized reconstruction
error is directly used as the anomaly score. One could envisage
the use of the Mahalanobis distance here, since the task is to evaluate the distance of test points from the normal ones. This is evaluated as the error between the
original tensor and the reconstruction from the autoencoder.
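The per-frame error map E_t and regularity score can be computed as in the following numpy sketch; note that here the min/max normalization is applied over the frames of the sequence, which is how the score of [41] is commonly normalized, and the spatial normalization written in equation (8) can be substituted if preferred.

```python
import numpy as np

def regularity_scores(frames, reconstructions):
    """frames, reconstructions: arrays of shape (T, H, W).

    Returns a per-frame regularity score: e(t) sums the error map E_t over
    the spatial indices, and the normalization is over the whole sequence."""
    E = np.abs(frames - reconstructions)      # error map E_t per frame
    e = E.sum(axis=(1, 2))                    # summed spatial error e(t)
    return 1.0 - (e - e.min()) / e.max()
```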
Robust versions of Convolutional AutoEncoders (RCAE)
are studied in [43], where the goal is to evaluate anomalies in
images by imposing L2-constraints on parameters W as well
as adding a bias term. A video-patch (spatio-temporal) based
autoencoder was employed by [44] to reconstruct patches, with
a sparse autoencoder whose average activations were set to
parameter ρ , enforcing sparseness, following the work in [45].
3.5 Contractive Autoencoders
Contractive autoencoders explicitly create invariance by adding a penalty on the Jacobian of the latent space representation w.r.t. the input of the autoencoder to the reconstruction loss L(x, r(x)). This forces the latent space representation to remain the same for small changes in the input [46]. Let us
consider the autoencoder with the encoder mapping the input
image to the latent space z = f (x) and the decoder mapping
back to the input image space r (x) = g( f (x)). The regularized
loss function is :
L(W) = E_{x∼X_train}[ L(x, r(x)) + λ ‖∂f(x)/∂x‖ ]        (9)
Authors in [47] describe what regularized autoencoders learn about the data-generating density function, and show for contractive and denoising encoders that this corresponds to the direction in which the density increases the most. Regularization forces the autoencoders to become less sensitive to
input variation, though enforcing minimal reconstruction error keeps it sensitive to variations along the manifold having
high density. Contractive autoencoders capture variations on
the manifold, while mostly ignoring variations orthogonal to
it. Contractive autoencoder estimates the tangent plane of the
data manifold [26].
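For a single-layer sigmoid encoder, the penalty of equation (9) has a simple closed form, sketched below in numpy; the hyper-parameter λ and the use of a squared Frobenius norm of the Jacobian are illustrative assumptions.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def contractive_loss(x, W, b, W_dec, c, lam=1e-3):
    """Reconstruction + contractive penalty for one sample x of shape (d,).

    Encoder h = sigmoid(W x + b), decoder r = W_dec h + c. For a sigmoid
    encoder, ||dh/dx||_F^2 = sum_j (h_j (1 - h_j))**2 * sum_i W[j, i]**2."""
    h = sigmoid(W @ x + b)
    r = W_dec @ h + c
    recon = np.sum((x - r) ** 2)
    jac_pen = np.sum((h * (1.0 - h)) ** 2 * np.sum(W ** 2, axis=1))
    return recon + lam * jac_pen
```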
3.6 Other deep models
De-noising AutoEncoders (DAE) and Stacked DAEs (SDAEs)
are well-known robust feature extraction methods in the
domain of unsupervised learning [48], where the reconstruction error minimization criterion is augmented with that of
reconstructing from corrupted inputs. SDAEs are used to
learn representations from a video using both appearance, i.e.
raw values, and motion information, i.e. optical flow between
consecutive frames [49]. Correlations between optical flow and
raw image values are modeled by coupling these two SDAE
pipelines to learn a joint representation.
Deep belief networks (DBNs) are generative models, created by stacking multiple hidden layer units, which are usually trained greedily to perform unsupervised feature learning. They are generative models in the sense that they can reconstruct the original inputs. They have been discriminatively
trained using back-propagation [50] to achieve improved accuracies for supervised learning tasks. The DBNs have been
used to perform a raw image value based representation
learning in [51].
Fig. 4. Spatio-Temporal Stacked frame AutoEncoder (left) : A sequence of 10 frames are being reconstructed by a convolutional autoencoder,
image reproduced from [41]. Convolutional LSTM based autoencoder (right) : A sequence of 10 frames are being reconstructed by a spatiotemporal autoencoder, [42]. The Convolutional LSTM layers are predictive models that model the spatio-temporal correlation of pixels in the video.
This is described in the predictive model section.
An early application of autoencoders to anomaly detection
was performed in [52], on non-visual data. The Replicating
neural network [52], constitutes of a feed-forward multi-layer
perceptron with three hidden layers, trained to map the
training dataset to itself and anomalies correspond to large
reconstruction error over test datasets. This is an autoencoder
setup with a staircase like non-linearity applied at the middle
hidden layer. The activation levels of this hidden units are
thus quantized into N discrete values, 0, 1/(N−1), 2/(N−1), ..., 1. The
step-wise activation function used for the middle hidden
layer divides the continuously distributed data points into a
number of discrete-valued vectors. The staircase non-linearity
quantizes data points into clusters. This approach identifies
cluster labels for each sample, and this often helps interpret
resulting outliers.
A rejection cascade over spatio-temporal cubes was generated to improve the performance speed of Deep-CNN based
video anomaly detection framework by authors in [53].
Videos can be viewed as a special case of spatio-temporal
processes. A direct approach to video anomaly detection can be
estimating the spatio-temporal mean and covariance. A major
issue is estimating the spatio-temporal covariance matrix due
to its large size n2 (where n = N pixels × p frames). In [54],
space-time pixel covariance for crowd videos were represented
as a sum of Kronecker products using only a few Kronecker
factors, Σ_{n×n} ≈ Σ_{i=1}^{r} T_i ⊗ S_i. To evaluate the anomaly score,
the Mahalanobis distance for clips longer than the learned
covariance needs to be evaluated. The inverse of the larger
covariance matrix needs to be inferred from the estimated
one, by block Toeplitz extension [55]. It is to be noted that this
study [54] only evaluates performance on the UMN dataset.
4 PREDICTIVE MODELING
Predictive models aim to model the current output frame X t
as a function of the past p frames [ X t−1 , X t−2 , ..., X t− p+1 ]. This
is well-known in time series analysis under auto-regressive
models, where the function over the past is linear. Recurrent
neural networks (RNN) model this function as a recurrence
relationship, frequently involving a non-linearity such as a
sigmoid function. LSTM is the standard model for sequence
prediction. It learns a gating function over the classical RNN
architecture to prevent the vanishing gradient problem during
backpropagation through time (BPTT) [3]. Recently there
have also been attempts to perform efficient video prediction using feed-forward convolutional networks by minimizing the mean-squared error (MSE) between
predicted and future frames [9]. Similar efforts were performed in [56] using a CNN-LSTM-deCNN framework while
combining MSE and an adversarial loss.
4.1 Composite Model: Reconstruction and prediction
This composite LSTM model in [7], combines an autoencoder
model and predictive LSTM model, see figure 5. Autoencoders
suffer from learning trivial representations of the input by memorization, while memorization is not useful for predicting future
frames. On the other hand, the future predictor’s role requires
memory of temporally past few frames, though this would not
be compatible with the autoencoder loss which is more global.
The composite model was used to extract features from video
data for the tasks of action recognition. The composite LSTM
model is defined using a fully connected LSTM (FC-LSTM)
layer.
4.2 Convolutional LSTM
Convolutional long short-term memory (ConvLSTM) model
[57] is a composite LSTM based encoder-decoder model. FC-LSTM does not take spatial correlation into consideration
and is permutation invariant to pixels, while a ConvLSTM
has convolutional layers instead of fully connected layers,
thus modeling spatio-temporal correlations. The ConvLSTM
as described in equations 10 evaluates future states of cells in
a spatial grid as a function of the inputs and past states of its local neighbors.

Fig. 5. Composite LSTM module [7]. The LSTM model weights W represent fully connected layers.

Fig. 6. A convolutional LSTM architecture for spatio-temporal prediction.
Authors in [57], consider a spatial grid, with each grid cell
containing multiple spatial measurements, which they aim to
forecast for the next K future frames, given J observations in
the past. The spatio-temporal correlations are used as input
to a recurrent model, the convolutional LSTM. The equations
for input, gating and the output are presented below.
i_t = σ(W_{xi} ∗ X_t + W_{hi} ∗ H_{t−1} + W_{ci} ◦ C_{t−1} + b_i)
f_t = σ(W_{xf} ∗ X_t + W_{hf} ∗ H_{t−1} + W_{cf} ◦ C_{t−1} + b_f)
C_t = f_t ◦ C_{t−1} + i_t ◦ tanh(W_{xc} ∗ X_t + W_{hc} ∗ H_{t−1} + b_c)        (10)
o_t = σ(W_{xo} ∗ X_t + W_{ho} ∗ H_{t−1} + W_{co} ◦ C_t + b_o)
H_t = o_t ◦ tanh(C_t)

Here, ∗ refers to the convolution operation while ◦ refers to the Hadamard product, the element-wise product of matrices. The encoding network compresses the input sequence into a hidden state tensor while the forecasting network unfolds the hidden state tensor to make a prediction. The hidden representations can be used to represent moving objects in the scene; a larger transitional kernel captures faster motions compared to smaller kernels [57].
The ConvLSTM model was used as a unit within the composite LSTM model [7] following an encoder-decoder, with a branch for reconstruction and another for prediction. This architecture was applied to video anomaly detection by [58], [59], with promising results.
In [60], a convolutional representation of the input video is used as input to the convolutional LSTM, and a de-convolution reconstructs the ConvLSTM output to the original resolution. The authors call this a ConvLSTM Autoencoder, though fundamentally it is not very different from a ConvLSTM.
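A minimal Keras sketch of a ConvLSTM-based reconstruction model in the spirit of the architectures above is shown below; the clip shape, filter counts and depth are illustrative and do not reproduce the exact architectures of [42] or [60].

```python
from tensorflow import keras
from tensorflow.keras import layers

T, H, W, C = 10, 64, 64, 1          # illustrative clip shape
inp = keras.Input(shape=(T, H, W, C))
x = layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=True)(inp)
x = layers.ConvLSTM2D(16, (3, 3), padding="same", return_sequences=True)(x)   # channel bottleneck
x = layers.ConvLSTM2D(32, (3, 3), padding="same", return_sequences=True)(x)
out = layers.Conv3D(C, (3, 3, 3), activation="sigmoid", padding="same")(x)    # reconstruct the clip
model = keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse")
# model.fit(clips, clips, epochs=10, batch_size=4)   # clips: (N, T, H, W, C) scaled to [0, 1]
```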
4.3 3D-Autoencoder and Predictor
As remarked by authors in [61], while 2D-ConvNets are
appropriate representations learnt for image recognition and
detection tasks, they are incapable of capturing the temporal
information encoded in consecutive frames for video analysis problems. 3-D convolutional architectures are known to
perform well for action recognition [24], and are used in
the form of an autoencoder. Such a 3D autoencoder learns
representations that are invariant to spatio-temporal changes
(movement) encoded by the 3-D convolutional feature maps.
Authors in [61] propose to use a 3D kernel by stacking T frames together as in [41]. The output feature map of each kernel is a 3D tensor including the temporal dimension, and is aimed at summarizing motion information.
The reconstruction branch follows an autoencoder loss:
L_rec(W) = (1/N) Σ_{i=1}^{N} ‖X_i − f^W_rec(X_i)‖²₂        (11)
The prediction branch loss is inversely weighted by the moving window's length and falls off symmetrically w.r.t. the current frame, to reduce the effect of past frames on the predicted frame:
L_pred(W) = (1/N) Σ_{i=1}^{N} (1/T²) Σ_{t=1}^{T} (T − t) ‖X_{i+T} − f^W_pred(X_i)‖²₂        (12)
Thus the final optimization objective minimized is:

L(W) = L_rec(W) + L_pred(W) + λ‖W‖²₂        (13)
Anomalous regions are spatio-temporal blocks that, even when poorly reconstructed by the autoencoder branch, would be well predicted by the prediction branch. The prediction loss was designed to enforce local temporal coherence by tracking spatio-temporal correlation, and not for predicting the appearance of new objects in the relatively long-term future.
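Under one plausible reading of equations (11)–(13), the composite objective can be computed as in the following TensorFlow sketch, assuming two Keras sub-models `model_rec` and `model_pred`; the (T − t)/T² weighting of the prediction term follows equation (12).

```python
import tensorflow as tf

def composite_loss(model_rec, model_pred, clips, future, T, lam=1e-4):
    """clips, future: tensors of shape (N, T, H, W, C).

    L = L_rec + L_pred + lam * ||W||^2, where the error on the t-th predicted
    frame is weighted by (T - t) / T^2."""
    rec = model_rec(clips)
    pred = model_pred(clips)
    l_rec = tf.reduce_mean(tf.reduce_sum(tf.square(clips - rec), axis=[1, 2, 3, 4]))
    w = tf.cast(T - tf.range(1, T + 1), tf.float32) / float(T * T)   # (T,)
    w = tf.reshape(w, (1, T, 1, 1, 1))
    l_pred = tf.reduce_mean(tf.reduce_sum(w * tf.square(future - pred), axis=[1, 2, 3, 4]))
    l2 = tf.add_n([tf.nn.l2_loss(v) for v in
                   model_rec.trainable_variables + model_pred.trainable_variables])
    return l_rec + l_pred + lam * l2
```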
4.4 Slow Feature Analysis (SFA)
Slow feature analysis [62] is an unsupervised representation
learning method which aims at extracting slowly varying
Fig. 7. Spatio-temporal autoencoder architecture from [61] with reconstruction and prediction branches, following the composite model in [7]. The encoder takes an input of size 16 × 1 × 128 × 128 (T, C, X, Y) through a stack of 3D-Conv-BN-LeakyReLU-3DPool blocks, and each branch mirrors it back to the input resolution, ending with a 3D-Conv-Sigmoid layer. Batch Normalization (BN) is applied at each layer, following which a leaky ReLU non-linearity is applied, finally followed by a 3D max-pooling operation.
representations of rapidly varying high dimensional input.
The SFA is based on the slowness principle, which states that
the responses of individual receptors or pixel variations are
highly sensitive to local variations in the environment, and
thus vary much faster, while the higher order internal visual
representations vary on a slow timescale. From a predictive
modeling perspective SFA extracts a representation y( t) of
the high dimensional input x t that maximizes information on
the next time sample x t+1 . Given a high dimensional input
varying over time, [x1 , x2 , ..., xT ], t ∈ [ t 0 , t 1 ] SFA extracts a
representation y = f θ (x) which is a solution to the following
optimization problem [63] :
arg min E_t[ y_{t+1} − y_t ]   subject to   E_t[y_t] = 0,  E_t[y_j²] = 1,  E_t[y_j y_{j'}] = 0        (14)
As seen in the constraints, the representation is enforced to have zero mean to ensure a unique solution and unit covariance to avoid the trivial zero solution. Feature de-correlation removes redundancy across the features.
SFA has been well known in pattern recognition and has
been applied to the problem of activity recognition [64], [65].
Authors in [66] propose an incremental application of SFA that updates slow features online. The batch SFA is calculated using PCA applied twice: the first PCA whitens the inputs, and the second PCA is applied to the derivative of the normalized input to evaluate the slow features. To achieve a
computationally tractable solution, a two-layer localized SFA
architecture is proposed by authors [67] for the task of online
slow feature extraction and consequent anomaly detection.
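A compact numpy sketch of batch SFA computed with two PCA steps, as described above (whiten the input, then diagonalize the covariance of its temporal derivative), is given below; the slowest features correspond to the directions of smallest derivative variance.

```python
import numpy as np

def batch_sfa(X, n_features=5):
    """X: (T, d) time series. Returns the n slowest features, shape (T, n)."""
    Xc = X - X.mean(axis=0)
    # First PCA: whiten the input.
    w, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
    keep = w > 1e-8
    S = V[:, keep] / np.sqrt(w[keep])          # whitening matrix
    Z = Xc @ S                                  # whitened signal (unit covariance)
    # Second PCA: on the temporal derivative of the whitened signal.
    dZ = np.diff(Z, axis=0)
    w2, V2 = np.linalg.eigh(np.cov(dZ, rowvar=False))
    order = np.argsort(w2)[:n_features]         # smallest derivative variance = slowest
    return Z @ V2[:, order]
```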
Other predictive models: A convolutional feature representation was fed into an LSTM model to predict the latent
space representation and its prediction error was used to
evaluate anomalies in a robotics application [68]. A recurrent
autoencoder using an LSTM that models temporal dependence between patches from a sequence of input frames is
used to detect video forgery [69].
5 DEEP GENERATIVE MODELS
5.1 Generative Vs Discriminative
Let us consider a supervised learning setup (X_i, y_i) ∈ R^d × {C_j}_{j=1}^K, where i = 1 : N indexes the samples in the dataset. Generative models estimate the class-conditional distribution P(X|y), which can be difficult if the input
data are high dimensional images or spatio-temporal tensors.
A discriminative model evaluates the class probability P(y|X) directly from the data to classify the samples X into the different classes {C_j}_{j=1}^K.
Deep generative models that can learn via the principle of
maximum likelihood differ with respect to how they represent
or approximate the likelihood. The explicit models are ones
where the density p model (x, θ ) is evaluated explicitly and the
likelihood maximized.
In this section, we will review the stochastic autoencoders;
the variational autoencoder and the adversarial autoencoder,
and their applications to the problem of anomaly detection.
Finally, we review the application of generative adversarial networks to anomaly detection in images and videos.
5.2 Variational Autoencoders (VAEs)
Variational Autoencoders [1] are generative models that approximate the data distribution P(X) of a high dimensional input X, an image or video. Variational approximation of the latent space is achieved using an autoencoder architecture, with a probabilistic encoder q_φ(z|x) that produces a Gaussian distribution in the latent space, and a probabilistic decoder p_θ(x|z), which given a code produces a distribution over the input space. The motivation behind variational methods is to pick a family of distributions over the latent variables, q_φ(z), with its own variational parameters, and estimate these parameters so that the family approaches the true posterior.
The loss function consists of the expected negative reconstruction error together with a KL-divergence regularization term between the latent space distribution (parameterized by a mean vector and a standard deviation vector) and the prior, which together optimize the
variational lower bound on the marginal log-likelihood of each observation.

L(θ, φ, x^(i)) = −D_KL( q_φ(z|x^(i)) ‖ p_θ(z) ) + (1/L) Σ_{l=1}^{L} log p_θ( x^(i) | z^(i,l) ),
where z^(i,l) = g_φ(ε^(i,l), x^(i)), ε^(l) ∼ p(ε), and the total loss sums over the i = 1, ..., N_train training observations.        (15)

The function g_φ maps a sample x^(i) and a noise vector ε^(l) to a sample from the approximate posterior for that data-point, z^(i,l) = g_φ(ε^(l), x^(i)) where z^(i,l) ∼ q_φ(z|x^(i)). To solve this sampling problem, the authors of [1] propose the reparameterization trick. The random variable z ∼ q_φ(z|x) is expressed as a function of a deterministic variable z = g_φ(ε, x), where ε is an auxiliary variable with independent marginal p(ε). This reparameterization rewrites an expectation w.r.t. q_φ(z|x) such that the Monte Carlo estimate of the expectation is differentiable w.r.t. φ. A valid reparameterization is the unit-Gaussian case z^(i,l) = µ^(i) + σ^(i) ⊙ ε^(l) where ε^(l) ∼ N(0, I).
Specifically for a VAE, the goal is to learn a low dimensional representation z by modeling p_θ(x|z) with a simpler distribution, a centered isotropic multivariate Gaussian, i.e. p_θ(z) = N(z; 0, I). In this model both the prior p_θ(z) and q_φ(z|x) are Gaussian, and the resulting loss function was described in equation 15.

Fig. 8. Graphical model for the VAE: Solid lines denote the generative model p_θ(z)p_θ(x|z), dashed lines denote the variational approximation q_φ(z|x) to the intractable posterior p_θ(z|x) [1].

5.3 Anomaly detection using VAE
Anomaly detection using the VAE framework has been studied in [70]. The authors define the reconstruction probability as E_{q_φ(z|x)}[log p_θ(x|z)]. Once the VAE is trained, for a new test sample x^(i), one first evaluates the mean and standard deviation vectors with the probabilistic encoder, (µ_{z^(i)}, σ_{z^(i)}) = f_θ(z|x^(i)). One then samples L latent space vectors, z^(i,l) ∼ N(µ_{z^(i)}, σ_{z^(i)}). The parameters of the input distribution are reconstructed using these L samples, (µ_{x̂^(i,l)}, σ_{x̂^(i,l)}) = g_φ(x|z^(i,l)); the reconstruction probability for the test sample x^(i) is then given by:

P_recon(x^(i)) = (1/L) Σ_{l=1}^{L} p_θ( x^(i) | µ_{x̂^(i,l)}, σ_{x̂^(i,l)} )        (16)

Drawing multiple samples from the latent variable distribution lets P_recon(x^(i)) take into account the variability of the latent variable space, which is one of the essential distinctions between the stochastic variational autoencoder and a standard autoencoder, where latent variables are defined by deterministic mappings.

5.4 Generative Adversarial Networks (GANs)
A GAN [2] consists of a generator G, usually a decoder, and a discriminator D, usually a binary classifier that assigns a probability of an image being generated (fake) or sampled from the training data (real). The generator G in fact learns a distribution p_g over data x via a mapping G(z) of samples z, 1D vectors of uniformly distributed input noise sampled from the latent space Z, to 2D images in the image space manifold X, which is populated by normal examples. In this setting, the network architecture of the generator G is equivalent to a convolutional decoder that utilizes a stack of strided convolutions. The discriminator D is a standard CNN that maps a 2D image to a single scalar value D(·). The discriminator output D(·) can be interpreted as the probability that the given input to the discriminator D was a real image x sampled from the training data X or a generated image G(z) produced by the generator G. D and G are simultaneously optimized through the following two-player minimax game with value function V(G, D):

min_G max_D V(D, G) = E_{x∼p_data}[log D(x)] + E_{z∼p_z(z)}[log(1 − D(G(z)))]        (17)

The discriminator is trained to maximize the probability of assigning the “real” label to real training examples and the “fake” label to samples from p_g. The generator G is simultaneously trained to fool D by minimizing V(G) = log(1 − D(G(z))), which is equivalent to maximizing V(G) = D(G(z)). During adversarial training, the generator improves in generating realistic images and the discriminator progresses in correctly identifying real and generated images. GANs are implicit models [71] that sample directly from the distribution represented by the model.
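A minimal TensorFlow sketch of one alternating update of the minimax game in equation (17) is given below; `G` and `D` are assumed to be Keras models with `D` producing logits, the non-saturating generator loss is used, and the hyper-parameters are illustrative.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(2e-4)
d_opt = tf.keras.optimizers.Adam(2e-4)

@tf.function
def train_step(G, D, real_images, z_dim=100):
    z = tf.random.uniform((tf.shape(real_images)[0], z_dim), -1.0, 1.0)
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake = G(z, training=True)
        d_real = D(real_images, training=True)
        d_fake = D(fake, training=True)
        # Discriminator: push D(x) towards "real" and D(G(z)) towards "fake".
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        # Generator: fool the discriminator (non-saturating form of eq. 17).
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, D.trainable_variables), D.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, G.trainable_variables), G.trainable_variables))
    return d_loss, g_loss
```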
5.5 GANs for anomaly detection in Images
This section reviews the work of [72], who apply a GAN model to the task of anomaly detection in medical images. GANs are generative models trained to produce samples resembling the training data points x ∼ P_data(x), where P_data represents the probability density of the training data points. The basic idea in anomaly detection is to be able to evaluate the density function of the normal vectors in the training set containing no anomalies, while for the test set we evaluate a negative log-likelihood score which serves as the final anomaly score. The
score corresponds to the test sample’s posterior probability of
being generated from the same generative model representing
the training data points. GANs provide a generative model
that minimizes the distance between the training data distribution and the generative model samples without explicitly
defining a parametric function, which is why it is called an
implicit generative model [71]. Thus to be successfully used in
an anomaly detection framework the authors [72] evaluate the
mapping x → z, i.e. Image domain → latent representation.
This was done by choosing the closest point z_γ using back-propagation. Once done, the residual loss in the image space was defined as L_R(z_γ) = Σ |x − G(z_γ)|.
GANs are generative models and to evaluate a likelihood
one requires a mapping from the image domain to the latent
10
space. This is achieved by authors in [72], which we shall
shortly describe here. Given a query image x ∼ p test , the authors aim to find a point z in the latent space that corresponds
to an image G ( z) that is visually most similar to the query
image x and that is located on the manifold X. The degree of similarity of x and G(z) depends on the extent to which the query image follows the data distribution p_g that was used
for training of the generator.
To find the best z, one starts randomly sampling z1 from
the latent space distribution Z and feeds it into the trained
generator which yields the generated image G (z1 ). Based on
the generated image G (z1 ) we can define a loss function, which
provides gradients for the update of the coefficients of z1
resulting in an updated position in the latent space, z2 . In
order to find the most similar image G (zΓ ), the location of z
in the latent space Z is optimized in an iterative process via
γ = 1, 2, ..., Γ back-propagation steps.
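The iterative search for z can be sketched as follows, under the simplifying assumption that only the residual loss L_R(z_γ) = Σ|x − G(z_γ)| drives the update (the full method of [72] also adds a discrimination-based term); the step count and learning rate are illustrative.

```python
import tensorflow as tf

def find_latent(G, x_query, z_dim=100, steps=500, lr=0.05):
    """Back-propagate through a trained (frozen) generator G to find the z
    whose generated image G(z) is closest to the query image x_query."""
    z = tf.Variable(tf.random.uniform((1, z_dim), -1.0, 1.0))
    opt = tf.keras.optimizers.Adam(lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            residual = tf.reduce_sum(tf.abs(x_query - G(z, training=False)))
        opt.apply_gradients([(tape.gradient(residual, z), z)])
    return z, residual      # the final residual serves as the anomaly score
```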
5.6 Adversarial Discriminators using Cross-channel prediction
Here we shall review the work done in [73] applied to
anomaly detection in videos. The anomaly detection problem
in this paper is formulated as a cross-channel prediction
task, where the two channels are the raw-image values F t
and the optical flow vectors O t for frames F t , F t−1 in the
videos. This work combines two architectures, the pixel-GAN
architecture by [74] to model the normal/training data distribution, and the Split-Brain Autoencoders [75]. The Split-Brain architecture aims at predicting a multi-channel output by building cross-channel autoencoders. That is, given training examples X ∈ R^{H×W×C}, the data is split into X_1 ∈ R^{H×W×C_1} and X_2 ∈ R^{H×W×C_2}, where C_1, C_2 ⊂ C, and the authors train multiple deep representations X̃_2 = F_1(X_1) and X̃_1 = F_2(X_2), which when concatenated provide a reconstruction of the
input tensor X , just like an autoencoder. Various manners
of aggregating these predictors have been explored in [75].
In the same spirit as the cross-channel autoencoders [75],
Conditional GANs were developed [74] to learn a generative
model that learns a mapping from one input domain to the
other.
The authors of [73] train two networks much in the spirit of the conditional GAN [74]: N^{O→F}, which generates the raw image frames from the optical flow, and N^{F→O}, which generates the optical flow from the raw images. F_t are image frames with RGB channels and O_t are vertical and horizontal optical flow vector arrays. The input to the discriminator D is thus a 6-D tensor. We now describe the adaptation of the cross-channel autoencoders for the task of anomaly detection.
• N^{F→O}: the training set is X = {(F_t, O_t)}_{t=1}^{N}. The L1 loss function, with x = F_t and y = O_t, is
L_{L1}(x, y) = ‖y − G(x, z)‖₁        (18)
with the conditional adversarial loss being
L_{cGAN}(G, D) = E_{(x,y)∈X}[log D(x, y)] + E_{x∈{F_t}, z∈Z}[log(1 − D(x, G(x, z)))]        (19)
• Conversely, in N^{O→F} the training set changes to X = {(O_t, F_t)}_{t=1}^{N}.
The generators/discriminators follow a U-net architecture as in [74] with skip connections. The two generators G^{F→O}, G^{O→F} are trained to map training frames and their optical flow to their cross-channel counterparts. The goal is to force a poor cross-channel prediction on test video frames containing an anomaly, so that the trained discriminators provide a low probability score.
The trained discriminators D^{F→O}, D^{O→F} are patch-discriminators that produce scores S_O, S_F on a grid with resolution smaller than the image. These scores do not require the reconstruction of the different channels to be evaluated. The final score is S = S_O + S_F, which is normalized to [0, 1] based on the maximum value of the individual scores for each frame. The U-net exploits the Markovian structure present spatially via the skip connections shown between the input and the output of the generators in figure 9. Cross-channel prediction aims at modeling the spatio-temporal correlation present across channels in the context of video anomaly detection.
Fig. 9. The cross-channel prediction conditional GAN architecture in
[73]. There are two GAN models : flow→RGB predictor (N O→F ) and
RGB→Flow predictor (N F →O ). Each of the generators shown has a
U-net architecture which uses the common underlying structure in the
image RGB channels and optical flow between two frames.
5.7 Adversarial Autoencoders (AAEs)
Adversarial Autoencoders are probabilistic autoencoders that
use GANs to perform variational approximation of the aggregated posterior of the latent space representation [76] using
an arbitrary prior.
AAEs were applied to the problem of anomalous event
detection over images by authors [77]. In figure 10, x denotes
input vectors from training distribution, q(z|x) the encoder’s
posterior distribution, p(z) the prior that the user wants
to impose on the latent space vectors z. The latent space
distribution is given by
q(z) = ∫_{x∈X_train} q(z|x) p_d(x) dx        (20)
where p d (x) represents the training data distribution. In an
AAE, the encoder acts like the generator of the adversarial
network, and it tries to fool the discriminator into believing
that q(z) comes from the actual data distribution p(z). During
the joint training, the encoder is updated to improve the reconstruction error in the autoencoder path, while it is updated
by the discriminator of the adversarial network to make the
latent space distribution approach the imposed prior. As the prior distribution for the generator of the network, the authors [77] use a Gaussian distribution of 256 dimensions, with the dropout set to 0.5 probability. The method achieves close to state of the art performance. As the authors remark themselves, the AAE does not take into account the temporal structure in the video sequences.

Fig. 10. Two paths in the Adversarial Autoencoder: The top path refers to the standard autoencoder configuration that minimizes the reconstruction error. The bottom path constitutes an adversarial network that ensures an approximation between the user-defined prior distribution p(z) and the latent or code vector distribution q(z|x).

5.8 Controlling reconstruction for anomaly detection
One of the common problems using deep autoencoders is their capability to produce low reconstruction errors for test samples, even over anomalous events. This is due to the way autoencoders are trained in a semi-supervised way on videos with no anomalies: with sufficient training samples, they are able to approximate most test samples well.
In [78], the authors propose to limit the reconstruction capability of the generative adversarial networks by learning conflicting objectives for the normal and anomalous data. They use negative examples to enforce explicitly poor reconstruction. This setup is thus weakly supervised, not requiring labels. Given two random variables X, Y with samples {x_i}_{i=1}^{K}, {y_j}_{j=1}^{J}, we want the network to reconstruct the input distribution X while poorly reconstructing Y. This was achieved by maximizing the following objective function:

p_θ(X̂|X) − p_θ(Ŷ|Y) = Σ_{i=1}^{K} log p_θ(x̂_i|x_i) − Σ_{j=1}^{J} log p_θ(ŷ_j|y_j)        (21)

where θ refers to the autoencoder's parameters. This setup assumes strong class imbalance, i.e. very few samples of the anomalous class Y are available compared to the normal class X. The motivation for negative learning using anomalous examples is to consistently provide poor reconstruction of anomalous samples. During the training phase, the authors [68] reconstruct positive samples by minimizing the reconstruction error between samples, while negative samples are forced to have a bad reconstruction by maximizing the error. This last step was termed negative learning. The datasets evaluated were the reconstruction of the images from MNIST and Japanese highway video patches [79].
In similar work by [80], discriminative autoencoders aim at learning low-dimensional discriminative representations for positive (X+) and negative (X−) classes of data. The discriminative autoencoders build a latent space representation under the constraint that the positive data should be better reconstructed than the negative data. This is done by minimizing the reconstruction error for positive examples while ensuring that those of the negative class are pushed away from the manifold.

L_d(X+ ∪ X−) = Σ_{x∈X+ ∪ X−} max(0, t(x) · (‖x̂ − x‖ − 1))        (22)

In the above loss function, t(x) ∈ {−1, +1} denotes the label of the sample, and e(x) = ‖x − x̃‖ the distance of that example to the manifold. Minimizing the hinge loss in equation 22 achieves a reconstruction such that the discriminative autoencoders build a latent space representation of data that better reconstructs positive data compared to the negative data.
6 EXPERIMENTS
There are two large classes of experiments: first, reconstructing the input video on a single frame basis, X_t → X̃_t; second, the reconstruction of a stack of frames, X_{t−p:t} → X̃_{t−p:t}.
These reconstruction schemes are performed either on raw
frame values, or on the optical flow between consequent
frame pairs. Reconstructing raw image values modeled the
back-ground image, since minimizing the reconstruction error
was in fact evaluating the background. Convolutional autoencoders reconstructing a sequence of frames captured temporal
appearance changes as described by [41]. When learning
feature representations on optical flow we indirectly operate
on two frames, since each optical flow map evaluates the
relative motion between two consequent frame pairs. In the
case of predictive models the current frame X t was predicted
after observing the past p frames. This provides a different
temporal structure as compared to a simple reconstruction of a sequence of frames X_{t−p:t} → X̃_{t−p:t}, where the temporal coherence results from enforcing a bottleneck in the autoencoder architectures. The goal of these experiments was not to evaluate the best performing model; they were intended as a
tool to understand how background estimation and temporal
appearance were approximated by the different models. A
complete detailed study is beyond the scope of this review.
In this section, we evaluate the performance of the following classes of models on the UCSD and CUHK-Avenue
datasets. As a baseline, we use the reconstruction of a
dense optical flow calculated using the Farneback method
in OpenCV 3, by principal component analysis, with around
150 components. For predictive models, as a baseline we
use a vector autoregressive model (VAR), referred to as
LinPred. The coefficients of the model are estimated on a
lower dimensional, random projection of the raw image or
optical flow maps from the input training video stream. The
random projection avoids badly conditioned and expensive
matrix inversion. We compare the performance of Contractive
autoencoders, simple 3D autoencoders based on C3D [24]
CNNs (C3D-AE), the ConvLSTM and ConvLSTM autoencoder
from the predictive model family and finally the VAE from the
generative models family. The VAE’s loss function consists of
the binary cross-entropy (similar to a reconstruction error)
12
between the model prediction and the input image, and the
KL-divergence D_KL[Q(z|X) ‖ P(z)] between the encoded latent space vectors and the multivariate unit-Gaussian P(z) = N(0, I).
These models were built in Keras [81] with Tensorflow backend and executed on a K-80 GPU.
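A sketch of the VAE training objective used here (binary cross-entropy reconstruction plus D_KL[Q(z|X) ‖ N(0, I)]) is given below as a custom TensorFlow training step; the layer sizes loosely follow the 2500-dimensional random projection described in the next subsection and are otherwise illustrative.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

d, h, latent = 2500, 1024, 32                           # illustrative sizes

encoder = keras.Sequential([keras.Input(shape=(d,)),
                            layers.Dense(h, activation="relu"),
                            layers.Dense(2 * latent)])   # [z_mean | z_log_var]
decoder = keras.Sequential([keras.Input(shape=(latent,)),
                            layers.Dense(h, activation="relu"),
                            layers.Dense(d, activation="sigmoid")])
optimizer = keras.optimizers.Adam(1e-3)

@tf.function
def vae_step(x):                                         # x scaled to [0, 1], shape (B, d)
    with tf.GradientTape() as tape:
        z_mean, z_log_var = tf.split(encoder(x, training=True), 2, axis=-1)
        eps = tf.random.normal(tf.shape(z_mean))
        z = z_mean + tf.exp(0.5 * z_log_var) * eps       # reparameterization trick
        x_hat = decoder(z, training=True)
        recon = tf.reduce_sum(keras.backend.binary_crossentropy(x, x_hat), axis=-1)
        kl = -0.5 * tf.reduce_sum(1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1)
        loss = tf.reduce_mean(recon + kl)
    variables = encoder.trainable_variables + decoder.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```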
6.1 Architectures
Our Contractive AE and Variational AE (VAE) use a random projection to reduce the dimensionality from an input frame of size 200 × 200 to 2500. The Contractive AE consists of one fully connected hidden layer of size 1250 which maps back to the reconstruction of the randomly projected vector of size 2500, while the VAE contains two hidden layers (dimensions: 1024, 32), which map back to the output of 2500 dimensions. We use the latent space representation of the variational autoencoder to fit a single multivariate Gaussian on the training dataset and evaluate the negative log-probability for the test samples.
6.2 Observations and Issues
The results are summarized in tables 1 and 2. The performance measures reported are the Area Under the Receiver-Operator-Characteristics plot (AU-ROC) and the Area Under the Precision-Recall plot (AU-PR). These scores are calculated when the input channels correspond to the raw image (raw) and the optical flow
(flow), each of which has been normalized by the maximum
value. The final temporal anomaly score is given by equation
8. These measures are described in the next section. We
also describe the utility of these measures under different
frequencies of occurrences of the anomalous positive class.
Reconstruction model issues : Deep autoencoders identify anomalies by poor reconstruction of objects that have
never appeared in the training set, when raw image pixels
are used as input. It is difficult to achieve this in practice due
to a stable reconstruction of new objects by deep autoencoders.
This pertains to the high capacity of autoencoders, and their
tendency to well approximate even the anomalous objects.
Controlling reconstruction using negative examples could be a
possible solution. This holds true, but to a lower extent, when
reconstructing a sequence of frames (spatio-temporal block).
AUC-ROC vs AUC-PR : The anomalies in the UCSD
pedestrian dataset have a duration of several hundred frames
on average, compared to the anomalies in the CUHK avenue
dataset which occur only for a few tens of frames. This makes
the anomalies statistically less probable. This can be seen
by looking at the AU-PR table 2, where the average scores
for CUHK-avenue are much lower than for UCSD pedestrian
datasets. It is important to note that this does not mean the
performance over the CUHK-Avenue dataset is lower, but
just the fact that the positive anomalous class is rarer in
occurrence.
Rescaling image size : The models used across different
experiments in the articles that were reviewed, varied in the
input image size. In some cases the images were resized to
sizes (128,128), (224, 224), (227, 227). We have tried to fix this
to be uniformly (200,200). Though it is essential to note that
there is a substantial change in performance when this image
is resized to certain sizes.
Generating augmented video clips : Training the
convolutional LSTM for video anomaly detection takes a
large number of epochs. Furthermore, the training video data
required is much higher and data augmentation for video
anomaly detection requires careful thinking. Translations
and rotations may be transformations to which the anomaly
detection algorithm requires to be sensitive to, based on the
surveillance application.
Performance of models : VAEs perform consistently as
well as or better than PCA on optical flow. It is still left as a
future study to understand clearly, why the performance of a
stochastic autoencoder such as VAE is better. Convolutional
LSTM on raw image values follow closely behind as the first
predictive model performing as good as PCA but sometimes
poorer. Convolutional LSTM-AE is a similar architecture with
similar performance. Finally, the 3D convolutional autoencoder, based on the work by [24], performs as well as PCA
on optical flow, while modeling local motion patterns.
To evaluate the specific advantages of each of these models, a larger number of real world, video surveillance examples
are required demonstrating the representation or feature that
is most discriminant. In experiments, we have also observed
that application of PCA on the random projection of individual
frames performed well in avenue dataset, indicating that very
few frames were sufficient to identify the anomaly; while the
PCA performed poorly on UCSD pedestrian datasets, where
motion patterns were key to detect the anomalies.
For single frame input based models, optical flow served
as a good input since it already encoded part of the predictive information in the training videos. On the other hand,
convolutional LSTMs and linear predictive models required
p = [2, 10] input raw image values, in the training videos, to
predict the current frame raw image values.
6.3 Evaluation measures
The anomaly detection task is a single class estimation
task, where 0 is assigned to samples with likelihood (or
reconstruction error) above (below) a certain threshold, and
1 assigned to detected anomalies with low likelihood (high
reconstruction error). Statistically, the anomalies are a rare class and occur less frequently than the normal class.
The most common characterization of this behavior is the
expected frequency of occurrence of anomalies. We briefly
review the anomaly detection evaluation procedure as well
as the performance measures that were used across different
studies. For a complete treatment of the subject, the reader is
referred to [82].
The final anomaly score is a value that is treated as a
probability which lies in s( t) ∈ [0, 1], ∀ t, t ∈ [1, T ], T being the
maximum time index. For various level sets or thresholds of
the anomaly score a 1:T , one can evaluate the True Positives
(TP, the samples which are truly anomalous and detected
as anomalous), True Negatives (TN, the samples that are
truly normal and detected as normal), False Positives (FP,
the samples which are truly normal samples but detected
as anomalous) and finally False Negatives (FN, the samples
which are truly anomalous but detected as normal).
True positive rate (TPR) or Recall = Σ TP / (Σ TP + Σ FN)
False positive rate (FPR) = Σ FP / (Σ FP + Σ TN)        (23)
Precision = Σ TP / (Σ TP + Σ FP)
TABLE 1
Area Under-ROC (AUROC)

Methods        Feature        UCSDped1        UCSDped2        CUHK-avenue
PCA            flow           0.75            0.80            0.82
LinPred        (raw, flow)    (0.71, 0.71)    (0.73, 0.78)    (0.74, 0.84)
C3D-AE         (raw, flow)    (0.70, 0.70)    (0.64, 0.81)    (0.86, 0.18)
ConvLSTM       (raw, flow)    (0.67, 0.71)    (0.77, 0.80)    (0.84, 0.18)
ConvLSTM-AE    (raw, flow)    (0.43, 0.74)    (0.25, 0.81)    (0.50, 0.84)
CtractAE       (raw, flow)    (0.66, 0.75)    (0.65, 0.79)    (0.83, 0.84)
VAE            (raw, flow)    (0.63, 0.72)    (0.72, 0.86)    (0.78, 0.80)
TABLE 2
Area Under-Precision Recall (AUPR)

Methods        Feature        UCSDped1        UCSDped2        CUHK-avenue
PCA            flow           0.78            0.95            0.66
LinPred        (raw, flow)    (0.71, 0.71)    (0.92, 0.94)    (0.49, 0.65)
C3D-AE         (raw, flow)    (0.75, 0.77)    (0.88, 0.95)    (0.65, 0.15)
ConvLSTM       (raw, flow)    (0.67, 0.71)    (0.94, 0.95)    (0.66, 0.15)
ConvLSTM-AE    (raw, flow)    (0.52, 0.81)    (0.74, 0.95)    (0.34, 0.70)
CtractAE       (raw, flow)    (0.70, 0.81)    (0.88, 0.95)    (0.69, 0.70)
VAE            (raw, flow)    (0.66, 0.76)    (0.92, 0.97)    (0.54, 0.68)
We require these measures to evaluate the ROC curve, which
measures the performance of the detection at various False
positive rates. That is, the ROC curve plots TPR vs. FPR, while the PR curve plots precision vs. recall.
The performance of anomaly detection task is evaluated
based on an important criterion, the probability of occurrence
of the anomalous positive class. Based on this value, different
performance curves are useful. We define the two commonly
used performance curves : Precision-Recall (PR) curves and
Receiver-Operator-Characteristics (ROC) curves. The area under the PR curve (AU-PR) is useful when true negatives
are much more common than true positives (i.e., TN » TP).
The precision recall curve only focuses on predictions around
the positive (rare) class. This is good for anomaly detection
because predicting true negatives (TN) is easy in anomaly
detection. The difficulty is in predicting the rare true positive
events. Precision is directly influenced by class (im)balance
since FP is affected, whereas TPR only depends on positives.
This is why ROC curves do not capture such effects. Precisionrecall curves are better to highlight differences between
models for highly imbalanced data sets. For this reason, if
one would like to evaluate models under imbalanced class
settings, AU-PR scores would exhibit larger differences than
the area under the ROC curve.
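The two summary measures reported in tables 1 and 2 can be computed from per-frame scores and ground-truth labels with scikit-learn, as in the following sketch.

```python
from sklearn.metrics import roc_auc_score, average_precision_score, precision_recall_curve

def summarize(scores, labels):
    # scores: per-frame anomaly scores; labels: 1 for anomalous frames, 0 otherwise.
    au_roc = roc_auc_score(labels, scores)            # area under the TPR-vs-FPR curve
    au_pr = average_precision_score(labels, scores)   # area under the precision-recall curve
    precision, recall, thresholds = precision_recall_curve(labels, scores)
    return au_roc, au_pr, precision, recall, thresholds
```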
7 CONCLUSION
In this review paper, we have focused on categorizing the different unsupervised learning models for the task of anomaly
detection in videos into three classes based on the prior
information used to build the representations to characterize
anomalies.
They are reconstruction based, spatio-temporal predictive
models, and generative models. Reconstruction based models
build representations that minimize the reconstruction error
of training samples from the normal distribution. Spatiotemporal predictive models take into account the spatiotemporal correlation by viewing videos as a spatio-temporal
time series. Such models are trained to minimize the prediction error on spatio-temporal sequences from the training
series, where the length of the time window is a parameter.
Finally, the generative models learn to generate samples from
the training distribution, while minimizing the reconstruction
error as well as distance between generated and training
distribution, where the focus is on modeling the distance
between sample and distributions.
Each of these methods focuses on learning certain prior
information that is useful for constructing the representation
for the video anomaly detection task. One key concept which
occurs in various architectures for video anomaly detection
is how temporal coherence is implemented. Spatio-temporal
autoencoders and Convolutional LSTM learn a reconstruction
based or spatio-temporal predictive model that both use some
form of (not explicitly defined) spatio-temporal regularity
assumptions. We can conclude from our study that evaluating how sensitive the learned representation is to certain
transformations such as time warping, viewpoint, applied to
the input training video stream, is an important modeling
criterion. Certain invariances are as well defined by the
choice of the representation (translation, rotation) either due
to reusing convolutional architectures or imposing a predictive structure. A final component in the design of the video
anomaly detection system is the choice of thresholds for the
anomaly score, which was not covered in this review. The
performance of the detection systems were evaluated using
ROC plots which evaluated performance across all thresholds.
Defining a spatially variant threshold is an important but
non-trivial problem.
Finally, as more data is acquired and annotated in a video-surveillance setup, the assumption of having no labeled anomalies progressively becomes false, as partly discussed in the section on controlling reconstruction for anomaly detection. Certain anomalous points with well defined spatio-temporal regularities become a second class that can be estimated well; methods that include this positive anomalous class information in the detection algorithm, and that handle the resulting class imbalance, then become essential.
Another problem of interest in videos is the variation in the temporal scale of motion patterns across different surveillance videos that share a similar background and foreground. Learning a representation that is invariant to such time warping would be of practical interest.
There are various additional components of the stochastic gradient descent algorithm that were not covered in this review. Batch Normalization [83] and drop-out based regularization [84] play an important role in regularizing deep learning architectures, and a systematic study is needed to use them successfully for video anomaly detection.
Acknowledgments: The authors would like to thank Benjamin Crouzier for his help in proofreading the manuscript, and Y. Senthil Kumar (Valeo) for helpful suggestions. The authors would also like to thank their employer for supporting fundamental research.
REFERENCES
[1] D. P. Kingma and M. Welling, “Stochastic gradient VB and the variational auto-encoder,” in Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2014. 1, 8, 9
[2] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680. 1, 9
[3] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Computation, vol. 9, no. 8, pp. 1735–1780, 1997. 1, 6
[4] M. Fayyaz, M. H. Saffar, M. Sabokrou, M. Fathy, F. Huang, and R. Klette, “STFCN: Spatio-temporal fully convolutional neural network for semantic segmentation of street scenes,” in Asian Conference on Computer Vision. Springer, 2016, pp. 493–509. 2
[5] V. Mahadevan, W.-X. Li, V. Bhalodia, and N. Vasconcelos, “Anomaly detection in crowded scenes,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2010, pp. 1975–1981. 2, 3
[6] W. Li, V. Mahadevan, and N. Vasconcelos, “Anomaly detection and localization in crowded scenes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014. 2, 3
[7] N. Srivastava, E. Mansimov, and R. Salakhudinov, “Unsupervised learning of video representations using LSTMs,” in International Conference on Machine Learning, 2015, pp. 843–852. 2, 6, 7, 8
[8] C. Vondrick, H. Pirsiavash, and A. Torralba, “Generating videos with scene dynamics,” in Advances in Neural Information Processing Systems, 2016, pp. 613–621. 2
[9] M. Mathieu, C. Couprie, and Y. LeCun, “Deep multi-scale video prediction beyond mean square error,” arXiv preprint arXiv:1511.05440, 2015. 2, 6
[10] C. Lu, J. Shi, and J. Jia, “Abnormal event detection at 150 fps in Matlab,” 2013. 2
[11] UMN, “Unusual crowd activity dataset,” http://mha.cs.umn.edu/Movies/Crowd-Activity-All.avi. 2
[12] A. Zaharescu and R. Wildes, “Anomalous behaviour detection using spatiotemporal oriented energies, subset inclusion histogram comparison and event-driven processing,” in European Conference on Computer Vision. Springer, 2010, pp. 563–576. 2
[13] Y. Benezeth, P.-M. Jodoin, V. Saligrama, and C. Rosenberger, “Abnormal events detection based on spatio-temporal co-occurences,” in Computer Vision and Pattern Recognition (CVPR), 2009 IEEE Conference on. IEEE, 2009, pp. 2458–2465. 2
[14] R. Leyva, V. Sanchez, and C.-T. Li, “The LV dataset: A realistic surveillance video dataset for abnormal event detection,” in Biometrics and Forensics (IWBF), 2017 5th International Workshop on. IEEE, 2017, pp. 1–6. 2
[15] S. Shalev-Shwartz and S. Ben-David, Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014. 2
[16] Y. S. Chong and Y. H. Tay, “Modeling representation of videos for anomaly detection using deep learning: A review,” arXiv preprint arXiv:1505.00523, 2015. 2
[17] O. P. Popoola and K. Wang, “Video-based abnormal human behavior recognition: A review,” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 42, no. 6, pp. 865–878, 2012. 2
[18] T. Li, H. Chang, M. Wang, B. Ni, R. Hong, and S. Yan, “Crowded scene analysis: A survey,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 3, pp. 367–386, 2015. 2
[19] A. Alahi, K. Goel, V. Ramanathan, A. Robicquet, L. Fei-Fei, and S. Savarese, “Social LSTM: Human trajectory prediction in crowded spaces,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 961–971. 2
[20] D. Helbing and P. Molnar, “Social force model for pedestrian dynamics,” Physical review E, vol. 51, no. 5, p. 4282, 1995. 2
[21] V. Kantorov and I. Laptev, “Efficient feature extraction, encoding
and classification for action recognition,” in Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, 2014, pp.
2593–2600. 3
[22] M. Baccouche, F. Mamalet, C. Wolf, C. Garcia, and A. Baskurt,
“Sequential deep learning for human action recognition,” in International Workshop on Human Behavior Understanding. Springer,
2011, pp. 29–39. 3
[23] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and
L. Fei-Fei, “Large-scale video classification with convolutional neural
networks,” in Proceedings of the IEEE conference on Computer Vision
and Pattern Recognition, 2014, pp. 1725–1732. 3
[24] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Learning spatiotemporal features with 3d convolutional networks,” in
Proceedings of the IEEE international conference on computer vision,
2015, pp. 4489–4497. 3, 7, 11, 12
[25] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A
review and new perspectives,” IEEE transactions on pattern analysis
and machine intelligence, vol. 35, no. 8, pp. 1798–1828, 2013. 3
[26] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT
Press, 2016, http://www.deeplearningbook.org. 3, 5
[27] V. Saligrama and Z. Chen, “Video anomaly detection based on local
statistical aggregates,” in Computer Vision and Pattern Recognition
(CVPR), 2012 IEEE Conference on. IEEE, 2012, pp. 2112–2119. 3
[28] A. A. Abuolaim, W. K. Leow, J. Varadarajan, and N. Ahuja, “On the
essence of unsupervised detection of anomalous motion in surveillance videos,” in International Conference on Computer Analysis of
Images and Patterns. Springer, 2017, pp. 160–171. 3
[29] A. Basharat, A. Gritai, and M. Shah, “Learning object motion
patterns for anomaly detection and improved object detection,” in
Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE
Conference on. IEEE, 2008, pp. 1–8. 3
[30] B. Zhao, L. Fei-Fei, and E. P. Xing, “Online detection of unusual
events in videos via dynamic sparse coding,” in Proceedings of the
2011 IEEE Conference on Computer Vision and Pattern Recognition.
IEEE Computer Society, 2011. 3
[31] A. Adam, E. Rivlin, I. Shimshoni, and D. Reinitz, “Robust real-time
unusual event detection using multiple fixed-location monitors,”
IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 3, pp. 555–560,
2008. 3
[32] T. Wang and H. Snoussi, “Histograms of optical flow orientation for
abnormal events detection,” in 2013 IEEE International Workshop
on Performance Evaluation of Tracking and Surveillance (PETS),
2013. 3
[33] R. V. H. M. Colque, C. A. C. Júnior, and W. R. Schwartz, “Histograms of optical flow orientation and magnitude to detect anomalous events in videos,” in 2015 28th SIBGRAPI Conference on
Graphics, Patterns and Images, 2015. 3
[34] H. Mousavi, S. Mohammadi, A. Perina, R. Chellali, and V. Murino,
“Analyzing tracklets for the detection of abnormal crowd behavior,”
in Applications of Computer Vision (WACV), 2015 IEEE Winter
Conference on. IEEE, 2015, pp. 148–155. 3
[35] H. Mousavi, M. Nabi, H. K. Galoogahi, A. Perina, and V. Murino,
“Abnormality detection with improved histogram of oriented tracklets,” in International Conference on Image Analysis and Processing.
Springer, 2015, pp. 722–732. 3
[36] A. Edison and J. C. V., “Optical acceleration for motion description
in videos,” in 2017 IEEE Conference on Computer Vision and Pattern
Recognition Workshops (CVPRW), 2017, pp. 1642–1650. 3
[37] J. Kim and K. Grauman, “Observe locally, infer globally: a space-time
mrf for detecting abnormal activities with incremental updates,” in
Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE
Conference on. IEEE, 2009, pp. 2921–2928. 4
[38] M. E. Tipping and C. M. Bishop, “Mixtures of probabilistic principal
component analyzers,” Neural computation, vol. 11, no. 2, pp. 443–
482, 1999. 4
[39] J. Wulff and M. J. Black, “Efficient sparse-to-dense optical flow
estimation using a learned basis and layers,” in Proceedings of the
IEEE Conference on Computer Vision and Pattern Recognition, 2015,
pp. 120–130. 4
[40] J. Masci, U. Meier, D. Cireşan, and J. Schmidhuber, “Stacked convolutional auto-encoders for hierarchical feature extraction,” Artificial
Neural Networks and Machine Learning–ICANN 2011, pp. 52–59,
2011. 5
[41] M. Hasan, J. Choi, J. Neumann, A. K. Roy-Chowdhury, and L. S.
Davis, “Learning temporal regularity in video sequences,” in Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2016, pp. 733–742. 5, 6, 7, 11
[42] Y. S. Chong and Y. H. Tay, “Abnormal event detection in videos using
spatiotemporal autoencoder,” in International Symposium on Neural
Networks. Springer, 2017, pp. 189–196. 5, 6
[43] R. Chalapathy, A. K. Menon, and S. Chawla, “Robust, deep and
inductive anomaly detection,” in ECML PKDD 2017 : European
Conference on Machine Learning and Principles and Practice of
Knowledge Discovery, 2017. 5
[44] M. Sabokrou, M. Fathy, and M. Hoseini, “Video anomaly detection
and localisation based on the sparsity and reconstruction error of
auto-encoder,” Electronics Letters, vol. 52, no. 13, pp. 1122–1124,
2016. 5
[45] A. Ng, “Sparse autoencoder,” CS294A Lecture notes, vol. 72, no. 2011,
pp. 1–19, 2011. 5
[46] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio, “Contractive
auto-encoders: Explicit invariance during feature extraction,” in
Proceedings of the 28th international conference on machine learning
(ICML-11), 2011, pp. 833–840. 5
[47] G. Alain and Y. Bengio, “What regularized auto-encoders learn from
the data-generating distribution,” The Journal of Machine Learning
Research, vol. 15, no. 1, pp. 3563–3593, 2014. 5
[48] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, “Extracting
and composing robust features with denoising autoencoders,” in Proceedings of the 25th international conference on Machine learning.
ACM, 2008, pp. 1096–1103. 5
[49] D. Xu, Y. Yan, E. Ricci, and N. Sebe, “Detecting anomalous events in
videos by learning deep representations of appearance and motion,”
Computer Vision and Image Understanding, vol. 156, pp. 117 – 127,
2017, image and Video Understanding in Big Data. 5
[50] D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, P. Vincent, and
S. Bengio, “Why does unsupervised pre-training help deep learning?”
Journal of Machine Learning Research, vol. 11, no. Feb, pp. 625–660,
2010. 5
[51] H. Vu, T. D. Nguyen, A. Travers, S. Venkatesh, and D. Phung,
Energy-Based Localized Anomaly Detection in Video Surveillance.
Springer International Publishing, 2017, pp. 641–653. 5
[52] S. Hawkins, H. He, G. Williams, and R. Baxter, “Outlier detection
using replicator neural networks,” in DaWaK, vol. 2454. Springer,
2002, pp. 170–180. 6
[53] M. Sabokrou, M. Fayyaz, M. Fathy, and R. Klette, “Deep-cascade:
Cascading 3d deep neural networks for fast anomaly detection
and localization in crowded scenes,” IEEE Transactions on Image
Processing, vol. 26, no. 4, pp. 1992–2004, 2017. 6
[54] K. Greenewald and A. Hero, “Detection of anomalous crowd behavior
using spatio-temporal multiresolution model and kronecker sum
decompositions,” arXiv preprint arXiv:1401.3291, 2014. 6
[55] A. Wiesel, O. Bibi, and A. Globerson, “Time varying autoregressive
moving average models for covariance estimation.” IEEE Trans.
Signal Processing, vol. 61, no. 11, pp. 2791–2801, 2013. 6
[56] W. Lotter, G. Kreiman, and D. Cox, “Unsupervised learning of visual structure using predictive generative networks,” arXiv preprint
arXiv:1511.06380, 2015. 6
[57] S. Xingjian, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-c.
Woo, “Convolutional lstm network: A machine learning approach
for precipitation nowcasting,” in Advances in neural information
processing systems, 2015, pp. 802–810. 6, 7
[58] J. R. Medel, Anomaly Detection Using Predictive Convolutional Long
Short-Term Memory Units. Rochester Institute of Technology, 2016.
7
[59] J. R. Medel and A. Savakis, “Anomaly detection in video using
predictive convolutional long short-term memory networks,” arXiv
preprint arXiv:1612.00390, 2016. 7
[60] W. Luo, W. Liu, and S. Gao, “Remembering history with convolutional lstm for anomaly detection,” in Multimedia and Expo (ICME),
2017 IEEE International Conference on. IEEE, 2017, pp. 439–444.
7
[61] Y. Zhao, B. Deng, C. Shen, Y. Liu, H. Lu, and X.-S. Hua,
“Spatio-temporal autoencoder for video anomaly detection,” in
Proceedings of the 2017 ACM on Multimedia Conference, ser. MM
’17. New York, NY, USA: ACM, 2017, pp. 1933–1941. [Online].
Available: http://doi.acm.org/10.1145/3123266.3123451 7, 8
[62] L. Wiskott and T. J. Sejnowski, “Slow feature analysis: Unsupervised
learning of invariances,” Neural computation, vol. 14, no. 4, pp. 715–
770, 2002. 7
[63] F. Creutzig and H. Sprekeler, “Predictive coding and the slowness
principle: An information-theoretic approach,” Neural Computation,
vol. 20, no. 4, pp. 1026–1041, 2008. 8
[64] L. Sun, K. Jia, T.-H. Chan, Y. Fang, G. Wang, and S. Yan, “Dlsfa: deeply-learned slow feature analysis for action recognition,” in
Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2014, pp. 2625–2632. 8
[65] Z. Zhang and D. Tao, “Slow feature analysis for human action
recognition,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 34, no. 3, pp. 436–450, 2012. 8
[66] V. R. Kompella, M. Luciw, and J. Schmidhuber, “Incremental slow
feature analysis: Adaptive low-complexity slow feature updating
from high-dimensional input streams,” Neural Computation, vol. 24,
no. 11, pp. 2994–3024, 2012. 8
[67] X. Hu, S. Hu, Y. Huang, H. Zhang, and H. Wu, “Video anomaly
detection using deep incremental slow feature analysis network,”
IET Computer Vision, vol. 10, no. 4, pp. 258–265, 2016. 8
[68] A. Munawar, P. Vinayavekhin, and G. De Magistris, “Spatiotemporal anomaly detection for industrial robots through prediction
in unsupervised feature space,” in Applications of Computer Vision
(WACV), 2017 IEEE Winter Conference on. IEEE, 2017, pp. 1017–
1025. 8, 11
[69] D. D’Avino, D. Cozzolino, G. Poggi, and L. Verdoliva, “Autoencoder with recurrent neural networks for video forgery detection,”
in IS&T International Symposium on Electronic Imaging: Media
Watermarking, Security, and Forensics, 2017. 8
[70] J. An and S. Cho, “Variational autoencoder based anomaly detection
using reconstruction probability,” Technical Report, Tech. Rep., 2015.
9
[71] I. J. Goodfellow, “NIPS 2016 tutorial: Generative adversarial
networks,” CoRR, vol. abs/1701.00160, 2017. [Online]. Available:
http://arxiv.org/abs/1701.00160 9
[72] T. Schlegl, P. Seeböck, S. M. Waldstein, U. Schmidt-Erfurth, and
G. Langs, “Unsupervised anomaly detection with generative adversarial networks to guide marker discovery,” in International Conference on Information Processing in Medical Imaging. Springer, 2017,
pp. 146–157. 9, 10
[73] M. Ravanbakhsh, E. Sangineto, M. Nabi, and N. Sebe, “Training adversarial discriminators for cross-channel abnormal event detection
in crowds,” CoRR, vol. abs/1706.07680, 2017. 10
[74] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” 2017. 10
[75] R. Zhang, P. Isola, and A. A. Efros, “Split-brain autoencoders:
Unsupervised learning by cross-channel prediction,” in CVPR, 2017.
10
[76] A. Makhzani, J. Shlens, N. Jaitly, and I. Goodfellow, “Adversarial
autoencoders,” in International Conference on Learning Representations, 2016. [Online]. Available: http://arxiv.org/abs/1511.05644
10
[77] A. Dimokranitou, “Adversarial autoencoders for anomalous event
detection in images,” Master’s thesis, 2017. 10, 11
[78] A. Munawar, P. Vinayavekhin, and G. De Magistris, “Limiting the
reconstruction capability of generative neural network using negative learning,” in 27th IEEE International Workshop on Machine
Learning for Signal Processing, MLSP, Roppongi, Tokyo, Japan,
2017, 2017. 11
[79] “Wataken777. youtube. tokyo express way,” https://www.youtube.
com/watch?v=UQgj3zkh8zk. 11
[80] S. Razakarivony and F. Jurie, “Discriminative autoencoders for
small targets detection,” in Pattern Recognition (ICPR), 2014 22nd
International Conference on. IEEE, 2014, pp. 3528–3533. 11
[81] F. Chollet et al., “Keras,” https://github.com/keras-team/keras, 2015.
12
[82] T. Fawcett, “An introduction to roc analysis,” Pattern recognition
letters, vol. 27, no. 8, pp. 861–874, 2006. 12
[83] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proceedings
of the 32nd International Conference on Machine Learning, 2015, pp.
448–456. 14
[84] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and
R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting.” Journal of machine learning research,
vol. 15, no. 1, pp. 1929–1958, 2014. 14
arXiv:1710.09859v1 [stat.ML] 26 Oct 2017
Energy Clustering
Guilherme França∗ and Joshua T. Vogelstein†
Johns Hopkins University
Abstract
Energy statistics was proposed by Székely in the 80’s inspired by the Newtonian gravitational
potential from classical mechanics, and it provides a hypothesis test for equality of distributions. It
was further generalized from Euclidean spaces to metric spaces of strong negative type, and more
recently, a connection with reproducing kernel Hilbert spaces (RKHS) was established. Here we
consider the clustering problem from an energy statistics theory perspective, providing a precise
mathematical formulation yielding a quadratically constrained quadratic program (QCQP) in the
associated RKHS, thus establishing the connection with kernel methods. We show that this QCQP is equivalent to the kernel k-means optimization problem once the kernel is fixed. These results imply a first-principles derivation of kernel k-means from energy statistics; energy statistics, however, fixes a family of standard kernels. Furthermore, we also consider a weighted version of energy statistics, making a connection to graph partitioning problems. To find local optimizers of this QCQP we propose an iterative algorithm based on Hartigan's method, which in this case has the same computational cost as the kernel k-means algorithm, based on Lloyd's heuristic, but usually yields better clustering quality. We provide carefully designed numerical experiments showing the superiority of the proposed method compared to kernel k-means, spectral clustering, standard k-means, and Gaussian mixture models in a variety of settings.
∗
guifranca@gmail.com
†
jovo@jhu.edu
I. INTRODUCTION
Energy statistics [1] is based on a notion of statistical potential energy between probability
distributions, in close analogy to Newton’s gravitational potential in classical mechanics. It
provides a model-free hypothesis test for equality of distributions which is achieved under
minimum energy. When the probability distributions are different, the statistical potential energy diverges as the sample size increases, while it tends to a nondegenerate limit distribution when the probability distributions are equal. Energy statistics has been applied to several goodness-of-fit hypothesis tests, multi-sample tests of equality of distributions, analysis of variance
[2], nonlinear dependence tests through distance covariance and distance correlation, which
generalizes the Pearson correlation coefficient, and hierarchical clustering [3] by extending
Ward’s method of minimum variance. Moreover, in Euclidean spaces, an application of
energy statistics to clustering was already proposed [4]. We refer the reader to [1], and
references therein, for an overview of energy statistics theory and its applications.
In its original formulation, energy statistics has a compact representation in terms of
expectations of pairwise Euclidean distances, providing straightforward empirical estimates.
The notion of distance covariance was further generalized from Euclidean spaces to metric
spaces of strong negative type [5]. Furthermore, the missing link between energy distance
based tests and kernel based tests has been recently resolved [6], establishing an equivalence
between generalized energy distances and maximum mean discrepancies (MMD), which are distances between embeddings of distributions in reproducing kernel Hilbert spaces (RKHS). This equivalence immediately relates energy statistics to kernel methods often used in machine learning, and forms the basis of our approach.
Clustering has such a long history in machine learning, making it impossible to mention
all important contributions in a short space. Perhaps, the most used method is k-means [7–
9], which is based on Lloyd’s heuristic [7] of assigning a data point to the cluster with closest
center. The only statistical information about each cluster comes from its mean, making it
sensitive to outliers. Nevertheless, k-means works very well when data is linearly separable
in Euclidean space. Gaussian mixture models (GMM) is another very common approach,
providing more flexibility than k-means, however, it still makes strong assumptions about
the distribution of the data.
To account for nonlinearities, kernel methods were introduced [10, 11]. A Mercer kernel [12] is used to implicitly map data points to a RKHS, and clustering can then be performed in
the associated Hilbert space by using its inner product. However, the kernel choice remains
the biggest challenge since there is no principled theory to construct a kernel for a given
dataset, and usually a kernel introduces hyperparameters that need to be carefully chosen.
A well-known kernel based clustering method is kernel k-means, which is precisely k-means
formulated in the feature space [11]. Furthermore, the kernel k-means algorithm [13, 14] is still based on Lloyd's heuristic [7] of grouping points that are closer to a cluster center. We refer
the reader to [15] for a survey of clustering methods.
Although clustering from energy statistics, in Euclidean spaces, was considered in [4], the
precise optimization problem behind this approach remains elusive, as well as the connection with kernel methods. The main theoretical contribution of this paper is to fill this gap.
Since the statistical potential energy is minimum when distributions are equal, the principle
behind clustering is to maximize the statistical energy, enforcing probability distributions
associated to each cluster to be different from one another. We provide a precise mathematical formulation to this statement, leading to a quadratically constrained quadratic
program (QCQP) in the associated RKHS. This immediately establishes the connection between energy statistics based clustering, or energy clustering for short, with kernel methods.
Moreover, our formulation holds for general semimetric spaces of negative type. We also
show that such QCQP is equivalent to kernel k-means optimization problem, however, the
kernel is fixed by energy statistics. The equivalence between kernel k-means, spectral clustering, and graph partitioning problems is well-known [13, 14]. We further demonstrate how
these relations arise from a weighted version of energy statistics.
Our main algorithmic contribution is to use Hartigan’s method [16] to find local solutions
of the above mentioned QCQP, which is NP-hard in general. Hartigan’s method was also
used in [4], but without any connection to kernels. More importantly, the advantages of Hartigan's over Lloyd's method were already demonstrated in some simple settings [17, 18], but this method apparently has not received the attention it deserves. To the best of our
knowledge, Hartigan’s method was not previously employed together with kernel methods.
We provide a fully kernel based Hartigan’s algorithm for clustering, where the kernel is fixed
by energy statistics. We make clear the advantages of this proposal versus Lloyd’s method,
which kernel k-means is based upon and will also be used to solve our QCQP. We show that
both algorithms have the same time complexity, but Hartigan's method in kernel spaces offers several advantages. Furthermore, in the examples considered in this paper, it also provides superior performance compared to spectral clustering, which is more expensive and in fact solves a relaxed version of our QCQP.
Our numerical results provide compelling evidence that Hartigan’s method applied to
energy clustering is more accurate and robust than kernel k-means algorithm. Furthermore,
our experiments illustrate the flexibility of energy clustering, showing that it is able to
perform accurately on data coming from very different distributions, contrary to k-means
and GMM for instance. More specifically, the proposed method performs comparably to k-means and GMM on normally distributed data; however, it is significantly better on data that is
not normally distributed. Its superiority in high dimensions is striking, being more accurate
than k-means and GMM even on Gaussian settings.
II. BACKGROUND ON ENERGY STATISTICS AND RKHS
In this section we introduce the main concepts from energy statistics and its relation to
RKHS which form the basis of our work. For more details we refer the reader to [1] and [6].
Consider random variables in $\mathbb{R}^D$ such that $X, X' \overset{iid}{\sim} P$ and $Y, Y' \overset{iid}{\sim} Q$, where $P$ and $Q$ are cumulative distribution functions with finite first moments. The quantity
\[
\mathcal{E}(P,Q) \equiv 2\,\mathbb{E}\|X-Y\| - \mathbb{E}\|X-X'\| - \mathbb{E}\|Y-Y'\|, \tag{1}
\]
called energy distance [1], is rotationally invariant and nonnegative, $\mathcal{E}(P,Q) \ge 0$, where equality to zero holds if and only if $P = Q$. Above, $\|\cdot\|$ denotes the Euclidean norm in $\mathbb{R}^D$. Energy distance provides a characterization of equality of distributions, and $\mathcal{E}^{1/2}$ is a metric on the space of distributions.
The energy distance can be generalized as, for instance,
\[
\mathcal{E}_\alpha(P,Q) \equiv 2\,\mathbb{E}\|X-Y\|^\alpha - \mathbb{E}\|X-X'\|^\alpha - \mathbb{E}\|Y-Y'\|^\alpha \tag{2}
\]
where $0 < \alpha \le 2$. This quantity is also nonnegative, $\mathcal{E}_\alpha(P,Q) \ge 0$. Furthermore, for $0 < \alpha < 2$ we have that $\mathcal{E}_\alpha(P,Q) = 0$ if and only if $P = Q$, while for $\alpha = 2$ we have $\mathcal{E}_2(P,Q) = 2\|\mathbb{E}(X) - \mathbb{E}(Y)\|^2$, which shows that equality to zero only requires equality of the means, and thus $\mathcal{E}_2(P,Q) = 0$ does not imply equality of distributions.
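A minimal sketch (ours, with function and variable names of our own choosing) of the empirical counterpart of (2), where each expectation is replaced by the corresponding sample average:

```python
import numpy as np

def energy_distance(x, y, alpha=1.0):
    """Empirical E_alpha(P, Q) from samples x ~ P and y ~ Q (rows are points)."""
    dxy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1) ** alpha
    dxx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1) ** alpha
    dyy = np.linalg.norm(y[:, None, :] - y[None, :, :], axis=-1) ** alpha
    return 2.0 * dxy.mean() - dxx.mean() - dyy.mean()

rng = np.random.default_rng(0)
p1 = rng.normal(0.0, 1.0, size=(500, 2))
p2 = rng.normal(0.0, 1.0, size=(500, 2))   # independent sample, same distribution
q = rng.normal(1.0, 1.0, size=(500, 2))    # shifted mean
print(energy_distance(p1, p2))             # close to 0
print(energy_distance(p1, q))              # clearly positive
```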
The energy distance can be even further generalized. Let $X, Y \in \mathcal{X}$ where $\mathcal{X}$ is an arbitrary space endowed with a semimetric of negative type $\rho: \mathcal{X}\times\mathcal{X} \to \mathbb{R}$, which is required to satisfy
\[
\sum_{i,j=1}^{n} c_i c_j\, \rho(X_i, X_j) \le 0,
\]
where $X_i \in \mathcal{X}$ and $c_i \in \mathbb{R}$ such that $\sum_{i=1}^{n} c_i = 0$. Then, $\mathcal{X}$ is called a space of negative type. We can thus replace $\mathbb{R}^D \to \mathcal{X}$ and $\|X - Y\| \to \rho(X, Y)$ in the definition (1), obtaining the generalized energy distance
\[
\mathcal{E}(P,Q) \equiv 2\,\mathbb{E}\rho(X,Y) - \mathbb{E}\rho(X,X') - \mathbb{E}\rho(Y,Y'). \tag{3}
\]
For spaces of negative type there exists a Hilbert space $\mathcal{H}$ and a map $\varphi: \mathcal{X} \to \mathcal{H}$ such that $\rho(X,Y) = \|\varphi(X) - \varphi(Y)\|^2_{\mathcal{H}}$. This allows us to compute quantities related to probability distributions over $\mathcal{X}$ in the associated Hilbert space $\mathcal{H}$. Even though the semimetric $\rho$ may not satisfy the triangle inequality, $\rho^{1/2}$ does since it can be shown to be a proper metric. Our energy clustering formulation, proposed in the next section, will be based on the generalized energy distance (3).
There is an equivalence between energy distance, commonly used in statistics, and distances between embeddings of distributions in RKHS, commonly used in machine learning. This equivalence was established in [6]. Let us first recall the definition of RKHS. Let $\mathcal{H}$ be a Hilbert space of real-valued functions over $\mathcal{X}$. A function $K: \mathcal{X}\times\mathcal{X} \to \mathbb{R}$ is a reproducing kernel of $\mathcal{H}$ if it satisfies the following two conditions:
1. $h_x \equiv K(\cdot, x) \in \mathcal{H}$ for all $x \in \mathcal{X}$.
2. $\langle h_x, f\rangle_{\mathcal{H}} = f(x)$ for all $x \in \mathcal{X}$ and $f \in \mathcal{H}$.
In other words, for any $x \in \mathcal{X}$ and any function $f \in \mathcal{H}$, there is a unique $h_x \in \mathcal{H}$ that reproduces $f(x)$ through the inner product of $\mathcal{H}$. If such a kernel function $K$ exists, then $\mathcal{H}$ is called a RKHS. The above two properties immediately imply that $K$ is symmetric and positive definite. Indeed, notice that $\langle h_x, h_y\rangle = h_y(x) = K(x,y)$, and by definition $\langle h_x, h_y\rangle^* = \langle h_y, h_x\rangle$, but since the inner product is real we have $\langle h_y, h_x\rangle = \langle h_x, h_y\rangle$, or equivalently $K(y,x) = K(x,y)$. Moreover, for any $w \in \mathcal{H}$ we can write $w = \sum_{i=1}^{n} c_i h_{x_i}$ where $\{h_{x_i}\}_{i=1}^{n}$ is a basis of $\mathcal{H}$. It follows that $\langle w, w\rangle_{\mathcal{H}} = \sum_{i,j=1}^{n} c_i c_j K(x_i, x_j) \ge 0$, showing that the kernel is positive definite. If $G$ is a matrix with elements $G_{ij} = K(x_i, x_j)$ this is equivalent to $G$ being positive semidefinite, i.e. $v^\top G v \ge 0$ for any vector $v \in \mathbb{R}^n$.
The Moore-Aronszajn theorem [19] establishes the converse of the above paragraph. For every symmetric and positive definite function $K: \mathcal{X}\times\mathcal{X} \to \mathbb{R}$, there is an associated RKHS $\mathcal{H}_K$ with reproducing kernel $K$. The map $\varphi: x \mapsto h_x \in \mathcal{H}_K$ is called the canonical feature map. Given a kernel $K$, this theorem enables us to define an embedding of a probability measure $P$ into the RKHS as follows: $P \mapsto h_P \in \mathcal{H}_K$ such that $\int f(x)\, dP(x) = \langle f, h_P\rangle$ for all $f \in \mathcal{H}_K$, or alternatively, $h_P \equiv \int K(\cdot, x)\, dP(x)$. We can now introduce the notion of distance between two probability measures using the inner product of $\mathcal{H}_K$, which is called the maximum mean discrepancy (MMD) and is given by
\[
\gamma_K(P,Q) \equiv \|h_P - h_Q\|_{\mathcal{H}_K}. \tag{4}
\]
This can also be written as [20]
\[
\gamma_K^2(P,Q) = \mathbb{E}K(X,X') + \mathbb{E}K(Y,Y') - 2\,\mathbb{E}K(X,Y) \tag{5}
\]
where $X, X' \overset{iid}{\sim} P$ and $Y, Y' \overset{iid}{\sim} Q$. From the equality between (4) and (5) we also have
\[
\langle h_P, h_Q\rangle_{\mathcal{H}_K} = \mathbb{E}K(X,Y).
\]
Thus, in practice, we can estimate the inner product between embedded distributions by averaging the kernel function over sampled data.
The following important result shows that semimetrics of negative type and symmetric positive definite kernels are closely related [21]. Let $\rho: \mathcal{X}\times\mathcal{X} \to \mathbb{R}$ and $x_0 \in \mathcal{X}$ an arbitrary but fixed point. Define
\[
K(x,y) \equiv \tfrac{1}{2}\big[\rho(x, x_0) + \rho(y, x_0) - \rho(x, y)\big]. \tag{6}
\]
Then, it can be shown that $K$ is positive definite if and only if $\rho$ is a semimetric of negative type. We have a family of kernels, one for each choice of $x_0$. Conversely, if $\rho$ is a semimetric of negative type and $K$ is a kernel in this family, then
\[
\rho(x,y) = K(x,x) + K(y,y) - 2K(x,y) = \|h_x - h_y\|^2_{\mathcal{H}_K} \tag{7}
\]
and the canonical feature map $\varphi: x \mapsto h_x$ is injective [6]. When these conditions are satisfied we say that the kernel $K$ generates the semimetric $\rho$. If two different kernels generate the same $\rho$ they are said to be equivalent kernels.
Now we can state the equivalence between the generalized energy distance (3) and inner products on RKHS, which is one of the main results of [6]. If $\rho$ is a semimetric of negative type and $K$ a kernel that generates $\rho$, then replacing (7) into (3), and using (5), yields
\[
\mathcal{E}(P,Q) = 2\big[\mathbb{E}K(X,X') + \mathbb{E}K(Y,Y') - 2\,\mathbb{E}K(X,Y)\big] = 2\gamma_K^2(P,Q).
\]
Due to (4) we can compute the energy distance $\mathcal{E}(P,Q)$ between two probability distributions using the inner product of $\mathcal{H}_K$.
Finally, let us recall the main formulas from generalized energy statistics for the test statistic of equality of distributions [1]. Assume we have data $X = \{x_1, \dots, x_n\}$ where $x_i \in \mathcal{X}$, and $\mathcal{X}$ is a space of negative type. Consider a disjoint partition $X = \cup_{j=1}^{k} C_j$, with $C_i \cap C_j = \emptyset$. Each expectation in the generalized energy distance (3) can be computed through the function
\[
g(C_i, C_j) \equiv \frac{1}{n_i n_j}\sum_{x\in C_i}\sum_{y\in C_j}\rho(x,y), \tag{8}
\]
where $n_i = |C_i|$ is the number of elements in partition $C_i$. The within energy dispersion is defined by
\[
W \equiv \sum_{j=1}^{k}\frac{n_j}{2}\, g(C_j, C_j), \tag{9}
\]
and the between-sample energy statistic is defined by
\[
S \equiv \sum_{1 \le i < j \le k}\frac{n_i n_j}{2n}\big[2 g(C_i, C_j) - g(C_i, C_i) - g(C_j, C_j)\big], \tag{10}
\]
where $n = \sum_{j=1}^{k} n_j$. Given a set of distributions $\{P_j\}_{j=1}^{k}$, where $x \in C_j$ if and only if $x \sim P_j$, the quantity $S$ provides a test statistic for equality of distributions [1]. When the sample size is large enough, $n \to \infty$, under the null hypothesis $H_0: P_1 = P_2 = \dots = P_k$ we have that $S \to 0$, and under the alternative hypothesis $H_1: P_i \ne P_j$ for at least two $i \ne j$, we have that $S \to \infty$. Note that this test does not make any assumptions about the distributions $P_j$, thus it is said to be non-parametric or distribution-free.
One can make a physical analogy by thinking that points $x \in C_j$ form a massive body whose total mass is characterized by the distribution function $P_j$. The quantity $S$ is thus a potential energy of the form $S(P_1, \dots, P_k)$ which measures how different the distributions of these masses are, and achieves the ground state $S = 0$ when all bodies have the same mass distribution. The potential energy $S$ increases as bodies have different mass distributions.
III. CLUSTERING BASED ON ENERGY STATISTICS
This section contains the main theoretical results of this paper, where we formulate an
optimization problem for clustering based on energy statistics and RKHS introduced in the
previous section.
Due to the previous test statistic for equality of distributions, the obvious criterion for
clustering data is to maximize S which makes each cluster as different as possible from
the other ones. In other words, given a set of points coming from different probability
distributions, the test statistic S should attain a maximum when each point is correctly
classified as belonging to the cluster associated to its probability distribution. The following
straightforward result shows that maximizing S is, however, equivalent to minimizing W
which has a more convenient form.
Lemma 1. Let $X = \{x_1, \dots, x_n\}$ where each data point $x_i$ lives in a space $\mathcal{X}$ endowed with a semimetric $\rho: \mathcal{X}\times\mathcal{X} \to \mathbb{R}$ of negative type. For a fixed integer $k$, the partition $X = \cup_{j=1}^{k} C_j$, where $C_i \cap C_j = \emptyset$ for all $i \ne j$, maximizes the between-sample statistic $S$, defined in equation (10), if and only if it solves
\[
\min_{C_1, \dots, C_k} W(C_1, \dots, C_k), \tag{11}
\]
where the within energy dispersion $W$ is defined by (9).

Proof. From (9) and (10) we have
\[
S + W = \frac{1}{2n}\sum_{\substack{i,j=1 \\ i \ne j}}^{k} n_i n_j\, g(C_i, C_j) + \frac{1}{2n}\sum_{i=1}^{k}\Big(n - \sum_{j \ne i} n_j\Big) n_i\, g(C_i, C_i)
= \frac{1}{2n}\sum_{i,j=1}^{k} n_i n_j\, g(C_i, C_j) = \frac{1}{2n}\sum_{x\in X}\sum_{y\in X}\rho(x,y) = \frac{n}{2}\, g(X, X).
\]
Note that the right hand side of this equation only depends on the pooled data, so it is a constant independent of the choice of partition. Therefore, maximizing $S$ over the choice of partition is equivalent to minimizing $W$.
For a given k, the clustering problem amounts to finding the best partition of the data by
minimizing W . Notice that this is a hard clustering problem as partitions are disjoint. The
optimization problem (11) based on energy statistics was already proposed in [4]. However,
it is important to note that this is equivalent to maximizing S, which is the test statistic for
equality of distributions. In this current form, the relation with kernels and other clustering
methods is obscure. In the following, we show what is the explicit optimization problem
behind (11) in the corresponding RKHS, establishing the connection with kernel methods.
Based on the relation between kernels and semimetrics of negative type, assume that the kernel $K: \mathcal{X}\times\mathcal{X} \to \mathbb{R}$ generates $\rho$. Define the Gram matrix
\[
G \equiv \begin{pmatrix} K(x_1,x_1) & K(x_1,x_2) & \cdots & K(x_1,x_n) \\ K(x_2,x_1) & K(x_2,x_2) & \cdots & K(x_2,x_n) \\ \vdots & \vdots & \ddots & \vdots \\ K(x_n,x_1) & K(x_n,x_2) & \cdots & K(x_n,x_n) \end{pmatrix}. \tag{12}
\]
Let $Z \in \{0,1\}^{n\times k}$ be the label matrix, with only one nonvanishing entry per row, indicating to which cluster (column) each point (row) belongs. This matrix satisfies $Z^\top Z = D$, where the diagonal matrix $D = \operatorname{diag}(n_1, \dots, n_k)$ contains the number of points in each cluster. We also introduce the rescaled matrix $Y \equiv Z D^{-1/2}$. In component form they are given by
\[
Z_{ij} \equiv \begin{cases} 1 & \text{if } x_i \in C_j \\ 0 & \text{otherwise} \end{cases}, \qquad
Y_{ij} \equiv \begin{cases} \tfrac{1}{\sqrt{n_j}} & \text{if } x_i \in C_j \\ 0 & \text{otherwise.} \end{cases} \tag{13}
\]
Throughout the paper, we use the notation $M_{i\bullet}$ to denote the $i$th row of a matrix $M$, and $M_{\bullet j}$ denotes its $j$th column. Our next result shows that the optimization problem (11) is NP-hard since it is a quadratically constrained quadratic program (QCQP) in the RKHS.

Proposition 2. The optimization problem (11) is equivalent to
\[
\max_{Y}\ \operatorname{Tr}\big(Y^\top G\, Y\big) \quad \text{s.t.}\quad Y \ge 0,\ \ Y^\top Y = I,\ \ Y Y^\top e = e, \tag{14}
\]
where $e = (1, 1, \dots, 1)^\top \in \mathbb{R}^n$ is the all-ones vector, and $G$ is the Gram matrix (12).
where e = (1, 1, . . . , 1)> ∈ Rn is the all-ones vector, and G is the Gram matrix (12).
Proof. From (7), (8), and (9) we have
k
k X
X
1X 1 X
1 X
W (C1 , . . . , Ck ) =
K(x, x) −
ρ(x, y) =
K(x, y) .
2 j=1 nj x,y∈C
nj y∈C
j=1 x∈C
j
j
(15)
j
Note that the first term is global so it does not contribute to the optimization problem.
Therefore, minimizing (15) is equivalent to
k
X
1 X
max
K(x, y).
C1 ,...,Ck
n
j=1 j x,y∈C
j
9
(16)
But
X
K(x, y) =
n X
n
X
Zpj Zqj Gpq = (Z > G Z)jj ,
p=1 q=1
x,y∈Cj
−1
where we used the definitions (12) and (13). Notice that n−1
= Djj
, where the diagonal
j
matrix D = diag(n1 , . . . , nk ) contains the number of points in each cluster, thus the objective
P
−1
function in (16) is equal to kj=1 Djj
Z > GZ jj = Tr D−1 Z > GZ . Now we can use the
cyclic property of the trace, and by the definition of the matrix Z in (13), we obtain the
following integer programing problem:
max Tr
Z
ZD−1/2
>
G ZD−1/2
s.t. Zij ∈ {0, 1},
Pk
j=1
Zij = 1,
Pn
i=1
Zij = nj .
(17)
Now we write this in terms of the matrix Y = ZD−1/2 . The objective function immedi
ately becomes Tr Y > G Y . Notice that the above constraints imply that Z T Z = D, which
in turn gives D−1/2 Y T Y D−1/2 = D, or Y > Y = I. Also, every entry of Y is positive by
definition, Y ≥ 0. Now it only remains to show the last constraint in (14), which comes
from the last constraint in (17). In matrix form this reads Z T e = De. Replacing Z = Y D1/2
we have Y > e = D1/2 e. Multiplying this last equation on the left by Y , and noticing that
Y D1/2 e = Ze = e, we finally obtain Y Y > e = e. Therefore, the optimization problem (17)
is equivalent to (14) .
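As an illustration of Proposition 2 (a sketch of our own, not code from [4]), the snippet below builds the Gram matrix (12) for the kernel induced by (6) with $\rho$ the Euclidean distance and $x_0 = 0$, and evaluates the objective $\operatorname{Tr}(Y^\top G Y)$ of (14) for two hard labelings.

```python
import numpy as np

def energy_gram(x, alpha=1.0):
    """Gram matrix (12) for K_alpha(x, y) = (|x|^a + |y|^a - |x - y|^a)/2."""
    nrm = np.linalg.norm(x, axis=1) ** alpha
    dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1) ** alpha
    return 0.5 * (nrm[:, None] + nrm[None, :] - dist)

def objective(G, labels, k):
    n = len(labels)
    Z = np.zeros((n, k))
    Z[np.arange(n), labels] = 1.0            # label matrix (13)
    Y = Z / np.sqrt(Z.sum(axis=0))           # Y = Z D^{-1/2} (assumes no empty cluster)
    return np.trace(Y.T @ G @ Y)             # Tr(Y^T G Y)

rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
G = energy_gram(x)
true_labels = np.r_[np.zeros(50, int), np.ones(50, int)]
rand_labels = rng.integers(0, 2, size=100)
print(objective(G, true_labels, 2), objective(G, rand_labels, 2))
# The correct partition should give the larger trace value.
```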
Based on Proposition 2, to group data $X = \{x_1, \dots, x_n\}$ into $k$ clusters we first compute the Gram matrix $G$ and then solve the optimization problem (14) for $Y \in \mathbb{R}^{n\times k}$. The $i$th row of $Y$ will contain a single nonzero element in some $j$th column, indicating that $x_i \in C_j$. This optimization problem is nonconvex, and also NP-hard, thus a direct approach is computationally prohibitive even for small datasets. However, one can find approximate solutions by relaxing some of the constraints, or by solving a relaxed SDP version of it. For instance, the relaxed problem
\[
\max_{Y}\ \operatorname{Tr}\big(Y^\top G\, Y\big) \quad \text{s.t.}\quad Y^\top Y = I
\]
has a well-known closed form solution $Y^\star = U R$, where the columns of $U \in \mathbb{R}^{n\times k}$ contain the top $k$ eigenvectors of $G$ corresponding to the $k$ largest eigenvalues $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_k$, and $R \in \mathbb{R}^{k\times k}$ is an arbitrary orthogonal matrix. The resulting optimal objective function assumes the value $\max \operatorname{Tr}\big({Y^\star}^\top G\, Y^\star\big) = \sum_{i=1}^{k}\lambda_i$. Spectral clustering is based on the above approach, where one further normalizes the rows of $Y^\star$, then clusters the resulting rows as data points. A procedure on these lines was proposed in the seminal papers [22, 23].
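A sketch of this relaxation (ours; the row normalization and the final rounding by plain k-means are the usual spectral-clustering choices, not prescribed by the text above):

```python
import numpy as np
from sklearn.cluster import KMeans

def relaxed_energy_clustering(G, k, normalize_rows=True):
    """Solve the relaxed trace problem and round the rows of Y* with k-means."""
    eigvals, eigvecs = np.linalg.eigh(G)        # eigenvalues in ascending order
    Y_star = eigvecs[:, -k:]                    # top-k eigenvectors of G
    if normalize_rows:
        norms = np.linalg.norm(Y_star, axis=1, keepdims=True)
        Y_star = Y_star / np.maximum(norms, 1e-12)
    return KMeans(n_clusters=k, n_init=10).fit_predict(Y_star)

# Example, reusing G from the previous sketch:
# labels = relaxed_energy_clustering(G, 2)
```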
Note that the optimization problem (14) based on energy statistics is valid for data living in an arbitrary space of negative type, where a semimetric $\rho$, and thus the kernel $K$, are assumed to be known. The standard energy distance (2) fixes a family of choices in Euclidean spaces given by
\[
\rho_\alpha(x,y) = \|x - y\|^\alpha, \qquad K_\alpha(x,y) = \tfrac{1}{2}\big(\|x\|^\alpha + \|y\|^\alpha - \|x - y\|^\alpha\big),
\]
for $0 < \alpha \le 2$, where we fix $x_0 = 0$ in (6). The same would be valid for data living in a more general semimetric space $(\mathcal{X}, \rho)$ where $\rho$ fixes the kernel. In practice, the clustering quality strongly depends on the choice of a suitable $\rho$. Nevertheless, if prior information is available to make this choice, it can be immediately incorporated in the optimization problem (14).
Relation to Kernel k-Means

One may wonder how energy clustering relates to the well-known kernel k-means problem¹ which is extensively used in machine learning. For a positive semidefinite Gram matrix $G$, as defined in (12), there exists a map $\varphi: \mathcal{X} \to \mathcal{H}_K$ such that $K(x,y) = \langle\varphi(x), \varphi(y)\rangle$. The kernel k-means optimization problem, in feature space, is defined by
\[
\min_{C_1, \dots, C_k}\ J(C_1, \dots, C_k) \equiv \sum_{j=1}^{k}\sum_{x\in C_j}\|\varphi(x) - \varphi(\mu_j)\|^2 \tag{18}
\]
where $\mu_j = \frac{1}{n_j}\sum_{x\in C_j} x$ is the mean of cluster $C_j$ in the ambient space. Notice that the above objective function is strongly tied to the idea of minimizing distances between points and cluster centers, which arises from the k-means objective function based on Lloyd's method [7]. It is known [13, 14] that problem (18) can be cast into a trace maximization in the same form as (14). The next result makes this explicit, showing that (11) and (18) are actually equivalent.

Proposition 3. For a fixed kernel, the clustering optimization problem (11) based on energy statistics is equivalent to the kernel k-means optimization problem (18), and both are equivalent to (14).

¹ When we refer to the kernel k-means problem we mean specifically the optimization problem (18), which should not be confused with the kernel k-means algorithm, which is just one possible recipe to solve (18). The distinction should also be clear from the context.
Proof. Notice that $\|\varphi(x) - \varphi(\mu_j)\|^2 = \langle\varphi(x), \varphi(x)\rangle - 2\langle\varphi(x), \varphi(\mu_j)\rangle + \langle\varphi(\mu_j), \varphi(\mu_j)\rangle$, therefore
\[
J = \sum_{j=1}^{k}\sum_{x\in C_j}\bigg[K(x,x) - \frac{2}{n_j}\sum_{y\in C_j} K(x,y) + \frac{1}{n_j^2}\sum_{y,z\in C_j} K(y,z)\bigg]. \tag{19}
\]
The first term is global so it does not contribute to the optimization problem. Notice that the third term gives $\sum_{x\in C_j}\frac{1}{n_j^2}\sum_{y,z\in C_j} K(y,z) = \frac{1}{n_j}\sum_{y,z\in C_j} K(y,z)$, which has the same form as the second term. Thus, problem (18) is equivalent to
\[
\max_{C_1, \dots, C_k}\ \sum_{j=1}^{k}\frac{1}{n_j}\sum_{x,y\in C_j} K(x,y),
\]
which is exactly the same as (16) from the energy statistics formulation. Therefore, once the kernel $K$ is fixed, the function $W$ given by (9) is the same as $J$ in (18). The remainder of the proof proceeds as already shown in the proof of Proposition 2, leading to the optimization problem (14).
The above result shows that kernel k-means optimization problem is equivalent to the
clustering problem formulated in the energy statistics framework, when operating on the
same kernel. Notice, however, that energy statistics is valid for arbitrary semimetric spaces
of negative type, fixing the kernel function in the associated RKHS, which is guaranteed to
be positive definite. On the other hand, kernel k-means (18) by itself is just an heuristic
approach that does not make any explicit mention to the kernel. Based on Proposition 3
one may view kernel k-means as being derived from the energy statistics framework.
Kernel k-means, spectral clustering, and graph partitioning problems such as ratio association, ratio cut, and normalized cut are all equivalent to a QCQP of the form (14)
[13, 14]. One can thus use kernel k-means algorithm to solve these problems as well. This
correspondence involves a weighted version of problem (14), that will be demonstrated in
the following from the perspective of energy statistics.
IV. CLUSTERING BASED ON WEIGHTED ENERGY STATISTICS
We now generalize energy statistics to incorporate weights associated to each data point. Let $w(x)$ be a weight function associated to point $x \in \mathcal{X}$. Define
\[
g(C_i, C_j) \equiv \frac{1}{s_i s_j}\sum_{x\in C_i}\sum_{y\in C_j} w(x)\, w(y)\, \rho(x,y), \qquad s_i \equiv \sum_{x\in C_i} w(x). \tag{20}
\]
Replace this function in the formulas (9) and (10), with $n_i \to s_i$ and $n \to s$, where $s = \sum_{j=1}^{k} s_j$. With these changes Lemma 1 remains unaltered, so the clustering problem becomes
\[
\min_{C_1, \dots, C_k}\ W(C_1, \dots, C_k) \equiv \sum_{j=1}^{k}\frac{s_j}{2}\, g(C_j, C_j) \tag{21}
\]
where now $g$ is given by (20). Define the following matrices and vector:
\[
Y_{ij} \equiv \begin{cases}\tfrac{1}{\sqrt{s_j}} & \text{if } x_i \in C_j \\ 0 & \text{otherwise}\end{cases}, \qquad
W \equiv \operatorname{diag}(w_1, \dots, w_n), \qquad H \equiv W^{1/2} Y, \qquad \omega \equiv W e, \tag{22}
\]
where $w_i = w(x_i)$ and $e \in \mathbb{R}^n$ is the all-ones vector. The analogue of Proposition 2 is as follows.

Proposition 4. The weighted energy clustering problem (21) is equivalent to
\[
\max_{H}\ \operatorname{Tr}\big(H^\top\big(W^{1/2} G\, W^{1/2}\big) H\big) \quad \text{s.t.}\quad H \ge 0,\ \ H^\top H = I,\ \ H H^\top \omega = \omega, \tag{23}
\]
where $G$ is the Gram matrix (12), $\omega = (w_1, \dots, w_n)^\top$ contains the weights of each point, and $W = \operatorname{diag}(\omega)$.
Proof. Replacing (7) and eliminating the global terms which do not contribute, the optimization problem (21) becomes
\[
\max_{C_1, \dots, C_k}\ \sum_{j=1}^{k}\frac{1}{s_j}\sum_{x\in C_j}\sum_{y\in C_j} w(x)\, w(y)\, K(x,y).
\]
This objective function can be written as
\[
\sum_{j=1}^{k}\frac{1}{s_j}\sum_{p=1}^{n}\sum_{q=1}^{n} w_p w_q Z_{pj} Z_{qj} G_{pq}
= \sum_{j=1}^{k}\sum_{p=1}^{n}\sum_{q=1}^{n}\frac{\sqrt{w_p}\, Z_{pj}}{\sqrt{s_j}}\Big(w_p^{1/2} G_{pq}\, w_q^{1/2}\Big)\frac{\sqrt{w_q}\, Z_{qj}}{\sqrt{s_j}}
= \sum_{j=1}^{k}\Big(H^\top W^{1/2} G\, W^{1/2} H\Big)_{jj}
= \operatorname{Tr}\Big(H^\top W^{1/2} G\, W^{1/2} H\Big).
\]
To obtain the constraints, note that $H_{ij} \ge 0$ by definition, and
\[
\big(H^\top H\big)_{ij} = \sum_{\ell=1}^{n} Y_{\ell i} W_{\ell\ell} Y_{\ell j}
= \frac{1}{\sqrt{s_i}\sqrt{s_j}}\sum_{\ell=1}^{n} w_\ell Z_{\ell i} Z_{\ell j}
= \frac{\delta_{ij}}{s_i}\sum_{\ell=1}^{n} w_\ell Z_{\ell i} = \delta_{ij},
\]
where $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ if $i \ne j$ is the Kronecker delta. Therefore, $H^\top H = I$. This is a constraint on the rows of $H$. To obtain a condition on its columns observe that
\[
\big(H H^\top\big)_{pq} = \sqrt{w_p w_q}\sum_{j=1}^{k}\frac{Z_{pj} Z_{qj}}{s_j}
= \begin{cases}\dfrac{\sqrt{w_p w_q}}{s_i} & \text{if both } x_p, x_q \in C_i \\[4pt] 0 & \text{otherwise.}\end{cases}
\]
Therefore, $\big(H H^\top W^{1/2}\big)_{pq} = \sqrt{w_p}\, w_q\, s_i^{-1}$ if both points $x_p$ and $x_q$ belong to the same cluster, which we denote by $C_i$ for some $i \in \{1, \dots, k\}$, and $\big(H H^\top W^{1/2}\big)_{pq} = 0$ otherwise. Thus, the $p$th row of this matrix is nonzero only on entries corresponding to points that are in the same cluster as $x_p$. If we sum over the columns of this row we obtain $\sqrt{w_p}\, s_i^{-1}\sum_{q=1}^{n} w_q Z_{qi} = \sqrt{w_p}$, or equivalently $H H^\top W^{1/2} e = W^{1/2} e$, which gives the constraint $H H^\top \omega = \omega$.
Connection with Graph Partitioning

The relation between kernel k-means and graph partitioning problems is known [13, 14]. For conciseness, we repeat a similar analysis due to the relation of these problems to energy statistics and RKHS, which provides a different perspective.
Consider a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$ where $\mathcal{V}$ is the set of vertices, $\mathcal{E}$ the set of edges, and $A$ is an affinity matrix of the graph, which measures the similarities between pairs of nodes. Thus, $A_{ij} \ne 0$ if $(i,j) \in \mathcal{E}$, and $A_{ij} = 0$ otherwise. We also associate weights to every vertex, $w_i = w(i)$ for $i \in \mathcal{V}$, and let $s_j = \sum_{i\in C_j} w_i$, where $C_j \subseteq \mathcal{V}$ is one partition of $\mathcal{V}$. Let
\[
\operatorname{links}(C_\ell, C_m) \equiv \sum_{i\in C_\ell,\ j\in C_m} A_{ij}.
\]
We want to partition the set of vertices $\mathcal{V}$ into $k$ disjoint subsets, $\mathcal{V} = \cup_{j=1}^{k} C_j$. The generalized ratio association problem is given by
\[
\max_{C_1, \dots, C_k}\ \sum_{j=1}^{k}\frac{\operatorname{links}(C_j, C_j)}{s_j} \tag{24}
\]
and maximizes the within cluster association. The generalized ratio cut problem
\[
\min_{C_1, \dots, C_k}\ \sum_{j=1}^{k}\frac{\operatorname{links}(C_j, \mathcal{V}\setminus C_j)}{s_j} \tag{25}
\]
minimizes the cut between clusters. These two problems are equivalent, in an analogous way as minimizing (9) is equivalent to maximizing (10), as shown in Lemma 1. Here this is due to the equality $\operatorname{links}(C_j, \mathcal{V}\setminus C_j) = \operatorname{links}(C_j, \mathcal{V}) - \operatorname{links}(C_j, C_j)$. Several graph partitioning methods [22, 24–26] can be seen as a particular case of (24) or (25).
Consider the ratio association problem (24), whose objective function can be written as
\[
\sum_{j=1}^{k}\frac{1}{s_j}\sum_{p\in C_j}\sum_{q\in C_j} A_{pq}
= \sum_{j=1}^{k}\sum_{p=1}^{n}\sum_{q=1}^{n}\frac{Z_{pj}}{\sqrt{s_j}}\, A_{pq}\,\frac{Z_{qj}}{\sqrt{s_j}}
= \operatorname{Tr}\big(Y^\top A\, Y\big),
\]
with $Z$ defined in (13) and $Y$ in (22). Therefore, the ratio association problem can be written in the form (23), i.e.
\[
\max_{H}\ \operatorname{Tr}\big(H^\top W^{-1/2} A\, W^{-1/2} H\big) \quad \text{s.t.}\quad H \ge 0,\ \ H^\top H = I,\ \ H H^\top \omega = \omega.
\]
This is exactly the same problem as weighted energy clustering with $G = W^{-1} A W^{-1}$. Assuming this matrix is positive semidefinite, this generates a semimetric (7) for graphs given by
\[
\rho(i,j) = \frac{A_{ii}}{w_i^2} + \frac{A_{jj}}{w_j^2} - \frac{2 A_{ij}}{w_i w_j} \qquad \text{or} \qquad \rho(i,j) = -\frac{2 A_{ij}}{w_i w_j} \tag{26}
\]
for vertices $i, j \in \mathcal{V}$, where in the second equation we assume the graph has no self-loops, i.e. $A_{ii} = 0$. Using (26) in the energy statistics formulation allows one to make inference on graphs. Above, the weight $w_i = w(i)$ of node $i \in \mathcal{V}$ can be, for instance, its degree $w_i = d(i)$.
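A small sketch (ours) of this construction: given an adjacency matrix, it returns the graph semimetric (26) and the corresponding weighted Gram matrix, using vertex degrees as weights (it assumes a symmetric adjacency matrix with no self-loops and no isolated vertices).

```python
import numpy as np

def graph_energy_inputs(A):
    """Build G = W^{-1} A W^{-1} and rho(i, j) = -2 A_ij / (w_i w_j) from adjacency A."""
    w = A.sum(axis=1)                 # degree weights w_i = d(i), assumed > 0
    G = A / np.outer(w, w)            # G = W^{-1} A W^{-1}
    rho = -2.0 * G                    # eq. (26), second form (no self-loops)
    return G, rho, w
```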
V. TWO-CLASS PROBLEM IN ONE DIMENSION
Before stating a general algorithm to solve the optimization problem (14) we first consider
the simplest possible case which is one-dimensional data and a two-class problem. This will
be useful to test energy clustering on a simple setting.
Fixing $\rho(x,y) = |x - y|$ according to the standard energy distance, we can actually compute the function (8) in $O(n\log n)$ and minimize $W$ directly. This is done by noting that
\[
|x - y| = (x - y)\mathbf{1}_{x \ge y} - (x - y)\mathbf{1}_{x < y} = x\big(\mathbf{1}_{x \ge y} - \mathbf{1}_{x < y}\big) + y\big(\mathbf{1}_{y > x} - \mathbf{1}_{y \le x}\big),
\]
where we have the indicator function defined by $\mathbf{1}_A = 1$ if $A$ is true, and $\mathbf{1}_A = 0$ otherwise. Let $C$ be a partition with $n$ elements. Using the above distance, and the symmetry of the double sum, we have
\[
g(C, C) = \frac{1}{n^2}\sum_{x\in C}\sum_{y\in C} x\big(\mathbf{1}_{x \ge y} + \mathbf{1}_{x > y} - \mathbf{1}_{x \le y} - \mathbf{1}_{x < y}\big).
\]
Algorithm 1 $E^{1D}$-clustering algorithm to find local solutions to the optimization problem (11) for a two-class problem in one dimension.
input: data X
output: label matrix Z
1: sort X obtaining X̃ = [x_1, …, x_n]
2: for j ∈ [1, …, n] do
3:   C̃_{1,j} ← [x_i : i = 1, …, j], and C̃_{2,j} ← [x_i : i = j+1, …, n]
4:   W(j) ← W(C̃_{1,j}, C̃_{2,j}), from (27)
5: end for
6: j* ← arg min_j W(j)
7: Z_{j•} ← (1, 0) if j ≤ j*, and Z_{j•} ← (0, 1) otherwise, for j = 1, …, n
The sum over $y$ can be eliminated since each term in the parenthesis is simply counting the number of elements in $C$ that satisfy the condition of the indicator function. Assuming that we first order the data in $C$, obtaining $\tilde{C} = [x_j \in C : x_1 \le x_2 \le \dots \le x_n]$, we get
\[
g\big(\tilde{C}, \tilde{C}\big) = \frac{2}{n^2}\sum_{\ell=1}^{n}(2\ell - 1 - n)\, x_\ell.
\]
Note that the cost of computing $g(\tilde{C}, \tilde{C})$ is $O(n)$ and the cost of sorting the data is at most $O(n\log n)$. Assuming that each partition is ordered, $X = \cup_{j=1}^{k}\tilde{C}_j$, the within energy dispersion can be written explicitly as
\[
W\big(\tilde{C}_1, \dots, \tilde{C}_k\big) = \sum_{j=1}^{k}\sum_{\ell=1}^{n_j}\frac{2\ell - 1 - n_j}{n_j}\, x_\ell. \tag{27}
\]
For a two-class problem we can use the formula (27) to cluster the data through a simple algorithm as follows. We first order the entire dataset, $X \to \tilde{X}$. Then we compute (27) for each possible split of $\tilde{X}$ and pick the point which gives the minimum value of $W$. This procedure is described in Algorithm 1 and called $E^{1D}$-clustering. Note that this algorithm is deterministic; however, it only works for one-dimensional data with the Euclidean distance. The total complexity of $E^{1D}$-clustering is $O(n\log n + n^2) = O(n^2)$.
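A direct sketch of Algorithm 1 (function names are ours), evaluating formula (27) at every candidate split of the sorted data:

```python
import numpy as np

def w_sorted(x):
    """Within dispersion (27) of one already-sorted 1D cluster."""
    n = len(x)
    l = np.arange(1, n + 1)
    return np.sum((2 * l - 1 - n) / n * x)

def e1d_clustering(x):
    order = np.argsort(x)
    xs = np.asarray(x)[order]
    n = len(xs)
    # W for every split [x_1 .. x_j] | [x_{j+1} .. x_n]
    costs = [w_sorted(xs[:j]) + w_sorted(xs[j:]) for j in range(1, n)]
    j_star = int(np.argmin(costs)) + 1
    labels = np.zeros(n, dtype=int)
    labels[order[j_star:]] = 1        # map back to the original ordering
    return labels

rng = np.random.default_rng(0)
x = np.r_[np.exp(rng.normal(1.5, 0.3, 500)), np.exp(rng.normal(0.0, 1.5, 500))]
labels = e1d_clustering(x)
```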
Assuming the true label matrix $Z$ is available, a direct measure of how different the estimated matrix $\hat{Z}$ is from $Z$, up to label permutations, is given by
\[
\operatorname{accuracy}(\hat{Z}) \equiv \max_{\sigma}\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{k}\hat{Z}_{i\sigma(j)}\, Z_{ij} \tag{28}
\]
where $\sigma$ is a permutation of the $k$ cluster groups. The accuracy always lies in $[0, 1]$, where $1$ corresponds to all points correctly clustered, and $0$ to all points wrongly clustered. For a balanced two-class problem the value $1/2$ corresponds to chance.
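A sketch of (28) (the helper is ours): rather than enumerating all $k!$ permutations, one can maximize the matched counts on the confusion matrix with the Hungarian algorithm, which yields the same value.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(true_labels, pred_labels, k):
    """Accuracy (28), assuming labels take values in {0, ..., k-1}."""
    confusion = np.zeros((k, k))
    for t, p in zip(true_labels, pred_labels):
        confusion[t, p] += 1
    rows, cols = linear_sum_assignment(-confusion)   # maximize matched counts
    return confusion[rows, cols].sum() / len(true_labels)
```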
We now consider two simple experiments where we sample $n$ points from a two-class mixture. We plot the average accuracy (28) versus $n$, with error bars indicating the standard error. The data is clustered using the $E^{1D}$-clustering algorithm, GMM, and k-means. For GMM and k-means we use the implementations from the well-known scikit-learn library in Python [27], where k-means is initialized through the k-means++ procedure [28], and GMM is initialized with the output of k-means. We run both algorithms 5 times with different initializations and pick the answer with the best objective function value. Notice that $E^{1D}$-clustering does not require random initialization so we only run it once. For each $n$ we use 100 Monte Carlo runs. In Fig. 1a we have the results for data sampled from the Gaussian mixture
\[
x \overset{iid}{\sim} \tfrac{1}{2}\mathcal{N}\big(\mu_1, \sigma_1^2\big) + \tfrac{1}{2}\mathcal{N}\big(\mu_2, \sigma_2^2\big), \qquad \mu_1 = 1.5,\ \sigma_1 = 0.3,\ \mu_2 = 0,\ \sigma_2 = 1.5. \tag{29}
\]
In this case the optimal accuracy obtained from the Bayes classification error is $\approx 0.88$, indicated by the dashed line in the plot. The three methods perform closely, with a slight advantage of GMM, as expected since it is a consistent model for the data, and $E^{1D}$-clustering performs slightly better than k-means. In Fig. 1c we show a density estimation from clustering 1000 points from this mixture using the three algorithms. Notice that all of them are able to distinguish the two classes. On the other hand, in Fig. 1b we consider a mixture of lognormal distributions,
\[
x \overset{iid}{\sim} \tfrac{1}{2}\exp\mathcal{N}\big(\mu_1, \sigma_1^2\big) + \tfrac{1}{2}\exp\mathcal{N}\big(\mu_2, \sigma_2^2\big), \qquad \mu_1 = 1.5,\ \sigma_1 = 0.3,\ \mu_2 = 0,\ \sigma_2 = 1.5. \tag{30}
\]
The optimal Bayes accuracy is again $\approx 0.88$. We can now see that $E^{1D}$-clustering is still very accurate, while GMM and k-means basically cluster at chance. Density estimation after clustering 1000 points from this mixture using the three algorithms is shown in Fig. 1d. Note that only $E^{1D}$-clustering was able to distinguish the two classes. k-means and GMM put most of the points in a single cluster, and points on the tail of the second component of (30) in the other cluster. The experiments of Fig. 1 illustrate how energy clustering is more flexible compared to k-means and GMM.
[Figure 1 appears here: panels (a) and (b) plot accuracy versus the number of sampled points for $E^{1D}$-clustering, k-means, and GMM; panels (c) and (d) show the estimated densities for each method against the ground truth. See the caption below.]
FIG. 1. E 1D -clustering versus k-means and GMM. (a,b) We plot the mean accuracy (28) over
100 Monte Carlo trials, versus the number of sampled points. Error bars are standard error. The
dashed line indicates Bayes accuracy (≈ 0.88 in both cases). (a) Clustering results for data normally
distributed as in (29). (b) Data lognormally distributed as in (30). (c) Density estimation of each
component in the mixture (29) after clustering 1000 sampled points using the three algorithms,
compared to the ground truth. (d) The same but for lognormal data (30).
VI. ITERATIVE ALGORITHMS FOR ENERGY CLUSTERING
In this section we introduce an iterative algorithm to find a local maximizer of the optimization problem (14). Due to Proposition 3 we can also find an approximate solution by the well-known kernel k-means algorithm based on Lloyd's heuristic [13, 14], which for convenience will also be restated in the present context.
Consider the optimization problem (16) written as
\[
\max_{\{C_1, \dots, C_k\}}\ Q = \sum_{j=1}^{k}\frac{Q_j}{n_j}, \qquad Q_j \equiv \sum_{x,y\in C_j} K(x,y), \tag{31}
\]
where $Q_j$ represents an internal energy cost of cluster $C_j$, and $Q$ is the total energy cost where each $Q_j$ is weighted by the inverse of the number of points in $C_j$. For a data point $x_i$ we denote its own energy cost with the entire cluster $C_\ell$ by
\[
Q_\ell(x_i) \equiv \sum_{y\in C_\ell} K(x_i, y) = G_{i\bullet}\cdot Z_{\bullet\ell},
\]
where we recall that $G_{i\bullet}$ ($G_{\bullet i}$) denotes the $i$th row (column) of matrix $G$.
Lloyd's Method for Energy Clustering

To optimize the kernel k-means objective function (19) we remove the global term and define the function
\[
J^{(\ell)}(x_i) \equiv \frac{1}{n_\ell^2}\, Q_\ell - \frac{2}{n_\ell}\, Q_\ell(x_i). \tag{32}
\]
We are thus solving
\[
\min_{Z}\ \sum_{i=1}^{n}\sum_{\ell=1}^{k} Z_{i\ell}\, J^{(\ell)}(x_i).
\]
One possible strategy is to assign $x_i$ to cluster $C_{j^\star}$ according to
\[
j^\star = \operatorname*{arg\,min}_{\ell=1,\dots,k} J^{(\ell)}(x_i).
\]
This is done for every data point $x_i$ and repeated until convergence, i.e. until no new assignments are made. The entire procedure is described in Algorithm 2, which we name $E^L$-clustering to emphasize that we are optimizing the within energy function $W$ based on Lloyd's method [7]. It can be shown that this algorithm converges provided $G$ is positive semidefinite.
$E^L$-clustering is precisely the kernel k-means algorithm [13, 14] but written more concisely and with the kernel induced by energy statistics. Indeed, recalling that $K(x,y) = \langle\varphi(x), \varphi(y)\rangle$ where $\varphi: \mathcal{X} \to \mathcal{H}_K$ is the feature map, we have from (32) that
\[
J^{(\ell)}(x_i) = \langle\varphi(\mu_\ell), \varphi(\mu_\ell)\rangle - 2\langle\varphi(x_i), \varphi(\mu_\ell)\rangle = \|\varphi(x_i) - \varphi(\mu_\ell)\|^2 - \|\varphi(x_i)\|^2,
\]
where $\mu_\ell = \frac{1}{n_\ell}\sum_{x\in C_\ell} x$ is the mean of cluster $C_\ell$. Therefore, $\min_\ell J^{(\ell)}(x_i) = \min_\ell \|\varphi(x_i) - \varphi(\mu_\ell)\|^2$, i.e. we are assigning $x_i$ to the cluster with the closest center (in feature space), which is the familiar Lloyd's heuristic approach that kernel k-means is based upon.
Algorithm 2 $E^L$-clustering is Lloyd's method for energy clustering, which is precisely the kernel k-means algorithm with the kernel induced by energy statistics. This procedure finds local solutions to the optimization problem (14).
input: number of clusters k, Gram matrix G, initial label matrix Z ← Z0
output: label matrix Z
1: q ← (Q_1, …, Q_k)^⊤ holds the costs of each cluster, defined in (31)
2: n ← (n_1, …, n_k)^⊤ holds the number of points in each cluster
3: repeat
4:   for i = 1, …, n do
5:     let j be such that x_i ∈ C_j
6:     j* ← arg min_{ℓ=1,…,k} J^{(ℓ)}(x_i), where J^{(ℓ)}(x_i) is defined in (32)
7:     if j* ≠ j then
8:       move x_i to C_{j*}: Z_{ij} ← 0 and Z_{ij*} ← 1
9:       update n: n_j ← n_j − 1 and n_{j*} ← n_{j*} + 1
10:      update q: q_j ← q_j − 2Q_j(x_i) and q_{j*} ← q_{j*} + 2Q_{j*}(x_i)
11:    end if
12:  end for
13: until convergence
To check the complexity of E L-clustering, notice that computing the second term of J^(ℓ)(xi) in (32) requires O(nℓ) operations, and although the first term requires O(nℓ²), it only needs to be computed once, outside the loop through the data points (step 1 of Algorithm 2). Therefore, the time complexity of E L-clustering is O(nk maxℓ nℓ) = O(kn²). For a sparse Gram matrix G having n0 nonzero elements this complexity can be further reduced to O(kn0).
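To make the procedure concrete, the following is a minimal Python sketch of E L-clustering operating directly on a precomputed Gram matrix G. It follows Algorithm 2, but for brevity recomputes the cluster costs at every sweep instead of maintaining q and n incrementally; the function and variable names are ours and it assumes no cluster becomes empty.

import numpy as np

def el_clustering(G, labels, n_iter=100):
    # Lloyd-style energy clustering (kernel k-means) on a Gram matrix G.
    # labels: NumPy integer array with initial assignments in {0, ..., k-1}.
    k = labels.max() + 1
    for _ in range(n_iter):
        Z = np.eye(k)[labels]                      # n x k one-hot label matrix
        counts = Z.sum(axis=0)                     # n_ell (assumed > 0)
        Q = np.einsum('il,ij,jl->l', Z, G, Z)      # Q_ell = sum_{x,y in C_ell} K(x, y)
        Qi = G @ Z                                 # Qi[i, ell] = Q_ell(x_i)
        J = Q / counts**2 - 2.0 * Qi / counts      # Eq. (32), one column per cluster
        new_labels = J.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels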
Hartigan’s Method for Energy Clustering
We now consider Hartigan's method [16] applied to the optimization problem in the form (31), which gives a local solution to the QCQP defined in (14). The method is based on computing the maximum change in the total cost function Q when moving each data point to another cluster. More specifically, suppose point xi is currently assigned to cluster Cj, yielding a total cost function denoted by Q^(j). Moving xi to cluster Cℓ yields another total cost function denoted by Q^(ℓ). We are interested in computing the maximum change ∆Q^{j→ℓ}(xi) ≡ Q^(ℓ) − Q^(j), for ℓ ≠ j. From (31), by explicitly writing the costs related to these two clusters we obtain
$$\Delta Q^{j\to\ell}(x_i) = \frac{Q_\ell^+}{n_\ell+1} + \frac{Q_j^-}{n_j-1} - \frac{Q_j}{n_j} - \frac{Q_\ell}{n_\ell},$$
where $Q_\ell^+$ denotes the cost of the new $\ell$th cluster with the point $x_i$ added to it, and $Q_j^-$ is the cost of the new $j$th cluster with $x_i$ removed from it. Noting that $Q_\ell^+ = Q_\ell + 2Q_\ell(x_i) + G_{ii}$ and $Q_j^- = Q_j - 2Q_j(x_i) + G_{ii}$, we get the formula
$$\Delta Q^{j\to\ell}(x_i) = \frac{1}{n_j-1}\left(\frac{Q_j}{n_j} - 2Q_j(x_i) + G_{ii}\right) - \frac{1}{n_\ell+1}\left(\frac{Q_\ell}{n_\ell} - 2Q_\ell(x_i) - G_{ii}\right). \qquad (33)$$
Therefore, if ∆Qj→` (xi ) > 0 we get closer to a maximum of (31) by moving xi to C` , otherwise
we keep xi in Cj .
We thus propose the following algorithm. We start with an initial configuration for the label matrix Z, then for each point xi we compute the cost of moving it to another cluster Cℓ, i.e. ∆Q^{j→ℓ}(xi) for ℓ = 1, . . . , k with ℓ ≠ j, where j denotes the index of its current partition, xi ∈ Cj. Hence, we choose
$$j^\star = \arg\max_{\ell=1,\dots,k,\ \ell\neq j} \Delta Q^{j\to\ell}(x_i).$$
If ∆Q^{j→j⋆}(xi) > 0 we move xi to cluster Cj⋆, otherwise we keep xi in its original cluster Cj.
This process is repeated until no points are assigned to new clusters. The entire procedure
is explicitly described in Algorithm 3, which we denote E H -clustering to emphasize that it is
based on Hartigan’s method. This method automatically ensures that the objective function
is monotonically increasing at each iteration, and consequently the algorithm converges in
a finite number of steps.
The complexity analysis of E H-clustering is the following. Computing the Gram matrix G requires O(Dn²) operations, where D is the dimension of each data point and n is the data size. However, both algorithms, E L- and E H-clustering, assume that G is given. There are more efficient methods to compute G, especially if it is sparse, but we will not consider this further and just assume that G is given. The computation of each cluster cost Qj has complexity O(nj²), and overall to compute q we have O(n1² + · · · + nk²) = O(k maxj nj²).
Algorithm 3 E H-clustering is Hartigan's method for energy clustering. This algorithm finds local solutions to the optimization problem (14). The steps 6 and 10 are different than E L-clustering described in Algorithm 2.
input: number of clusters k, Gram matrix G, initial label matrix Z ← Z0
output: label matrix Z
1: q ← (Q1, . . . , Qk)⊤ have the energy costs of each cluster, defined in (31)
2: n ← (n1, . . . , nk)⊤ have the number of points in each cluster
3: repeat
4:   for i = 1, . . . , n do
5:     let j be such that xi ∈ Cj
6:     j⋆ ← arg max_{ℓ=1,...,k, ℓ≠j} ∆Q^{j→ℓ}(xi) using (33)
7:     if ∆Q^{j→j⋆}(xi) > 0 then
8:       move xi to Cj⋆: Zij ← 0 and Zij⋆ ← 1
9:       update n: nj ← nj − 1 and nj⋆ ← nj⋆ + 1
10:      update q: qj ← qj − 2Qj(xi) + Gii and qj⋆ ← qj⋆ + 2Qj⋆(xi) + Gii
11:    end if
12:  end for
13: until convergence
These operations only need to be performed a single time. For each point xi we need to compute Qj(xi) once, which is O(nj), and we need to compute Qℓ(xi) for each ℓ ≠ j. The cost of computing Qℓ(xi) is O(nℓ), thus the cost of step 6 in Algorithm 3 is O(k maxℓ nℓ). For the entire dataset this gives a time complexity of O(nk maxℓ nℓ) = O(kn²). Note that this is the same cost as in E L-clustering, or the kernel k-means algorithm. Again, if G is sparse this can be reduced to O(kn0), where n0 is the number of nonzero entries of G.
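For comparison, a minimal sketch of the Hartigan sweep is given below; it evaluates ∆Q^{j→ℓ}(xi) as in (33) for every candidate cluster, greedily moves the point when the change is positive, and updates n and q incrementally as in Algorithm 3. The implementation details and names are ours, and singleton clusters are simply never emptied.

import numpy as np

def eh_clustering(G, labels, max_sweeps=100):
    # Hartigan-style energy clustering on a Gram matrix G (cf. Algorithm 3).
    # labels is a NumPy integer array and is modified in place.
    n = G.shape[0]
    k = labels.max() + 1
    counts = np.bincount(labels, minlength=k).astype(float)
    Z = np.eye(k)[labels]
    q = np.einsum('il,ij,jl->l', Z, G, Z)              # per-cluster costs Q_ell
    for _ in range(max_sweeps):
        moved = False
        for i in range(n):
            j = labels[i]
            if counts[j] <= 1:
                continue                                # never empty a cluster
            Qi = G[i] @ Z                               # Q_ell(x_i) for all ell
            gain_j = (q[j] / counts[j] - 2 * Qi[j] + G[i, i]) / (counts[j] - 1)
            gain_l = (q / counts - 2 * Qi - G[i, i]) / (counts + 1)
            delta = gain_j - gain_l                     # Eq. (33), for every ell
            delta[j] = -np.inf
            l = int(delta.argmax())
            if delta[l] > 0:                            # move x_i from C_j to C_l
                labels[i] = l
                Z[i, j], Z[i, l] = 0.0, 1.0
                counts[j] -= 1; counts[l] += 1
                q[j] -= 2 * Qi[j] - G[i, i]             # Q_j^- = Q_j - 2Q_j(x_i) + G_ii
                q[l] += 2 * Qi[l] + G[i, i]             # Q_l^+ = Q_l + 2Q_l(x_i) + G_ii
                moved = True
        if not moved:
            break
    return labels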
In the following we mention some important known results about Hartigan’s method.
Theorem 5 (Telgarsky-Vattani [17]). Hartigan’s method has the cost function strictly decreasing in each iteration. Moreover, if n > k then
1. the resulting partition has no empty clusters, and
2. the resulting partition has distinct means.
Neither of these two conditions is guaranteed to be satisfied by Lloyd's method, and consequently by the E L-clustering algorithm. The next result indicates that Hartigan's method can potentially escape local optima of Lloyd's method.
Theorem 6 (Telgarsky-Vattani [17]). The set of local optima of Hartigan's method is a (possibly strict) subset of the local optima of Lloyd's method.
The above theorem implies that E L-clustering cannot improve on a local optimum of E H-clustering. On the other hand, E H might improve on a local optimum of E L. Lloyd's method
forms Voronoi partitions, while Hartigan’s method groups data in regions formed by the
intersection of spheres called circlonoi cells. It can be shown that the circlonoi cells are
contained within a smaller volume of a Voronoi cell, and this excess volume grows exponentially with the dimension of X [17, Theorems 2.4 and 3.1]. Points in this excess volume
force Hartigan’s method to iterate, contrary to Lloyd’s method. Therefore, Hartigan’s can
escape local optima of Lloyd’s. Moreover, this improvement should be more prominent as
dimension increases. Also, the improvement grows as the number of clusters k increases.
The empirical results of [17] show that an implementation of Hartigan's method has comparable execution time to an implementation of Lloyd's method, but no explicit complexity was provided. We show that both E L- and E H-clustering have the same time complexity. To the best of our knowledge, Hartigan's method was not previously considered together with kernels, as we propose in the E H-clustering algorithm.
In [18], Hartigan's method was applied to the k-means problem with any Bregman divergence. It was shown that the number of Hartigan's local optima is upper bounded by O(1/k) [18, Proposition 5.1]. In addition, examples were provided where any initial partition corresponds to a local optimum of Lloyd's method, while the number of local optima in Hartigan's method is small and they correspond to true partitions of the data. Empirically, the number of Hartigan's local optima was considerably smaller than the number of Lloyd's local optima.
The above results indicate that Hartigan's method provides several advantages over Lloyd's method, a fact that will also be supported by our numerical experiments in the next section, where E H outperforms E L (kernel k-means) in several settings, especially in high dimensions.
VII. NUMERICAL EXPERIMENTS
The main goal of this section is threefold. First, we want to compare E H -clustering in
Euclidean space to k-means and GMM. Second, we want to compare E H -clustering, based
on Hartigan’s method, to E L -clustering or kernel k-means, based on Lloyd’s method, and
also to spectral clustering, when they all operate on the same kernel. Third, we want to
illustrate the flexibility provided by energy clustering, which is able to cluster accurately in
different settings while keeping the same kernel.
The following experimental setup holds unless specified otherwise. We consider E H-clustering, E L-clustering and spectral clustering with the following semimetrics and corresponding generating kernels:
$$\rho_\alpha(x,y) = \|x-y\|^\alpha, \qquad K_\alpha(x,y) = \tfrac{1}{2}\left(\|x\|^\alpha + \|y\|^\alpha - \|x-y\|^\alpha\right), \qquad (34)$$
$$\tilde\rho_\sigma(x,y) = 2 - 2e^{-\|x-y\|/(2\sigma)}, \qquad \tilde K_\sigma(x,y) = e^{-\|x-y\|/(2\sigma)}, \qquad (35)$$
$$\hat\rho_\sigma(x,y) = 2 - 2e^{-\|x-y\|^2/(2\sigma^2)}, \qquad \hat K_\sigma(x,y) = e^{-\|x-y\|^2/(2\sigma^2)}. \qquad (36)$$
The relation between kernel and semimetric is given by formula (6) where we fix x0 = 0. The
standard ρ1 , from the original energy distance (1), will always be present in the experiments
as a reference, being the implied choice unless explicitly mentioned. For k-means, GMM
and spectral clustering we use the robust implementations of scikit-learn library [27], where
k-means is initialized with k-means++ [28], and GMM with the output of k-means, making
it more robust and preventing it from breaking in high dimensions. Spectral clustering
implementation is based on [22]. We implemented E L -clustering as described in Algorithm 2,
and E H -clustering as described in Algorithm 3. Both will also be initialized with k-means++.
We run the algorithms 5 times with different initializations, picking the result with best
objective function value. We evaluate clustering quality by the accuracy (28) based on the
true labels. For each setting we show the average accuracy over 100 Monte Carlo trials, with
error bars indicating standard error.
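For reference, the Gram matrices for the kernels (34)–(36) can be precomputed directly from pairwise distances; the short sketch below is ours, uses SciPy's cdist, and fixes x0 = 0 as in the text.

import numpy as np
from scipy.spatial.distance import cdist

def energy_kernel(X, alpha=1.0):
    # K_alpha(x, y) = (||x||^a + ||y||^a - ||x - y||^a) / 2, cf. Eq. (34)
    norms = np.linalg.norm(X, axis=1) ** alpha
    D = cdist(X, X) ** alpha
    return 0.5 * (norms[:, None] + norms[None, :] - D)

def exp_kernel(X, sigma=1.0):
    # K~_sigma(x, y) = exp(-||x - y|| / (2 sigma)), cf. Eq. (35)
    return np.exp(-cdist(X, X) / (2.0 * sigma))

def gauss_kernel(X, sigma=1.0):
    # K^_sigma(x, y) = exp(-||x - y||^2 / (2 sigma^2)), cf. Eq. (36)
    return np.exp(-cdist(X, X, 'sqeuclidean') / (2.0 * sigma**2))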
We briefly mention that we compared E H -clustering, as described in Algorithm 3, to
E 1D -clustering, described in Algorithm 1, for several univariate distributions. Both perform
very closely. However, we omit these results since we will analyse more interesting scenarios
in high dimensions.
FIG. 2. Pair plots for the first 5 dimensions. (a) Data normally distributed as in (37). (b) Data normally distributed as in (38). We sample 200 points for both cases. We can see that there is a considerable overlap between the clusters.
From the results of [17], summarized in the end of the previous section, we expect the
improvement of Hartigan’s over Lloyd’s method to be more accentuated in high dimensions.
Thus, we analyze how the algorithms degrade as the number of dimensions increases while keeping the number of points in each cluster fixed. Consider data from the Gaussian mixture
$$x \overset{iid}{\sim} \tfrac{1}{2}\,\mathcal{N}(\mu_1,\Sigma_1) + \tfrac{1}{2}\,\mathcal{N}(\mu_2,\Sigma_2), \qquad \Sigma_1 = \Sigma_2 = I_D, \qquad \mu_1 = \underbrace{(0,\dots,0)^\top}_{\times D}, \qquad \mu_2 = 0.7\,(\underbrace{1,\dots,1}_{\times 10},\underbrace{0,\dots,0}_{\times(D-10)})^\top. \qquad (37)$$
To get some intuition about how separated data points from each class are, we show scatter
plots between the first 5 dimensions in Fig. 2a. Note that the Bayes error is fixed as D
increases, yielding an optimal accuracy of ≈ 0.86. We sample 200 points on each trial. The
results are shown in Fig. 3a. We can see that E H and spectral clustering have practically the same performance, which is higher than that of E L-clustering (kernel k-means). Moreover, E H outperforms k-means and GMM, where the improvement is noticeable especially in high dimensions. Note that in this setting k-means and GMM are consistent models for the data; however, energy clustering degrades much less as dimension increases.
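As a concrete illustration of this setup, the short sketch below samples data from the mixture (37); the function name, random seed and use of NumPy are our own choices.

import numpy as np

def sample_mixture_37(n=200, D=100, rng=None):
    # Draw n points from the two-component Gaussian mixture in Eq. (37),
    # returning data and true labels.
    rng = np.random.default_rng(rng)
    labels = rng.integers(0, 2, size=n)            # equal mixture weights 1/2, 1/2
    mu2 = np.zeros(D)
    mu2[:10] = 0.7                                  # mu_2 = 0.7 * (1,...,1,0,...,0)
    X = rng.standard_normal((n, D))                 # Sigma_1 = Sigma_2 = I_D
    X[labels == 1] += mu2
    return X, labels

X, y = sample_mixture_37(n=200, D=100, rng=0)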
FIG. 3. Comparison of E H-clustering, E L-clustering (kernel k-means), spectral clustering, k-means and GMM in high dimensional Gaussian settings. We plot the mean accuracy versus the number of dimensions, with error bars indicating standard error from 100 Monte Carlo runs. (a) Data normally distributed as in (37), with Bayes accuracy ≈ 0.86, over the range D ∈ [10, 200]. (b) Data normally distributed as in (38), with Bayes accuracy ≈ 0.95, over the range D ∈ [10, 700].
Still for a two-class Gaussian mixture, we now allow the diagonal entries of one of the
covariances to have different values by choosing
$$x \overset{iid}{\sim} \tfrac{1}{2}\,\mathcal{N}(\mu_1,\Sigma_1) + \tfrac{1}{2}\,\mathcal{N}(\mu_2,\Sigma_2), \qquad \mu_1 = \underbrace{(0,\dots,0)^\top}_{\times D}, \qquad \mu_2 = (\underbrace{1,\dots,1}_{\times 10},\underbrace{0,\dots,0}_{\times(D-10)})^\top,$$
$$\Sigma_1 = I_D, \qquad \Sigma_2 = \begin{pmatrix} \tilde\Sigma_{10} & 0 \\ 0 & I_{D-10} \end{pmatrix}, \qquad \tilde\Sigma_{10} = \mathrm{diag}(1.367,\,3.175,\,3.247,\,4.403,\,1.249,\,1.969,\,4.035,\,4.237,\,2.813,\,3.637). \qquad (38)$$
We simply chose a fixed set of 10 numbers uniformly at random on the interval [1, 5] for the diagonal of Σ̃10, and any other choice would give analogous results. We show pair plots of this data in Fig. 2b. We sample a total of 200 points from (38) on each trial. The Bayes error is kept fixed when increasing D, yielding an optimal accuracy of ≈ 0.95. In Fig. 3b we see that GMM performs better in low dimensions, but it quickly degenerates as D increases. The same is true for k-means and E L-clustering. However, E H and spectral clustering remain much more stable in high dimensions. Notice that a naive implementation of GMM should not be able to estimate the covariances when D ≳ 100; however, the scikit-learn library uses the k-means output as initialization, therefore the output of GMM in this implementation is at least as good as k-means and the algorithm is more robust in high dimensions.
Consider sampling data from the following Gaussian mixture in ℝ²⁰:
$$x \overset{iid}{\sim} \tfrac{1}{2}\,\mathcal{N}(\mu_1,\Sigma_1) + \tfrac{1}{2}\,\mathcal{N}(\mu_2,\Sigma_2), \qquad \mu_1 = \underbrace{(0,\dots,0)^\top}_{\times 20}, \qquad \mu_2 = \tfrac{1}{2}(\underbrace{1,\dots,1}_{5},\underbrace{0,\dots,0}_{15})^\top, \qquad \Sigma_1 = \tfrac{1}{2} I_{20}, \qquad \Sigma_2 = I_{20}. \qquad (39)$$
The optimal accuracy based on the Bayes classification error is ≈ 0.90. We increase the sample size n ∈ [10, 400] and show the accuracy versus n for the different kernels (34) and (35) within the E H-clustering algorithm, which are compared to k-means and GMM. The results are in Fig. 4a. Note that for small n all methods are superior to GMM, which slowly catches up and tends to the optimal Bayes accuracy, as expected since it is a consistent model for the data. Note also that E H-clustering with kernel K̃1 is as accurate as GMM for a large number of points; however, it is superior for a small number of points. Still for the same setting, in Fig. 4b we show the difference in accuracy provided by E H minus E L and E H minus spectral clustering, when using the kernel K̃1. Note that E H was always superior to kernel k-means and spectral clustering, otherwise there would be points with negative values on the y-axis.
Consider the same experiment but now with a lognormal mixture,
$$x \overset{iid}{\sim} \tfrac{1}{2}\exp\{\mathcal{N}(\mu_1,\Sigma_1)\} + \tfrac{1}{2}\exp\{\mathcal{N}(\mu_2,\Sigma_2)\}, \qquad \mu_1 = \underbrace{(0,\dots,0)^\top}_{\times 20}, \qquad \mu_2 = \tfrac{1}{2}(\underbrace{1,\dots,1}_{5},\underbrace{0,\dots,0}_{15})^\top, \qquad \Sigma_1 = \tfrac{1}{2} I_{20}, \qquad \Sigma_2 = I_{20}. \qquad (40)$$
The results are in Fig. 4c. Energy clustering still performs accurately, with any of the utilized kernels, providing better results than k-means and GMM on this non-normal data. The kernel K̃1 still provides the best results for a small number of points, but its performance is eventually matched by K1/2, indicating that α ≈ 1/2 in the standard energy distance should be more appropriate for skewed distributions. In Fig. 4d we show the difference between E H-clustering and kernel k-means and spectral clustering, with the kernel K̃1. Again, the accuracy provided by E H is higher than that of the other methods, although not much higher than spectral clustering in this example. The two experiments of Fig. 4 illustrate how energy clustering is more flexible, performing well in different settings with the same kernel, contrary to k-means and GMM.
In Fig. 5a–c we have complex two dimensional datasets. The two parallel cigars in (a)
have 200 points each. The concentric circles in (b) and (c) have 400 points for each class.
We apply E H -clustering with the kernels (34), (35) and (36). We also consider the best
FIG. 4. E H-clustering with kernels (34) and (35) versus k-means and GMM. In both settings the Bayes accuracy is ≈ 0.9. We show average accuracy (error bars are standard error) versus number of points for 100 Monte Carlo trials. (a,b) Gaussian mixture (39). (c,d) Lognormal mixture (40). The plots in (b) and (d) consider the difference in accuracy between E H versus E L (kernel k-means) and spectral clustering, with the kernel K̃1.
FIG. 5. (a) Parallel cigars. (b) Two concentric circles with noise. (c) Three concentric circles with noise. (d) MNIST handwritten digits. Clustering results are in Table I and Table II.
TABLE I. Clustering data from Fig. 5a–c.
Fig. 5a
ρ1
E H -clustering
spectral-clustering
k-means
GMM
0.705 ± 0.065
ρ1/2 0.952 ± 0.048
ρ1
Fig. 5b
Fig. 5c
0.521 ± 0.005 ρ1
0.393 ± 0.020
ρ1/2 0.522 ± 0.004 ρ1/2 0.486 ± 0.040
ρe2
0.9987 ± 0.0008 ρe1
0.778 ± 0.075 ρe2
0.666 ± 0.007
ρe2
0.557 ± 0.014
ρb1
0.732 ± 0.002 ρb2
0.364 ± 0.004
0.522 ± 0.004 7
0.368 ± 0.005
7
0.595 ± 0.011 7
0.465 ± 0.030
ρb2
0.956 ± 0.020
7
0.550 ± 0.011
7
0.903 ± 0.064
ρb1
7
1.0 ± 0.0
ρb2
0.676 ± 0.002
TABLE II. Clustering MNIST data from Fig. 5d.

Class Subset                       {0, 1, . . . , 4}   {0, 1, . . . , 6}   {0, 1, . . . , 8}   {0, 1, . . . , 9}
σ parameter                        10.41               10.41               10.37               10.19
E H-clustering        ρ1           0.873 ± 0.025       0.731 ± 0.016       0.687 ± 0.016       0.581 ± 0.011
                      ρ1/2         0.874 ± 0.027       0.722 ± 0.017       0.647 ± 0.017       0.600 ± 0.009
                      ρ̃σ           0.847 ± 0.031       0.695 ± 0.023       0.657 ± 0.014       0.584 ± 0.013
                      ρ̂σ           0.891 ± 0.009       0.759 ± 0.011       0.704 ± 0.011       0.591 ± 0.012
spectral-clustering   ρ̂σ           0.769 ± 0.012       0.678 ± 0.014       0.649 ± 0.018       0.565 ± 0.009
k-means               ✗            0.878 ± 0.010       0.744 ± 0.008       0.695 ± 0.012       0.557 ± 0.012
GMM                   ✗            0.839 ± 0.015       0.694 ± 0.010       0.621 ± 0.009       0.540 ± 0.009
kernel choice for each example for spectral clustering. Moreover, we consider k-means and GMM. We perform 10 Monte Carlo runs for each example. The results are in Table I. For (a) we initialize all algorithms with k-means++, and for (b) and (c) we initialize at random. E H has superior performance in every example, and in particular better than spectral clustering. In (a) the standard kernels from energy statistics in Euclidean space, K1 and K1/2, are able to provide accurate results; however, for the examples in (b) and (c) the kernel choice is more sensitive, where K̂1 and K̂2 provide a significant improvement.
Next, we consider the infamous MNIST handwritten digits as illustrated in Fig. 5d. Each data point is an 8-bit grayscale image forming a 784-dimensional vector corresponding to the digits {0, 1, . . . , 9}. We compute the parameter
$$\sigma^2 = \frac{1}{n^2}\sum_{i,j=1}^{n} \|x_i - x_j\|^2,$$
from a separate training set, to be used in the kernels (35) and (36). We consider subsets of {0, 1, . . . , 9}, sampling 100 points for each class. The results are shown in Table II, where kernels and parameters are indicated. E H-clustering performs slightly better than k-means and GMM; however, the difference is not considerable. Unsupervised clustering on
MNIST without any feature extraction is not trivial. For instance, the same experiment
was performed in [29] where a low-rank transformation is learned then subsequently used
in subspace clustering, providing very accurate results. It would be interesting to explore
analogous methods for learning a better representation of the data and subsequently apply
E H -clustering.
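As a side note, the parameter σ above can be computed without forming the full n × n distance matrix, using the standard expansion of the double sum; the following short sketch is ours.

import numpy as np

def sigma_parameter(X_train):
    # sigma^2 = (1/n^2) sum_ij ||x_i - x_j||^2, for use in the kernels (35) and (36).
    n = X_train.shape[0]
    sq_norms = np.sum(X_train**2, axis=1)
    # sum_ij ||x_i - x_j||^2 = 2*n*sum_i ||x_i||^2 - 2*||sum_i x_i||^2
    total = 2.0 * n * sq_norms.sum() - 2.0 * np.sum(X_train.sum(axis=0)**2)
    return np.sqrt(total / n**2)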
VIII. DISCUSSION
We proposed clustering from the perspective of generalized energy statistics, valid for
arbitrary spaces of negative type. Our mathematical formulation of energy clustering reduces to a QCQP in the associated RKHS, as demonstrated in Proposition 2. We showed
that the optimization problem is equivalent to kernel k-means, once the kernel is fixed; see
Proposition 3. Energy statistics, however, fixes a family of standard kernels in Euclidean
space, and more general kernels on spaces of negative type can also be obtained. We also
considered a weighted version of energy statistics, whose clustering formulation establishes
connections with graph partitioning. We proposed the iterative E H -clustering algorithm
based on Hartigan’s method, which was compared to kernel k-means algorithm based on
Lloyd's heuristic. Both have the same time complexity; however, numerical and theoretical results provide compelling evidence that E H-clustering is more robust, with superior performance, especially in high dimensions. Furthermore, energy clustering, with standard kernels from energy statistics, outperformed k-means and GMM in several settings, illustrating the flexibility of the proposed method, which is model-free. In many settings, the iterative E H-clustering also surpassed spectral clustering, which solves a relaxation of the original QCQP, and in other settings performed closely but never worse. Note that spectral clustering is more expensive than our iterative method, going up to O(n³), and finding eigenvectors of very large matrices is problematic.
FIG. 6. Comparison of energy clustering algorithms to k-means and GMM on unbalanced clusters. The data is normally distributed as in (41), where we vary m ∈ [0, 240], and in each case we do 100 Monte Carlo runs showing the average accuracy with standard error.
A limitation of the proposed methods for energy clustering is that they cannot accurately handle highly unbalanced clusters. As an illustration, consider the following Gaussian mixture:
$$x \overset{iid}{\sim} \frac{n_1}{2N}\,\mathcal{N}(\mu_1,\Sigma_1) + \frac{n_2}{2N}\,\mathcal{N}(\mu_2,\Sigma_2), \qquad \mu_1 = (0,0,0,0)^\top, \qquad \mu_2 = 1.5\times(1,1,0,0)^\top,$$
$$\Sigma_1 = I_4, \qquad \Sigma_2 = \tfrac{1}{2}\begin{pmatrix} I_2 & 0 \\ 0 & I_2 \end{pmatrix}, \qquad n_1 = N - m,\quad n_2 = N + m,\quad N = 300. \qquad (41)$$
We then increase m ∈ [0, 240] making the clusters progressively more unbalanced. We plot
the average accuracy over 100 Monte Carlo runs for each m, with error bars indicating
standard error. The results are shown in Fig. 6. For highly unbalanced clusters we see that
GMM performs better than the other methods, which have basically similar performance.
Based on this experiment, an interesting problem would be to extend the E H-clustering algorithm
to account for highly unbalanced clusters.
Moreover, it would be interesting to formally demonstrate cases where energy clustering is consistent in the large-n limit. A soft version of energy clustering is also an interesting extension. Finally, kernel methods can benefit from sparsity and fixed-rank approximations of the Gram matrix, and there is plenty of room to make the E H-clustering algorithm more scalable.
ACKNOWLEDGMENTS
We would like to thank Carey Priebe for discussions. We would like to acknowledge
the support of the Transformative Research Award (NIH #R01NS092474) and the Defense
Advanced Research Projects Agency's (DARPA) SIMPLEX program through SPAWAR contract N66001-15-C-4041.
[1] G. J. Székely and M. L. Rizzo. Energy Statistics: A Class of Statistics Based on Distances.
Journal of Statistical Planning and Inference, 143:1249–1272, 2013.
[2] M. L. Rizzo and G. J. Székely. DISCO Analysis: A Nonparametric Extension of Analysis of
Variance. The Annals of Applied Statistics, 4(2):1034–1055, 2010.
[3] G. J. Székely and M. L. Rizzo. Hierarchical Clustering via Joint Between-Within Distances:
Extending Ward’s Minimum Variance Method. Journal of Classification, 22(2):151–183, 2005.
[4] S. Li. k-Groups: A Generalization of k-Means by Energy Distance. PhD Thesis, Bowling
Green State University, 2015.
[5] R. Lyons. Distance Covariance in Metric Spaces. The Annals of Probability, 41(5):3284–3305,
2013.
[6] D. Sejdinovic, B. Sriperumbudur, A. Gretton, and K. Fukumizu. Equivalence of Distance-Based and RKHS-Based Statistics in Hypothesis Testing. The Annals of Statistics, 41(5):2263–2291, 2013.
[7] S. P. Lloyd. Least Squares Quantization in PCM. IEEE Transactions on Information Theory,
28(2):129–137, 1982.
[8] J. B. MacQueen. Some Methods for Classification and Analysis of Multivariate Observations.
In Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability,
volume 1, pages 281–297. University of California Press, 1967.
[9] E. Forgy. Cluster Analysis of Multivariate Data: Efficiency versus Interpretability of Classification. Biometrics, 21(3):768–769, 1965.
[10] B. Schölkopf, A. J. Smola, and K. R. Müller. Nonlinear Component Analysis as a Kernel
Eigenvalue Problem. Neural Computation, 10:1299–1319, 1998.
[11] M. Girolami. Kernel Based Clustering in Feature Space. Neural Networks, 13(3):780–784,
2002.
[12] J. Mercer. Functions of Positive and Negative Type and their Connection with the Theory of
Integral Equations. Proceedings of the Royal Society of London, 209:415–446, 1909.
[13] I. S. Dhillon, Y. Guan, and B. Kulis. Kernel K-means: Spectral Clustering and Normalized
Cuts. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining, KDD ’04, pages 551–556, New York, NY, USA, 2004. ACM.
[14] I. S. Dhillon, Y. Guan, and B. Kulis. Weighted Graph Cuts without Eigenvectors: A Multilevel
Approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(11):1944–
1957, 2007.
[15] M. Filippone, F. Camastra, F. Masulli, and S. Rovetta. A Survey of Kernel and Spectral
Methods for Clustering. Pattern Recognition, 41:176–190, 2008.
[16] J. A. Hartigan and M. A. Wong. Algorithm AS 136: A k-Means Clustering Algorithm. Journal
of the Royal Statistical Society. Series C (Applied Statistics), 28(1):100–108, 1979.
[17] M. Telgarsky and A. Vattani. Hartigan’s Method: k-Means Clustering without Voronoi.
In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics
(AISTATS), volume 9, pages 313–319. JMLR, 2010.
[18] N. Slonim, E. Aharoni, and K. Crammer. Hartigan's k-Means versus Lloyd's k-Means — Is it Time for a Change? In Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI), pages 1677–1684. AAAI Press, 2013.
[19] N. Aronszajn. Theory of Reproducing Kernels. Transactions of the American Mathematical
Society, 68(3):337–404, 1950.
[20] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A Kernel Two-Sample
Test. Journal of Machine Learning Research, 13:723–773, 2012.
[21] C. Berg, J. P. R. Christensen, and P. Ressel. Harmonic Analysis on Semigroups: Theory of
Positive Definite and Related Functions. Graduate Text in Mathematics 100. Springer, New
York, 1984.
[22] J. Shi and J. Malik. Normalized Cuts and Image Segmentation. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 22(8):888–905, 2000.
[23] A. Y. Ng, M. I. Jordan, and Y. Weiss. On Spectral Clustering: Analysis and an Algorithm. In
Advances in Neural Information Processing Systems, volume 14, pages 849–856, Cambridge,
MA, 2001. MIT Press.
[24] B. Kernighan and S. Lin. An Efficient Heuristic Procedure for Partitioning Graphs. The Bell
System Technical Journal, 49(2):291–307, 1970.
[25] P. Chan, M. Schlag, and J. Zien. Spectral k-Way Ratio Cut Partitioning. IEEE Transactions
on Computer-Aided Design of Integrated Circuits and Systems, 13:1088–1096, 1994.
[26] S. X. Yu and J. Shi. Multiclass Spectral Clustering. In Proceedings Ninth IEEE International
Conference on Computer Vision, volume 1, pages 313–319, 2003.
[27] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel,
P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher,
M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine
Learning Research, 12:2825–2830, 2011.
[28] D. Arthur and S. Vassilvitskii. k-means++: The Advantages of Careful Seeding. In Proceedings
of the Eighteenth annual ACM-SIAM Symposium on Discrete Algorithms, pages 1027–1035,
Philadelphia, PA, USA, 2007. Society for Industrial and Applied Mathematics.
[29] Q. Qiu and G. Sapiro. Learning Transformations for Clustering and Classification. Journal
of Machine Learning Research, 16:187–225, 2015.
Bayesian Probabilistic Numerical Methods
Jon Cockayne∗
Chris Oates†
Tim Sullivan‡
Mark Girolami§
arXiv:1702.03673v2 [stat.ME] 7 Jul 2017
July 10, 2017
The emergent field of probabilistic numerics has thus far lacked clear statistical principles. This paper establishes Bayesian probabilistic numerical methods as
those which can be cast as solutions to certain inverse problems within the Bayesian
framework. This allows us to establish general conditions under which Bayesian
probabilistic numerical methods are well-defined, encompassing both non-linear and
non-Gaussian models. For general computation, a numerical approximation scheme
is proposed and its asymptotic convergence established. The theoretical development is then extended to pipelines of computation, wherein probabilistic numerical
methods are composed to solve more challenging numerical tasks. The contribution
highlights an important research frontier at the interface of numerical analysis and
uncertainty quantification, with a challenging industrial application presented.
1. Introduction
Numerical computation underpins almost all of modern scientific and industrial research and
development. The impact of a finite computational budget is that problems whose solutions are
high- or infinite-dimensional, such as the solution of differential equations, must be discretised in
order to be solved. The result is an approximation to the object of interest. The declining rate
of processor improvement as physical limits are reached is in contrast to the surge in complexity
of modern inference problems, and as a result the error incurred by discretisation is attracting
increased interest (e.g. Capistrán et al., 2016).
The situation is epitomised in modern climate models, where use of single-precision arithmetic
has been explored to permit finer temporal resolution. However, when computing in single-precision, a detailed time discretisation can increase total error, due to the increased number
of single precision computations, and in practice some form of ad-hoc trade-off is sought (Harvey and Verseghy, 2015). It has been argued that statistical considerations can permit more
principled error control strategies for such models (Hennig et al., 2015).
Numerical methods are designed to mitigate discretisation errors of all forms (Press et al.,
2007). Nonetheless, the introduction of error is unavoidable and it is the role of the numerical
analyst to provide control of this error (Oberkampf and Roy, 2013). The central theoretical
results of numerical analysis have in general not been obtained through statistical considerations.
∗ University of Warwick, j.cockayne@warwick.ac.uk
† Newcastle University and Alan Turing Institute, chris.oates@ncl.ac.uk
‡ Free University of Berlin and Zuse Institute Berlin, sullivan@zib.de
§ Imperial College London and Alan Turing Institute, m.girolami@imperial.ac.uk
The connection of discretisation error to statistics was noted as far back as Henrici (1963) and Hull and Swenson (1966), who argued that discretisation error can be modelled
using a series of independent random perturbations to standard numerical methods. However,
numerical analysts have cast doubt on this approach, since discretisation error can be highly
structured; see Kahan (1996) and Higham (2002, Section 2.8). To address these objections, the
field of probabilistic numerics has emerged with the aim to properly quantify the uncertainty
introduced through discretisation in numerical methods.
The foundations of probabilistic numerics were laid in the 1970s and 1980s, where an important shift in emphasis occurred from the descriptive statistical models of the 1960s to the use
of formal inference modalities that generalise across classes of numerical tasks. In a remarkable
series of papers, Larkin (1969, 1970, 1972); Kuelbs et al. (1972); Larkin (1974, 1979a,b), Mike
Larkin presented now classical results in probabilistic numerics, in particular establishing the
correspondence between Gaussian measures on Hilbert spaces and optimal numerical methods.
Re-discovered and re-emphasised on a number of occasions, the role for statisticians in this new
outlook was clearly captured in Kadane and Wasilkowski (1985):
Statistics can be thought of as a set of tools used in making decisions and inferences
in the face of uncertainty. Algorithms typically operate in such an environment.
Perhaps then, statisticians might join the teams of scholars addressing algorithmic
issues.
The 1980s culminated in development of Bayesian optimisation methods (Mockus, 1989; Törn
and Žilinskas, 1989), as well as the relation of smoothing splines to Bayesian estimation (Kimeldorf and Wahba, 1970b; Diaconis and Freedman, 1983).
The modern notion of a probabilistic numerical method (henceforth PNM) was described in
Hennig et al. (2015); these are algorithms whose output is a distribution over an unknown,
deterministic quantity of interest, such as the numerical value of an integral. Recent research in
this field includes PNMs for numerical linear algebra (Hennig, 2015; Bartels and Hennig, 2016),
numerical solution of ordinary differential equations (ODEs; Schober et al., 2014; Kersting and
Hennig, 2016; Schober et al., 2016; Conrad et al., 2016; Chkrebtii et al., 2016), numerical
solution of partial differential equations (PDEs; Owhadi, 2015; Cockayne et al., 2016; Conrad
et al., 2016) and numerical integration (O’Hagan, 1991; Briol et al., 2016).
Open Problems Despite numerous recent successes and achievements, there is currently no
general statistical foundation for PNMs, due to the infinite-dimensional nature of the problems
being solved. For instance, at present it is not clear under what conditions a PNM is welldefined, except for in the standard conjugate Gaussian framework considered in (Larkin, 1972).
This limits the extent to which domain-specific knowledge, such as boundedness of an integrand
or monotonicity of a solution to a differential equation, can be encoded in PNMs. In contrast,
classical numerical methods often exploit such information to achieve substantial reduction in
discretisation error. For instance, finite element methods for solution of PDEs proceed based
on a mesh that is designed to be more refined in areas of the domain where greater variation of
the solution is anticipated (Strang and Fix, 1973).
Furthermore, although PNMs have been proposed for many standard numerical tasks (see
Section 2.6.1), the lack of common theoretical foundations makes comparison of these methods
difficult. Again taking PDEs as an example, Cockayne et al. (2016) placed a probability distribution on the unknown solution of the PDE, whereas Conrad et al. (2016) placed a probability
distribution on the unknown discretisation error of a numerical method. The uncertainty modelled in each case is fundamentally different, but at present there is no framework in which to
articulate the relationship between the two approaches. Furthermore, though PNMs are often
reported as being “Bayesian” there is no clear definition of what this ought to entail.
A more profound consequence of the lack of common foundation occurs when we seek to
compose multiple PNMs. For example, multi-physics cardiac models involve coupled ODEs and
PDEs which must each be discretised and approximately solved to estimate a clinical quantity of
interest (Niederer et al., 2011). The composition of successive discretisations leads to non-trivial
error propagation and accumulation that could be quantified, in a statistical sense, with PNMs.
However, proper composition of multiple PNMs for solutions of ODEs and PDEs requires that
these PNMs share common statistical foundations that ensure coherence of the overall statistical
output. These foundations remain to be established.
Contributions The main contribution of this paper is to establish rigorous foundations for
PNMs:
The first contribution is to argue for an explicit definition of a “Bayesian” PNM. Our framework generalises the seminal work of Larkin (1972) and builds on the modern and popular
mathematical framework of Stuart (2010). This illuminates subtle distinctions among existing
methods and clarifies the sense in which non-Bayesian methods are approximations to Bayesian
PNMs.
The second contribution is to establish when PNMs are well-defined outside of the conjugate
Gaussian context. For exploration of non-linear, non-Gaussian models, a numerical approximation scheme is developed and shown to asymptotically approach the posterior distribution of
interest. Our aim here is not to develop new or more computationally efficient PNMs, but to
understand when such development can be well-defined.
The third contribution is to discuss pipelines of composed PNMs. This is a critical area of
development for probabilistic numerics; in isolation, the error of a numerical method can often
be studied and understood, but when composed into a pipeline the resulting error structure may
be non-trivial and its analysis becomes more difficult. The real power of probabilistic numerics
lies in its application to pipelines of numerical methods, where the probabilistic formulation
permits analysis of variance (ANOVA) to understand the contribution of each discretisation to
the overall numerical error. This paper introduces conditions under which a composition of
PNMs can be considered to provide meaningful output, so that ANOVA can be justified.
Structure of the Paper In Section 2 we argue for an explicit definition of Bayesian PNM
and establish when such methods are well-defined. Section 3 establishes connections to other
related fields, in particular with relation to evaluating the performance of PNMs. In Section 4 we
develop useful numerical approximations to the output of Bayesian PNMs. Section 5 develops
the theory of composition for multiple PNMs. Finally, in Section 6 we present applications of
the techniques discussed in this paper.
All proofs can be found in either the Appendix or the Electronic Supplement.
2. Probabilistic Numerical Methods
The aim of this section is to provide rigorous statistical foundations for PNMs.
2.1. Notation
For a measurable space (X, ΣX), the shorthand PX will be used to denote the set of all distributions on (X, ΣX). For µ, µ′ ∈ PX we write µ ≪ µ′ when µ is absolutely continuous with respect to µ′. The notation δ(x) will be used to denote a Dirac measure on x ∈ X, so that δ(x) ∈ PX. Let 1[S] denote the indicator function of an event S ∈ ΣX. For a measurable function f : X → ℝ and a distribution µ ∈ PX, we will on occasion use the notation µ(f) = ∫ f(x) µ(dx) and ‖f‖∞ = sup_{x∈X} |f(x)|. The point-wise product of two functions f and g is denoted f · g. For a function or operator T, T# denotes the associated push-forward operator¹ that acts on measures on the domain of T. Let ⊥⊥ denote conditional independence. The subset ℓ^p ⊂ ℝ^∞ is defined to consist of sequences (ui) for which Σ_{i=1}^∞ |ui|^p is convergent. C(0, 1) will be used to denote the set of continuous functions on (0, 1).
2.2. Definition of a PNM
To first build intuition, consider numerical approximation of the Lebesgue integral
$$\int x(t)\,\nu(\mathrm{d}t)$$
for some integrable function x : D → R, with respect to a measure ν on D. Here we may
directly interrogate the integrand x(t) at any t ∈ D, but unless D is finite we cannot evaluate
x at all t ∈ D with a finite computational budget. Nonetheless, there are many algorithms for
approximation of this integral based on information {x(ti )}ni=1 at some collection of locations
{ti }ni=1 .
To see the abstract structure of this problem, assume the state variable x exists in a measurable space (X , ΣX ). Information about x is provided through an information operator
A : X → A whose range is a measurable space (A, ΣA ). Thus, for the Lebesgue integration
problem, the information operator is
$$A(x) = \begin{pmatrix} x(t_1) \\ \vdots \\ x(t_n) \end{pmatrix} = a \in \mathcal{A}. \qquad (2.1)$$
The space X , in this case a space of functions, can be high- or infinite-dimensional, but the space
A of information is assumed to be finite-dimensional in accordance with our finite computational
budget. In this paper we make explicit a quantity of interest (QoI) Q(x), defined by a map
Q : X → Q into a measurable space (Q, ΣQ ). This captures that x itself may not be the object
of interest for the numerical
problem; for the Lebesgue integration illustration, the QoI is not
x itself but Q(x) = ∫ x(t) ν(dt).
The standard approach to such computational problems is to construct an algorithm which,
when applied, produces some approximation q̂(a) of Q(x) based on the information a, whose
theoretical convergence order can be studied. A successful algorithm will often tailor the information operator A to the QoI Q. For example, classical Gaussian cubature specifies sigma
points {t∗i }ni=1 at which the integrand must be evaluated, based on exact integration of certain
polynomial test functions.
The probabilistic numerical approach, instead, begins with the introduction of a random variable X on (X, ΣX). The true state X = x is fixed but unknown; the randomness is an abstract device used to represent epistemic uncertainty about x prior to evaluation of the information operator (Hennig et al., 2015). This is now formalised:
1
Recall that, for measurable T : X → A, the pushforward T# µ of a distribution µ ∈ PX is defined as T# µ(A) =
µ(T −1 (A)) for all A ∈ ΣA .
Definition 2.1 (Belief Distribution). An element µ ∈ PX is a belief distribution 2 for x if it
carries the formal semantics of belief about the true, unknown state variable x.
Thus we may consider µ to be the law of X. The construction of an appropriate belief distribution µ for a specific numerical task is not the focus of this research and has been considered
in detail in previous work; see the Electronic Supplement for an overview of this material.
Rather we consider the problem of how one updates the belief distribution µ in response to the
information A(x) = a obtained about the unknown x. Generic approaches to update belief distributions, which generalise Bayesian inference beyond the unique update demanded in Bayes
theorem, were formalised in Bissiri et al. (2016); de Carvalho et al. (2017).
Definition 2.2 (Probabilistic Numerical Method). Let (X , ΣX ), (A, ΣA ) and (Q, ΣQ ) be measurable spaces and let A : X → A, Q : X → Q and B : PX × A → PQ where A and Q are
measurable functions. The pair M = (A, B) is called a probabilistic numerical method for estimation of a quantity of interest Q. The map A is called an information operator, and the map
B is called a belief update operator.
The output of a PNM is a distribution B(µ, a) ∈ PQ . This holds the formal status of a belief
distribution for the value of Q(x), based on both the initial belief µ about the value of x and
the information a that are input to the PNM.
An objection sometimes raised to this construction is that x itself is not random. We emphasise that this work does not propose that x should be considered as such; the random variable
X is a formal statistical device used to represent epistemic uncertainty (Kadane, 2011; Lindley,
2014). Thus, there is no distinction from traditional statistics, in which x represents a fixed but
unknown parameter and X encodes epistemic uncertainty about this parameter.
Before presenting specific instances of this general framework, we comment on the potential
analogy between A and the likelihood function, and between B and Bayes’ theorem. Whilst
intuitively correct, the mathematical developments in this paper are not well-suited to these
terms; in Section 2.5 we show that Bayes formula is not well-defined, as the posterior distribution
is not absolutely continuous with respect to the prior.
To strengthen intuition we now give specific examples of established PNMs:
Example 2.3 (Probabilistic Integration). Consider the numerical integration problem earlier
discussed. Take D ⊆ Rd , X a separable Banach space of real-valued functions on D, and ΣX
the Borel σ-algebra for X . The space (X , ΣX ) is endowed with a Gaussian belief distribution
µ ∈ PX . Given information A(x) = a, define µa to be the restriction of µ to those functions
which interpolate x at the points {t_i}_{i=1}^n; that µa is again Gaussian follows from linearity of the information operator (see Bogachev, 1998, for details). The QoI Q remains Q(x) = ∫ x(t) ν(dt).
This problem was first considered by Larkin (1972). The belief update operator proposed
therein, and later considered in Diaconis (1988); O’Hagan (1991) and others, was B(µ, a) =
Q# µa . Since Gaussians are closed under linear projection, the PNM output B(µ, a) is a univariate Gaussian whose mean and variance can be expressed in closed-form for certain choices of
Gaussian covariance function and reference measure ν on D. Specifically, if µ has mean function m : D → ℝ and covariance function k : D × D → ℝ, then
$$B(\mu, a) = \mathcal{N}\!\left(z^\top K^{-1}(a - \bar m),\; z_0 - z^\top K^{-1} z\right) \qquad (2.2)$$
² Two remarks are in order: First, we have avoided the use of “prior” as this abstract framework encompasses both Bayesian and non-Bayesian PNMs (to be defined). Second, the use of “belief” differs from the set-valued belief functions in Dempster–Shafer theory, which do not require that µ(E) + µ(E^c) = 1 (Shafer, 1976).
where m̄, z ∈ ℝⁿ are defined as m̄_i = m(t_i), z_i = ∫ k(t, t_i) ν(dt), K ∈ ℝ^{n×n} is defined as K_{i,j} = k(t_i, t_j), and z_0 = ∬ k(t, t′) (ν × ν)(d(t × t′)) ∈ ℝ. This method was extensively studied in Briol et al. (2016), who provided a listing of (ν, k) combinations for which z and z_0 possess a closed form.
An interesting fact is that the mean of B(µ, a) coincides with classical cubature rules for
different choices of µ and A (Diaconis, 1988; Särkkä et al., 2016). In Section 3 we will show
that this is a typical feature of PNMs. The crucial distinction between PNMs and classical
numerical methods is the distributional nature of B(µ, a), which carries the formal semantics
of belief about the QoI. The full distribution B(µ, a) was examined in Briol et al. (2016), who
established contraction to the exact value of the integral under certain smoothness conditions
on the Gaussian covariance function and on the integrand. See also Kanagawa et al. (2016);
Karvonen and Särkkä (2017).
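To make Example 2.3 concrete, the following is a minimal sketch of the posterior (2.2) under a zero-mean Brownian motion prior, k(s, t) = min(s, t) on (0, 1), with ν the Lebesgue measure, for which z and z0 are available in closed form; the choice of prior, integrand and design points is illustrative and not prescribed by the text.

import numpy as np

def bayes_quadrature(f, t):
    # Posterior N(mean, var) for Q(x) = int_0^1 x(t) dt under a zero-mean
    # Brownian-motion prior k(s, t) = min(s, t); cf. Eq. (2.2) with m = 0.
    a = f(t)                                  # information A(x) = (x(t_1), ..., x(t_n))
    K = np.minimum.outer(t, t)                # K_ij = k(t_i, t_j)
    z = t - 0.5 * t**2                        # z_i = int_0^1 min(s, t_i) ds
    z0 = 1.0 / 3.0                            # z0 = int int min(s, t) ds dt
    w = np.linalg.solve(K, a)
    mean = z @ w
    var = z0 - z @ np.linalg.solve(K, z)
    return mean, var

t = np.linspace(0.05, 0.95, 10)               # hypothetical design points
mean, var = bayes_quadrature(np.sin, t)       # belief over int_0^1 sin(t) dt
print(mean, var, 1 - np.cos(1.0))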
Example 2.4 (Probabilistic Meshless Method). As a canonical example of a PDE, take the
following elliptic problem with Dirichlet boundary conditions
$$-\nabla\cdot(\kappa\nabla x) = f \ \text{ in } D, \qquad x = g \ \text{ on } \partial D \qquad (2.3)$$
where we assume D ⊂ Rd and κ : D → Rd×d is a known coefficient. Let X be a separable
Banach space of appropriately differentiable real-valued functions and take ΣX to be the Borel
σ-algebra for X . In contrast to the first illustration, the QoI here is Q(x) = x, as the goal is to
make inferences about the solution of the PDE itself.
Such problems were considered in Cockayne et al. (2016) wherein µ was restricted to be a
Gaussian distribution on X . The information operator was constructed by choosing finite sets
of locations T1 = {t1,1 , . . . , t1,n1 } ⊂ D and T2 = {t2,1 , . . . , t2,n2 } ⊂ ∂D at which the system
defined in Eq. (2.3) was evaluated, so that
$$A(x) = \begin{pmatrix} -\nabla\cdot(\kappa(t_{1,1})\nabla x(t_{1,1})) \\ \vdots \\ -\nabla\cdot(\kappa(t_{1,n_1})\nabla x(t_{1,n_1})) \\ x(t_{2,1}) \\ \vdots \\ x(t_{2,n_2}) \end{pmatrix}, \qquad a = \begin{pmatrix} f(t_{1,1}) \\ \vdots \\ f(t_{1,n_1}) \\ g(t_{2,1}) \\ \vdots \\ g(t_{2,n_2}) \end{pmatrix}.$$
The belief update operator was chosen to be B(µ, a) = µa , where µa is the restriction of µ to
those functions for which A(x) = a is satisfied. In the setting of a linear system of PDEs such
as that in Eq. (2.3), the distribution B(µ, a) is again Gaussian (Bogachev, 1998). Full details
are provided in Cockayne et al. (2016).
As in the previous example, we note that the mean of B(µ, a) coincides with the numerical solution to the PDE provided by a classical method (the symmetric collocation method;
Fasshauer, 1999). The full distribution B(µ, a) provides uncertainty quantification for the unknown exact solution and can again be shown to contract to the exact solution under certain
smoothness conditions (Cockayne et al., 2016). This method was further analysed for a specific
choice of covariance operator in the belief distribution µ, in an impressive contribution from
Owhadi (2017).
2.2.1. Classical Numerical Methods
Standard numerical methods fit into the above framework, as can be seen by taking
$$B(\mu, a) = \delta \circ b(a) \qquad (2.4)$$
independent of the distribution µ, where a function b : A → Q gives the output of some classical
numerical method for solving the problem of interest. Here δ : Q → PQ maps b(a) ∈ Q to a
Dirac measure centred on b(a). Thus, information in a ∈ A is used to construct a point estimate
b(a) ∈ Q for the QoI.
The formal language of probabilities is not used in classical numerical analysis to describe
numerical error. However, in many cases the classical and probabilistic analyses are mathematically equivalent. For instance, there is an equivalence between the standard deviation of
B(µ, a) for probabilistic integration and the worst-case error for numerical cubature rules from
numerical analysis (Novak and Woźniakowski, 2010). The explanation for this phenomenon will
be given in Section 3.
2.3. Bayesian PNMs
Having defined a PNM, we now state the central definition of this paper, that is of a Bayesian
PNM. Define µa to be the conditional distribution of the random variable X, given the event
A(X) = a. For now we assume that this can be defined without ambiguity and reserve a more
technical treatment of conditional probabilities for Section 2.5.
In this work we followed Larkin (1972) and cast the problem of determining x in Eq. (2.1)
as a problem of Bayesian inversion, a framework now popular in applied mathematics and
uncertainty quantification research (Stuart, 2010). However, in a standard Bayesian inverse
problem the observed quantity a is assumed to be corrupted with measurement error, which is
described by a “likelihood”. This leads, under mild assumptions, to general versions of Bayes’
theorem (see Stuart, 2010, Section 2.2).
For PNM, however, the information is not corrupted with measurement error. As a result,
the support of the likelihood is a null set under the prior, making the standard approaches
to such problems, including Bayes’ theorem, ill-defined outside of the conjugate Gaussian case
when unknowns are infinite-dimensional. This necessitates a new definition:
Definition 2.5 (Bayesian Probabilistic Numerical Method). A probabilistic numerical method
M = (A, B) is said to be Bayesian 3 for a quantity of interest Q if, for all µ ∈ PX , the output
B(µ, a) = Q# µa ,
for A# µ-almost-all a ∈ A.
That is, a PNM is Bayesian if the output of the PNM is the push-forward of the conditional
distribution µa through Q. This definition is familiar from the examples in Section 2.2, which
are both examples of Bayesian PNMs.
For Bayesian PNMs we adopt the traditional terminology in which µ is the prior for x and
the output Q# µa the posterior for Q(x). Note that, for fixed A and µ, the Bayesian choice of
belief update operator B (if it exists) is uniquely defined.
It is emphasised that the class of Bayesian PNMs is a subclass of all PNMs; examples of non-Bayesian PNMs are provided in Section 2.6.1. Our analysis is focussed on Bayesian PNMs due to
3
The use of “Bayesian” contrasts with Bissiri et al. (2016), for whom all belief update operators represent
Bayesian learning algorithms to some greater or lesser extent. An alternative term could be “lossless”, since
all the information in a is conditioned upon in µa .
their appealing Bayesian interpretation and ease of generalisation to pipelines of computation in
Section 5. For non-Bayesian PNMs, careful definition and analysis of the belief update operator
is necessary to enable proper interpretation of the uncertainty quantification being provided.
In particular, the analysis of non-Bayesian PNMs may present considerable challenges in the
context of computational pipelines, whereas for Bayesian PNMs this is shown in Section 5 to
be straightforward.
2.4. Model Evidence
A cornerstone of the Bayesian framework is the model evidence, or marginal likelihood (MacKay,
1992). Let A ⊆ Rn be equipped with the Lebesgue reference measure λ, such that A# µ admits
a density pA = dA# µ/dλ. Then the model evidence pA (a), based on the information that
A(x) = a, can be used as the basis for Bayesian model comparison. In particular, two prior
distributions µ, µ̃, can be compared through the Bayes factor
$$\mathrm{BF} := \frac{\tilde p_A(a)}{p_A(a)} = \frac{\mathrm{d}A_\#\tilde\mu}{\mathrm{d}A_\#\mu}(a), \qquad (2.5)$$
where p̃A = dA# µ̃/dλ. Here the second expression is independent of the choice of reference
measure λ and is thus valid for general A. The model evidence has been explored in connection
with the design of Bayesian PNM. For the integration and PDE examples 2.3 and 2.4, the model
evidence has a closed form and was investigated in Briol et al. (2016); Cockayne et al. (2016).
In Section 6 we investigate the model evidence in the context of non-linear ODEs and PDEs
for which it must be approximated.
2.5. The Disintegration Theorem
The purpose of this section is to formalise µa and to determine conditions under which µa exists
and is well-defined. From Definition 2.5, the output of a Bayesian PNM is B(µ, a) = Q# µa . If
µa exists, the pushforward Q# µa exists as Q is assumed to be measurable; thus, in this section,
we focus on the rigorous definition of µa .
Unlike many problems of Bayesian inversion, proceeding by an analogue of Bayes’ theorem
is not possible. Let X a = {x ∈ X : A(x) = a}. Then we observe that, if it is measurable, X a
may be a set of zero measure under µ. Standard techniques for infinite-dimensional Bayesian
inversion rely on constructing a posterior distribution based on its Radon–Nikodým derivative
with respect to the prior (Stuart, 2010). However, when µa is not absolutely continuous with respect to µ, no Radon–Nikodým derivative exists and we must turn to other approaches to establish when a Bayesian PNM is well-defined.
Conditioning on null sets is technical and was formalised in the celebrated construction of
measure-theoretic probability by Kolmogorov (1933). The central challenge is to establish
uniqueness of conditional probabilities. For this work we exploit the disintegration theorem to
ensure our constructions are well-defined. The definition below is due to Dellacherie and Meyer
(1978, p.78), and a statistical introduction to disintegration can be found in Chang and Pollard
(1997).
Definition 2.6 (Disintegration). For µ ∈ PX , a collection {µa }a∈A ⊂ PX is a disintegration of
µ with respect to the (measurable) map A : X → A if:
1 (Concentration:) µa (X \ X a ) = 0 for A# µ-almost all a ∈ A;
and for each measurable f : X → [0, ∞) it holds that
2 (Measurability:) a ↦ µa(f) is measurable;
3 (Conditioning:) µ(f) = ∫ µa(f) A#µ(da).
The concept of disintegration extends the usual concept of conditioning of random variables
to the case where X a is a null set, in a way closely related to regular conditional distributions
(Kolmogorov, 1933). Existence of disintegrations is guaranteed under general weak conditions:
Theorem 2.7 (Disintegration Theorem; Thm. 1 of Chang and Pollard (1997)). Let X be a
metric space, ΣX be the Borel σ-algebra and µ ∈ PX be Radon. Let ΣA be countably generated
and contain all singletons {a} for a ∈ A. Then there exists a disintegration {µa }a∈A of µ with
respect to A. Moreover, if {ν a}a∈A is another such disintegration, then {a ∈ A : µa ≠ ν a} is an A#µ-null set.
The requirement that µ is Radon is weak and is implied when X is a Radon space, which
encompasses, for example, separable complete metric spaces. The requirement that ΣA is
countably generated is also weak and includes the standard case where A = Rn with the Borel
σ-algebra. From Theorem 2.7 it follows that {µa }a∈A exists and is essentially unique for all of
the examples considered in this paper. Thus, under mild conditions, we have established that
Bayesian PNMs are well-defined, in that an essentially unique disintegration {µa }a∈A exists. It
is noted that a variational definition of µa has been posited as an alternative approach, for when
the existence of a disintegration is difficult to establish (p3 of Garcia Trillos and Sanz-Alonso,
2017).
2.6. Prior Construction
The Gaussian distribution is popular as a prior in the PNM literature for its tractability, both in
the fact that finite-dimensional distributions take a closed-form and that an explicit conditioning
formula exists. More general priors, such as Besov priors (Dashti et al., 2012) and Cauchy priors
(Sullivan, 2016) are less easily accessed. In this section we summarise a common construction
for these prior distributions, designed to ensure that a disintegration will exist.
Let {φi }∞
i=0 denote an orthogonal Schauder basis for X , assumed to be a separable Banach
space in this section. Then any x ∈ X can be represented through an expansion
x = x0 +
∞
X
ui φi
(2.6)
i=0
for some fixed element x0 ∈ X and a sequence u ∈ R∞ . Construction of measures µ ∈ PX
is then reduced to construction of almost-surely convergent measures on R∞ and studying the
pushforward of such measures into X . In particular, this will ensure that µ ∈ PX is Radon
(as X is a separable complete metric space), a key requirement for existence of a disintegration
{µa }a∈A .
To this end it is common to split u into a stochastic and deterministic component; let ξ ∈ R^∞ represent an i.i.d. sequence of random variables, and γ ∈ ℓ^p for some p ∈ (1, ∞). Then with u_i = γ_i ξ_i, for the prior distribution to be well-posed we require that almost-surely u ∈ ℓ^1. Different choices of (ξ, γ) give rise to different distributions on X. For instance, ξ_i ∼ Uniform(−1, 1), γ ∈ ℓ^1 is termed a uniform prior and ξ_i ∼ N(0, 1) gives a Gaussian prior, where γ determines the regularity of the covariance operator C (Bogachev, 1998). The choice of ξ_i ∼ Cauchy(0, 1) gives a Cauchy prior in the sense of Sullivan (2016); here we require γ ∈ ℓ^1 ∩ ℓ log ℓ for X a separable Banach space, or γ ∈ ℓ^2 when X is a Hilbert space.
A range of prior specifications will be explored in Section 6, including non-Gaussian prior
distributions for numerical solution of nonlinear ODEs.
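The construction above is straightforward to simulate; the sketch below draws truncated series prior samples under the three choices of ξ just described. The cosine basis, the decay sequence γ_i = (i + 1)^{-2} and the truncation level N = 50 are assumptions made only for this illustration and are not choices used elsewhere in the paper.

```python
import numpy as np

# Sketch of sampling from a truncated series prior x = x0 + sum_i gamma_i * xi_i * phi_i.
# The basis phi_i (a cosine basis on [0, 1]), the decay gamma_i = (i + 1)^(-2) and the
# truncation level N = 50 are assumptions made purely for illustration.
rng = np.random.default_rng(0)
N = 50
gamma = (np.arange(N) + 1.0) ** -2.0

def sample_coefficients(kind):
    if kind == "gaussian":
        xi = rng.standard_normal(N)        # Gaussian prior
    elif kind == "uniform":
        xi = rng.uniform(-1.0, 1.0, N)     # uniform prior
    else:
        xi = rng.standard_cauchy(N)        # Cauchy prior in the sense of Sullivan (2016)
    return gamma * xi                      # u_i = gamma_i * xi_i

def sample_path(t, kind):
    u = sample_coefficients(kind)
    phi = np.array([np.cos(np.pi * i * t) for i in range(N)])  # illustrative basis
    return phi.T @ u                       # x(t) - x0(t)

t = np.linspace(0.0, 1.0, 200)
draws = {kind: sample_path(t, kind) for kind in ("gaussian", "uniform", "cauchy")}
```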
2.6.1. Dichotomy of Existing PNMs
This section concludes with an overview of existing PNMs with respect to our definition of a
Bayesian PNM. This serves to clarify some subtle distinctions in existing literature, as well as
to highlight the generality of our framework. To maintain brevity we have summarised our
findings in Table 1.
3. Decision-Theoretic Treatment
Next we assess the performance of PNMs from a decision-theoretic perspective (Berger, 1985)
and explore connections to average-case analysis of classical numerical methods (Ritter, 2000).
Note that the treatment here is agnostic to whether the PNM in question is Bayesian, and
also encompasses classical numerical methods. Throughout, the existence of a disintegration
{µa }a∈A will be assumed.
3.1. Loss and Risk
Consider a generic loss function L : Q × Q → R where L(q † , q) describes the loss incurred when
the true QoI q † = Q(x) is estimated with q ∈ Q. Integrability of L is assumed.
The belief update operator B returns a distribution over Q which can be cast as a randomised
decision rule for estimation of q † . For randomised decision rules, the risk function r : Q×PQ → R
is defined as

r(q†, ν) = ∫ L(q†, q) ν(dq).

The average risk of the PNM M = (A, B) with respect to µ ∈ P_X is defined as

R(µ, M) = ∫ r(Q(x), B(µ, A(x))) µ(dx).    (3.1)
Here a state x ∼ µ is drawn at random and the risk of the PNM output B(µ, A(x)) is computed.
We follow the convention of terming R(µ, M ) the Bayes risk of the PNM, though the usual
objection that a frequentist expectation enters into the definition of the Bayes risk could be
raised.
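As an aside, the Bayes risk in Eq. (3.1) can be estimated by plain Monte Carlo whenever the prior and the belief update can be sampled; the sketch below does this for a toy conjugate-Gaussian model in which every ingredient (state space, QoI, information operator and belief update) is an assumption made only for illustration.

```python
import numpy as np

# Hedged sketch: Monte Carlo estimate of the average (Bayes) risk of Eq. (3.1) under
# squared-error loss, for a toy model; all operators below are illustrative placeholders.
rng = np.random.default_rng(0)

def sample_prior():                       # x ~ mu, here a standard Gaussian on R^2
    return rng.standard_normal(2)

def Q(x):                                 # toy QoI
    return x[0] + x[1]

def A(x):                                 # toy information operator
    return x[0]

def B_sample(a, size):                    # toy belief update: draws from B(mu, a)
    return a + rng.standard_normal(size)  # exact disintegration here: Q | A = a ~ N(a, 1)

def bayes_risk(n_outer=2000, n_inner=100):
    total = 0.0
    for _ in range(n_outer):
        x = sample_prior()
        q_true, a = Q(x), A(x)
        draws = B_sample(a, n_inner)      # randomised decision rule
        total += np.mean((q_true - draws) ** 2)
    return total / n_outer

print(bayes_risk())                       # approximately 2 for this toy model
```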
Next, we consider a sequence A(n) of information operators indexed such that A(n) (x) is
n-dimensional (i.e. n pieces of information are provided about x).
Definition 3.1 (Contraction). A sequence M (n) = (A(n) , B (n) ) of PNMs is said to contract at
a rate rn under a belief distribution µ if R(µ, M (n) ) = O(rn ).
This definition allows for comparison of classical and probabilistic numerical methods (Kadane
and Wasilkowski, 1983; Diaconis, 1988). In each case an important goal is to determine methods
M (n) that contract as quickly as possible for a given distribution µ that defines the Bayes risk.
This is the approach taken in average-case analysis (ACA; Ritter, 2000) and will be discussed
in Section 3.4. For Examples 2.3 and 2.4 of Bayesian PNMs, Briol et al. (2016) and Cockayne
et al. (2016) established rates of contraction for particular prior distributions µ; we refer the
reader to those papers for details.
Table 1: Comparison of several existing Probabilistic Numerical Methods (PNMs). For each method the table records the quantity of interest Q(x) and the information A(x) used, and distinguishes non- (or approximate) Bayesian PNMs from Bayesian PNMs. The methods compared span integrators, optimisers, linear solvers, ODE solvers and PDE solvers, and include Bayesian quadrature (Larkin, 1974; Diaconis, 1988; O'Hagan, 1991), Bayesian optimisation (Mockus, 1989), the probabilistic bisection algorithm (Horstein, 1963), probabilistic line search (Mahsereci and Hennig, 2015), probabilistic linear solvers (Hennig, 2015; Bartels and Hennig, 2016), filtering methods for IVPs (Schober et al., 2014; Chkrebtii et al., 2016; Kersting and Hennig, 2016; Teymur et al., 2016; Schober et al., 2016), finite difference methods (John and Wu, 2017), stochastic Euler (Krebs, 2016), probabilistic meshless methods (Owhadi, 2015, 2017; Cockayne et al., 2016; Raissi et al., 2016), and the methods of Osborne et al. (2012b,a), Gunter et al. (2014), Kong et al. (2003), Tan (2004), Kong et al. (2007), Oates et al. (2016a), Hennig and Kiefel (2013), Waeber et al. (2013), Skilling (1992), Hull and Swenson (1966), Mosbach and Turner (2009), Raissi et al. (2017) and Conrad et al. (2016).
3.2. Bayes Decision Rules
A (possibly randomised) decision rule is said to be a Bayes rule if it achieves the minimum
Bayes risk among all decision rules. In the context of (not necessarily Bayesian) PNMs, let
M = (A, B) and let

B(A) = { B : R(µ, (A, B)) = inf_{B′} R(µ, (A, B′)) }.
That is, for fixed A, B(A) is the set of all belief update operators that achieve minimum Bayes
risk.
This raises the natural question of which belief update operators yield Bayes rules. Although
the definition of a Bayes rule applies generically to both probabilistic and deterministic numerical methods, it can be shown4 that if B(A) is non-empty, then there exists a B ∈ B(A) which
takes the form of a classical numerical method, as expressed in Eq. (2.4). Thus in general,
Bayesian PNMs do not constitute Bayes rules, as the extra uncertainty inflates the Bayes risk,
so that such methods are not optimal.
Nonetheless, there is a natural connection between Bayesian PNMs and Bayes rules, as exposed in Kadane and Wasilkowski (1983):
Theorem 3.2. Let M = (A, B) be a Bayesian probabilistic numerical method for the QoI Q.
Let (Q, h·, ·iQ ) be an inner-product space and let the loss function L have the form L(q † , q) =
kq † − qk2Q , where k · kQ is the norm induced by the inner product. Then the decision rule that
returns the mean of the distribution B(µ, a) is a Bayes rule for estimation of q † .
This well-known fact from Bayesian decision theory5 is interesting in light of recent research in
constructing PNMs whose mean functions correspond to classical numerical methods (Schober
et al., 2014; Hennig, 2015; Särkkä et al., 2016; Teymur et al., 2016; Schober et al., 2016).
Theorem 3.2 explains the results in Examples 2.3 and 2.4, in which both instances of Bayesian
PNMs were demonstrated to be centred on an established classical method.
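To see why the posterior mean is optimal under this loss, note the standard decomposition: writing q̄ = ∫ q† Q_#µ^a(dq†) for the mean of B(µ, a) = Q_#µ^a, any candidate estimate q̂ satisfies

∫ ‖q† − q̂‖²_Q Q_#µ^a(dq†) = ∫ ‖q† − q̄‖²_Q Q_#µ^a(dq†) + ‖q̄ − q̂‖²_Q,

since the cross term vanishes; the right-hand side is minimised by q̂ = q̄, and averaging over a ∼ A_#µ yields the claim.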
3.3. Optimal Information
The previous section considered selection of the belief update operator B, but not of the information operator A. The choice of A determines the Bayes risk for a PNM, which leads to a
problem of experimental design to minimise that risk.
The theoretical study of optimal information is the focus of the information complexity literature (Traub et al., 1988; Novak and Woźniakowski, 2010), while other fields such as quasi-Monte
Carlo (QMC, Dick and Pillichshammer, 2010) attempt to develop asymptotically optimal information operators for specific numerical tasks, such as the choice of evaluation points for
numerical approximation of integrals in the case of QMC. Here we characterise optimal information for Bayesian PNMs.
Consider the choice of A from a fixed subset Λ of the set of all possible information operators.
To build intuition, for the task of numerical integration, Λ could represent all possible choices of
locations {ti }ni=1 where the integrand is evaluated. For Bayesian PNM, one can ask for optimal
information:
A_µ ∈ arg inf_{A∈Λ} R(µ, M)   s.t.   M = (A, B),  B(µ, a) = Q_#µ^a
4 The proof is included in the Electronic Supplement.
5 This is the fact that the Bayes act is the posterior mean under squared-error loss (Berger, 1985).
where we have made explicit the fact that the optimal information depends on the choice of prior
µ. Next we characterise Aµ , while an explicit example of optimal information for a Bayesian
PNM is detailed in Example 3.4.
3.4. Connection to Average Case Analysis
The decision theoretic framework in Section 3.1 is closely related to average-case analysis (ACA)
of classical numerical methods (Ritter, 2000). In ACA the performance of a classical numerical
method b : A → Q is studied in terms of the Bayes risk R(µ, M ) given in Eq. (3.1), for the PNM
M = (A, B) with belief operator B(µ, a) = δ ◦ b(a) as in Eq. (2.4). ACA is concerned with the
study of optimal information:
A∗µ ∈ arg inf inf R(µ, M ) s.t. M = (A, B), B = δ ◦ b .
A∈Λ
b
In general there is no reason to expect A_µ and A*_µ to coincide, since Bayesian PNMs are not Bayes rules6. Indeed, an explicit example where A_µ ≠ A*_µ is presented in Appendix S3. However, we
can establish sufficient conditions under which optimal information for a Bayesian PNM is the
same as optimal information for ACA:
Theorem 3.3. Let (Q, h·, ·iQ ) be an inner product space and the loss function L have the form
L(q † , q) = kq † − qk2Q where k · kQ is the norm induced by the inner product. Then the optimal
information Aµ for a Bayesian PNM and A∗µ for ACA are identical.
It is emphasised that this result is not a trivial consequence of the correspondence between
Bayes rules and worst case optimal methods, as exposed in Kadane and Wasilkowski (1983). To
the best of our knowledge, information-based complexity research has studied A∗µ but not Aµ .
Theorem 3.3 establishes that, for the squared norm loss, we can extract results on optimal
average case information from the ACA literature and use them to construct optimal Bayesian
PNMs. An example is provided next.
Example 3.4 (Optimal Information for Probabilistic Integration). To illustrate optimal information for Bayesian PNMs, we revisit the first worked example of ACA, due to Sul'din (1959, 1960). Set X = {x ∈ C(0, 1) : x(0) = 0} and take the belief distribution µ to be induced from the Wiener process on X, i.e. a Gaussian process with mean 0 and covariance function k(t, t′) = min(t, t′). Our QoI is Q(x) = ∫_0^1 x(t) dt and the loss function is L(q, q′) = (q − q′)².
Consider standard information A(x) = (x(t1 ), . . . , x(tn )) for n fixed knots 0 ≤ t1 < · · · <
tn ≤ 1. Our aim is to determine knots ti that represent optimal information for a Bayesian
PNM with respect to µ and L.
Motivated by Theorem 3.3 we first solve the optimal information problem for ACA and then derive the associated PNM. It will be sufficient to restrict attention to linear methods b(a) = Σ_{i=1}^n w_i x(t_i) with w_i ∈ R. This allows a closed-form expression for the average error:

R(µ, (A, δ ∘ b)) = 1/3 − 2 Σ_{i=1}^n w_i (t_i − t_i²/2) + Σ_{i,j=1}^n w_i w_j min(t_i, t_j).    (3.2)
Standard calculus can be used to minimise Eq. (3.2) over both the weights {wi }ni=1 and the
locations {ti }ni=1 ; the full calculation can be found in Chapter 2, Section 3.3 of Ritter (2000).
6 The distribution Q_#µ^a will in general not be supported on the set of Bayes acts.
The result is an ACA optimal method

b(A(x)) = (2/(2n + 1)) Σ_{i=1}^n x(t*_i),   t*_i = 2i/(2n + 1),

which is recognised as the trapezium rule with equally spaced knots. The associated contraction rate r_n is n^{−1} (Lee and Wasilkowski, 1986).
From Theorem 3.3 we have that ACA optimal information is also optimal information for the Bayesian PNM. Thus the optimal Bayesian PNM M = (A, B) for the belief distribution µ is uniquely determined:

A(x) = [x(t*_1), . . . , x(t*_n)]^⊤,   B(µ, a) = N( (2/(2n + 1)) Σ_{i=1}^n a_i ,  1/(3(2n + 1)²) ).
Note how the PNM is centred on the ACA optimal method. However the PNM itself is not a
Bayes rule; it in fact carries twice the Bayes risk as the ACA method.
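These closed-form expressions can be checked directly by Gaussian conditioning under the Wiener covariance; the sketch below is such a check and is not intended to represent the computational pipeline used later in the paper.

```python
import numpy as np

# Sketch: posterior for Q(x) = int_0^1 x(t) dt under the Wiener prior, conditioned on
# evaluations at the optimal knots t_i* = 2i/(2n+1); the weights and variance should
# reproduce the trapezium-rule form 2/(2n+1) and 1/(3(2n+1)^2) stated above.
n = 4
t = 2.0 * np.arange(1, n + 1) / (2 * n + 1)    # optimal knots
K = np.minimum.outer(t, t)                     # Cov(x(t_i), x(t_j)) = min(t_i, t_j)
kQ = t - 0.5 * t**2                            # Cov(Q, x(t_i)) = t_i - t_i^2 / 2

w = np.linalg.solve(K, kQ)                     # posterior-mean weights on the data
var = 1.0 / 3.0 - kQ @ w                       # posterior variance of Q

print(w, 2.0 / (2 * n + 1))                    # all weights equal 2/(2n+1)
print(var, 1.0 / (3.0 * (2 * n + 1) ** 2))     # matches 1/(3(2n+1)^2)
```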
This illustration can be generalised. It is known that for µ induced from the Wiener process on ∂^s x, Q a linear functional and φ a loss function that is convex and symmetric, equi-spaced evaluation points are essentially optimal information, the Bayes rule is the natural spline of degree 2s + 1, and the contraction rate r_n is essentially n^{−(s+1)}; see Lee and Wasilkowski (1986) for a complete treatment.
This completes our performance assessment for PNMs; next we turn to computational matters.
4. Numerical Disintegration
In this section we discuss algorithms to access the output from a Bayesian PNM. The approach
considered in this paper is to form an explicit approximation to µa that can be sampled. The
construction of a sampling scheme can exploit sophisticated Monte Carlo methods and allow
probing B(µ, a) at a computational cost that is de-coupled from the potentially substantial cost
of obtaining the information a itself.
The construction of an approximation to µa is non-trivial on a technical level. As shown in
Section 2.5, under weak conditions on the space X and the operator A, the disintegration µa
is well-defined for A# µ-almost all a ∈ A. The approach considered in this work is based on
sampling from an approximate distribution µaδ which converges in an appropriate sense to µa
in the δ ↓ 0 limit. This follows in a similar spirit to Ackerman et al. (2017).
4.1. Sequential Approximation of a Disintegration
Suppose that A is an open subset of R^n and that the distribution A_#µ ∈ P_A admits a continuous and positive density p_A with respect to Lebesgue measure on A. Further endow A with the structure of a Hilbert space, with norm ‖·‖_A.
Let φ : R+ → R+ denote a decreasing function, to be specified, that is continuous at 0, with
φ(0) = 1 and limr→∞ φ(r) = 0. Consider
µ^a_δ(dx) := (1/Z^a_δ) φ( ‖A(x) − a‖_A / δ ) µ(dx)
where the normalisation constant

Z^a_δ := ∫ φ( ‖ã − a‖_A / δ ) p_A(dã)
is non-zero since pA is bounded away from 0 on a neighbourhood of a ∈ A and φ is bounded
away from 0 on a sufficiently small interval [0, γ]. Our aim is to approximate µa with µaδ
for small bandwidth parameter δ. The construction, which can be considered a mathematical
generalisation of approximate Bayesian computation (Del Moral et al., 2012), ensures that
µ^a_δ ≪ µ. The role of φ is to admit states x ∈ X for which A(x) is close to a but not necessarily
equal. It is assumed to be sufficiently regular:
Assumption 4.1. There exists α > 0 such that C^α_φ := ∫_0^∞ r^{α+n−1} φ(r) dr < ∞.
To discuss the convergence of µaδ to µa we must first select a metric on PX . Let F be a
normed space of (measurable) functions f : X → R with norm ‖·‖_F. For measures ν, ν′ ∈ P_X, define

d_F(ν, ν′) = sup_{‖f‖_F ≤ 1} |ν(f) − ν′(f)|.
This formulation encompasses many common probability metrics such as the total variation
distance and Wasserstein distance (Müller, 1997). However, not all spaces of functions F lead to useful theory. In particular, the total variation distance between µ^a and µ^{a′} for a ≠ a′ will be one in general. Furthermore, depending on the choice of F, d_F may be merely a pseudometric7.
Sufficient conditions for weak convergence with respect to F are now established:
Assumption 4.2. The map a ↦ µ^a is almost everywhere α-Hölder continuous in d_F, i.e.

d_F(µ^a, µ^{a′}) ≤ C^α_µ ‖a − a′‖^α_A

for some constant C^α_µ > 0 and for A_#µ-almost all a, a′ ∈ A.
Sufficient conditions for Assumption 4.2 are discussed in Ackerman et al. (2017), but are
somewhat technical.
Theorem 4.3. Let C̄^α_φ := C^α_φ / C^0_φ. Then, for δ > 0 sufficiently small,

d_F(µ^a_δ, µ^a) ≤ C^α_µ (1 + C̄^α_φ) δ^α

for A_#µ-almost all a ∈ A.
This result justifies the approximation of µa by µaδ when the QoI can be well-approximated by
integrals with respect to F. This result is stronger than that of earlier work, such as Pfanzagl
(1979), in that it holds for infinite-dimensional X , though it also relies upon the stronger Hölder
continuity assumption.
The specific form for φ is not fundamental, but can impact upon rate constants. For the choice φ(r) = 1[r < 1] we have C̄^α_φ = n/(α + n), which can be bounded independent of the dimension n of A. On the other hand, for φ(r) = exp(−r²/2) it can be shown that, for α ∈ N,

C̄^α_φ = (α + n − 1)!! / (n − 1)!!    (4.1)
so that the constant C̄φα might not be bounded. In general this necessitates effective Monte
Carlo methods that are able to sample from the regime where δ can be extremely small, in
order to control the overall approximation error.
7 For a pseudometric, d_F(x, y) = 0 ⇒ x = y need not hold.
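The relaxation µ^a_δ can be explored with very simple Monte Carlo machinery; the following sketch uses self-normalised importance sampling for a toy problem (standard Gaussian µ on R², linear A and the Gaussian choice of φ), all of which are assumptions made for illustration rather than the SMC and parallel tempering samplers used for the experiments in this paper.

```python
import numpy as np

# Illustrative sketch: self-normalised importance sampling approximation of the relaxed
# disintegration mu^a_delta for a toy problem in which mu is a standard Gaussian on R^2,
# A(x) = x_1 + x_2 and a = 1.0. Not the samplers used for the experiments in the paper.
rng = np.random.default_rng(0)

def phi(r):
    return np.exp(-0.5 * r**2)              # relaxation kernel phi(r) = exp(-r^2 / 2)

def A(x):
    return x[..., 0] + x[..., 1]            # toy information operator

a, delta, n_samples = 1.0, 0.05, 100_000
x = rng.standard_normal((n_samples, 2))     # draws from the prior mu
w = phi(np.abs(A(x) - a) / delta)           # unnormalised weights phi(||A(x) - a|| / delta)
w /= w.sum()                                # self-normalisation (estimates Z_delta^a)

# Posterior (relaxed disintegration) expectation of a quantity of interest Q(x) = x_1:
q_mean = np.sum(w * x[:, 0])
print(q_mean)                               # approximately a/2 = 0.5 for this toy example
```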
4.2. Computation for Series Priors
The series representation of µ in Eq. (2.6) of Section 2.6 is infinite-dimensional and thus cannot,
in general, be instantiated. To this end, define XN = x0 + span{φ0 , . . . , φN } and define the
associated projection operator PN : X → XN as
P_N( x_0 + Σ_{i=0}^∞ u_i φ_i ) := x_0 + Σ_{i=0}^N u_i φ_i.
A natural approach is to compute with the modified information operator A ◦ PN instead of
A. This has the effect of updating the distribution of the first N + 1 coefficients and leaving
the tail unchanged, to produce an output µaδ,N . Then computation performed in the Bayesian
update step is finite-dimensional, whilst instantiation of the posterior itself remains infinite-dimensional. A “likelihood-informed” choice of basis {φ_i} in such problems was considered in
Cui et al. (2016).
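A minimal sketch of the modified information operator A ∘ P_N, under the assumption that the state is represented directly by its (finite) coefficient sequence, is as follows.

```python
import numpy as np

# Minimal sketch of the projection P_N and the modified information operator A o P_N,
# acting on a state represented by its coefficient sequence u (an assumption made to keep
# the example finite); coefficients with index above N are excluded before A is applied.
def P_N(u, N):
    v = np.zeros_like(u)
    v[: N + 1] = u[: N + 1]       # keep coefficients u_0, ..., u_N; zero the tail
    return v

def modified_information(A, u, N):
    return A(P_N(u, N))           # information computed on the projected state

# e.g. with a linear information operator A(u) = M @ u for some illustrative matrix M:
M = np.arange(12.0).reshape(3, 4)
u = np.array([1.0, 0.5, 0.25, 0.125])
print(modified_information(lambda w: M @ w, u, N=1))
```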
Inspired by this approach, we next considered convergence of the output µaδ,N to µaδ in the
limit N → ∞. In this section it is additionally required that φ be everywhere continuous with
φ > 0. Let ϕ = − log φ, so that ϕ is a continuous bijection of R+ to itself. The following are
also assumed:
Assumption 4.4. For each R > 0, it holds that |ϕ(r) − ϕ(r′)| ≤ C_R |r − r′| for some constant C_R and all r, r′ < R.

Assumption 4.5. ‖A(x) − A ∘ P_N(x)‖_A ≤ exp(m(‖x‖_X)) Ψ(N) for all x ∈ X, where m is measurable and satisfies E_{X∼µ}[exp(2m(‖X‖_X))] < ∞ and Ψ(N) vanishes as N is increased.

Assumption 4.6. sup_{x∈X} ‖A(x)‖_A < ∞.

Assumption 4.7. ‖f‖_∞ ≤ C_F ‖f‖_F for some constant C_F and all f ∈ F.
Assumption 4.4 holds for the case ϕ(r) = r²/2 with constant C_R = R. Assumption 4.5 is standard in the inverse problem literature; for instance it is shown to hold for certain series priors in Theorem 3.4 of Cotter et al. (2010). Assumption 4.6 is, in essence, a compactness assumption, in that it is implied by compactness of the state space X when A is linear. In this sense it is a strong assumption; however it can be enforced in our experiments, where X is unbounded, through a threshold map

Ã(x) := A(x)  if ‖A(x)‖_A ≤ λ_max,   Ã(x) := λ_max A(x) / ‖A(x)‖_A  if ‖A(x)‖_A > λ_max,
where λmax is a large pre-defined constant. Assumption 4.7 places a restriction on the probability
metric dF in which our result is stated.
The following theorem has its proof in the Electronic Supplement:
Theorem 4.8. For some constant Cδ , dependent on δ, it holds that dF (µaδ,N , µaδ ) ≤ Cδ Ψ(N ).
An immediate consequence of Theorems 4.3 and 4.8 is that the total approximation error can
be bounded by applying the triangle inequality:
dF (µa , µaδ,N ) ≤ Cµα (1 + C̄φα )δ α + Cδ Ψ(N ).
In particular, we have convergence of µaδ,N to µa in the δ ↓ 0 limit provided that the number of
basis functions satisfies Cδ Ψ(N ) = o(1).
The approximate posterior µ^a_{δ,N} analysed above can be sampled when µ is Gaussian, since the first N + 1 coefficients can be handled with MCMC and the tail Σ_{i=N+1}^∞ u_i φ_i, being Gaussian, can be sampled directly. However, when µ is non-Gaussian the tail is not available in a form that can be sampled. For the experiments in Section 6, in which both Gaussian and non-Gaussian priors
µ are considered, the series in Eq. (2.6) was truncated at level N + 1, with the resultant prior
denoted µN . The associated posterior was then entirely supported on the finite-dimensional
subspace XN ; this is mathematically equivalent to working with the projected output PN µaδ,N .
Analysis of prior truncation, as opposed to modification of the information operator just reported, is known to be difficult. Indeed, while µN converges to µ weakly, it does not do so in
total variation, and this deficiency generally transfers to the associated posteriors. In general
the impact of prior perturbation is a subtle topic — see e.g. Owhadi et al. (2015) and the
references therein — and we therefore defer theoretical analysis of this approximation to future
work.
4.3. Monte Carlo Methods for Numerical Disintegration
The previous sections established a sequence of well-defined distributions µ^a_δ (or µ^a_{δ,N} for non-Gaussian models) which converge (in a specific weak sense) to the exact disintegration µ^a. From the construction, µ^a_δ ≪ µ, and this is sufficient to allow standard Monte Carlo methods to
be used. The construction of Monte Carlo methods is de-coupled from the core material in the
main text and the main methodological considerations are well-documented (e.g. Girolami and
Calderhead, 2011).
For the experiments reported in subsequent sections two approaches were explored: a Sequential Monte Carlo (SMC) method (Doucet et al., 2001) and a parallel tempering method (Geyer, 1991). These provide transparent sampling schemes whose non-asymptotic approximation error can be theoretically understood. In particular, they provide robust estimators of model
evidence that can be used for Bayesian model comparison. Full details of the Monte-Carlo
methods used for this work, along with associated theoretical analysis for the SMC method, are
contained in Section S4.1 of the Electronic Supplement.
5. Computational Pipelines and PNM
The last theoretical development in this paper concerns composition of several PNMs. Most
analysis of numerical methods focuses on the error incurred by an individual method. However,
real-world computational procedures typically rely on the composition of several numerical
methods. The manner in which accumulated discretisation error affects computational output
may be highly non-trivial (Roy, 2010; Anderson, 2011; Babuška and Söderlind, 2016). An
extreme example occurs when one of the numerical methods in a pipeline is charged with
integration of a chaotic dynamical system (Strogatz, 2014).
In recent work, Chkrebtii et al. (2016), Conrad et al. (2016) and Cockayne et al. (2016) each
used PNMs within a broader statistical procedure to estimate unknown parameters in systems
of ODEs and PDEs. The probabilistic description of discretisation error was incorporated into
the data-likelihood, resulting in posterior distributions for parameters with inflated uncertainty
to properly account for the inferential impact of discretisation error. However, beyond these
limited works, no examination of the composition of PNMs has been performed. In particular,
the question of which PNMs can be composed, and when the output of such a composition
is meaningful, has not been addressed. This is important; for instance, if the output of a
composition of PNMs is to be used for analysis of variance to elucidate the main sources of
discretisation error, then it is important that such output is meaningful.
This section defines a pipeline as an abstract graphical object that may be combined with a
collection of compatible PNMs. It is proven that when compatible Bayesian PNMs are employed
in the pipeline, the distributional output of the pipeline carries a Bayesian interpretation under
an explicit conditional independence condition on the prior µ.
To build intuition, for the simple case where two Bayesian PNMs are composed in series,
our results provide conditions for when, informally, the output B2 (B1 (µ, a1 ), a2 ) corresponds to
a single Bayesian procedure B(µ, (a1 , a2 )). To reduce the notational and technical burden, in
this section we will not provide rigorous measure theoretic details; however we note that those
details broadly follow the same pattern as in Section 2.5.
5.1. Computational Pipelines
To analyse pipelines of PNMs, we consider n such methods M1 , . . . , Mn , where each method
Mi = (Ai , Bi ) is defined on a common8 state space X and targets a QoI Qi ∈ Qi . A pipeline
will be represented as a directed graphical model, wherein the QoIs Qi from parent methods constitute information operators for child methods. It may be that a method will take
quantities from multiple parents as input. To allow for this, we suppose that the information operator Ai : X → Ai can be decomposed into components Ai,j : X → Ai,j such that
Ai = (Ai,1 , . . . , Ai,m(i) ) and Ai = Ai,1 × · · · × Ai,m(i) . Thus, each component Ai,j can be
thought of as the QoI output by one of the parents of the method Mi .
Without loss of generality we designate the nth QoI Qn to be the principal QoI. That is, the
purpose of the computational pipeline is to estimate Qn . The case of multiple principal QoI is
a simple extension not described herein. Nodes with no immediate children are called terminal
nodes, while nodes with no immediate parents are called source nodes. We denote by A the set
of all source nodes.
Definition 5.1 (Pipeline). A pipeline P is a directed acyclic graph defined as follows:
• Nodes are of two kinds: information nodes and method nodes.
• The graph is bipartite, so that each edge connects a method node to an information node or vice-versa; no edge joins two nodes of the same kind.
• There are n method nodes, each with a unique label in {1, . . . , n}.
• The method node labelled i has m(i) parents and one child. Its in-edges are assigned a
unique label in {1, . . . , m(i)}.
• There is a unique terminal node and it is the child of method node n. This represents the
principal QoI Qn .
Example 5.2 (Distributed Integration). Recall the numerical integration problem of Example
3.4 and, as a thought experiment, consider partitioning the domain of integration in order to
distribute computation:
∫_0^1 x(t) dt = ∫_0^{0.5} x(t) dt + ∫_{0.5}^1 x(t) dt    (5.1)

where the left-hand side is labelled (c) and the two terms on the right are labelled (a) and (b), respectively.
8 This is without loss of generality, since X can be taken as the union of all state spaces required by the individual methods.
Figure 1: An intuitive representation of Example 5.2: the information x(t_1), . . . , x(t_{m−1}), x(t_m) and x(t_{m+1}), . . . , x(t_{2m}) feeds the belief updates B_1(µ, ·) and B_2(µ, ·), whose outputs ∫_0^{0.5} x(t) dt and ∫_{0.5}^1 x(t) dt are combined by B_3(µ, ·) to produce ∫_0^1 x(t) dt.
Figure 2: The pipeline P corresponding to Figure 1.
To keep presentation simple we consider an integral over [0, 1] with 2m + 1 equidistant knots
ti = i/2m. Let M1 be a Bayesian PNM for estimating Q1 (x) = (a) and M2 be a Bayesian PNM
for estimating Q2 (x) = (b).
In terms of our notational convention, we divide the information operator into four components A_{i,j}, for i, j ∈ {1, 2}. A_{1,1} and A_{2,2} contain the information unique to M_1 and M_2; specifically, A_{1,1}(x) = [x(t_1), . . . , x(t_{m−1})]^⊤ and A_{2,2}(x) = [x(t_{m+1}), . . . , x(t_{2m})]^⊤.
A1,2 and A2,1 contain the information that is shared between the two methods; that is A1,2 =
A2,1 = {x(tm )}. To complete the specification we need a third PNM for estimation of Q3 (x) =
(c) which we denote M3 and which combines the outputs of M1 and M2 by simply adding them
together. Formally this has information operator A3 (x) = (A3,1 (x), A3,2 (x)) where A3,1 (x) =
(a) and A3,2 (x) = (b). Its belief update operator is given by:
B3 (µ, (a3,1 , a3,2 )) = δ(a3,1 + a3,2 )
An intuitive graphical representation of this set-up is shown in Figure 1. The pipeline P itself,
which is identical to Figure 1 but with additional node and edge labels, is shown in Figure 2.
In general, the method node labelled i is taken to represent the method Mi . The in-edge to
this node labelled j is taken to represent the information provided by the relationship Ai,j (x) =
ai,j . Here ai,j can either be deterministic information provided to the pipeline, or statistical
information derived from the output of another PNM. To make this formal and to “match
the input-output spaces” we next define what it means for the collection of methods Mi to be
compatible with the pipeline P . Informally, this describes the conditions that must be satisfied
for method nodes in a pipeline to be able to connect to each other.
Definition 5.3 (Compatible). The collection (M1 , . . . , Mn ) of PNMs is compatible with the
pipeline P if the following two requirements are satisfied:
(i) (Method nodes which share an information node must have consistent information spaces and information operators.) For a motif in which the in-edge labelled i′ of method node i and the in-edge labelled j′ of method node j emanate from a common information node, we have that A_{i,i′} = A_{j,j′} (as information spaces) and A_{i,i′} = A_{j,j′} (as information operators).

(ii) (The space Q_i for the output of a previous method must be consistent with the information space of the next method.) For a motif in which the child of method node i is the information node feeding the in-edge labelled j′ of method node j, we have that Q_i = A_{j,j′}.
Note that we do not require the converse of (i) at this stage; that is, the same information can
be represented by more than one node in the pipeline. This permits redundancy in the pipeline,
in that information is not recycled. It will transpire that pipelines with such redundancy are
non-Bayesian.
The role of the pipeline P is to specify the order in which information, either deterministic or statistical, is propagated through the collection of PNMs. This is illustrated next:
Example 5.4 (Propagation of Information). For the pipeline in Figure 2, the propagation of
information proceeds as follows:
1. The source nodes, representing A(x) = {A1,1 (x), A1,2 (x) = A2,1 (x), A2,2 (x)} are evaluated
as {a1,1 , a1,2 = a2,1 , a2,2 }. This represents all the information on x that is provided.
2. The distributions
µ(1) := B1 (µ, (a1,1 , a1,2 ))
µ(2) := B2 (µ, (a2,1 , a2,2 ))
are computed.
3. The push-forward distribution
µ(3) := (B3 )# (µ, µ(1) × µ(2) )
is computed.
Here µ(1) × µ(2) is defined on the Cartesian product ΣA3,1 × ΣA3,2 with independent components
µ(1) and µ(2) . The notation (B3 )# refers to the push-forward of the function B3 (µ, ·) over its
second argument. The distribution µ(3) is the output of the pipeline and is a distribution over
the principal QoI Q3 (x).
The procedure in Example 5.4 can be formalised, but to keep the presentation and notation
succinct, we leave this implicit:
Figure 3: Dependence graph G(P) corresponding to the pipeline P in Figure 2. The nodes are indexed with a topological ordering (shown).
Definition 5.5 (Computation). For a collection (M1 , . . . , Mn ) of PNMs that are compatible
with a pipeline P , the computation P (M1 , . . . , Mn ) is defined as the PNM with information
operator A and belief update operator B that takes µ and A(x) = a as input and returns the
distribution µ(n) as its output B(µ, a), obtained through the procedure outlined in Example
5.4.
That is, the computation P (M1 , . . . , Mn ) is a PNM for the principal QoI Qn . Note that this definition includes a classical numerical work-flow just as a PNM encompasses a standard numerical
method.
5.2. Bayesian Computational Pipelines
Noting that P (M1 , . . . , Mn ) is itself a PNM, there is a natural definition for when such a
computation can be called Bayesian:
Definition 5.6 (Bayesian Computation). Denote by (A, B) the information and belief operators
associated with the computation P (M1 , . . . , Mn ) and let {µa }a∈A be a disintegration of µ with
respect to the information operator A. The computation P (M1 , . . . , Mn ) is said to be Bayesian
for the QoI Qn if
B(µ, a) = (Qn )# µa for A# µ-almost-all a ∈ A.
This is clearly an appealing property; the output of a Bayesian computation can be interpreted
as a posterior distribution over the QoI Qn (x) given the prior µ and the information A(x). Or,
more informally, the “pipeline is lossless with information”. However, at face value it seems
difficult to verify whether a given computation P (M1 , . . . , Mn ) is Bayesian, since it depends
on both the individual PNMs Mi and the pipeline P that combines them. Our next aim is to
establish verifiable sufficient conditions, for which we require another definition:
Definition 5.7 (Dependence Graph). The dependence graph of a pipeline P is the directed
acyclic graph G(P ) obtained by taking the pipeline P , removing the method nodes and replacing
all information → method → information motifs with direct edges between the corresponding information nodes.
The dependency graph for Example 5.2 is shown in Figure 3.
For a computation P (M1 , . . . , Mn ), each of the J distinct nodes in G(P ) can be associated
with a random variable Yj where either Yj = Ak,l (X) for some k, l, when the node is a source,
or otherwise Yj = Qk (X), for some k. Randomness here is understood to be due to X ∼ µ, so
that the distribution of the {Yj }Jj=1 is a function of µ. The convention used here is that the Yj
are indexed according to a topological ordering on G(P ), which has the properties that (i) the
source nodes correspond to indices 1, . . . , I, and (ii) the final random variable is YJ = Qn (X).
21
Definition 5.8 (Coherence). Consider a computation P (M1 , . . . , Mn ). Denote by π(j) ⊆
{1, . . . , j − 1} the parent set of node j in the dependence graph G(P ). Then we say that
µ ∈ PX is coherent for the computation P (M1 , . . . , Mn ) if the implied joint distribution of the
random variables Y1 , . . . , YJ satisfies:
Y_j ⊥⊥ Y_{{1,...,j−1} \ π(j)} | Y_{π(j)}   for all j = I + 1, . . . , J.
Note that this is weaker than the Markov condition for directed acyclic graphs (see Lauritzen,
1991), since we do not insist that the variables represented by the source nodes are independent.
It is emphasised that, for a given µ ∈ PX , the coherence condition can in general be checked
and verified.
The following result provides sufficient and verifiable conditions which ensure that a computation composed of individual Bayesian PNMs is a Bayesian computation:
Theorem 5.9. Let M1 , . . . , Mn be Bayesian PNMs and let µ ∈ PX be coherent for the computation P (M1 , . . . , Mn ). Then it holds that the computation P (M1 , . . . , Mn ) is Bayesian for the
QoI Qn .
Conversely, if non-Bayesian PNMs are combined then the computation P(M_1, . . . , M_n) need not be Bayesian in general.
Example 5.10 (Example 5.2, continued). The random variables Y_i in this example are:

Y_1 = {X(t_i)}_{i=1}^{m−1},  Y_2 = X(t_m),  Y_3 = {X(t_i)}_{i=m+1}^{2m},  Y_4 = ∫_0^{0.5} X(t) dt,  Y_5 = ∫_{0.5}^1 X(t) dt.
From G(P) in Figure 3, the coherence condition in Definition 5.8 requires that the non-trivial conditional independences Y_4 ⊥⊥ Y_3 | {Y_1, Y_2} and Y_5 ⊥⊥ Y_1 | {Y_2, Y_3} hold. Thus the distribution µ is coherent for the computation P(M_1, M_2, M_3) if and only if, for X ∼ µ, the associated information variables satisfy ∫_0^{0.5} X(t) dt ⊥⊥ {X(t_i)}_{i=m+1}^{2m} | {X(t_i)}_{i=1}^m and ∫_{0.5}^1 X(t) dt ⊥⊥ {X(t_i)}_{i=1}^{m−1} | {X(t_i)}_{i=m}^{2m}.

The distribution µ induced by the Wiener process on x in Example 3.4 satisfies these conditions. Indeed, under µ the stochastic process {x(t) : t > t_m} is conditionally independent of its history {x(t) : t < t_m} given the current state x(t_m). Thus for this choice of µ, from Theorem 5.9 we have that P(M_1, M_2, M_3) is Bayesian and parallel computation of (a) and (b) in Eq. (5.1) can be justified from a Bayesian statistical standpoint.

However, for the alternative belief distributions induced by the Wiener process on ∂^s x, this condition is not satisfied and the computation P(M_1, M_2, M_3) is not Bayesian. To turn this into a Bayesian procedure for these alternative belief distributions it would be required that A_{1,2}(x) provide information about the derivatives ∂^k x(t_m) for all orders k ≤ s.
5.3. Monte Carlo Methods for Probabilistic Computation
The most direct approach to access µ(n) is to sample from each Bayesian PNM and treat the
output samples as inputs to subsequent PNM. This is sometimes known as ancestral sampling in
the Bayesian network literature (e.g. Paige and Wood, 2015), and is illustrated in the following
example:
Example 5.11 (Ancestral Sampling for PNM). For Example 5.2, ancestral sampling proceeds
as follows:
1. Draw initial samples
q1 ∼ B1 (µ, (a1,1 , a1,2 ))
q2 ∼ B2 (µ, (a2,1 , a2,2 ))
2. Draw a final sample
q3 ∼ B3 (µ, (q1 , q2 ))
Then q3 is a draw from µ(3) .
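A sketch of this procedure is given below; the Gaussian forms of B_1 and B_2 are hypothetical stand-ins for the belief updates of Example 5.2, used only to make the sketch executable.

```python
import numpy as np

# Illustrative sketch of ancestral sampling through the pipeline of Example 5.2.
# The Gaussian forms of B1 and B2 below are hypothetical stand-ins for the outputs of
# the two quadrature methods; B3 simply adds its two inputs (a Dirac belief update).
rng = np.random.default_rng(1)

def B1(a):   # hypothetical belief over (a) = int_0^0.5 x dt, given information a
    return rng.normal(loc=np.mean(a) * 0.5, scale=0.01)

def B2(a):   # hypothetical belief over (b) = int_0.5^1 x dt, given information a
    return rng.normal(loc=np.mean(a) * 0.5, scale=0.01)

def B3(q1, q2):  # deterministic belief update: delta mass at q1 + q2
    return q1 + q2

# information: evaluations of a toy state x(t) = t at 2m + 1 = 11 equidistant knots
t = np.linspace(0.0, 1.0, 11)
a11, a12, a22 = t[1:5], t[5:6], t[6:]

q1 = B1(np.concatenate([a11, a12]))   # q1 ~ B1(mu, (a_{1,1}, a_{1,2}))
q2 = B2(np.concatenate([a12, a22]))   # q2 ~ B2(mu, (a_{2,1}, a_{2,2}))
q3 = B3(q1, q2)                       # q3 is a draw from mu^(3)
print(q3)
```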
Ancestral sampling requires that PNM outputs can be sampled. Such sampling methods were
discussed in Section 4.3. For a more general approach, sequential Monte Carlo methods can be
used to propagate a collection of particles through the pipeline P , similar to work on SMC for
general graphical models (Briers et al., 2005; Ihler and McAllester, 2009; Lienart et al., 2015;
Lindsten et al., 2017; Paige and Wood, 2015).
6. Numerical Experiments
In this final section of the paper we present three numerical experiments. The first is a linear
PDE, the second is a nonlinear ODE and the third is an application to a problem in industrial
process monitoring, described by a pipeline of PNMs. In each case we experiment with non-Gaussian belief distributions and, in doing so, go beyond previous work.
6.1. Poisson Equation
Our first illustration is an instance of the Poisson equation, a linear PDE with mixed Dirichlet–Neumann boundary conditions:

−∇²x(t) = 0,   t ∈ (0, 1)²,    (6.1)
x(t) = t_1,   t_1 ∈ [0, 1], t_2 = 0,    (6.2)
x(t) = 1 − t_1,   t_1 ∈ [0, 1], t_2 = 1,    (6.3)
∂x/∂t_1(t) = 0,   t_2 ∈ (0, 1), t_1 = 0, 1.    (6.4)
A model solution to this system, generated with a finite-element method on a fine mesh, is
shown in Figure 4.
As the spatial domain for this problem is two-dimensional, the basis used for specification of
the belief distribution is more complex. Here tensor products of orthogonal polynomials have
been used: φ_i(t) = C_j(2t_1 − 1) C_k(2t_2 − 1), where i enumerates the pairs (j, k) with j + k ≤ N_C. The polynomials C_j were chosen to
be normalised Chebyshev polynomials of the first kind. Prior specification then follows the
formulation given in Section 2.6, where the remaining parameters were chosen to be x0 ≡ 1,
and γi = α(i + 1)−2 . The random variables ξ were taken to be either Gaussian or Cauchy and
the polynomial basis was truncated to N = 45 terms, corresponding to a maximum polynomial
degree of NC = 8. For both priors the parameter α was set to α = 1. Note that closed-form
expressions are available for analysis under the Gaussian prior (Cockayne et al., 2016) but, to
simplify interpretation of empirical results, were not exploited. Mathematical background on
Cauchy priors can be found in Sullivan (2016).
The information operator was defined by a set of locations ti ∈ [0, 1]2 , i = 0, . . . , Nt , where
either the interior condition or one of the boundary conditions was enforced. Denote by tI,i
Figure 4: Model solution x(t), t = (t_1, t_2), generated by application of a finite element method based on a triangular mesh of 50 × 50 elements.
the set of interior points, t^{D,j} the set of Dirichlet boundary points and t^{N,k} the set of Neumann boundary points, where i = 1, . . . , N_I, j = 1, . . . , N_D and k = 1, . . . , N_N, with n = N_I + N_D + N_N. Then, the information operator is given by the concatenation of the conditions defined above:

A(x) = [A^I(x)^⊤, A^D(x)^⊤, A^N(x)^⊤]^⊤,
A^I(x) = [−∇²x(t^{I,1}), . . . , −∇²x(t^{I,N_I})]^⊤,
A^D(x) = [x(t^{D,1}), . . . , x(t^{D,N_D})]^⊤,
A^N(x) = [∂x/∂t_1(t^{N,1}), . . . , ∂x/∂t_1(t^{N,N_N})]^⊤.
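For concreteness, a sketch of how A(x) can be evaluated for a state expressed in the tensor-product polynomial basis is given below; the unnormalised Chebyshev polynomials, the small basis size and the particular evaluation points are assumptions made for illustration.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Illustrative sketch (unnormalised Chebyshev polynomials, small basis): evaluating the
# information operator A(x) = [A^I; A^D; A^N] for x(t) = sum_i u_i C_j(2t1 - 1) C_k(2t2 - 1),
# with i enumerating pairs (j, k) such that j + k <= deg.
deg = 3
pairs = [(j, k) for j in range(deg + 1) for k in range(deg + 1) if j + k <= deg]
C = [Chebyshev.basis(d) for d in range(deg + 1)]          # C_d on [-1, 1]
dC = [p.deriv(1) for p in C]
d2C = [p.deriv(2) for p in C]

def info_operator(u, t_int, t_dir, t_neu):
    """Concatenate interior, Dirichlet and Neumann conditions evaluated at the state x."""
    def val(p, s):                 # p(2 s - 1): map [0, 1] -> [-1, 1]
        return p(2.0 * s - 1.0)
    A_I, A_D, A_N = [], [], []
    for (t1, t2) in t_int:         # interior: -Laplacian of x (chain rule factor 4)
        lap = sum(u[i] * 4.0 * (val(d2C[j], t1) * val(C[k], t2)
                                + val(C[j], t1) * val(d2C[k], t2))
                  for i, (j, k) in enumerate(pairs))
        A_I.append(-lap)
    for (t1, t2) in t_dir:         # Dirichlet boundary: x itself
        A_D.append(sum(u[i] * val(C[j], t1) * val(C[k], t2)
                       for i, (j, k) in enumerate(pairs)))
    for (t1, t2) in t_neu:         # Neumann boundary: dx/dt1 (chain rule factor 2)
        A_N.append(sum(u[i] * 2.0 * val(dC[j], t1) * val(C[k], t2)
                       for i, (j, k) in enumerate(pairs)))
    return np.array(A_I + A_D + A_N)

u = np.zeros(len(pairs)); u[0] = 1.0                      # toy coefficient vector
print(info_operator(u, [(0.5, 0.5)], [(0.3, 0.0)], [(0.0, 0.7)]))
```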
The Bayesian PNM output was approximated by numerical disintegration and sampled with
a Monte Carlo method whose description is reserved for the Electronic Supplement. In Figure 5
the mean and pointwise standard-deviations of the posterior distributions are plotted for Gaussian and Cauchy priors with n = 16. There is little qualitative difference between the posterior
distributions for the Gaussian and Cauchy priors. The mean functions match closely to the
mean function from the model solution, as given in Figure 4. The posterior variance is lowest
near to the Dirichlet boundaries where the solution is known, and peaks where the Neumann
condition is imposed. This is to be expected, as evaluations of the Neumann boundary condition
provide less information about the solution itself.
Next, the posterior distribution of the spectrum {ui } was investigated. In Figure 6 the
posterior distribution over these coefficients is plotted and it is seen that the correlation structure
between coefficients is non-trivial, c.f. the joint distribution between u0 and u3 .
Last, in Figure 7 convergence of the posterior distribution is plotted as the number of design
points is varied, for n = 16, 25, 36. In each case a Gaussian prior was used. As expected, the
standard deviation in the posterior distribution is seen to decrease as the number of design
points is increased. At n = 36, the shape of the region of highest uncertainty changes markedly,
with the most uncertain region lying between the Dirichlet boundary and the first evaluation
points on the Neumann boundary. This is likely due to the fact that the number of evaluation
points is approaching the size of the polynomial basis; when the number of points equals the
size of the basis the system is completely determined for a linear model. Thus, we need N ≫ n
in order for discretisation error to be quantified.
Figure 5: Posterior distributions for the solution x of the Poisson equation, with n = 16 and different choices of prior distribution ((a) Gaussian prior; (b) Cauchy prior). Left: Posterior mean. Design points for the interior, Dirichlet and Neumann boundary conditions are indicated by green dots, green squares and green crosses, respectively. Right: Posterior standard deviation.
Figure 6: Posterior distributions for the first six coefficients of the spectrum for the solution x of the Poisson equation, obtained with Monte Carlo methods and numerical disintegration, based on δ = 0.0008, n = 16. (NB: The posterior is Gaussian and can be obtained in closed form, but we opted to additionally illustrate the Monte Carlo method.)
Figure 7: Heat map of the point-wise standard deviation for the solution x to the Poisson equation as the number n of design points is varied ((a) n = 16; (b) n = 25; (c) n = 36). In each case a Gaussian prior has been used.
Figure 8: Two distinct solutions (positive and negative) for the Painlevé ODE. The spectral plot on the right shows the true coefficients {u_i}, as determined by a model solver (the MatLab package chebfun).
6.2. The Painlevé ODE
In this section a Bayesian PNM is developed to solve a nonlinear ODE based on Painlevé’s first
transcendental

x′′(t) = x(t)² − t,   t ∈ [0, ∞),   x(0) = 0,   t^{−1/2} x(t) → 1 as t → ∞.

To permit computation, the right-boundary condition was relaxed by truncating the domain to [0, 10] and using the modified condition x(10) = √10.
Two distinct solutions are known, illustrated in Figure 8 (left). These model solutions were
obtained using the deflation technique described in Farrell et al. (2015). The spectrum plot
in Figure 8 (right) represents the coefficients {ui } obtained when each solution is represented
over a basis of normalised Chebyshev polynomials. As those polynomials are orthonormal with
respect to the L2 -inner-product, the slower decay for the negative solution compared to the
positive solution is equivalent to the negative solution having a larger L2 -norm. This explains
the preference that optimisation-based numerical solvers have for returning the positive solution
in general, and also explains some of the results now presented.
Such systems for which multiple solutions exist have been studied before in the context of
PNM, both in Chkrebtii et al. (2016) and in Cockayne et al. (2016). It was noted in both papers
that existence of multiple solutions can present a substantial challenge to classical numerical
methods.
To build a Bayesian PNM, a prior µ for this problem was defined by using a series expansion
as in Eq. (2.6). The basis functions were φ_i(t) = C_i((t − 5)/5), where the C_i were normalised Chebyshev polynomials of the first kind. Both Gaussian and Cauchy priors were considered by taking u_i := γ_i ξ_i, where the ξ_i were taken to be either standard Gaussian or standard Cauchy and in each case x_0(t) ≡ 0. In accordance with the exponential convergence rate for spectral methods when the solution to the system is a smooth function, the sequence of scale parameters was set to γ_i = αβ^{−i}, where α = 8 and β = 1.5. These values were chosen by inspection of the
true spectra (obtained with Matlab’s “chebfun” package) to ensure that both solutions were in
the support of the prior.
The information operator A was defined by the choice of locations {tj }, j = 1, . . . , m, which
determine the locations at which the posterior will be constrained. Analysis for several values
of m was performed. In each case t1 = 0, tm = 10 and the remaining tj were equally spaced on
[0, 10]. To be explicit, the information operator was
A(x) = [ x′′(t_1) − (x(t_1))², . . . , x′′(t_m) − (x(t_m))², x(0), x(10) ]^⊤

with the last two elements enforcing the boundary conditions. Thus our information was a = [−t_1, . . . , −t_m, 0, √10], which is n = m + 2 dimensional.
The Bayesian PNM output B(µ, a) was approximated via numerical disintegration with the
first N = 40 terms of the series representation used. This was sampled with Monte Carlo
methods, the details of which are reserved for the Electronic Supplement.
Results for a selection of bandwidths δ, with n = 17, are shown in Figure 9. Note that a
strong preference for the positive solution is expressed at the smallest δ, with mass around both
solutions at larger δ. For the Gaussian prior, some mass remained around the negative solution
at the smallest δ, while this was not so for the Cauchy prior. This reflects the fact that, for
a collection of independent univariate Cauchy random variables, one element is likely to be
significantly larger in magnitude than the others, which favours faster decay for the remaining
elements.
Using the calculation described in Section S4.4, model evidence was computed for both the
Gaussian and the Cauchy prior at n = 15. The Bayes factor for the Cauchy, compared to the
Gaussian prior, was found to be 20.26, which constitutes strong evidence in favour of a Cauchy
prior for this problem at the given level of discretisation.
In Figure 10 the posterior distributions for first six coefficients ui at n = 17 and δ = 1 are
plotted. Strong multimodality is clear, as well as skewed correlation structure between the
coefficients. Illustration of such posteriors for smaller δ is difficult as the posteriors become
extremely peaked.
Figure 11 displays convergence of the posterior distributions as n is increased. Of particular
interest is that for n = 12, the posterior distribution based on a Gaussian prior becomes
trimodal. For each prior, the posterior mass settles on the positive solution to the system at
n = 22. This is in accordance with the fact that this solution has smaller L2 -norm. This perhaps
reflects the fact that, while in the limiting case both solutions should have an equal likelihood,
the curvature of the likelihood at each mode may differ. Prior truncation may also be influential;
in Figure 12 the log-likelihood of the negative solution increases at a slower rate than that of
the positive solution. Thus, while in the setting of an infinite prior series neither solution should
be preferred, in practice truncation might bias one solution over the other. Lastly, it is clear
that the parameters α and β may also have a significant effect on which solution is preferred.
Further theoretical work will be required to understand many of the phenomena that we have
just described.
Of particular interest is how a preference for the negative solution could be encoded into a
PNM. Owing to the flexible specification of the information operator, there is considerable choice in
this matter. An elegant approach is the introduction of additional, inequality-based information
x′(0) ≤ 0.    (6.5)
Figure 9: Posterior samples for the Painlevé system for n = 17 ((a) Gaussian prior; (b) Cauchy prior). Blue and green dashed lines represent the positive and negative solutions determined with chebfun. Grey lines are samples from an approximation to the posterior provided by numerical disintegration (bandwidth parameter δ).
Figure 10: Posterior distributions for the first six coefficients obtained with numerical disintegration (bandwidth parameter δ = 1), at n = 17. Vertical dashed lines on the diagonal plots indicate the value of the coefficients for the positive (blue) and negative (green) solutions determined with chebfun.
Figure 11: Convergence for the numerical disintegration scheme as n is increased. Left: Gaussian prior. Right: Cauchy prior. In all cases δ = 10^{−4}.
Figure 12: Negative log-likelihoods − log φ(‖A(x) − a‖/δ) for the point-estimates of coefficients for the positive and negative solutions given by chebfun, as the truncation level N is varied. The fact that the likelihood for the positive solution decreases more rapidly than that of the negative solution suggests that the posterior may have a preference for that solution over the other, though the level N = 40 has been selected in an attempt to minimise the impact.
Such information can be difficult to incorporate in standard numerical algorithms, but is of
interest in many physical problems (Kinderlehrer and Stampacchia, 2000). For Bayesian PNM
we can extend the information operator to include 1[x′(0) ≤ 0]. Posterior distributions for the
Gaussian prior at n = 17 are shown in Figure 13. Note that posterior mass has settled close to
the negative solution. This highlights the simplicity with which Bayesian PNMs can encode a
preference for a particular solution when a multiplicity of solutions exist.
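A sketch of one way in which such indicator information can be folded into the numerical disintegration weights is given below; the Gaussian relaxation kernel and the hard zero weight for violating states are assumptions made for illustration.

```python
import numpy as np

# Sketch: appending the inequality information 1[x'(0) <= 0] of Eq. (6.5) to the relaxed
# disintegration weight; states violating the constraint receive zero weight. The Gaussian
# kernel and this particular treatment of the indicator are illustrative assumptions.
def weight(x_info, a, delta, x_deriv_at_0):
    r = np.linalg.norm(np.asarray(x_info) - np.asarray(a)) / delta
    return np.exp(-0.5 * r**2) * (1.0 if x_deriv_at_0 <= 0.0 else 0.0)

print(weight([0.1, -0.2], [0.0, 0.0], 0.5, x_deriv_at_0=-0.3))  # positive weight
print(weight([0.1, -0.2], [0.0, 0.0], 0.5, x_deriv_at_0=+0.3))  # zero weight
```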
6.3. Application to Industrial Process Monitoring
This final application illustrates how statistical models for discretisation error can be propagated
through a pipeline of computation to model how these errors are accumulated.
Hydrocyclones are machines used to separate solid particles from a liquid in which they are
suspended, or two liquids of different densities, using centrifugal forces. High pressure fluid is
injected into the top of a tank to create a vortex. The induced centrifugal force causes denser
material to move to the wall of the tank while lighter material concentrates in the centre,
where it can be extracted. They have widespread applications, including in areas such as
environmental engineering and the petrochemical industry (Sripriya et al., 2007). An illustration
of the operation is given in Figure 14.
To ensure the materials are well-separated the hydrocyclone must be monitored to allow adjustment of the input flow-rate. This is also important for safe operation, owing to the high pressures
involved (Bradley, 2013). However, direct monitoring is impossible owing to the opaque walls of
the equipment and the high interior pressure. For this purpose electrical impedance tomography
(EIT) has been proposed to allow monitoring of the contents (Gutierrez et al., 2000).
EIT is a technique which allows recovery of an interior conductivity field based upon measurements of voltage obtained from applying a stimulating current on the boundary. It is suited
to this problem, as the two materials in the hydrocyclone will generally be of different conductivities. In its simplified form due to Calderón (1980), EIT is described by a linear partial
Figure 13: Posterior distribution at n = 17, based on a Gaussian prior, with the negative
boundary condition given by Eqn. (6.5) enforced. Left: δ = 0.99. Right: δ = 0.0001.
Figure 14: A schematic description of hydrocyclone equipment. (a) Hydrocyclone tank schematic: the tank is cone-shaped with overflow and underflow pipes positioned to extract the separated contents. (b) Cross-section (top of tank): fluid, a mixture to be separated, is injected at high pressure at the top of the tank to create a vortex. Under correct operation, denser materials are directed toward the centre of the tank and less-dense materials are forced to the peripheries of the tank.
differential equation similar to that in Section 6.1, but with modified boundary conditions to
incorporate the stimulating currents and measured voltages:
−∇ · (a(t)∇x(t)) = 0,   t ∈ D,
a(t) ∂x/∂n(t) = c_e,   t = t^e,
a(t) ∂x/∂n(t) = 0,   t ∈ ∂D \ {t^e}_{e=1}^{N_e},    (6.6)

where D denotes the domain, modelling the hydrocyclone tank, e indexes the stimulating electrodes, t^e ∈ ∂D are the corresponding locations of the electrodes on ∂D, a is the unknown conductivity field to be determined and ∂/∂n denotes the derivative with respect to the outward pointing normal vector. The electrode t^1 is referred to as the reference electrode. The vector c = (c_1, . . . , c_{N_e}) denotes the stimulation current pattern. Several stimulation patterns were considered, denoted c^j, j = 1, . . . , N_j.
The experimental data described in West et al. (2005) were considered. In the experiment, a
cylindrical perspex tank was used with a single ring of eight electrodes. Translation invariance
in the vertical direction means that the contents are effectively a single 2D region and electrical
conductivity can be modelled as a 2D field. At the start of the experiment, a mixing impeller was
used to create a rotational flow. This was then removed and, after a few seconds, concentrated
potassium chloride solution was carefully injected into the tap water initially filling the tank.
Data, denoted yτ , were collected at regular time intervals by application of several stimulation
patterns c1 , . . . , cM .
To formulate the statistical problem, consider parameterising the conductivity field as a(τ, t),
where τ ∈ [0, T ] is a temporal index while t ∈ D is the spatial coordinate and D is the circular
domain representing the perspex tank in the experiment. A log-Gaussian prior was placed over
the conductivity field, so that log a is a Gaussian process with separable covariance function

k_a((τ, t), (τ′, t′)) := λ min(τ, τ′) exp( −‖t − t′‖² / (2ℓ²) ),

where ℓ is a length-scale parameter representing the anticipated spatial variation of the conductivity field and λ is a parameter controlling the amplitude of the field. Here ℓ was fixed to ℓ = 0.3, while λ = 10^{−3}. The problem of estimating
a based on data can be well-posed in the Bayesian framework (Dunlop and Stuart, 2016). Full
details of this experiment can be found in the accompanying report Oates et al. (2017).
Our aim is to use a PNM to account for the effect of discretisation on inferences that are made on the conductivity field. For fixed τ, a Gaussian prior was posited for x, with covariance

k_x(t, t′) := exp( −‖t − t′‖² / (2ℓ_x²) ),

where ℓ_x was fixed to ℓ_x = 0.3. The associated Bayesian PNM, a probabilistic meshless method (PMM), was described in Example 2.4.
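For reference, the two covariance functions just defined are reproduced in the sketch below with the stated hyper-parameters; the function names and the vectorisation choices are illustrative.

```python
import numpy as np

# Sketch of the two covariance functions used in the hydrocyclone example, with the
# stated hyper-parameters (l = l_x = 0.3, lambda = 1e-3); names are illustrative.
l, lam, l_x = 0.3, 1e-3, 0.3

def k_a(tau1, t1, tau2, t2):
    """Separable prior covariance for log a: lambda * min(tau, tau') * RBF(t, t')."""
    sq = np.sum((np.asarray(t1) - np.asarray(t2)) ** 2)
    return lam * min(tau1, tau2) * np.exp(-sq / (2.0 * l ** 2))

def k_x(t1, t2):
    """Gaussian prior covariance for the PDE solution x at a fixed time index tau."""
    sq = np.sum((np.asarray(t1) - np.asarray(t2)) ** 2)
    return np.exp(-sq / (2.0 * l_x ** 2))

print(k_a(1.0, [0.0, 0.0], 2.0, [0.1, 0.1]), k_x([0.0, 0.0], [0.1, 0.1]))
```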
The statistical inference procedure is formulated in a pipeline of computations in Figure 15.
It is assumed that the desired outcome is to monitor the contents of the tank while the current
contents are being mixed. This suggests a particle filter approach where a PMM Mτ is employed
to handle the intractable likelihood p(yτ |aτ ) that involves the exact solution of a PDE. The
distribution of a_τ given y_1, . . . , y_τ is denoted π_τ and the computation P(M_1, . . . , M_τ) is Bayesian
only if the particle approximation error due to the use of a particle filter is overlooked.
To briefly illustrate the method, Figure 16 presents posterior means for the field a(τ, ·), for
each post-injection time point τ = 1, . . . , 8. These are based on a particle approximation of size
P = 500, with method nodes based upon a Bayesian PNM, as in Example 2.4, with n = 119
design points. The high conductivity region representing the potassium chloride solution can be
seen rotating through the domain in the frames after injection, with its conductivity reducing
as it mixes with the water. The full posterior distribution over the conductivity field is inflated
as a result of explicitly modelling the discretisation error; an extensive analysis of these results
will be reported in the upcoming Oates et al. (2017).
Figure 15: Pipeline for hydrocyclone application: The method node (black) represents the use
of PMM solvers, which are incorporated into the likelihood for evolving the particles
according to a Markov transition kernel.
[Figure 16 plot area: eight panels, τ = 1, . . . , 8, each showing the posterior mean conductivity field over the circular domain, with a shared colour scale from approximately 4.4 to 138.1.]
Figure 16: Mean conductivity fields recovered in the hydrocyclone experiment, for the first 8
frames post-injection.
[Figure 17 plot area: legend entries “Pipeline”, “Static”, and “Pipeline - Static”; vertical axis ∫_D σ(t) dt; horizontal axis τ = 1, . . . , 8.]
Figure 17: Left: Integrated standard-deviation over the domain, for the first 8 frames post-injection, for both the pipeline and the static approaches described in the text.
Right: The difference between these two quantities.
In Figure 17, the integrated standard-deviation ∫_D σ(t) dt is shown for τ = 1, . . . , 8 for both
the “pipeline”, as described above, and a “static” approach in which no uncertainty was propagated. In this static approach a symmetric collocation PDE solver9 was used to solve the
forward problem, and a separate Bayesian inversion problem was solved at each time point.
The parameters of the symmetric collocation solver were identical to those used in the PMM.
In the left panel we observe some structural periodicity, present in both the pipeline and the
static approach. We speculate that this may be due to the rotation of the medium causing
the area of high conductivity to periodically reach an area of the domain, relative to the 8
sensors, in which it is particularly easy to recover. With this periodicity subtracted in the
right panel, there was a clear increase in posterior uncertainty in the pipeline compared to
the static approach. Temporal regularisation would usually be expected to
reduce uncertainty; thus, the fact that the overall uncertainty increased with τ , relative to the
static formulation, demonstrates that we have quantified and propagated uncertainty due to
successive discretisation of the PDE at each time point.
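For reference, the summary statistic plotted in Figure 17 can be approximated from posterior samples of the field on a regular grid by a simple Riemann sum, as in the sketch below; the grid resolution and the stand-in posterior samples are assumptions.

import numpy as np

rng = np.random.default_rng(0)
g = np.linspace(-1, 1, 64)
X, Y = np.meshgrid(g, g)
inside = X ** 2 + Y ** 2 <= 1.0                  # mask for the circular domain D
cell_area = (g[1] - g[0]) ** 2

samples = rng.lognormal(size=(500, 64, 64))      # stand-in for posterior samples of a(tau, .)
sigma = samples.std(axis=0)                      # pointwise posterior standard deviation
integrated_sd = np.sum(sigma[inside]) * cell_area
print(integrated_sd)                             # approximates the integral of sigma over D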
7. Discussion
This paper has established statistical foundations for PNMs and investigated the Bayesian case
in detail. Through connection to Bayesian inverse problems (Stuart, 2010), we have established
when Bayesian PNM can be well-defined and when the output can be considered meaningful.
The presentation touched on several important issues and a brief discussion of the most salient
points is now provided.
Bayesian vs Non-Bayesian PNMs The decision to focus on Bayesian PNMs was motivated by
the observation that the output of a pipeline of PNMs can only be guaranteed to admit a valid
Bayesian interpretation if the constituent PNMs are each Bayesian and the prior distribution is
coherent. Indeed, Theorem 5.9 demonstrated that prior coherence can be established at a local
level, essentially via a local Markov condition, so that Bayesian PNMs provide an extensible
modelling framework as required to solve more challenging numerical tasks. These results
support a research strategy that focuses on Bayesian PNMs, so that error can be propagated
in a manner that is meaningful.
9. Recall that the PMM has a corresponding symmetric collocation solution to the PDE as its mean function.
On the other hand, there are pragmatic reasons why either approximations to Bayesian
PNMs, or indeed, non-Bayesian PNMs might be useful. The predominant reason would be to
circumvent the off-line computational costs that can be associated with Bayesian PNMs, such
as the use of numerical disintegration developed in this research. Recent research efforts, such
as Schober et al. (2014, 2016) and Kersting and Hennig (2016) for the solution of ODEs, have
aimed for computational costs that are competitive with classical methods, at the expense of
fully Bayesian estimation for the solution of the ODE. Such methods are of interest as non-Bayesian PNMs, but their role in pipelines of PNMs is unclear. Our contribution serves to
make this explicit.
Computational Cost The present research focused on the more fundamental cost of access to
the information A(x), rather than the additional CPU time required to obtain the PNM output.
Indeed, numerical disintegration constituted the predominant computational cost in the applications that were reported. However, we stress that in many challenging applications gated by
discretisation error, such as occur with climate models, the fundamental cost of the information
A(x) will be dominant. Furthermore, the Monte Carlo methods that were employed for numerical disintegration admit substantial improvements (e.g. in a similar vein to Botev and Kroese,
2012; Koskela et al., 2016). The objective of this paper was to establish statistical foundations
that will permit the development of more sophisticated and efficient Bayesian PNMs.
Prior Elicitation Throughout this work we assumed that a belief distribution µ was provided.
The question of whose belief is represented in µ has been discussed by several authors and
a chronology is included in the Electronic Supplement. Of these perspectives we mention in
particular Hennig et al. (2015), wherein µ is the belief of an agent that “we get to design”.
This offers a connection to frequentist statistics, in that an agent can be designed to ensure
favourable frequentist properties hold.
A robust statistics perspective is also relevant and one such approach would be to consider a
generalised Bayes risk (Eq. (3.1)) wherein the state variable X used for assessment is assumed
to be drawn from a distribution µ̃ 6= µ. This offers an opportunity to derive Bayesian PNMs
that are robust to certain forms of prior mis-specification. This direction was not considered in
the present paper, but has been pursued in the ACA literature for classical numerical methods
(see Chapter IV, Section 4 of Ritter, 2000).
In general, the specification of prior distributions for robust inference on an infinite-dimensional
state space can be difficult. The consistency and robustness of Bayesian inference procedures
— particularly with respect to perturbations of the prior such as those arising from numerical
approximations — in such settings is a subtle topic, with both positive (Castillo and Nickl,
2014; Doob, 1949; Kleijn and van der Vaart, 2012; Le Cam, 1953) and negative (Diaconis and
Freedman, 1986; Freedman, 1963; Owhadi et al., 2015) results depending upon fine topological
and geometric details.
In the context of computational pipelines, the challenge of eliciting a coherent prior is closely
connected to the challenge of eliciting a single unified prior based on the conflicting input of
multiple experts (French, 2011; Albert et al., 2012).
Consistent Estimation The present paper focused on foundations. Further methodological
work will be required to establish sufficient conditions for when B(µ, An (x† )) collapses to an
atom on a single element q † = Q(x† ) representing the data-generating QoI in the limit as the
amount of information, n, is increased. There are two questions here: (i) when is q† identifiable from the given information, and (ii) at what rate does B(µ, An(x†)) concentrate on q†?
Generalisation and Extensions Two more directions are highlighted for extension of this work.
First, note that in this paper the information operator A : X → A was treated as a deterministic
object. However, in some applications there is auxiliary randomness in the acquisition of information. For our integration example, nodes ti might arise as random samples from a reference
distribution on [0, 1]. Or, observations x(ti ) themselves might occur with measurement error,
for example due to finite precision arithmetic. Then a more elaborate model A : X × Ω → A
would be required, where Ω is a probability space that injects randomness into the information
operator. This is the setting of, for instance, randomised quasi-Monte Carlo methods. Future
work will extend the framework of PNMs to include randomised information operators of this
kind.
As a second direction, recall that in an adaptive algorithm the choice of the information is
made in an iterative procedure that is informed by the information observed up to that point.
For the canonical illustration in Example 3.4 and its generalisations discussed there, it can be
proven that adaptive algorithms do not out-perform non-adaptive algorithms in average case
error (Lee and Wasilkowski, 1986). However, outside this setting adaptation can be beneficial
and should be investigated in the context of Bayesian PNM.
Connection with Probabilistic Programming The central goal of probabilistic programming
(PP) is to automate statistical computation, through symbolic representation of statistical objects and operations on those objects. The formalism of pipelines as graphical models presented
in this work can be compared to similar efforts to establish PP languages (Goodman et al.,
2012). For instance, a method node in a pipeline can be related to a monad aggregating several
distributions into a single output distribution (Ścibior et al., 2015). An important challenge in
PP is the automation of computing conditional distributions (Shan and Ramsey, 2017). Numerical disintegration and extensions thereof might be of independent interest to this field (e.g.
extending Wood et al., 2014).
Acknowledgements CJO was supported by the Australian Research Council (ARC) Centre
of Excellence for Mathematical and Statistical Frontiers. TJS was supported by the Excellence
Initiative of the German Research Foundation (DFG) through the Free University of Berlin.
MG was supported by the Engineering and Physical Sciences Research Council (EPSRC) grants EP/J016934/1,
EP/K034154/1, an EPSRC Mathematical Sciences Established Career Research Fellowship and
a Lloyds Register Foundation grant for Programme on Data-Centric Engineering. This material
was based upon work partially supported by the National Science Foundation under Grant DMS-1127914 to the Statistical and Applied Mathematical Sciences Institute. Any opinions, findings,
and conclusions or recommendations expressed in this material are those of the author(s) and
do not necessarily reflect the views of the National Science Foundation.
The authors are grateful to Amazon for the provision of AWS credits and to the authors of
the Eigen and Eigency libraries in Python.
Appendices
A. Proofs
Proof of Theorem 3.3. The following observation will be required; the joint density of X and A = A(X) can be expressed in two ways:

    δ(A(x))(da) µ(dx) = µ^a(dx) A_#µ(da),                                        (A.1)

which holds almost everywhere from the definition of a disintegration {µ^a}_{a∈A}. Note that our integrability assumption justifies the interchange of integrals from Fubini’s theorem.
The Bayes risk for a Bayesian PNM M_BPNM = (A, B_BPNM), B_BPNM(µ, a) = Q_#µ^a, can be expressed as:

    R(µ, M_BPNM) = ∫ r(x, B(µ, A(x))) µ(dx)
                 = ∫∫ L(Q(x), q) Q_#µ^{A(x)}(dq) µ(dx)                           (since M Bayesian)
                 = ∫∫∫ L(Q(x), q) Q_#µ^a(dq) δ(A(x))(da) µ(dx)
                 = ∫∫∫ L(Q(x), Q(x')) µ^a(dx') µ^a(dx) A_#µ(da)                  (from Eq. (A.1))

On the other hand, let

    b(a) ∈ arg min_{q∈Q} ∫ L(Q(x), q) µ^a(dx)

be a Bayes act. Then the Bayes risk associated with such a method M_BR = (A, B_BR), B_BR(µ, a) = δ(b(a)), can be expressed as:

    R(µ, M_BR) = ∫ L(Q(x), b(A(x))) µ(dx)
               = ∫∫ L(Q(x), b(a)) δ(A(x))(da) µ(dx)
               = ∫∫ L(Q(x), b(a)) µ^a(dx) A_#µ(da)                               (from Eq. (A.1))

Next we use the inner product structure on Q and the form of the loss function as L(q, q') = ‖q − q'‖²_Q to argue that R(µ, M_BPNM) = 2 R(µ, M_BR), which in turn implies that the optimal information A_µ for Bayesian PNM and A*_µ for ACA are identical.
For this final step, fix a ∈ A and denote the random variables Q_a(X) = Q(X) − b(a) that are induced according to X ∼ µ^a. Denote by Q̃_a an independent copy of Q_a generated from X̃ ∼ µ^a. The notation E will be used to refer to the expectation taken over X, X̃. Then we have

    Q(X) − Q(X̃) = (Q(X) − b(a)) − (Q(X̃) − b(a)) = Q_a(X) − Q̃_a(X̃)

and moreover, from Theorem 3.2 the posterior mean of Q(X) is b(a) and thus E[Q_a] = E[Q̃_a] = 0. Then

    R(µ, M_BPNM) = ∫ E[ ‖Q_a − Q̃_a‖²_Q ] A_#µ(da)
                 = ∫ E[ ‖Q_a‖²_Q − 2⟨Q_a, Q̃_a⟩_Q + ‖Q̃_a‖²_Q ] A_#µ(da)
                 = 2 ∫ E[ ‖Q_a‖²_Q ] A_#µ(da)                                    (since E[Q_a] = 0 and Q_a ⊥⊥ Q̃_a)
                 = 2 R(µ, M_BR)

as required.
Proof of Theorem 4.3. Fix f ∈ F and a ∈ A. Then:
    µ^a_δ(f) = (1/Z^a_δ) ∫ f(x) φ( ‖A(x) − a‖_A / δ ) µ(dx)
             = (1/Z^a_δ) ∫∫ f(x) φ( ‖ã − a‖_A / δ ) µ^ã(dx) A_#µ(dã)
             = (1/Z^a_δ) ∫ φ( ‖ã − a‖_A / δ ) µ^ã(f) A_#µ(dã)
             = ∫ µ^ã(f) A_#µ^a_δ(dã).                                            (from Eq. (A.1))

Thus

    |µ^a_δ(f) − µ^a(f)| = | ∫ [ µ^ã(f) − µ^a(f) ] A_#µ^a_δ(dã) |
                        ≤ C_µ^α ‖f‖_F ∫ ‖ã − a‖_A^α A_#µ^a_δ(dã)                 (Assumption 4.2).     (A.2)

Now consider the random variable

    R := ‖A(X) − a‖_A / δ                                                        (A.3)

induced from X ∼ µ. The existence of a continuous and positive density p_A implies that R also admits a density on [0, ∞), denoted p_{R,δ}. The fact that p_A is uniform on an infinitesimal neighbourhood of a implies that p_{R,δ}(r) is proportional to the surface area of a hypersphere of radius δr centred on a ∈ A:

    p_{R,δ}(r) = ( 2π^{n/2} / Γ(n/2) ) (δr)^{n−1} ( p_A(a) + o(1) ).             (A.4)

This is valid since A is open and the hypersphere will be contained in A for r sufficiently small. Eq. (A.2) can then be evaluated:

    ∫ ‖ã − a‖_A^α A_#µ^a_δ(dã) = [ ∫ ‖ã − a‖_A^α φ( ‖ã − a‖_A / δ ) A_#µ(dã) ] / [ ∫ φ( ‖ã − a‖_A / δ ) A_#µ(dã) ]
                               = δ^α [ ∫ r^α φ(r) p_{R,δ}(r) dr ] / [ ∫ φ(r) p_{R,δ}(r) dr ]     (change of variables; Eq. A.3)     (A.5)

and, from Eq. (A.4), the ratio of integrals converges as δ ↓ 0 to

    [ ∫ r^{α+n−1} φ(r) dr ] / [ ∫ r^{n−1} φ(r) dr ] = C_φ^α / C_φ^0              (< ∞ from Assumption 4.1).

Thus, for δ sufficiently small, Eq. (A.5) can be bounded above by δ^α (1 + C̄_φ^α), where C̄_φ^α := C_φ^α / C_φ^0 and “1” is in this case an arbitrary positive constant. This establishes the upper bound

    |µ^a_δ(f) − µ^a(f)| ≤ C_µ^α (1 + C̄_φ^α) ‖f‖_F δ^α

for δ sufficiently small and completes the proof.
Proof of Theorem 5.9. To reduce the notation, suppose that the random variables Y1 , . . . , YJ
admit a joint density p(y1, . . . , yJ); however, we emphasise that existence of a density is not
required for the proof to hold. To further reduce notation, denote ya:b = (ya , . . . , yb ).
The output of the computation P (M1 , . . . , Mn ) was defined algorithmically in Definition 5.5
and illustrated in Example 5.4. Our aim is to show that this algorithmic output coincides with
the distribution (Qn )# µa on Qn , which is identified in the present notation with p(yJ |y1:I ).
For j ∈ {I + 1, . . . , J}, the coherence condition on Y1 , . . . , YJ translates into the present
notation as p(yj |y1:j−1 ) = p(yj |yπ(j) ). This allows us to deduce that:
    p(y_J | y_{1:I}) = ∫ · · · ∫ p(y_{I+1:J} | y_{1:I}) dy_{I+1:J−1}
                     = ∫ · · · ∫ ∏_{j=I+1}^{J} p(y_j | y_{1:j−1}) dy_{I+1:J−1}
                     = ∫ · · · ∫ ∏_{j=I+1}^{J} p(y_j | y_{π(j)}) dy_{I+1:J−1}.
The right hand side is recognised as the output of the computation P (M1 , . . . , Mn ), as defined
in Definition 5.5. This completes the proof.
References
N. L. Ackerman, C. E. Freer, and D. M. Roy. On computability and disintegration. Mathematical
Structures in Computer Science, 2017. To appear.
I. Albert, S. Donnet, C. Guihenneuc-Jouyaux, S. Low-Choy, K. Mengersen, and J. Rousseau.
Combining expert opinions in prior elicitation. Bayesian Anal., 7(3):503–531, 2012.
doi:10.1214/12-BA717.
T. V. Anderson. Efficient, accurate, and non-gaussian error propagation through nonlinear,
closed-form, analytical system models. Master’s thesis, Department of Mechanical Engineering, Brigham Young University, 2011.
I. Babuška and G. Söderlind. On round-off error growth in elliptic problems, 2016. In preparation.
S. Bartels and P. Hennig. Probabilistic approximate least-squares. In Proceedings of Artificial
Intelligence and Statistics (AISTATS), 2016.
J. O. Berger. Statistical Decision Theory and Bayesian Analysis. Springer Series in Statistics.
Springer-Verlag, New York, second edition, 1985. doi:10.1007/978-1-4757-4286-2.
A. Beskos, D. Crisan, and A. Jasra. On the stability of sequential Monte Carlo methods in high
dimensions. Ann. Appl. Probab., 24(4):1396–1445, 2014. doi:10.1214/13-AAP951.
A. Beskos, A. Jasra, E. A. Muzaffer, and A. M. Stuart. Sequential Monte Carlo methods for
Bayesian elliptic inverse problems. Stat. Comput., 25(4):727–737, 2015. doi:10.1007/s11222-015-9556-7.
A. Beskos, M. Girolami, S. Lan, P. E. Farrell, and A. M. Stuart.
Geometric
MCMC for infinite-dimensional inverse problems. J. Comput. Phys., 335:327–351, 2017.
doi:10.1016/j.jcp.2016.12.041.
P. G. Bissiri, C. C. Holmes, and S. G. Walker. A general framework for updating belief distributions. J. R. Stat. Soc. Ser. B. Stat. Methodol., 78(5):1103–1130, 2016. doi:10.1111/rssb.12158.
V. I. Bogachev. Gaussian Measures, volume 62 of Mathematical Surveys and Monographs.
American Mathematical Society, Providence, RI, 1998. doi:10.1090/surv/062.
Z. I. Botev and D. P. Kroese. Efficient Monte Carlo simulation via the generalized splitting
method. Stat. Comput., 22(1):1–16, 2012. doi:10.1007/s11222-010-9201-4.
D. Bradley. The Hydrocyclone: International Series of Monographs in Chemical Engineering,
volume 4. Elsevier, 2013.
M. Briers, A. Doucet, and S. S. Singh. Sequential auxiliary particle belief propagation. In
International Conference on Information Fusion, 2005.
F.-X. Briol, C. J. Oates, M. Girolami, M. A. Osborne, and D. Sejdinovic. Probabilistic integration: A role for statisticians in numerical analysis?, 2016. arXiv:1512.00933v4.
A.-P. Calderón. On an inverse boundary value problem. In Seminar on Numerical Analysis
and its Applications to Continuum Physics (Rio de Janeiro, 1980), pages 65–73. Soc. Brasil.
Mat., Rio de Janeiro, 1980.
M. A. Capistrán, J. A. Christen, and S. Donnet. Bayesian analysis of ODEs: solver optimal accuracy and Bayes factors. SIAM/ASA J. Uncertain. Quantif., 4(1):829–849, 2016.
doi:10.1137/140976777.
I. Castillo and R. Nickl. On the Bernstein–von Mises phenomenon for nonparametric Bayes
procedures. Ann. Statist., 42(5):1941–1969, 2014. doi:10.1214/14-AOS1246.
F. Cérou, P. Del Moral, T. Furon, and A. Guyader. Sequential Monte Carlo for rare event
estimation. Stat. Comput., 22(3):795–808, 2012. doi:10.1007/s11222-011-9231-6.
J. T. Chang and D. Pollard. Conditioning as disintegration. Statist. Neerlandica, 51(3):287–317,
1997. doi:10.1111/1467-9574.00056.
O. A. Chkrebtii, D. A. Campbell, B. Calderhead, and M. A. Girolami. Bayesian solution
uncertainty quantification for differential equations. Bayesian Anal., 11(4):1239–1267, 2016.
doi:10.1214/16-BA1017.
J. Cockayne, C. Oates, T. J. Sullivan, and M. Girolami. Probabilistic meshless methods for
partial differential equations and Bayesian inverse problems, 2016. arXiv:1605.07811v1.
P. R. Conrad, M. Girolami, S. Särkkä, A. M. Stuart, and K. C. Zygalakis. Statistical analysis of differential equations: introducing probability measures on numerical solutions. Stat.
Comput., 2016. doi:10.1007/s11222-016-9671-0.
S. L. Cotter, M. Dashti, and A. M. Stuart. Approximation of Bayesian inverse problems for
PDEs. SIAM J. Numer. Anal., 48(1):322–345, 2010. doi:10.1137/090770734.
T. Cui, Y. Marzouk, and K. Willcox. Scalable posterior approximations for large-scale Bayesian
inverse problems via likelihood-informed parameter and state reduction. J. Comput. Phys.,
315:363–387, 2016. doi:10.1016/j.jcp.2016.03.055.
M. Dashti, S. Harris, and A. Stuart. Besov priors for Bayesian inverse problems. Inverse Probl.
Imaging, 6(2):183–200, 2012. doi:10.3934/ipi.2012.6.183.
M. de Carvalho, G. L. Page, and B. J. Barney. On the geometry of Bayesian inference, 2017.
arXiv:1701.08994.
P. Del Moral. Feynman–Kac Formulae: Genealogical and Interacting Particle Systems with
Applications. Probability and its Applications (New York). Springer-Verlag, New York, 2004.
doi:10.1007/978-1-4684-9393-1.
P. Del Moral, A. Doucet, and A. Jasra. Sequential Monte Carlo samplers. J. R. Stat. Soc. Ser.
B Stat. Methodol., 68(3):411–436, 2006. doi:10.1111/j.1467-9868.2006.00553.x.
P. Del Moral, A. Doucet, and A. Jasra. An adaptive sequential Monte Carlo method for approximate Bayesian computation. Stat. Comput., 22(5):1009–1020, 2012. doi:10.1007/s11222-011-9271-y.
C. Dellacherie and P.-A. Meyer. Probabilities and Potential. North-Holland Publishing Co.,
Amsterdam-New York, 1978.
P. Diaconis. Bayesian numerical analysis. Statistical Decision Theory and Related Topics IV,
1:163–175, 1988.
P. Diaconis and D. Freedman. Frequency properties of Bayes rules. In Scientific inference, data
analysis, and robustness (Madison, Wis., 1981), volume 48 of Publ. Math. Res. Center Univ.
Wisconsin, pages 105–115. Academic Press, Orlando, FL, 1983.
P. Diaconis and D. A. Freedman. On the consistency of Bayes estimates. Ann. Statist., 14(1):
1–67, 1986. doi:10.1214/aos/1176349830. With a discussion and a rejoinder by the authors.
J. Dick and F. Pillichshammer. Digital Nets and Sequences: Discrepancy Theory and
Quasi–Monte Carlo Integration. Cambridge University Press, 2010.
J. L. Doob. Application of the theory of martingales. In Le Calcul des Probabilités et ses
Applications, Colloques Internationaux du Centre National de la Recherche Scientifique, no.
13, pages 23–27. Centre National de la Recherche Scientifique, Paris, 1949.
A. Doucet, N. de Freitas, and N. Gordon, editors. Sequential Monte Carlo Methods in Practice. Statistics for Engineering and Information Science. Springer-Verlag, New York, 2001.
doi:10.1007/978-1-4757-3437-9.
M. M. Dunlop and A. M. Stuart. The Bayesian formulation of EIT: analysis and algorithms.
Inverse Probl. Imaging, 10(4):1007–1036, 2016. doi:10.3934/ipi.2016030.
L. Ellam, N. Zabaras, and M. Girolami. A Bayesian approach to multiscale inverse
problems with on-the-fly scale determination. J. Comput. Phys., 326:115–140, 2016.
doi:10.1016/j.jcp.2016.08.031.
P. E. Farrell, A. Birkisson, and S. W. Funke. Deflation techniques for finding distinct solutions
of nonlinear partial differential equations. SIAM J. Sci. Comput., 37(4):A2026–A2045, 2015.
doi:10.1137/140984798.
G. E. Fasshauer. Solving differential equations with radial basis functions: multilevel methods
and smoothing. Adv. Comput. Math., 11(2-3):139–159, 1999. doi:10.1023/A:1018919824891.
Radial basis functions and their applications.
D. A. Freedman. On the asymptotic behavior of Bayes’ estimates in the discrete case. Ann.
Math. Statist., 34:1386–1403, 1963. doi:10.1214/aoms/1177703871.
S. French. Aggregating expert judgement. Rev. R. Acad. Cienc. Exactas Fı́s. Nat. Ser. A Math.
RACSAM, 105(1):181–206, 2011. doi:10.1007/s13398-011-0018-6.
N. Garcia Trillos and D. Sanz-Alonso. Gradient flows: Applications to classification, image
denoising, and Riemannian MCMC, 2017. arXiv:1705.07382.
A. Gelman and X.-L. Meng.
Simulating normalizing constants: from importance
sampling to bridge sampling to path sampling.
Statist. Sci., 13(2):163–185, 1998.
doi:10.1214/ss/1028905934.
C. J. Geyer. Markov chain Monte Carlo maximum likelihood. Computing Science and Statistics,
Proceedings of the 23rd Symposium on the Interface, 1991.
M. Girolami and B. Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo
methods. J. R. Stat. Soc. Ser. B Stat. Methodol., 73(2):123–214, 2011. doi:10.1111/j.1467-9868.2010.00765.x. With discussion and a reply by the authors.
N. Goodman, V. Mansinghka, D. M. Roy, K. Bonawitz, and J. B. Tenenbaum. Church: a
language for generative models, 2012. arXiv:1206.3255.
T. Gunter, M. A. Osborne, R. Garnett, P. Hennig, and S. J. Roberts. Sampling for inference
in probabilistic models with fast Bayesian quadrature. In Proceedings of Advances in Neural
Information Processing Systems (NIPS), pages 2789–2797, 2014.
J. A. Gutierrez, T. Dyakowski, M. S. Beck, and R. A. Williams. Using electrical impedance
tomography for controlling hydrocyclone underflow discharge. Powder Technology, 108(2):
180–184, 2000.
R. Harvey and D. Verseghy. The reliability of single precision computations in the simulation
of deep soil heat diffusion in a land surface model. Clim. Dynam., 46(3865):3865–3882, 2015.
doi:10.1007/s00382-015-2809-5.
P. Hennig. Probabilistic interpretation of linear solvers. SIAM J. Optim., 25(1):234–260, 2015.
doi:10.1137/140955501.
P. Hennig and M. Kiefel. Quasi-Newton methods: a new direction. J. Mach. Learn. Res., 14:
843–865, 2013.
P. Hennig, M. A. Osborne, and M. Girolami. Probabilistic numerics and uncertainty in computations. Proceedings of the Royal Society A, 471(2179):20150142, 2015.
P. Henrici. Error Propagation for Difference Methods. John Wiley and Sons, Inc., New York–London, 1963.
N. J. Higham.
Accuracy and Stability of Numerical Algorithms.
Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, second edition, 2002.
doi:10.1137/1.9780898718027.
M. Horstein. Sequential transmission using noiseless feedback. IEEE Transactions on Information Theory, 9(3):136–143, 1963.
T. E. Hull and J. R. Swenson. Tests of probabilistic models for propagation of roundoff errors.
Communications of the ACM, 9(2):108–113, 1966.
A. T. Ihler and D. A. McAllester. Particle belief propagation. In Proceedings of Artificial
Intelligence and Statistics (AISTATS), 2009.
M. John and Y. Wu. Confidence intervals for finite difference solutions, 2017. arXiv:1701.05609.
J. B. Kadane. Principles of Uncertainty. Texts in Statistical Science Series. CRC Press, Boca
Raton, FL, 2011. doi:10.1201/b11322.
J. B. Kadane and G. W. Wasilkowski. Average case ε-complexity in computer science: A
Bayesian view. Technical report, Columbia University, 1983.
J. B. Kadane and G. W. Wasilkowski. Bayesian Statistics, chapter Average Case ε-Complexity
in Computer Science: A Bayesian View, pages 361–374. Elsevier, North-Holland, 1985.
W. Kahan. The Improbability of Probabilistic Error Analyses for Numerical Computations. In
UCB Statistics Colloquium, 1996.
M. Kanagawa, B. K. Sriperumbudur, and K. Fukumizu. Convergence guarantees for kernelbased quadrature rules in misspecified settings. In Advances in Neural Information Processing
Systems, pages 3288–3296, 2016.
T. Karvonen and S. Särkkä. Fully symmetric kernel quadrature, 2017. arXiv:1703.06359.
H. Kersting and P. Hennig. Active uncertainty calibration in Bayesian ODE solvers, 2016.
arXiv:1605.03364.
G. S. Kimeldorf and G. Wahba.
A correspondence between Bayesian estimation on
stochastic processes and smoothing by splines. Ann. Math. Statist., 41:495–502, 1970a.
doi:10.1214/aoms/1177697089.
G. S. Kimeldorf and G. Wahba. Spline functions and stochastic processes. Sankhyā Ser. A, 32:
173–180, 1970b.
D. Kinderlehrer and G. Stampacchia. An Introduction to Variational Inequalities and their
Applications, 2000. doi:10.1137/1.9780898719451. Reprint of the 1980 original.
B. J. K. Kleijn and A. W. van der Vaart. The Bernstein–Von-Mises theorem under misspecification. Electron. J. Stat., 6:354–381, 2012. doi:10.1214/12-EJS675.
A. N. Kolmogorov. Foundations of Probability. Ergebnisse Der Mathematik, 1933.
A. Kong, P. McCullagh, X.-L. Meng, D. Nicolae, and Z. Tan. A theory of statistical models
for Monte Carlo integration. J. R. Stat. Soc. Ser. B Stat. Methodol., 65(3):585–618, 2003.
doi:10.1111/1467-9868.00404. With discussion and a reply by the authors.
A. Kong, P. McCullagh, X.-L. Meng, and D. L. Nicolae. Further explorations of likelihood theory for Monte Carlo integration. In Advances in statistical modeling and inference, volume 3 of Ser. Biostat., pages 563–592. World Sci. Publ., Hackensack, NJ, 2007.
doi:10.1142/9789812708298_0028.
J. Koskela, D. Spano, and P. A. Jenkins. Inference and rare event simulation for stopped Markov
processes via reverse-time sequential Monte Carlo, 2016. arXiv:1603.02834.
J. T. N. Krebs. Consistency and asymptotic normality of stochastic Euler schemes for ordinary
differential equations, 2016. arXiv:1609.06880.
J. Kuelbs, F. M. Larkin, and J. A. Williamson. Weak probability distributions on reproducing
kernel Hilbert spaces. Rocky Mt. J. Math., 2(3):369–378, 1972. doi:10.1216/RMJ-1972-2-3-369.
1969.
F. M. Larkin. Optimal approximation in Hilbert spaces with reproducing kernel functions.
Mathematics of Computation, 24(112):911–921, 1970.
F. M. Larkin. Gaussian measure in Hilbert space and applications in numerical analysis. Rocky
Mt. J. Math., 2(3):379–421, 1972. doi:10.1216/RMJ-1972-2-3-379.
F. M. Larkin. Probabilistic error estimates in spline interpolation and quadrature. In Information processing 74 (Proc. IFIP Congress, Stockholm, 1974), pages 605–609. North-Holland,
Amsterdam, 1974.
F. M. Larkin. A modification of the secant rule derived from a maximum likelihood principle.
BIT, 19(2):214–222, 1979a. doi:10.1007/BF01930851.
F. M. Larkin. Bayesian Estimation of Zeros of Analytic Functions. Queen’s University of
Kingston. Department of Computing and Information Science, 1979b.
S. Lauritzen. Graphical Models. Oxford University Press, 1991.
L. Le Cam. On some asymptotic properties of maximum likelihood estimates and related Bayes’
estimates. Univ. California Publ. Statist., 1:277–329, 1953.
D. Lee and G. W. Wasilkowski. Approximation of linear functionals on a Banach space with a
Gaussian measure. J. Complexity, 2(1):12–43, 1986. doi:10.1016/0885-064X(86)90021-X.
T. Lienart, Y. W. Teh, and A. Doucet. Expectation particle belief propagation. In Proceedings
of Advances in Neural Information Processing Systems (NIPS), 2015.
D. V. Lindley. Understanding Uncertainty. Wiley Series in Probability and Statistics. John
Wiley & Sons, Inc., Hoboken, NJ, revised edition, 2014. doi:10.1002/9781118650158.indsp2.
F. Lindsten, A. M. Johansen, C. A. Naesseth, B. Kirkpatrick, T. B. Schön, J. A. D. Aston,
and A. Bouchard-Côté. Divide-and-conquer with sequential Monte Carlo. J. Comput. Graph.
Statist., 26(2):445–458, 2017. doi:10.1080/10618600.2016.1237363.
D. J. C. MacKay. Bayesian interpolation. Neural Computation, 4(3):415–447, 1992.
M. Mahsereci and P. Hennig. Probabilistic line searches for stochastic optimization. In Proceedings of Advances In Neural Information Processing Systems (NIPS), 2015.
J. Mockus. Bayesian Approach to Global Optimization: Theory and Applications. Springer
Science & Business Media, 1989.
S. Mosbach and A. G. Turner. A quantitative probabilistic investigation into the accumulation
of rounding errors in numerical ODE solution. Computers & Mathematics with Applications,
57(7):1157–1167, 2009.
A. Müller. Integral probability metrics and their generating classes of functions. Adv. in Appl.
Probab., 29(2):429–443, 1997. doi:10.2307/1428011.
S. Niederer, L. Mitchell, N. Smith, and G. Plank. Simulating human cardiac electrophysiology
on clinical time-scales. Front. in Physiol., 2:14, 2011. doi:10.3389/fphys.2011.00014.
E. Novak and H. Woźniakowski. Tractability of Multivariate Problems: Standard Information
for Functionals. European Mathematical Society, 2010.
C. Oates, F.-X. Briol, and M. Girolami. Probabilistic integration and intractable distributions,
2016a. arXiv:1606.06841.
C. J. Oates, T. Papamarkou, and M. Girolami. The controlled thermodynamic integral for
Bayesian model evidence evaluation. J. Amer. Statist. Assoc., 111(514):634–645, 2016b.
doi:10.1080/01621459.2015.1021006.
C. J. Oates, J. Cockayne, and R. G. Aykroyd. Bayesian probabilistic numerical methods for
industrial process monitoring. In preparation., 2017.
W. L. Oberkampf and C. J. Roy. Verification and Validation in Scientific Computing. Cambridge University Press, Cambridge, 2013.
A. O’Hagan. Bayes–Hermite quadrature. J. Statist. Plann. Inference, 29(3):245–260, 1991.
doi:10.1016/0378-3758(91)90002-V.
M. Osborne, R. Garnett, Z. Ghahramani, D. K. Duvenaud, S. J. Roberts, and C. E. Rasmussen.
Active learning of model evidence using Bayesian quadrature. In Proceedings of Advances in
Neural Information Processing Systems (NIPS), 2012a.
M. A. Osborne, R. Garnett, S. J. Roberts, C. Hart, S. Aigrain, N. Gibson, and S. Aigrain.
Bayesian quadrature for ratios. In Proceedings of Artificial Intelligence and Statistics (AISTATS), 2012b.
H. Owhadi. Bayesian numerical homogenization. Multiscale Model. Simul., 13(3):812–828, 2015.
doi:10.1137/140974596.
H. Owhadi. Multigrid with rough coefficients and multiresolution operator decomposition from
hierarchical information games. SIAM Rev., 59(1):99–149, 2017. doi:10.1137/15M1013894.
H. Owhadi, C. Scovel, and T. J. Sullivan. On the brittleness of Bayesian inference. SIAM Rev.,
57(4):566–582, 2015. doi:10.1137/130938633.
B. Paige and F. Wood. Inference networks for sequential Monte Carlo in graphical models. In
Proceedings of NIPS, 2015. arXiv:1602.06701.
J. Pfanzagl. Conditional distributions as derivatives. Ann. Probab., 7(6):1046–1050, 1979.
H. Poincaré. Calcul des Probabilités. Gauthier-Villars, 1912.
W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes: The
Art of Scientific Computing. Cambridge University Press, Cambridge, third edition, 2007.
M. Raissi, P. Perdikaris, and G. E. Karniadakis. Inferring solutions of differential equations
using noisy multi-fidelity data. arXiv, 2016. arXiv:1607.04805.
M. Raissi, P. Perdikaris, and G. E. Karniadakis. Numerical Gaussian processes for time-dependent and non-linear partial differential equations, 2017. arXiv:1703.10230.
K. Ritter. Average-Case Analysis of Numerical Problems, volume 1733 of Lecture Notes in
Mathematics. Springer-Verlag, Berlin, 2000. doi:10.1007/BFb0103934.
C. Robert and G. Casella. Monte Carlo Statistical Methods. Springer Science & Business Media.,
2013.
G. O. Roberts and R. L. Tweedie. Exponential convergence of Langevin distributions and their
discrete approximations. Bernoulli, 2(4):341–363, 1996. doi:10.2307/3318418.
C. Roy. Review of discretization error estimators in scientific computing. In Proceedings of AIAA
Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition,
2010.
J. Sacks and D. Ylvisaker. Statistical designs and integral approximation. In Proc. Twelfth
Biennial Sem. Canad. Math. Congr. on Time Series and Stochastic Processes; Convexity and
Combinatorics (Vancouver, B.C., 1969), pages 115–136. Canad. Math. Congr., Montreal,
Que., 1970.
S. Särkkä, J. Hartikainen, L. Svensson, and F. Sandblom. On the relation between Gaussian
process quadratures and sigma-point methods. Journal of Advances in Information Fusion,
11(1):31–46, 2016. arXiv:1504.05994.
M. Schober, D. K. Duvenaud, and P. Hennig. Probabilistic ODE solvers with Runge–Kutta
means. In Proceedings of Advances in Neural Information Processing Systems (NIPS), 2014.
M. Schober, S. Särkkä, and P. Hennig. A probabilistic model for the numerical solution of initial
value problems, 2016. arXiv:1610.05261v1.
A. Ścibior, Z. Ghahramani, and A. D. Gordon. Practical probabilistic programming with monads. SIGPLAN Notices, 50(12):165–176, 2015.
G. Shafer. A Mathematical Theory of Evidence. Princeton University Press, Princeton, N.J.,
1976.
C. Shan and N. Ramsey. Exact Bayesian inference by symbolic disintegration. In Proceedings
of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, pages
130–144. ACM, 2017.
J. Skilling. Bayesian solution of ordinary differential equations. In C. R. Smith, G. J. Erickson,
and P. O. Neudorfer, editors, Maximum Entropy and Bayesian Methods, volume 50 of Fundamental Theories of Physics, pages 23–37. Springer, 1992. doi:10.1007/978-94-017-2219-3.
R. Sripriya, M. Kaulaskar, S. Chakraborty, and B. Meikap. Studies on the performance of
a hydrocyclone and modeling for flow characterization in presence and absence of air core.
Chemical Engineering Science, 62(22):6391–6402, 2007.
G. Strang and G. Fix. An Analysis of the Finite Element Method. Englewood Cliffs, NJ:
Prentice-Hall., 1973.
S. H. Strogatz. Nonlinear Dynamics and Chaos. Westview Press, 2014.
A. M. Stuart. Inverse problems: a Bayesian perspective. Acta Numer., 19:451–559, 2010.
doi:10.1017/S0962492910000061.
A. V. Sul′din. Wiener measure and its applications to approximation methods. I. Izv. Vysš.
Učebn. Zaved. Matematika, 6(13):145–158, 1959.
A. V. Sul′din. Wiener measure and its applications to approximation methods. II. Izv. Vysš.
Učebn. Zaved. Matematika, 5(18):165–179, 1960.
T. J. Sullivan. Well-posed Bayesian inverse problems and heavy-tailed stable quasi-Banach
space priors, 2016. arXiv:1605.05898.
Z. Tan. On a likelihood approach for Monte Carlo integration. J. Amer. Statist. Assoc., 99
(468):1027–1036, 2004. doi:10.1198/016214504000001664.
O. Teymur, K. Zygalakis, and B. Calderhead. Probabilistic linear multistep methods. In
Proceedings of Advances in Neural Information Processing Systems (NIPS), 2016.
A. Törn and A. Žilinskas. Global Optimization, volume 350 of Lecture Notes in Computer
Science. Springer-Verlag, Berlin, 1989. doi:10.1007/3-540-50871-6.
J. F. Traub, G. W. Wasilkowski, and H. Woźniakowski. Information-Based Complexity. Computer Science and Scientific Computing. Academic Press, Inc., Boston, MA, 1988. With
contributions by A. G. Werschulz and T. Boult.
R. Waeber, P. I. Frazier, and S. G. Henderson. Bisection search with noisy responses. SIAM J.
Control Optim., 51(3):2261–2279, 2013. doi:10.1137/120861898.
R. M. West, S. Meng, R. G. Aykroyd, and R. A. Williams. Spatial-temporal modeling for
electrical impedance imaging of a mixing process. Review of Scientific Instruments, 76(7):
073703, 2005.
F. Wood, J.-W. van de Meent, and V. Mansinghka. A new approach to probabilistic programming inference. In Proceedings of Artificial Intelligence and Statistics (AISTATS), 2014.
∗ University of Warwick, j.cockayne@warwick.ac.uk
† Newcastle University and Alan Turing Institute, chris.oates@ncl.ac.uk
‡ Free University of Berlin and Zuse Institute Berlin, sullivan@zib.de
§ Imperial College London and Alan Turing Institute, m.girolami@imperial.ac.uk
Electronic Supplement
to the paper Bayesian Probabilistic Numerical Methods
S1. Philosophical Status of the Belief Distribution
The aim of this section is to discuss in detail the semantic status of the belief distribution µ
in a probabilistic numerical method (PNM). In Section S1.1 we survey historical work on this
topic, while in Section S1.2 more recent literature is covered. Then in Section S1.3 we highlight
some philosophical objections and their counter-arguments.
S1.1. Historical Precedent
The use of probabilistic and statistical methods to model a deterministic mathematical object
can be traced back to Poincaré (1912), who used a stochastic model to construct interpolation
formulae. In brief, Poincaré formulated a polynomial
f (x) = a0 + a1 x + · · · + am xm
whose coefficients ai were modelled as independent Gaussian random variables. Thus Poincaré
in effect constructed a Gaussian measure over the Hilbert space with basis {1, x, . . . , xm }. This
pre-empted Kimeldorf and Wahba (1970a,b) and others, which associated spline interpolation
formulae to the means of Gaussian measures over Hilbert spaces.
The first explicit statistical model for numerical error (of which we are aware) was in the
literature on rounding error in the numerical solution of ordinary differential equations (ODE),
as summarised in Hull and Swenson (1966). Therein it was supposed that rounding, by which
we mean representation of a real number x = 0.a_1 a_2 a_3 a_4 . . . ∈ [0, 1] in a truncated form x̂ = 0.a_1 a_2 a_3 a_4 . . . a_n, is such that the error e = x − x̂ can be reasonably modelled by a uniform random variable on [−5 × 10^{−(n+1)}, 5 × 10^{−(n+1)}]. This implies a distribution µ over the unknown value of x given
x̂. The contribution of Hull and Swenson (1966) and others was to replace the last digit an , in
each stored number that arises in the numerical solution of an ODE, with a uniformly chosen
element of {0, . . . , 9}. This performs approximate propagation of the numerical uncertainty due
to rounding error through further computation and, in their case, induces a distribution over
the solution space of the ODE. Note that this work focused on rounding error, rather than
the (time) discretisation error that is intrinsic to numerical ODE solvers; this could reflect the
limited precision arithmetic that was available from the computer hardware of the period.
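The mechanism is easy to reproduce in a toy setting. The sketch below is not Hull and Swenson's procedure, but applies the same idea, truncated storage with a randomised final digit, to an assumed Euler solve of x' = -x, inducing a spread of numerical solutions.

import numpy as np

rng = np.random.default_rng(0)
n_digits, h, steps = 4, 0.01, 100                # assumed working precision and Euler discretisation

def randomised_round(v, rng):
    # Keep the first n_digits - 1 decimal digits and replace the n-th by a uniform draw from {0,...,9}.
    scale = 10.0 ** (n_digits - 1)
    truncated = np.trunc(v * scale) / scale
    return truncated + rng.integers(0, 10) / 10.0 ** n_digits

finals = []
for _ in range(1000):
    x = 1.0
    for _ in range(steps):
        x = randomised_round(x + h * (-x), rng)  # Euler step, stored at low precision
    finals.append(x)
print(np.mean(finals), np.std(finals))           # the spread reflects propagated rounding uncertainty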
Larkin (1972) was an important historical paper for PNMs, being the first to set out the
modern statistical agenda for PNMs:
In any particular problem situation we are given certain specific properties of the
solution, e.g. a finite number of ordinate or derivative values at fixed abscissae. If
we can assume no more than this basic information we can conclude only that our
required solution is a member of that class of functions which possesses the given
properties - a tautology which is unlikely to appeal to an experimental scientist!
Clearly, we need to be given, or to assume, extra information in order to make more
definite statements about the required function.
Typically, we shall assume general properties, such as continuity or non-negativity of
the solution and/or its derivatives, and use the given specific properties in order to
assist in making a selection from the class K of all functions possessing the assumed
general properties. We shall choose K either to be a Hilbert space or to be simply
related to one.
This description defines a set K of permissible functions, rather than an explicit distribution
over K, but it is clear that Larkin envisaged numerical analysis as an instance of statistical
estimation:
In the present approach, an a priori localisation is achieved effectively by making
an assumption about the relative likelihoods of elements of the Hilbert space of
possible candidates for the solution to the original problem. Among other things,
this permits, at least in principle, the derivation of joint probability density functions
for functionals on the space and also allows us to evaluate confidence limits on the
estimate of a required functional (in terms of given values of other functionals)
without any extra information about the norm of the function in question.
Later, Diaconis (1988) re-iterated this argument for the construction of K more explicitly,
considering numerical integration of the function
    f(x) = exp( cosh( (x + x² + cos(x)) / (3 + sin(x³)) ) )
over the unit interval. In particular, Diaconis asked:
“What does it mean to ‘know’ a function?” The formula says some things (e.g. f is
smooth, positive and bounded by 20 on [0, 1]) but there are many other facts about
f that we don’t know (e.g. is f monotone, unimodal or convex?)
This argument was provided as justification for belief distributions that encode certain basic
features, such as the smoothness of the integrand. The belief distributions that were then
considered in Diaconis’ paper were Gaussian distributions on K. Diaconis, as well as Larkin
(1972); Kadane and Wasilkowski (1983), observed that some classical numerical methods are
Bayes rules in this context.
The arguments of these papers are intrinsic to modern PNMs. However, the associated
theoretical analysis of computation under finite information has proceeded outside of statistics,
in the applied mathematical literature, where it is usually presented without a statistical context.
That research is reviewed next.
S1.2. Contemporary Outlook
The mathematical foundations of computation based on finite information are established in
the field of information-based complexity (IBC). The monograph of Traub et al. (1988) presents
the foundations of IBC. In brief, the starting point for IBC is the mantra that
To compute fast you need to compute with partial information (∼ Houman Owhadi,
SIAM UQ 2016)
This motivates the search for optimal approximations based on finite information, in either
the worst-case or average-case sense of optimal. The particular development of PNMs that we
presented in the main text is somewhat aligned to average-case analysis (ACA) and we focus
on that literature in what follows.
Among the earliest work on ACA, Sul′din (1959, 1960) studied numerical integration and L2 function approximation in the setting where µ was induced from the Wiener process, with a
focus on optimal linear methods. Later, Sacks and Ylvisaker (1970) moved from analysis with
fixed µ to analysis over a class of µ defined by the smoothness properties of their covariance
kernels. At the same time Kimeldorf and Wahba (1970a,b) established optimality properties
of splines in reproducing kernel Hilbert spaces in the ACA context. Kadane and Wasilkowski
(1985); Diaconis (1988) discussed the connection between ACA and Bayesian statistics. A
general framework for ACA was formalised in the IBC monograph of Traub et al. (1988), while
Ritter (2000) provides a more recent account.
Game theoretic arguments have recently been explored in Owhadi (2015), who argued that the
optimal prior for probabilistic meshless methods (Cockayne et al., 2016) is a particular Gaussian
measure under a game theoretic framework where the energy norm is the loss function. This
provides one route to the specification of default or objective priors for PNMs which deserves
further exploration in general.
The question of “whose” belief is captured in µ was addressed in Hennig et al. (2015), where
it was argued that the prior information in µ represents that of a hypothetical agent (numerical
analyst) which
[. . . ] we are allowed to design (∼ Michael Osborne, personal correspondence, 2016).
This represents a more pragmatic approach to the design of PNM.
S1.3. Paradise Lost?
Typical numerical algorithms contain several different sources of discretisation error. Consider
the solution of the wave equation: A standard finite element method involves both spatial and
temporal discretisations, a series of numerical quadrature problems, as well as the use of finite
precision arithmetic for all numerical calculations. Yet, decades of numerical analysis have led
to highly optimised computer codes such that these methods can be routinely used. To develop
PNM for solution of the wave equation, which accounts for each separate source of discretisation
error, is it required to unpick and reconstruct such established numerical algorithms? This would
be an unattractive prospect that would detract from further research into PNMs.
Our view is that there is a choice for which discretisation errors to model. In practice the
PNMs implemented in this work were run on floating point precision machines, yet we did not
model rounding error in their output. This was because, in our examples, floating point error
is insignificant compared to discretisation error and so we chose not to model it. This is in line
with the view that a model is a useful simplification of the real world.
S2. Existence of Non-Randomised Bayes Rule
In this section we recall an argument for the general existence of non-randomised Bayes rules,
that was stated without proof in the main text. Sufficient conditions for Fubini’s theorem to
hold are assumed.
Proposition S2.1. Let B(A) be non-empty. Then B(A) contains a classical numerical method
of the form B(µ, a) = δ ◦ b(a) where b(a) is a Bayes act for each a ∈ A.
Proof. Let C be the set of belief update operators of the classical form B(µ, a) = δ ◦ b(a).
Suppose there exists a belief update operator B ∗ ∈ B(A) \ C. Then B ∗ can be characterised as
a non-atomic distribution π over the elements of C. Its risk can be computed as:
    R(µ, (A, B*)) = ∫ r(Q(x), B*(µ, A(x))) µ(dx)
                  = ∫∫ L(Q(x), b(A(x))) π(db) µ(dx)
                  = ∫ R(µ, (A, δ ◦ b)) π(db).
If we had R(µ, (A, B ∗ )) < R(µ, (A, δ ◦ b)) for all δ ◦ b ∈ C we would have a contradiction, so it
follows that B(A) ∩ C is non-empty. This completes the proof.
S3. Optimal Information: A Counterexample
In this section we demonstrate that the optimal information Aµ for Bayesian PNM and the
optimal information A∗µ from average case analysis are different in general.
Let X = {♠, ♦, ♥, ♣} be a discrete set, with quantity of interest Q(x) = 1[x = ♠] and
information operator A(x) = 1[x ∈ S] so that Q = A = {0, 1}. In particular, Q is not a vector
space and hence not an inner product space as specified in Theorem 3.3.
Consider two possible choices, S = {♠, ♦} and S = {♠, ♦, ♥}. Assume a uniform prior over
X. Consider the 0-1 loss function L(q, q') = 1[q ≠ q']. It will be shown that ACA optimal information for this example can be based on either S = {♠, ♦} or S = {♠, ♦, ♥}, whereas PNM optimal information must be based on S = {♠, ♦}. Thus Bayesian PNM optimal information Aµ and ACA optimal information A∗µ need not coincide in general.
The classical case considers a method of the form MBR = (A, BBR ), BBR = δ ◦ b, where
b(a) = 1[a = 0]c0 + 1[a = 1]c1
for some c0, c1 ∈ {0, 1}. The Bayes risk is

    R(µ, M_BR) = (1/4) Σ_{x∈{♠,♦,♥,♣}} { 1[x ∉ S] L(c0, 1[x = ♠]) + 1[x ∈ S] L(c1, 1[x = ♠]) }.

Case of S = {♠, ♦}: We have

    4 R(µ, M_BR) = L(c1, 1) + L(c1, 0) + L(c0, 0) + L(c0, 0)
                 = 1[c1 = 0] + 1[c1 = 1] + 2 × 1[c0 = 1],

which is minimised by c1 ∈ {0, 1} and c0 = 0 to obtain a minimum Bayes risk of 1/4.

Case of S = {♠, ♦, ♥}: We have

    4 R(µ, M_BR) = L(c1, 1) + L(c1, 0) + L(c1, 0) + L(c0, 0)
                 = 1[c1 = 0] + 2 × 1[c1 = 1] + 1[c0 = 1],

which is minimised by c0 = 0 and c1 = 0 to again obtain a minimum Bayes risk of 1/4. Thus the ACA optimal information can be based on either S = {♠, ♦} or S = {♠, ♦, ♥}.
On the other hand, for the Bayesian PNM we have that M_BPNM = (A, B_BPNM), B_BPNM = Q_#µ^A and

    R(µ, M_BPNM) = (1/4) Σ_{x∈{♠,♦,♥,♣}} { 1[x ∉ S] L(0, 0) + 1[x ∈ S] [ (1 − 1/|S|) L(0, 1[x = ♠]) + (1/|S|) L(1, 1[x = ♠]) ] }.

Case of S = {♠, ♦}: We have

    4 R(µ, M_BPNM) = 1/2 + 1/2 + 0 + 0 = 1.

Case of S = {♠, ♦, ♥}: We have

    4 R(µ, M_BPNM) = 2/3 + 1/3 + 1/3 + 0 = 4/3.
Thus the PNM optimal information is S = {♠, ♦} and not S = {♠, ♦, ♥}. Hence, PNM and
ACA optimal information differ in general.
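These risks can be verified by brute force; the short enumeration below (not from the paper) reproduces them under the uniform prior and 0-1 loss.

from itertools import product

X = ["spade", "diamond", "heart", "club"]
Q = {x: int(x == "spade") for x in X}             # quantity of interest Q(x) = 1[x = spade]
L = lambda q, qp: int(q != qp)                    # 0-1 loss

def aca_risk(S):
    # Minimise over classical estimators b(a) = c_a for a in {0, 1}.
    return min(
        sum(L(Q[x], c[int(x in S)]) for x in X) / 4.0
        for c in product([0, 1], repeat=2)
    )

def pnm_risk(S):
    risk = 0.0
    for x in X:
        members = [y for y in X if (y in S) == (x in S)]   # states consistent with the observed A(x)
        post = {q: sum(Q[y] == q for y in members) / len(members) for q in (0, 1)}
        risk += sum(L(Q[x], q) * p for q, p in post.items()) / 4.0
    return risk

for S in (["spade", "diamond"], ["spade", "diamond", "heart"]):
    print(S, "ACA risk:", aca_risk(S), "Bayesian PNM risk:", pnm_risk(S))
# ACA risk is 1/4 for both choices of S; the Bayesian PNM risk is 1/4 for the first and 1/3 for the second.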
S4. Monte Carlo Methods for Numerical Disintegration
In this section, Monte Carlo methods for sampling from the distribution µaδ (or µaδ,N ; the N
subscript will be suppressed to reduce notation in the sequel) are considered. The Monte Carlo
approximation of µaδ is, in effect, a problem in rare event simulation as most of the mass of
µaδ will be confined to a set S such that µ(S) is small. Rare events pose some difficulties for
classical Monte Carlo, as an enormous number of draws can be required to study the rare event
of interest.
In the literature there are two major solutions proposed. Importance sampling (Robert and
Casella, 2013) samples from a modified process, under which the event of interest is more likely,
then re-weights these samples to compensate for the adjustment. Conversely, in splitting (Botev
and Kroese, 2012) trajectories of the process are constructed in a genetic fashion, by retaining
and duplicating those which approach the events of interest and discarding others. Splitting is
closely related to SMC (Cérou et al., 2012) and Feynman–Kac models (Del Moral, 2004).
The splitting approach is described in the following section, while in Section S4.3 a parallel
tempering (PT) algorithm is described. In spirit these approaches are similar in that they
employ a tempering approach to ease sampling the relaxed posterior distribution for a small
value of δ. The SMC method employs a particle approximation to accomplish this, while the
PT algorithm uses coupled Markov chains.
S4.1. Sequential Monte Carlo Algorithms for Numerical Disintegration
Let {δ_i}_{i=0}^m be such that δ_0 = ∞, δ_m = δ and δ_i > δ_{i+1} > 0 for all i < m − 1. Furthermore let {K_i}_{i=1}^m be some set of Markov transition kernels that leave µ^a_{δ_i} invariant, for which K_i(·, S) is measurable for all S ∈ Σ_X and K_i(x, ·) is an element of P_X for all x ∈ X. Then our SMC for numerical disintegration (SMC-ND) algorithm, based on P particles, is given in Algorithm 1.
Sample x_j^0 ∼ µ for j = 1, . . . , P                                        [Initialise]
for i = 1, . . . , m do
    Sample x_j^{i−1} ∼ K_i(x_j^{i−1}, ·) for j = 1, . . . , P                [Move]
    Set w_j^i ← φ(δ_i^{−1} ‖A(x_j^{i−1}) − a‖_A) / φ(δ_{i−1}^{−1} ‖A(x_j^{i−1}) − a‖_A) for j = 1, . . . , P   [Re-weight]
    Sample x_j^i ∼ Discrete({x_j^{i−1}}_{j=1}^P ; {w_j^i}_{j=1}^P) for j = 1, . . . , P   [Re-sample]
end
Algorithm 1: Sequential Monte Carlo for Numerical Disintegration (SMC-ND).
Here we have used Discrete({x_j}_{j=1}^P ; {w_j}_{j=1}^P) to denote the discrete distribution which puts mass proportional to w_j on the state x_j ∈ X.
The output of the SMC-ND algorithm is an empirical approximation1

    µ^a_{δ_m,P} = (1/P) Σ_{j=1}^P δ(x_j^m)

to µ^a_{δ_m} based on a population of P particles {x_j^m}_{j=1}^P. There is substantial room to extend and
improve the SMC-ND algorithm based on the wide body of literature available on this subject
(e.g. Doucet et al., 2001; Del Moral et al., 2006; Beskos et al., 2017; Ellam et al., 2016), but
we defer all such improvements for future work. Our aim in the remainder is to establish the
approximation properties of the SMC-ND output. This will be based on theoretical results in
Del Moral et al. (2006).
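As a concrete illustration, the toy transcription of Algorithm 1 below instantiates it in a deliberately simple setting; the choices X = R, standard Gaussian µ, A(x) = x² with a = 1, φ(r) = exp(−r²/2) and a random-walk Metropolis move kernel are all assumptions made for illustration, not choices taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
phi = lambda r: np.exp(-0.5 * r ** 2)             # a positive, decreasing choice of phi
A = lambda x: x ** 2                              # toy information operator on X = R
a, P = 1.0, 2000                                  # target information and particle count
deltas = np.concatenate(([np.inf], np.geomspace(1.0, 0.01, 20)))   # delta_0 = inf > ... > delta_m

def log_target(x, delta):                         # unnormalised log density of mu^a_delta
    lp = -0.5 * x ** 2                            # standard Gaussian prior mu
    if np.isfinite(delta):
        lp = lp - 0.5 * (np.abs(A(x) - a) / delta) ** 2
    return lp

def move(x, delta, rng, n_steps=5):               # Metropolis kernel leaving mu^a_delta invariant
    for _ in range(n_steps):
        prop = x + 0.5 * rng.standard_normal(x.shape)
        accept = np.log(rng.uniform(size=x.shape)) < log_target(prop, delta) - log_target(x, delta)
        x = np.where(accept, prop, x)
    return x

x = rng.standard_normal(P)                        # [Initialise]  x_j^0 ~ mu
for i in range(1, len(deltas)):
    x = move(x, deltas[i], rng)                   # [Move]
    r = np.abs(A(x) - a)
    w = phi(r / deltas[i]) / phi(r / deltas[i - 1])   # [Re-weight]
    w = w / w.sum()
    x = x[rng.choice(P, size=P, p=w)]             # [Re-sample]
print(np.mean(np.abs(x)))                         # mass concentrates near the solutions of A(x) = a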
Assumption S4.1. φ > 0 on R+.

Assumption S4.2. For all i = 0, . . . , m − 1 and all x, y ∈ X, it holds that K_{i+1}(x, ·) ≪ K_{i+1}(y, ·). Furthermore there exist constants ε_i > 0 such that the Radon–Nikodým derivative satisfies

    dK_{i+1}(x, ·) / dK_{i+1}(y, ·) ≥ ε_i.
Assumption S4.1 ensures that Algorithm 1 is well-defined, else it can happen that all particles
are assigned zero weight and re-sampling will fail. However, the result that we obtain in Theorem
S4.3 below can also be established in the special case of an indicator function φ(r) = 1[r < 1].
The details for this variation of the results are also included in the sequel.
The interpretation of Assumption S4.2 is that, for fixed i, transition kernels do not allocate
arbitrarily large or small amounts of mass to different areas of the state space, as a function of
their first argument. This poses a constraint on the choice of Markov kernels for the SMC-ND
algorithm.
Theorem S4.3. For all δ ∈ {δ_i}_{i=0}^m and fixed p ≥ 1 it holds that

    E[ |µ^a_{δ,P}(f) − µ^a_δ(f)|^p ]^{1/p} ≤ C_p ‖f‖_F / √P

for some constant C_p independent of P but dependent on {δ_i}_{i=0}^m, p and {ε_i}_{i=0}^{m−1}.
1. The bandwidth parameter δ and the use of δ to denote an atomic distribution should not be confused.
The proof of Theorem S4.3 is presented next. Note that the established bound is independent of
δ ∈ {δ_i}_{i=0}^m; this is therefore a uniform convergence result. The assumptions and the conclusion of Theorem S4.3 can be weakened in several directions, as discussed in detail in (Del Moral et al., 2006). Development of SMC methods in the context of high-dimensional and infinite-dimensional state spaces has also been considered in Beskos et al. (2014, 2015).
S4.2. Proof of Theorem S4.3
In this section we establish the uniform convergence of the SMC-ND algorithm as claimed in
Theorem S4.3. This relies on a powerful technical result from Del Moral (2004), whose context
is now established.
S4.2.1. Feynman–Kac Models
Let (Ei , Ei ) for i = 0, . . . , m be a collection of measurable spaces. Let η0 be a measure on E0 and
let Γi index a collection of Markov transition kernels from Ei−1 to Ei . Let Gi : Ei → (0, 1] be a
collection of functions, which are referred to as potentials. The triplets (η0 , Gi , Γi ) are associated
with Feynman–Kac measures ηi on Ei defined as, for bounded and measurable functions fi on
Ei ;
ηi (fi ) =
γi (fi )
γi (1)
γi (fi ) = Eη0 fi (X i )
i−1
Y
j=0
Gj (X j )
where the expectation is taken with respect to the Markov process X i defined by X 0 ∼ η0 and
X i |X i−1 ∼ Γi (X i−1 , ·).
The Feynman–Kac measures can be associated with a (non-unique) McKean interpretation
of the form ηi+1 = ηi Λi+1,ηi where the Λi+1,η are a collection of Markov transitions for which
the following compatibility condition holds:
ηΛi+1,η =
Gi
ηΓi+1
η(Gi )
Then the ηi can be interpreted as the ith step marginal distribution of the non-homogeneous
Markov chain defined by X 0 ∼ η0 and X i+1 |X i ∼ Λi+1,ηi (X i , ·). The corresponding P -particle
model is defined on EiP = Ei × · · · × Ei and has
X0 ∼ η0P
P(Xi ∈ dxi |Xi ) =
P
Y
Λi,ηP (Xji−1 , dxij )
i−1
j=1
P
where ηiP = P1 Pj=1 δ(Xji ) is an empirical (random) measure on Ei . The SMC-ND algorithm
can be cast as an instance of such a P -particle model, as is made clear later.
The result that we require from Del Moral (2004) is given next. Denote by Osc1 (Ei ) the set
of measurable functions fi on Ei for which sup{|fi (xi ) − fi (y i )| : xi , y i ∈ Ei } ≤ 1.
Theorem (Theorem 7.4.4 in Del Moral (2004)). Suppose that:
(G) There exist ε_i^G ∈ (0, 1] such that G_i(x^i) ≥ ε_i^G G_i(y^i) > 0 for all x^i, y^i ∈ E_i.
(M_1) There exist ε_i^Γ ∈ (0, 1) such that Γ_{i+1}(x^i, ·) ≥ ε_i^Γ Γ_{i+1}(y^i, ·) for all x^i, y^i ∈ E_i.
Then for p ≥ 1 and any valid McKean interpretation Λ_{i,η}, the associated P-particle model η_i^P satisfies the uniform (in i) bound
    sup_{0≤i≤m} sup_{f_i ∈ Osc_1(E_i)} √P E[ |η_i^P(f_i) − η_i(f_i)|^p ]^{1/p} ≤ C_p
for some constant C_p independent of P but dependent on {ε_i^G}_{i=0}^m and {ε_i^Γ}_{i=0}^{m−1}.
The actual statement in Del Moral (2004) contains a more general version of (M1 ) and a
more explicit decomposition of the constant Cp ; however the simpler version presented here is
sufficient for the purposes of the present paper.
S4.2.2. Case A: Positive Function φ(r) > 0
First we prove Theorem S4.3 as it is stated. Later the assumption of φ > 0 will be relaxed.
SMC-ND as a Feynman–Kac Model The aim here is to demonstrate that the SMC-ND
algorithm fits into the framework of Section S4.2.1 for a specific McKean interpretation. This
connection will then be used to establish uniform convergence for the SMC-ND algorithm as a
consequence of Theorem 7.4.4 in Del Moral (2004).
For the state spaces we associate each Ei = X and Ei = ΣX . For the potentials we associate
    G_i(x^i) = φ( (1/δ_{i+1}) ‖A(x^i) − a‖_A ) / φ( (1/δ_i) ‖A(x^i) − a‖_A ) ,
which clearly does not vanish and takes values in (0, 1] since δi > δi+1 and φ is decreasing. For
the Markov transitions we associate Γi+1 with Ki+1 .
The Feynman–Kac measures associated with the SMC-ND algorithm can be cast as a non-homogeneous Markov chain with transitions Λ_{i+1,η}. Here Λ_{i+1,η_i} acts on the current measure η_i on X by first propagating as η_i K_{i+1} and then “warping” this measure with the potential G_i; i.e.
    ηΛ_{i+1,η} = (G_i / η(G_i)) ηΓ_{i+1} .
This demonstrates that the SMC-ND algorithm is the P -particle model corresponding to the
McKean interpretation Λi+1,η of the Feynman–Kac triplet (η0 , Gi , Γi ). Thus the SMC-ND
algorithm can be studied in the context of Section S4.2.1, which we report next.
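For illustration only, the following Python sketch implements one weight/re-sample/move sweep of the P-particle model just described, for the case φ > 0. The function names (phi, A_op, norm_A, K_move) are placeholders for the problem-specific ingredients and are not part of the original specification; phi is assumed to be vectorised over numpy arrays.

import numpy as np

def smc_nd_sweep(particles, a, delta_curr, delta_next, phi, A_op, norm_A, K_move, rng):
    """One SMC-ND iteration: weight by G_i, re-sample, then move through K_{i+1}.

    particles : array of shape (P, d), ensemble currently targeting mu^a_{delta_curr}
    phi       : decreasing relaxation function with phi(r) > 0
    A_op      : information operator x -> A(x); norm_A : norm on the data space
    K_move    : Markov kernel leaving mu^a_{delta_next} invariant, called as K_move(x, rng)
    """
    P = particles.shape[0]
    r = np.array([norm_A(A_op(x) - a) for x in particles])
    # Incremental weights G_i(x) = phi(r/delta_{i+1}) / phi(r/delta_i), which lie in (0, 1]
    w = phi(r / delta_next) / phi(r / delta_curr)
    w = w / w.sum()
    # Multinomial re-sampling according to the normalised weights
    idx = rng.choice(P, size=P, p=w)
    resampled = particles[idx]
    # Move step: propagate every particle through the kernel K_{i+1}
    return np.array([K_move(x, rng) for x in resampled])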
Note that it is common in applications of SMC to perform the “Re-sample” step before the “Move” step; our choice of order was required for the McKean framework that is the basis of the theoretical results in Del Moral et al. (2006). It is known in SMC folklore that the order of these steps can be interchanged.
Proof of Uniform Convergence Result for SMC-ND It remains to verify the hypotheses of
Theorem 7.4.4 in Del Moral (2004). Condition (G) is satisfied if and only if
    φ( (1/δ_{i+1}) ‖A(x^i) − a‖_A )
is bounded below, since
    φ( (1/δ_i) ‖A(x^i) − a‖_A )
is bounded above by 1. Since φ is continuous, decreasing and satisfies φ > 0 (Assumption S4.1), it suffices to show that its argument (1/δ_{i+1}) ‖A(x) − a‖_A is upper-bounded. This is the content of Assumption 4.6 in the main text, which shows that
    (1/δ_i) ‖A(x) − a‖_A ≤ (1/δ_i) sup_{x∈X} ‖A(x)‖_A + (1/δ_i) ‖a‖_A < ∞ ,
so that ε_i^G can be taken to be φ evaluated at this finite constant.
Condition (M_1) requires that
    Γ_{i+1}(x^i, S) ≥ ε_i^Γ Γ_{i+1}(y^i, S)
for all x^i, y^i ∈ E_i and S ∈ E_{i+1}. From construction this is equivalent to
    K_{i+1}(x^i, S) ≥ ε_i^Γ K_{i+1}(y^i, S)
for all x^i, y^i ∈ X and S ∈ Σ_X. This is the content of Assumption S4.2.
Thus we have established the hypotheses of Theorem 7.4.4 in Del Moral (2004) for the SMC-ND algorithm. Theorem S4.3 is a re-statement of this result. For the statement of the result we used the ‖f‖_F norm, based on the fact that (from Assumption 4.7) ‖f_i‖_{Osc(E_i)} ≤ 2‖f‖_∞ ≤ 2 C_F ‖f‖_F.
S4.2.3. Case B: Indicator Function φ(r) = 1[r < 1]
The previous analysis required that φ > 0 on R_+. However, the most basic choice for φ is the indicator function φ(r) = 1[r < 1], which can take the value 0. The case of an indicator function demands special attention, since Algorithm 1 can fail in this case if all particles are assigned zero weight. If this occurs, then we simply define µ^a_{δ,P}(f) = 0. To be specific, the SMC-ND algorithm associated to the indicator function φ for approximation of the integral µ^a_δ(f) is stated as Algorithm 2 next.
Let X^a_δ = {x ∈ X : ‖A(x) − a‖_A < δ}. If there is some iteration i at which, after applying the kernel K_i to each particle, no particle lies within X^a_{δ_i}, the algorithm fails. As a result it is critical to ensure that the distance between successive δ_i is small so that the probability of failure is controlled. This requirement is made formal next. To establish the approximation properties of the random measure µ^a_{δ_m,P}, two assumptions are required. These are intended to replace Assumptions S4.1, S4.2 and Assumption 4.6 from the main text:
Assumption S4.4. For all i = 0, . . . , m − 1 and all x^i ∈ X^a_{δ_i}, it holds that K_{i+1}(x^i, X^a_{δ_{i+1}}) > 0.
Assumption S4.5. For all i = 0, . . . , m − 1 and all x^i, y^i ∈ X^a_{δ_i}, K_{i+1}(x^i, ·) ≪ K_{i+1}(y^i, ·). Furthermore there exist constants ε_i > 0 such that the Radon–Nikodým derivative satisfies
    dK_{i+1}(x^i, ·) / dK_{i+1}(y^i, ·) ≥ ε_i .
Assumption S4.4 requires that the probability of reaching X^a_{δ_{i+1}} when starting in X^a_{δ_i} and applying the transition kernel K_{i+1} is positive. Assumption S4.5 ensures that, for fixed i, transition kernels do not allocate arbitrarily large or small amounts of mass to different areas of the state space, as a function of their first argument.
57
Sample x_j^0 ∼ µ for j = 1, . . . , P  [Initialise]
for i = 1, . . . , m do
    Sample x_j^i ∼ K_i(x_j^{i−1}, ·) for j = 1, . . . , P  [Sample]
    E_i ← {x_j^i : x_j^i ∈ X^a_{δ_i}}
    if E_i = ∅ then
        Return µ^a_{δ,P}(f) ← 0
    end
    for j = 1, . . . , P do
        if x_j^i ∉ E_i then
            x_j^i ∼ Uniform(E_i)  [Re-sample]
        end
    end
end
Return µ^a_{δ,P}(f) ← (1/P) ∑_{j=1}^P f(x_j^m)
Algorithm 2: Sequential Monte Carlo for Numerical Disintegration (SMC-ND), for the case where φ(r) = 1[r < 1].
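For completeness, a minimal Python sketch of this indicator-function variant is given below. The routine names (sample_prior, kernels, in_relaxed_set) are placeholders, and the convention of returning 0 on failure mirrors the definition above; this is a sketch, not a prescription for how the kernels should be built.

import numpy as np

def smc_nd_indicator(P, m, sample_prior, kernels, in_relaxed_set, f, rng):
    """Algorithm 2 sketch: kernels[i-1] moves particles at level i; in_relaxed_set(x, i) tests x in X^a_{delta_i}."""
    x = np.array([sample_prior(rng) for _ in range(P)])          # Initialise
    for i in range(1, m + 1):
        x = np.array([kernels[i - 1](xi, rng) for xi in x])      # Sample
        survivors = [j for j in range(P) if in_relaxed_set(x[j], i)]
        if not survivors:                                        # all particles rejected
            return 0.0
        for j in range(P):
            if j not in survivors:                               # Re-sample uniformly from survivors
                x[j] = x[rng.choice(survivors)]
    return float(np.mean([f(xi) for xi in x]))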
Theorem S4.6. For the alternative situation of an indicator function, it holds that for all δ ∈ {δ_i}_{i=0}^m and fixed p ≥ 1,
    E[ |µ^a_{δ,P}(f) − µ^a_δ(f)|^p ]^{1/p} ≤ C_p ‖f‖_F / √P
for some constant C_p independent of P but dependent on p and {ε_i}_{i=0}^{m−1}.
Cérou et al. (2012) proposed an algorithm similar to the one herein but focussed on approximation of the probability of a rare event rather than sampling from the rare event itself. In
particular the theoretical results provided are in terms of these probabilities rather than how
well the measure restricted to the rare event is approximated. Furthermore, many of the results
therein focused upon an idealised version of the problem, in which it was assumed that the
intermediate restricted measures can be sampled directly; this avoids the issues with vanishing
potentials indicated in Del Moral (2004). A similar algorithm was discussed in Ścibior et al.
(2015) but was not shown to be theoretically sound.
The remainder of this Section establishes Theorem S4.6.
SMC-ND as a Feynman–Kac Model The aim here is to demonstrate that Algorithm 2 fits
into the framework of Section S4.2.1 for a specific McKean interpretation. This is analogous to
the proof of Theorem S4.3.
A technical complication is that the potentials G_i must take values in (0, 1], which precludes the “obvious” choice of E_i = X and G_i(x^i) as indicator functions for the sets X^a_{δ_i}. Instead, we associate E_i = X^a_{δ_i} and E_i with the corresponding restriction of Σ_X. For the potentials we then take G_i(x^i) = 1 for all x^i ∈ E_i, which clearly does not vanish and takes values in (0, 1]. For the Markov transitions Γ_{i+1} from E_i to E_{i+1} we consider
    Γ_{i+1}(x^i, dx^{i+1}) ∝ K_{i+1}(x^i, x^{i+1})
which is the restriction of Ki+1 to Ei+1 . For the latter to be well-defined it is required that the
normalisation constant
    ∫_{E_{i+1}} K_{i+1}(x^i, x^{i+1}) dx^{i+1} > 0
for all x^i ∈ E_i, so that there is a positive probability of reaching E_{i+1} from E_i. This is the content of Assumption S4.4.
The Feynman–Kac measures associated with Algorithm 2 can be cast as a non-homogeneous Markov chain with transitions Λ_{i+1,η}. Here Λ_{i+1,η_i} acts on the current measure η_i on E_i by first propagating as η_i K_{i+1} and then restricting this measure to E_{i+1}. This procedure is seen to be identical to the Markov transition Γ_{i+1} defined above and, since the potentials G_i ≡ 1, it follows that
    ηΛ_{i+1,η} = ηΓ_{i+1} = (G_i / η(G_i)) ηΓ_{i+1} .
This demonstrates that Algorithm 2 is the P -particle model corresponding to the McKean
interpretation Λi+1,η of the Feynman–Kac triplet (η0 , Gi , Γi ). Thus the SMC-ND algorithm can
be studied in the context of Section S4.2.1, which we report next.
Proof of Uniform Convergence Result for SMC-ND It remains to verify the hypotheses of Theorem 7.4.4 in Del Moral (2004). Condition (G) is satisfied with no further assumption, since G_i ≡ 1 and we can take ε_i^G = 1. Condition (M_1) requires that
    Γ_{i+1}(x^i, S) ≥ ε_i^Γ Γ_{i+1}(y^i, S)
for all x^i, y^i ∈ E_i and S ∈ E_{i+1}. From construction this is equivalent to
    K_{i+1}(x^i, S) ≥ ε_i^Γ K_{i+1}(y^i, S)
for all x^i, y^i ∈ E_i and S ∈ E_{i+1}. This is the content of Assumption S4.5.
Thus we have established the hypotheses of Theorem 7.4.4 in Del Moral (2004) for Algorithm
2 and in doing so have established Theorem S4.6.
S4.3. Parallel Tempering for Numerical Disintegration
Let K_i, {δ_i}_{i=1}^m be as in Section S4.1. The PT algorithm (Geyer, 1991) for sampling from µ^a_{δ_m}
i=1 be as in Section S4.1. The PT algorithm (Geyer, 1991) for sampling from µδm
runs m Markov chains in parallel, one for each temperature, by alternately applying Ki , then
randomly proposing to “swap” the current state of two of the chains. Commonly only swaps of
adjacent chains are considered; to this end suppose at iteration j an index q ∈ {0, . . . , m − 1}
has been selected. Denote by xq the state of the chain with µaδq as its invariant measure. Then
to ensure the correct invariant distribution of all chains is maintained, the swap of state xq and
xq+1 is accepted with probability
    α(x^q, x^{q+1}) = [ π_q(x^{q+1}) π_{q+1}(x^q) ] / [ π_q(x^q) π_{q+1}(x^{q+1}) ]    (S4.1)
where πq denotes the density of the target distribution µaδq with respect to a suitable reference
measure. The density notation can be justified since in our experiments the sampler was applied
to the finite-dimensional distributions µaδq ,N and so the reference measure can be taken to be
the Lebesgue measure on RN .
Given some initial x_0^i for i = 1, . . . , m  [Initialise]
for j = 1, . . . , P do
    Sample x̂_j^i ∼ K_i(x_{j−1}^i, ·) for i = 1, . . . , m  [Move]
    Sample q ∼ Uniform(0, m − 1)
    if U(0, 1) < α(x̂_j^q, x̂_j^{q+1}) then
        Set x_j^q = x̂_j^{q+1} and x_j^{q+1} = x̂_j^q  [Accept Swap]
    else
        Set x_j^q = x̂_j^q and x_j^{q+1} = x̂_j^{q+1}  [Reject Swap]
    end
    For i ≠ q, q + 1, set x_j^i = x̂_j^i  [Update]
end
Algorithm 3: Parallel Tempering for Numerical Disintegration
The PT algorithm for numerical disintegration is described in Algorithm 3. The samples {x_j^m}_{j=1}^P are approximate draws from the distribution µ^a_{δ_m}.
Algorithms 1 and 3 are each valid for sampling from a target measure µaδ . The choice of which
algorithm to use is problem dependent, and each algorithm has been applied in the experiments
in Section 6.
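A minimal Python sketch of the swap step in Algorithm 3 is given below, assuming access to the unnormalised log-densities log π_q at each temperature level; all names are illustrative only and working in log space is a numerical convenience, not a detail taken from the text.

import numpy as np

def pt_swap(states, log_pi, rng):
    """Propose a single adjacent-chain swap for parallel tempering.

    states : list of current states, states[q] targets pi_q
    log_pi : callable (q, x) -> log of the unnormalised density pi_q(x)
    """
    m = len(states)
    q = rng.integers(0, m - 1)          # pick an adjacent pair (q, q+1), 0-based indexing
    # log of the acceptance ratio in (S4.1)
    log_alpha = (log_pi(q, states[q + 1]) + log_pi(q + 1, states[q])
                 - log_pi(q, states[q]) - log_pi(q + 1, states[q + 1]))
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        states[q], states[q + 1] = states[q + 1], states[q]   # accept the swap
    return states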
S4.4. Estimation of Model Evidence
The model evidence p_A(a) was estimated as a by-product of the numerical disintegration algorithm developed. Attention is restricted to the specific relaxation function φ(r) = exp(−r^2). Then the thermodynamic integral identity (Gelman and Meng, 1998) can be exploited to calculate the model evidence:
    log p_A(a) = − lim_{δ↓0} (1/δ^2) ∫_0^1 ∫ ‖A(x) − a‖_A^2 dµ^a_{δ/√t} dt ,
where the parameterisation δ ↦ δ/√t is such that t = 0 corresponds to the prior, while t = 1 corresponds to the distribution µ^a_δ.
To approximate this integral, the outer integral is first discretised. To this end, fix a sequence ∞ = δ_0 > δ_1 > · · · > δ_m of relaxation parameters. For convenience this may be the same sequence as used to apply numerical disintegration. Then for δ_m small, and letting t_i = δ_m^2 / δ_i^2:
    log p_A(a) ≈ − (1/δ_m^2) ∑_{i=1}^m (t_i − t_{i−1}) ∫ ‖A(x) − a‖_A^2 dµ^a_{δ_m/√t_i} .
Thus we obtain a consistent approximation
    log p_A(a) ≈ − ∑_{i=1}^m ( 1/δ_i^2 − 1/δ_{i−1}^2 ) ∫ ‖A(x) − a‖_A^2 dµ^a_{δ_i} ,
in which each integral ∫ ‖A(x) − a‖_A^2 dµ^a_{δ_i} is denoted (∗).
The terms (∗) were estimated via Monte Carlo, based on samples from the distributions µaδi obtained through numerical disintegration. Higher-order quadrature rules and variance reduction
techniques can be used, but were not implemented for this work (Oates et al., 2016b).
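The telescoping sum above can be evaluated directly from the per-level samples; a Python sketch is given below. Variable names are illustrative, and the convention 1/δ_0^2 = 0 for δ_0 = ∞ is applied explicitly.

import numpy as np

def log_evidence_estimate(deltas, level_samples, A_op, a, norm_A):
    """Estimate log p_A(a) via the telescoping sum over relaxation levels.

    deltas        : [delta_1, ..., delta_m], decreasing, with the convention 1/delta_0^2 = 0
    level_samples : level_samples[i] is an array of draws from mu^a_{delta_{i+1}}
    """
    total = 0.0
    inv_sq_prev = 0.0                      # 1 / delta_0^2 with delta_0 = infinity
    for i, delta in enumerate(deltas):
        xs = level_samples[i]
        # Monte Carlo estimate of the integral (*) at level delta_i
        mean_sq = np.mean([norm_A(A_op(x) - a) ** 2 for x in xs])
        inv_sq = 1.0 / delta ** 2
        total -= (inv_sq - inv_sq_prev) * mean_sq
        inv_sq_prev = inv_sq
    return total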
S4.5. Monte Carlo Details for Painlevé Transcendental
Sampling of the posterior was performed for a temperature schedule of m = 1600 steps, equally
spaced on a logarithmic scale from 10 to 10^{−4}, for an ensemble of P = 200 particles.
Specification of appropriate transition kernels K_i for this problem was challenging, due both to the high dimension and to the empirical observation that, for small δ, mixing of the chains tends to be poor. This is likely due to the nonlinearity of the information operator, which leads to a highly complex posterior structure. For this reason a gradient-based sampler was used to construct the transition kernel: the Metropolis-adjusted Langevin algorithm (MALA) (Roberts and Tweedie, 1996).
Denote by u^k the coefficients [u_j^k]_{j=1}^N at iteration k of MALA. Then, recall that MALA has proposals given by
    u^{k+1} = u^k + τ_i Γ ∇ log π_i(u^k) + √(2 τ_i Γ) W ,
where W is a standard Gaussian random vector and Γ ∈ R^{N×N} is a positive definite preconditioning matrix. The τ_i were taken to be fixed for each kernel K_i to a value found empirically to provide a reasonable acceptance rate. π_i denotes the unnormalised target distribution for K_i, here given by
    π_i(u^k) = φ( ‖A(x^N) − a‖_A / δ_i ) q^N(u^k) ,
where x^N = ∑_{i=0}^N u_i φ_i and q^N(·) denotes the prior density of the coefficients [u_j]_{j=1}^N.
To ensure proposals were scaled to match the decay of the prior for the coefficients, we took
Γ = diag(γ), the diagonal matrix which has the coefficients γi on its diagonal. Even with
such a transition kernel, mixing is generally poor. To compensate, k was taken to be large; for
n = 12, 17 we took k = 10, 000, while for n = 22 we took k = 40, 000. We note that such a
large number of temperature levels and transitions makes computation expensive, highlighting
the importance of future work toward methods for approximating the Bayesian posterior in a
more computationally efficient manner.
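A sketch of the preconditioned MALA update used to build each kernel K_i is given below, for the diagonal preconditioner Γ = diag(γ). The names grad_log_pi and gamma are placeholders, and the Metropolis correction shown is the standard one rather than a detail taken from the text.

import numpy as np

def mala_step(u, log_pi, grad_log_pi, tau, gamma, rng):
    """One Metropolis-adjusted Langevin step with diagonal preconditioner Gamma = diag(gamma)."""
    def drift(v):
        return v + tau * gamma * grad_log_pi(v)
    prop = drift(u) + np.sqrt(2.0 * tau * gamma) * rng.standard_normal(u.shape)
    # log of the Gaussian proposal density of x given y (additive constants cancel)
    def log_q(x, y):
        return -np.sum((x - drift(y)) ** 2 / (4.0 * tau * gamma))
    log_alpha = (log_pi(prop) - log_pi(u)) + (log_q(u, prop) - log_q(prop, u))
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return prop     # accept
    return u            # reject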
S4.6. Monte Carlo Details for Poisson Equation
The posterior distribution was obtained by use of the PT algorithm, for m = 20 temperatures equally spaced on a logarithmic scale between 10^{−2} and 10^{−4}. The transition kernels K_i were given by 10 iterations of a MALA sampler, with preconditioner as described earlier and parameter τ chosen to achieve a good acceptance rate. The number of iterations P was taken to be 10^6 when n = 25 and 10^7 when n = 25 or n = 36.
S5. Truncation of the Prior Distribution (Proof of Theorem 4.8)
In this section we present the proof of Theorem 4.8 in the main text. We use a general result
on the well-posedness of Bayesian inverse problems:
Theorem S5.1 (Theorem 4.6 in Sullivan (2016)). Let X and A be separable quasi-Banach spaces over R. Suppose that
    dµ^a_δ / dµ = exp(−Φ_δ(x; a)) / Z^a_δ    (S5.1)
where the potential function Φ_δ satisfies:
S0 Φ_δ(x; ·) is continuous for each x ∈ X , Φ_δ(·; a) is measurable for each a ∈ A, and for every r > 0, there exists M_{0,r,δ} ∈ R such that, for all (x, a) ∈ X × A with ‖x‖_X < r and ‖a‖_A < r,
    |Φ_δ(x; a)| ≤ M_{0,r,δ} .
S1 For every r > 0, there exists a measurable M_{1,r,δ} : R_+ → R such that, for all (x, a) ∈ X × A with ‖a‖_A < r,
    Φ_δ(x; a) ≥ M_{1,r,δ}(‖x‖_X) .
S2 For every r > 0, there exists a measurable M_{2,r,δ} : R_+ → R_+ such that, for all (x, a, ã) ∈ X × A × A with ‖a‖_A < r, ‖ã‖_A < r,
    |Φ_δ(x; a) − Φ_δ(x; ã)| ≤ exp( M_{2,r,δ}(‖x‖_X) ) ‖a − ã‖_A .
Let Φ_{δ,N} be an approximation to Φ_δ that satisfies (S1-S3) with M_{i,r,δ} independent of N, and such that
S3 Ψ : N → R_+ is such that, for every r > 0, there exists a measurable M_{3,r,δ} : R_+ → R_+, such that, for all (x, a) ∈ X × A with ‖a‖_A < r,
    |Φ_{δ,N}(x; a) − Φ_δ(x; a)| ≤ exp( M_{3,r,δ}(‖x‖_X) ) Ψ(N) .
S4 For some r > 0,
    E_{X∼µ}[ exp( 2 M_{3,r,δ}(‖X‖_X) − M_{1,r,δ}(‖X‖_X) ) ] < ∞ .    (S5.2)
Let d_H denote the Hellinger distance on P_X. Then there exists a constant C_δ, independent of N, such that
    d_H( µ^a_{δ,N}, µ^a_δ ) ≤ C_δ Ψ(N)
where µ^a_{δ,N} is the posterior distribution based on the potential function Φ_{δ,N} instead of Φ_δ.
This allows us to establish conditions on A and µ that guarantee stability under truncation
of the prior:
Proof of Theorem 4.8. Let ϕ be as in Section 4.1, and let
    Φ_δ(x; a) = ϕ( ‖A(x) − a‖_A / δ ) ,
    Φ_{δ,N}(x; a) = ϕ( ‖A ∘ P_N(x) − a‖_A / δ ) .
Our task is to check that the conditions of Theorem S5.1 hold for Φ_δ and Φ_{δ,N}.
S0 First, note that Φ_δ(x; ·) is continuous (since ϕ is continuous from Assumption 4.1 and Φ_δ(x; ·) is a composition of continuous functions) and that Φ_δ(·; a) is measurable (since ϕ is measurable and Φ_δ(·; a) is a composition of measurable functions). Second, note that ϕ is a continuous bijection from (0, ∞) to itself with ϕ(0) = 0. Thus ϕ^{−1} exists and we can consider
    δ ϕ^{−1}( sup{ |Φ_δ(x; a)| : ‖x‖_X, ‖a‖_A < r } ) = sup{ ‖A(x) − a‖_A : ‖x‖_X, ‖a‖_A < r }
        ≤ sup_{x∈X} ‖A(x)‖_A + r < ∞  (Assumption 4.6).
Thus we can take M_{0,r,δ} = ϕ( (1/δ) sup_{x∈X} ‖A(x)‖_A + r/δ ).
S1 Since Φδ (x; a) ≥ 0 we can take M1,r,δ = 0.
S2 Given r > 0 let R = (1/δ) sup_{x∈X} ‖A(x)‖_A + r/δ, which is finite by Assumption 4.6. The upper bound
    |Φ_δ(x; a) − Φ_δ(x; ã)| = | ϕ( ‖A(x) − a‖_A / δ ) − ϕ( ‖A(x) − ã‖_A / δ ) |
        ≤ C_R | ‖A(x) − a‖_A / δ − ‖A(x) − ã‖_A / δ |   (Assumption 4.4)
        ≤ (C_R / δ) ‖a − ã‖_A   (reverse triangle inequality)
demonstrates that we can take M_{2,r,δ} = max{0, log(C_R / δ)}.
Minor variations on the above arguments show that S1-S3 also hold for Φ_{δ,N} with the same constants M_{i,r,δ}.
S3 Let C_R be defined as in S2. The upper bound
    |Φ_{δ,N}(x; a) − Φ_δ(x; a)| = | ϕ( ‖A ∘ P_N(x) − a‖_A / δ ) − ϕ( ‖A(x) − a‖_A / δ ) |
        ≤ C_R | ‖A ∘ P_N(x) − a‖_A / δ − ‖A(x) − a‖_A / δ |   (Assumption 4.4)
        ≤ (C_R / δ) ‖A ∘ P_N(x) − A(x)‖_A   (reverse triangle inequality)
        ≤ (C_R / δ) exp( m(‖x‖_X) ) Ψ(N)   (Assumption 4.5)
demonstrates that we can take M_{3,r,δ}(‖x‖_X) = max{0, log(C_R / δ) + m(‖x‖_X)}.
S4 Let C_R be defined as in S2. The upper bound
    E_{X∼µ}[ exp( 2 M_{3,r,δ}(‖X‖_X) − M_{1,r,δ}(‖X‖_X) ) ] = E_{X∼µ}[ exp( 2 max{0, log(C_R / δ) + m(‖X‖_X)} ) ]
        ≤ 1 + (C_R / δ)^2 E_{X∼µ}[ exp( 2 m(‖X‖_X) ) ]
        < ∞   (Assumption 4.5)
establishes the last of the conditions for Theorem S5.1 to hold.
Thus from Theorem S5.1, d_H( µ^a_{δ,N}, µ^a_δ ) ≤ C_δ Ψ(N). The proof is completed since Assumption 4.7 implies that d_F ≤ C_F^{−1} d_TV, where d_TV is the total variation distance based on F = {f : ‖f‖_∞ ≤ 1}; in turn it is a standard fact that d_TV ≤ √2 d_H.
arXiv:1703.04491v5 [] 14 Oct 2017
Inverse Stability Problem and Applications to Renewables Integration
Thanh Long Vu∗, Hung Dinh Nguyen, Alexandre Megretski, Jean-Jacques Slotine and Konstantin Turitsyn
Abstract—In modern power systems, the operating point, at
which the demand and supply are balanced, may take different
values due to changes in loads and renewable generation levels.
Understanding the dynamics of stressed power systems with
a range of operating points would be essential to assuring
their reliable operation, and possibly allow higher integration
of renewable resources. This letter introduces a non-traditional
way to think about the stability assessment problem of power
systems. Instead of estimating the set of initial states leading to
a given operating condition, we characterize the set of operating
conditions that a power grid converges to from a given initial
state under changes in power injections and lines. We term
this problem as “inverse stability”, a problem which is rarely
addressed in the control and systems literature, and hence, poorly
understood. Exploiting quadratic approximations of the system’s
energy function, we introduce an estimate of the inverse stability
region. Also, we briefly describe three important applications
of the inverse stability notion: (i) robust stability assessment
of power systems w.r.t. different renewable generation levels,
(ii) stability-constrained optimal power flow (sOPF), and (iii)
stability-guaranteed corrective action design.
Index Terms—Power grids, renewables integration, transient
stability, inverse stability, emergency control, energy function
I. INTRODUCTION
RENEWABLE generations, e.g., wind and solar, are increasingly installed into electric power grids to reduce
CO2 emission from the electricity generation sector. Yet,
their natural intermittency presents a major challenge to the
delivery of consistent power that is necessary for today’s grid
operation, in which generation must instantly meet load. Also,
the inherently low inertia of renewable generators limits the
grid’s controllability and makes it easy for the grid to lose its
stability. The existing power grids and management tools were
not designed to deal with these new challenges. Therefore,
new stability assessment and control design tools are needed
to adapt to the changes in architecture and dynamic behavior
expected in the future power grids.
Transient stability assessment of power system certifies that
the system state converges to a stable operating condition after
the system experiences large disturbances. Traditionally, this
task is handled by using either time domain simulation (e.g.,
[1]), or by utilizing the energy method (e.g., [2], [3]) and the
Lyapunov function method (e.g., [4]) to estimate the stability
region of a given equilibrium point (EP), i.e., the set of initial
states from which the system state will converge to that EP.
In modern renewable power grids, the operating point may
take different values under the real-time clearing of electricity
∗ Corresponding author. Email: longvu@mit.edu. All the authors are with
the Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
markets, intermittent renewable generations, changing loads,
and external disturbances. Dealing with the situation when the
EP can change over a wide range makes the transient stability
assessment even more technically difficult and computationally cumbersome.
In this letter, rather than considering the classical stability
assessment problem, we formulate the inverse stability assessment problem. This problem concerns estimating the
region around a given initial state δ0 , called “inverse stability
region” A(δ0 ), so that whenever the power injections or power
lines change and lead to an EP in A(δ0 ), the system state will
converge from δ0 to that EP. Indeed, the convergence from δ0
to an EP is guaranteed when the system’s energy function is
bounded under some threshold [2], [3]. In [5], we observed
that if the EP is in the interior of the set P characterized by
phasor angular differences smaller than π/2, then the nonlinear
power flows can be strictly bounded by linear functions of
angular differences. Exploiting this observation, we show that
the energy function of power system can be approximated by
quadratic functions of the EP and the system state, and from
which we obtain an estimate of the inverse stability region.
The remarkable advantage of the inverse stability certificate
is making it possible to exploit the change in EP to achieve
useful dynamical properties. We will briefly discuss three
applications of this certificate, which are of importance to the
integration of large-scale renewable resources:
Robust stability assessment: For a typical power system
composed of several components and integrated with
different levels of renewable generations, there are many
contingencies that need to be reassessed on a regular
basis. Most of these contingencies correspond to failures of relatively small and insignificant components,
so the post-fault dynamics is probably transiently stable.
Therefore, most of the computational effort is spent on
the analysis of non-critical scenarios. This computational
burden could be greatly alleviated by a robust transient
stability assessment toolbox that could certify the system’s stability w.r.t. a broad range of uncertainties. In
this letter, we show that the inverse stability certificate
can be employed to assess the transient stability of power
systems for various levels of power injections.
Stability-constrained OPF: Under large disturbances, a
power system with an operating condition derived by
solving the conventional OPF problem may not survive.
It is therefore desirable to design operating conditions
so that the system can withstand large disturbances.
This can be carried out by incorporating the transient
stability constraint into OPF together with the normal
voltage and thermal constraints. Though this problem was
discussed in the literature (e.g., [6]), there is no way
to precisely formulate and solve the stability-constrained
OPF problem because transient stability is a dynamic
concept and differential equations are involved in the
stability constraint. Fortunately, the inverse stability certificate allows for a natural incorporation of the stability
constraint into the OPF problem as a static constraint of
placing the EP in a given set.
Stability-guaranteed corrective actions: Traditional
protection strategies focus on the safety of individual
components, and the level of coordination among component protection systems is far from perfect. Also,
they do not take full advantage of the new flexible
and fast electronics resources available in modern power
systems, and largely rely on customer-harmful actions
like load shedding. These considerations motivated us to
coordinate widespread flexible electronics resources as
a system-level customer-friendly corrective action with
guaranteed stability [7]. This letter presents an unconventional control approach in which we relocate the operating
point, by appropriately redispatching power injections,
to attract the emergency state and stabilize the power
systems under emergency situations.
II. INVERSE STABILITY PROBLEM OF POWER SYSTEMS
In this letter, we utilize the structure-preserving model to
describe the power system dynamics [8]. This model naturally
incorporates the dynamics of the generators’ rotor angle and
the response of load power output to frequency deviation.
Mathematically, the grid is described by an undirected graph
A(N , E), where N = {1, 2, . . . , |N |} is the set of buses and
E ⊆ N × N is the set of transmission lines {k, j}, k, j ∈ N .
Here, |A| denotes the number of elements of set A. The sets
of generator buses and load buses are denoted by G and L.
We assume that the grid is lossless with constant voltage
magnitudes Vk , k ∈ N , and the reactive powers are ignored.
Then, the grid’s dynamics is described by [8]:
    m_k δ̈_k + d_k δ̇_k + ∑_{{k,j}∈E} a_kj sin(δ_k − δ_j) = P_k ,  k ∈ G,   (1a)
    d_k δ̇_k + ∑_{{k,j}∈E} a_kj sin(δ_k − δ_j) = P_k ,  k ∈ L,   (1b)
where equation (1a) describes the dynamics of the generator buses and equation (1b) describes the dynamics of the load buses. Here a_kj = V_k V_j B_kj, where B_kj is the (normalized) susceptance of the transmission line {k, j} connecting the k-th bus and j-th bus. N_k is the set of neighboring buses of the k-th bus (see [9] for more details). Let δ(t) = [δ_1(t) ... δ_{|N|}(t) δ̇_1(t) ... δ̇_{|N|}(t)]^T be the state of the system (1) at time t (for simplicity, we will denote the system state by δ). Note that Eqs. (1) are invariant under any uniform shift of the angles δ_k → δ_k + c. However, the state δ can be unambiguously characterized by the angle differences δ_kj = δ_k − δ_j and the frequencies δ̇_k.
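For readers who wish to simulate (1), the Python sketch below evaluates the structure-preserving dynamics as a first-order system: (δ, δ̇) on generator buses and δ on load buses. The data structures (dicts keyed by edges, lists of bus indices) are illustrative conventions, not taken from the letter.

import numpy as np

def swing_rhs(delta, omega, P_inj, m, d, a, edges, gen, load):
    """Right-hand side of the structure-preserving model (1).

    delta : bus angles; omega : bus frequencies (used only on generator buses)
    a     : dict {(k, j): a_kj} of coupling coefficients for the undirected edges
    """
    n = len(delta)
    flow = np.zeros(n)
    for (k, j) in edges:                       # nonlinear flows a_kj sin(delta_k - delta_j)
        s = a[(k, j)] * np.sin(delta[k] - delta[j])
        flow[k] += s
        flow[j] -= s
    ddelta = np.zeros(n)
    domega = np.zeros(n)
    for k in gen:                              # m_k ddot(delta)_k + d_k dot(delta)_k + flow_k = P_k
        ddelta[k] = omega[k]
        domega[k] = (P_inj[k] - d[k] * omega[k] - flow[k]) / m[k]
    for k in load:                             # d_k dot(delta)_k + flow_k = P_k
        ddelta[k] = (P_inj[k] - flow[k]) / d[k]
    return ddelta, domega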
Normally, a power grid operates at an operating condition
of the pre-fault dynamics. Under the fault, the system evolves
according to the fault-on dynamics. After some time period,
the fault is cleared or self-clears, and the system is at the so-called fault-cleared state δ0 (the fault-cleared state is usually
estimated by simulating the fault-on dynamics, and hence, is
assumed to be known). Then, the power system experiences
the so-called post-fault dynamics. The transient stability assessment certifies whether the post-fault state converges from
δ0 to a stable EP δ*. Mathematically, the operating condition δ* = [δ_1* ... δ_{|N|}* 0 ... 0]^T is a solution of the power-flow-like equations:
    ∑_{{k,j}∈E} a_kj sin δ*_kj = P_k ,  k ∈ N.   (2)
With renewable generations or under power redispatching,
the power injections P_k take different values. Also, the couplings a_kj can be changed by using the FACTS devices. Assume a̲_kj ≤ a_kj ≤ ā_kj. In those situations, the resulting EP δ*
also takes different values. Therefore, we want to characterize
the region of EPs so that the post-fault state always converges
from a given initial state δ0 to the EP whenever the EP is
in this region. Though the EP can take different values, it
is assumed to be fixed in each transient stability assessment
because the power injections and couplings can be assumed to
be unchanged in the very fast time scale of transient dynamics
(i.e., 1 to 10 seconds). We consider the following problem:
• Inverse (Asymptotic) Stability Problem: Consider a
given initial state δ0 . Assume that power injections and
the line susceptances can take different values. Estimate
the region of stable EPs so that the state of the system
(1) always converges from δ0 to the EP in this region.
This problem will be addressed with the inverse stability
certificate to be presented in the next section.
III. ENERGY FUNCTION AND INVERSE STABILITY CERTIFICATE
A. Stability assessment by using energy function
Before introducing the inverse stability certificate addressing the inverse stability problem in the previous section, we
present a normal stability certificate for a system with fixed
power injections and line parameters. For the power system
described by Eqs. (1), consider the energy function:
    E(δ, δ*) = ∑_{k∈G} m_k δ̇_k^2 / 2 + ∑_{{k,j}∈E} a_kj ∫_{δ*_kj}^{δ_kj} (sin ξ − sin δ*_kj) dξ .   (3)
Then, along every trajectory of (1), we have
    Ė(δ, δ*) = ∑_{k∈G} m_k δ̇_k δ̈_k + ∑_{{k,j}∈E} a_kj (sin δ_kj − sin δ*_kj) δ̇_kj
             = ∑_{k∈G} δ̇_k ( P_k − d_k δ̇_k − ∑_{{k,j}∈E} a_kj sin(δ_k − δ_j) )
               + ∑_{k∈L} δ̇_k ( P_k − d_k δ̇_k − ∑_{{k,j}∈E} a_kj sin(δ_k − δ_j) )
               + ∑_{{k,j}∈E} a_kj (sin δ_kj − sin δ*_kj)(δ̇_k − δ̇_j)
             = − ∑_{k∈N} d_k (δ̇_k)^2 ≤ 0 ,   (4)
in which the last equation is obtained from (2). Hence,
E(δ, δ ∗ ) is always decreasing along every trajectory of (1).
Consider the set P defined by |δkj | ≤ π/2, {k, j} ∈ E,
and the set Φ = {δ ∈ P : E(δ, δ ∗ ) < Emin (δ ∗ )}, where
Emin (δ ∗ ) = minδ∈∂P E(δ, δ ∗ ) and ∂P is the boundary of
P. Φ is invariant w.r.t. (1), and bounded as the state δ is
characterized by the angle differences and the frequencies.
Though Φ is not closed, the decrease of E(δ, δ ∗ ) inside Φ
assures the limit set to be inside Φ. As such, we can apply
LaSalle's Invariance Principle and use a proof similar to
that of Theorem 1 in [5] to show that, if δ0 is inside Φ then
the system state will only evolve inside this set and eventually
converge to δ ∗ . So, to check if the system state converges from
δ0 ∈ P to δ ∗ , we only need to check if E(δ0 , δ ∗ ) < Emin (δ ∗ ).
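The certificate above only requires evaluating the energy function (3); a direct Python evaluation is sketched below (the boundary minimisation defining Emin is problem-specific and is not implemented here). Names and data structures are illustrative.

import numpy as np

def energy(delta, omega, delta_star, m, a, edges, gen):
    """Energy function E(delta, delta*) of (3) for the lossless structure-preserving model."""
    kinetic = sum(0.5 * m[k] * omega[k] ** 2 for k in gen)
    potential = 0.0
    for (k, j) in edges:
        dkj = delta[k] - delta[j]
        dkj_star = delta_star[k] - delta_star[j]
        # closed form of the integral of (sin(xi) - sin(delta*_kj)) from delta*_kj to delta_kj
        potential += a[(k, j)] * (np.cos(dkj_star) - np.cos(dkj)
                                  - np.sin(dkj_star) * (dkj - dkj_star))
    return kinetic + potential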
B. Inverse stability certificate
[Figure 1: sketch of the sets P, Λ and B(δ0) with the points δ0, δ* and M, and the distance R(δ0).]
Fig. 1. For a power system with a given initial state δ0, if the EP δ* is inside the set A(δ0) = Λ ∩ B(δ0) surrounding δ0, then the system state will converge from δ0 to the EP δ* since E(δ0, δ*) < Emin.
For a given initial state δ0 ∈ P, we will construct a region
surrounding it so that whenever the operating condition δ ∗ is
in this region then E(δ0 , δ ∗ ) < Emin (δ ∗ ). Hence, the grid state
will converge from δ0 to δ ∗ according to the stability certificate
in Section III-A. Indeed, we establish quadratic bounds of the
energy function for every δ* in the set Λ defined by the inequalities |δ_kj| ≤ λ < π/2, ∀{k, j} ∈ E. Let g = (1 − sin λ)/(π/2 − λ) > 0. In [5], we observed that for δ* ∈ Λ, ξ ∈ P,
    g (ξ − δ*_kj)^2 ≤ (ξ − δ*_kj)(sin ξ − sin δ*_kj) ≤ (ξ − δ*_kj)^2 .
Hence, for all δ* ∈ Λ, δ ∈ P, we have
    E(δ, δ*) ≥ ∑_{k∈G} m_k δ̇_k^2 / 2 + g ∑_{{k,j}∈E} a_kj (δ_kj − δ*_kj)^2 / 2 ,   (5)
    E(δ, δ*) ≤ ∑_{k∈G} m_k δ̇_k^2 / 2 + ∑_{{k,j}∈E} a_kj (δ_kj − δ*_kj)^2 / 2 .   (6)
Define the following functions
    D(δ, δ*) = g ∑_{{k,j}∈E} a_kj (δ_kj − δ*_kj)^2 / 2 ,
    F(δ, δ*) = ∑_{k∈G} m_k δ̇_k^2 / 2 + ∑_{{k,j}∈E} ā_kj (δ_kj − δ*_kj)^2 / 2 .   (7)
Using (5) and (6), we can bound the energy function as
    D(δ, δ*) ≤ E(δ, δ*) ≤ F(δ, δ*) ,  ∀δ ∈ P, δ* ∈ Λ.   (8)
For a given initial state δ0 inside the set P, we calculate the “distance” from this initial state to the boundary of the set P: R(δ0) = min_{δ∈∂P} D(δ0, δ). Let B(δ0) be the neighborhood of δ0 defined by
    B(δ0) = {δ : F(δ0, δ) ≤ R(δ0)/4} .   (9)
The following is our main result regarding inverse stability
of power system, as illustrated in Fig. 1.
Theorem 1: Consider a given initial state δ0 inside the set
P. Assume that the EP of the system takes different values in
the set A(δ0 ) = Λ ∩ B(δ0 ), where the set B(δ0 ) is defined as
in (9). Then, the system state always converges from the given
initial state δ0 to the EP.
Proof: See Appendix VI.
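The membership test δ* ∈ A(δ0) = Λ ∩ B(δ0) behind Theorem 1 is simple to evaluate; a Python sketch is given below, assuming R(δ0) has already been obtained from the convex boundary minimisation. All names are placeholders.

import numpy as np

def in_inverse_stability_region(delta0, omega0, delta_star, R0, lam, m, a_bar, edges, gen):
    """Check delta* in A(delta0) = Lambda ∩ B(delta0) using the quadratic bound F of (7)."""
    # Lambda: all angular differences of the candidate EP bounded by lambda < pi/2
    in_Lambda = all(abs(delta_star[k] - delta_star[j]) <= lam for (k, j) in edges)
    # F(delta0, delta*) with the upper couplings a_bar_kj and the fault-cleared frequencies omega0
    F = sum(0.5 * m[k] * omega0[k] ** 2 for k in gen)
    F += sum(0.5 * a_bar[(k, j)] * ((delta0[k] - delta0[j])
                                    - (delta_star[k] - delta_star[j])) ** 2
             for (k, j) in edges)
    in_B = F <= R0 / 4.0           # B(delta0) from (9)
    return in_Lambda and in_B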
Remark 1: In this paper, we limit the grid to be described
by the simplified model (1) which captures the dynamics of
the generators’ rotor angle and the response of load power
output to frequency deviation. More realistic models should
take into account voltage variations, reactive powers, dynamics
of the rotor flux, and controllers (e.g., droop controls and
power system stabilizers). It should be noted that the results
in this paper are extendable to more realistic models. The
reason is that all the key results in this paper rely on the
analysis of the system’s energy function, in which we combine
the energy function-based transient stability analysis with the
quadratic bounds of the energy function. In high-order models,
the energy function is more complicated, yet it still can be
bounded by quadratic functions. Hence, we can combine the
energy function-based transient stability analysis of the more
realistic models in [2]–[4] with the quadratic bounds of the
energy function to extend the results in this paper to these
higher-order models. This approach may also extend to other
higher-order models in the port-Hamiltonian formulation (e.g.,
[10], [11]) by applying the appropriate approximation on the
Lyapunov functions established for these models.
IV. APPLICATIONS OF INVERSE STABILITY CERTIFICATE
A. Robust stability assessment
The robust transient stability problem that we consider involves situations where there is uncertainty in power injections P_k, e.g., due to intermittent renewable generations. Formally, for a given fault-cleared state δ0, we need to certify the transient stability of the post-fault dynamics described by (1) with respect to fluctuations of the power injections, which consequently lead to different values of the post-fault EP δ* as a solution of the power flow equations (2). Therefore, we consider the following robust stability problem [5]:
• Robust stability assessment: Given a fault-cleared state δ0, certify the transient stability of (1) w.r.t. a set of stable EPs δ* resulting from different levels of power injections.
Utilizing the inverse stability certificate, we can assure
robust stability of renewable power systems whenever the
resulting EP is inside the set A(δ0 ) = Λ ∩ B(δ0 ). To check
that the EP is in the set Λ, we can apply the criterion from
[12], which states that the EP will be in the set Λ if the power
injections p = [P_1, ..., P_{|N|}]^T satisfy
    ‖L† p‖_{E,∞} ≤ sin λ ,   (10)
where L† is the pseudoinverse of the network Laplacian matrix and the norm ‖x‖_{E,∞} is defined by ‖x‖_{E,∞} = max_{{i,j}∈E} |x(i) − x(j)|. On the other hand, some similar
sufficient condition could be developed so that we can verify
that the EP is in the set B(δ0 ) by checking the power
injections. This will help us certify robust stability of the
system by only checking the power injections.
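Condition (10) can be checked directly from the network data. The Python sketch below forms the weighted Laplacian from the couplings a_kj, applies its pseudoinverse to the injection vector, and evaluates the edge-wise infinity norm; names are illustrative.

import numpy as np

def satisfies_condition_10(p, a, edges, n, lam):
    """Check ||L^dagger p||_{E,inf} <= sin(lambda) for the weighted network Laplacian L."""
    L = np.zeros((n, n))
    for (k, j) in edges:
        L[k, k] += a[(k, j)]
        L[j, j] += a[(k, j)]
        L[k, j] -= a[(k, j)]
        L[j, k] -= a[(k, j)]
    theta = np.linalg.pinv(L) @ np.asarray(p)     # L^dagger p
    norm_E_inf = max(abs(theta[k] - theta[j]) for (k, j) in edges)
    return norm_E_inf <= np.sin(lam)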
B. Stability-constrained OPF
The stability-constrained OPF problem concerns determining the optimal operating condition with respect to the voltage
and thermal constraints, as well as the stability constraint.
While the voltage and thermal constraints are well modeled
via algebraic equations or inequalities, it is still an open
question as to how to include the stability constraint into
OPF formulation since stability is a dynamic concept and
differential equations are involved [6].
Mathematically, a standard OPF problem is usually stated
as follows (refer to [6] for more detailed formulation):
    min c(P)   (11)
    s.t.  P(V, δ) = P   (12)
          Q(V, δ) = Q   (13)
          V̲ ≤ V ≤ V̄   (14)
          S̲ ≤ |S(V, δ)| ≤ S̄   (15)
where c(P ) is a quadratic cost function, the decision variables P are typically the generator scheduled electrical power
outputs, the equality constraints (12)-(13) stand for the power
flow equations, and the inequality constraints (14)-(15) stand
for the voltage and thermal limits of branch flows through
transmission lines and transformers. Assume that the stability
constraint is to make sure that the system state will converge
from a given fault-cleared state δ0 to the designed operating
condition, and that the reactive power is negligible. With
the inverse stability certificate, the stability constraint can be
relaxed and formulated as δ ∈ A(δ0 ). Basically, the inverse
stability certificate transforms the dynamic problem of stability
into a static problem of placing the prospective EP into a set.
In summary, we obtain a relaxation of the stability-constrained
OPF problem as follows:
    min c(P)   (16)
    s.t.  P(V, δ) = P   (17)
          V̲ ≤ V ≤ V̄   (18)
          S̲ ≤ |S(V, δ)| ≤ S̄   (19)
          δ ∈ A(δ0).   (20)
The solution of this optimization problem results in an optimal operating condition at which the cost function is minimized and the voltage/thermal constraints are respected. Furthermore, the stability constraint is guaranteed by the inverse stability certificate, and the system state is ensured to converge from the fault-cleared state δ0 to the operating condition.
[Figure 2: δ0 → δ_1* → δ_2* → ... → δ_N* → δ*_desired]
Fig. 2. Power dispatching to relocate the stable EPs δ_i* so that the fault-cleared state, which is possibly unstable if there are no controls, is driven through a sequence of EPs back to the desired EP δ*_desired. The placement of these EPs is determined by applying the inverse stability certificate.
C. Emergency control design
Another application of the inverse stability certificate, which will be detailed in this section, is designing stability-guaranteed corrective actions that can drive the post-fault dynamics to a desired stability regime. As illustrated in Fig. 2, for a given fault-cleared state δ0, by applying the inverse stability certificate, we can appropriately dispatch the power injections P_k to relocate the EP of the system so that the post-fault dynamics can be attracted from the fault-cleared state δ0 through a sequence of EPs δ_1*, ..., δ_N* to the desired EP δ*_desired. In other words, we subsequently redispatch the power injections so that the system state converges from δ0 to δ_1*, then from δ_1* to δ_2*, and finally from δ_N* to δ*_desired. This type of corrective action reduces the need for prolonged load shedding and full state measurement. Also, this control method is unconventional in that the operating point is relocated as desired, instead of being fixed as in the classical controls.
Mathematically, we consider the following problem:
• Emergency Control Design: Given a fault-cleared state δ0 and a desired stable EP δ*_desired, determine the feasible dispatching of power injections P_k to relocate the EPs so that the post-fault dynamics is driven from the fault-cleared state δ0 through the set of designed EPs to the desired EP δ*_desired.
To solve this problem, we can design the first EP δ_1* by minimizing ‖L† p‖_{E,∞} over all possible power injections. The optimum power injection will result in an EP which is farthest from the stability margin |δ_kj| = π/2, and hence, the stability region of the first EP δ_1* probably contains the given fault-cleared state. To design the sequence of EPs, in each step, we carry out the following tasks (a sketch of one such step is given after this list):
• Calculate the distance R(δ*_{i−1}) from δ*_{i−1} to the boundary of the set P, i.e., R(δ*_{i−1}) = min_{δ∈∂P} D(δ, δ*_{i−1}). Note that the minimization of D(δ, δ*_{i−1}) over the boundary of the set P is a convex problem with a quadratic objective function and linear constraints. Hence, we can quickly obtain R(δ*_{i−1}).
• Determine the set B(δ*_{i−1}) and the set A(δ*_{i−1}).
• The next EP δ_i* will be chosen as the intersection of the boundary of the set A(δ*_{i−1}) and the line segment connecting δ*_{i−1} and δ*_desired.
• The power injections P_k^{(i)} that we have to redispatch will be determined by P_k^{(i)} = ∑_{j∈N_k} a_kj sin δ*_{i,kj} for all k.
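One step of this relocation procedure can be sketched in Python as follows. The grid search standing in for the exact boundary intersection and the membership test in_region are illustrative choices, not prescriptions from the letter; the dict a is assumed to contain a_kj for both orientations of each edge.

import numpy as np

def next_ep(delta_prev, delta_desired, in_region, a, neighbors, n_steps=50):
    """Pick the next EP on the segment towards delta_desired and the matching dispatch.

    in_region : callable delta -> bool, testing membership in A(delta_prev)
    """
    # Largest step along the segment whose candidate stays in the estimated region
    t_best = 0.0
    for t in np.linspace(0.0, 1.0, n_steps + 1):
        cand = (1.0 - t) * delta_prev + t * delta_desired
        if in_region(cand):
            t_best = t
    delta_next = (1.0 - t_best) * delta_prev + t_best * delta_desired
    # Power injections realising delta_next as an EP of (2): P_k = sum_j a_kj sin(delta_k - delta_j)
    P = {k: sum(a[(k, j)] * np.sin(delta_next[k] - delta_next[j]) for j in neighbors[k])
         for k in neighbors}
    return delta_next, P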
Fig. 3. 3-generator 9-bus system with frequency-dependent dynamic loads.
Fig. 5. Stable dynamics with power injection control: convergence of the bus angles δ_1(t), ..., δ_9(t) from the fault-cleared state to δ_1* in the post-fault dynamics.
This power dispatch will place the new EP at δ_i*, which is in the inverse stability region of the previous EP δ*_{i−1}. Therefore, the controlled post-fault dynamics will converge from δ*_{i−1} to δ_i*.
This procedure strictly reduces the distance from the EP to δ*_desired (it can be proved that there exists a constant d > 0 such that this distance reduces by at least d in each step). Hence, after some steps, the EP δ_N* will be sufficiently near the desired EP δ*_desired so that the convergence of the system state to the desired EP δ*_desired will be guaranteed.
TABLE I. BUS VOLTAGES, MECHANICAL INPUTS, AND STATIC LOADS.
Node        1        2        3        4         5         6         7         8         9
V (p.u.)    1.0284   1.0085   0.9522   1.0627    1.0707    1.0749    1.0490    1.0579    1.0521
P_k (p.u.)  3.6466   4.5735   3.8173   -3.4771   -3.5798   -3.3112   -0.5639   -0.5000   -0.6054
To illustrate that this control works well in stabilizing some
possibly unstable fault-cleared state δ0, we consider the 3-machine 9-bus system with 3 generator buses and 6 frequency-dependent load buses as in Fig. 3. The susceptances of the
transmission lines are as follows: B14 = 17.3611p.u., B27 =
16.0000p.u., B39 = 17.0648p.u., B45 = 11.7647p.u., B57 =
6.2112p.u., B64 = 10.8696p.u., B78 = 13.8889p.u., B89 =
9.9206p.u., B96 = 5.8824p.u. The parameters for generators are: m1 = 0.1254, m2 = 0.034, m3 = 0.016, d1 =
0.0627, d2 = 0.017, d3 = 0.008. For simplicity, we take dk =
0.05, k = 4 . . . , 9. Assume that the fault trips the line between
Fig. 6. Effect of power dispatching control: the convergence of the distance D_2(t) to 0. Here, the Euclidean distance D_2(t) between a state δ and the second EP δ_2* is defined as D_2(t) = sqrt( ∑_{i=2}^9 (δ_{i1}(t) − δ*_{2,i1})^2 ).
Fig. 4. Unstable post-fault dynamics when there are no controls: |δ_45| and |δ_57| evolve to 2π, triggering the tripping of lines {4, 5} and {5, 7}.
Fig. 7. Autonomous dynamics when we switch the power injections to the desired values: the convergence of the distance D_desired(t) to 0. Here, the distance D_desired(t) between a state δ and the desired EP δ*_desired is defined as D_desired(t) = sqrt( ∑_{i=2}^9 (δ_{i1}(t) − δ*_{desired,i1})^2 ).
buses 5 and 7, and makes the power injections fluctuate. When the fault is cleared this line is re-closed. We also assume the fluctuation of the generation (probably due to renewables) and load so that the voltages V_k and power injections P_k of the post-fault dynamics are given in Tab. I. The stable EP δ*_desired is calculated as [−0.1629 0.4416 0.3623 −0.3563 −0.3608 −0.3651 0.1680 0.1362 0.1371]^T. However, the fault-cleared state, with angles [0.025 −0.023 0.041 0.012 −2.917 −0.004 0.907 0.021 0.023]^T and generator angular velocities [−0.016 −0.021 0.014]^T, is outside the set P. It can be seen from Fig. 4 that the uncontrolled post-fault dynamics is not stable since |δ_45| and |δ_57| quickly evolve from their initial values to 2π, which will activate the protective devices to trip the lines.
Using the CVX software [13] to minimize ‖L† p‖_{E,∞}, we obtain the new power injections at buses 1-6 as follows: P1 = 0.5890, P2 = 0.5930, P3 = 0.5989, P4 = −0.0333, P5 = −0.0617, and P6 = −0.0165. Accordingly, the minimum value of ‖L† p‖_{E,∞} = 0.0350 < sin(π/89). Hence, the first EP obtained from equation (2) will be in the set defined by the inequalities |δ_kj| ≤ π/89, ∀{k, j} ∈ E, and can be approximated by δ_1* ≈ L† p = [0.0581 0.0042 0.0070 0.0271 0.0042 0.0070 −0.0308 −0.0486 −0.0281]^T. The simulation results confirm that the post-fault dynamics is made stable by applying the optimum power injection control, as shown in Fig. 5.
Using the above procedure, after one step, we can find that δ_2* = 0.9259 δ*_desired + 0.0741 δ_1* is the intersection of the set A(δ_1*) and the line segment connecting δ_1* and δ*_desired. This EP is inside the inverse stability region of δ_1*, and hence the system state will converge from δ_1* to δ_2* when we apply the power dispatching P_k^{(2)} corresponding to δ_2*. On the other hand, δ_2* is very near the desired EP δ*_desired and it is easy to check that δ*_desired is in the inverse stability region of δ_2*, and thus the system state will converge from δ_2* to the desired EP δ*_desired. Such convergence of the controlled post-fault dynamics is confirmed in Figs. 6-7.
V. CONCLUSIONS
Electric power grids possess rich dynamical behaviours, e.g., nonlinear interactions, the lack of global stability, and significant uncertainties, which challenge the maintenance of their reliable operation and pose interesting questions to the control and power communities. This letter characterized a surprising property termed “inverse stability”, which has rarely been investigated and is poorly understood (though some related inverse problems were addressed in [14]). This
new notion could change the way we think about the stability
assessment problem. Instead of estimating the set of initial
states leading to a given operating condition, we characterized
the set of operating conditions that a power grid converges to
from a given initial state under changes in power injections
and lines. In addition, we briefly described three applications
of the inverse stability certificate: (i) assessing the stability of
renewable power systems, (ii) solving the stability-constrained
OPF problem, and (iii) designing power dispatching remedial
actions to recover the transient stability of power systems.
Remarkably, we showed that robust stability due to the fluctuation of renewable generations can be effectively assessed,
and that the stability constraint can be incorporated as a static
constraint into the conventional OPF. We also illustrated a
unconventional control method, in which we appropriately
relocate the operating point to attract a given fault-cleared state
that originally leads to an unstable dynamics.
VI. APPENDIX: PROOF OF THEOREM 1
For each EP δ* ∈ A(δ0), let M be the point on the boundary of the set P such that E(M, δ*) = Emin(δ*) = min_{δ∈∂P} E(δ, δ*), as shown in Fig. 1. From (8), we have
    E(M, δ*) + E(δ0, δ*) ≥ D(M, δ*) + D(δ0, δ*)
        = g ∑_{{k,j}∈E} a_kj [ (δ_{M,kj} − δ*_kj)^2 + (δ_{0,kj} − δ*_kj)^2 ] / 2
        ≥ g ∑_{{k,j}∈E} a_kj (δ_{M,kj} − δ_{0,kj})^2 / 4 = D(δ0, M) / 2 .   (21)
Note that D(δ0, M) ≥ R(δ0) as R(δ0) is the distance from δ0 to the boundary of the set P. This, together with (21), leads to E(M, δ*) + E(δ0, δ*) ≥ R(δ0)/2. From (8) and (9), we have E(δ0, δ*) ≤ F(δ0, δ*) < R(δ0)/4. Hence,
    E(M, δ*) > R(δ0)/4 > E(δ0, δ*).   (22)
Therefore, for any δ ∗ ∈ A(δ0 ), we have E(δ0 , δ ∗ ) <
Emin (δ ∗ ). By applying the stability analysis in Section III-A,
we conclude that the initial state δ0 must be inside the stability
region of the EP δ ∗ and the system state will converge from
the initial state δ0 to the EP δ ∗ .
VII. ACKNOWLEDGEMENTS
This work was supported by the MIT/Skoltech, Ministry
of Education and Science of Russian Federation (Grant no.
14.615.21.0001.), and NSF under Contracts 1508666 and
1550015.
REFERENCES
[1] I. Nagel, L. Fabre, M. Pastre, F. Krummenacher, R. Cherkaoui, and
M. Kayal, “High-Speed Power System Transient Stability Simulation
Using Highly Dedicated Hardware,” Power Systems, IEEE Transactions
on, vol. 28, no. 4, pp. 4218–4227, 2013.
[2] A. Pai, Energy Function Analysis for Power System Stability, ser.
Power Electronics and Power Systems. Springer US, 2012. [Online].
Available: https://books.google.com/books?id=1HDgBwAAQBAJ
[3] H.-D. Chiang, Direct Methods for Stability Analysis of Electric Power
Systems, ser. Theoretical Foundation, BCU Methodologies, and Applications. Hoboken, NJ, USA: John Wiley & Sons, Mar. 2011.
[4] R. Davy and I. A. Hiskens, “Lyapunov functions for multi-machine
power systems with dynamic loads,” Circuits and Systems I: Fundamental Theory and Applications, IEEE Transactions on, vol. 44, no. 9,
pp. 796–812, 1997.
[5] T. L. Vu and K. Turitsyn, “A framework for robust assessment of power
grid stability and resiliency,” IEEE Transactions on Automatic Control,
vol. 62, no. 3, pp. 1165–1177, March 2017.
[6] A. Pizano-Martinez, C. R. Fuerte-Esquivel, and D. Ruiz-Vega,
“Global Transient Stability-Constrained Optimal Power Flow Using an
OMIB Reference Trajectory,” IEEE Transactions on Power Systems, vol.
25, no. 1, pp. 392–403, Feb 2010.
[7] T. L. Vu, S. Chatzivasileiadis, H. D. Chiang, and K. Turitsyn, “Structural
emergency control paradigm,” IEEE Journal on Emerging and Selected
Topics in Circuits and Systems, vol. 7, no. 3, pp. 371–382, Sept 2017.
[8] A. R. Bergen and D. J. Hill, “A structure preserving model for power
system stability analysis,” Power Apparatus and Systems, IEEE Transactions on, no. 1, pp. 25–35, 1981.
[9] T. L. Vu, S. M. A. Araifi, M. S. E. Moursi, and K. Turitsyn, “Toward
simulation-free estimation of critical clearing time,” IEEE Trans. on
Power Systems, vol. 31, no. 6, pp. 4722–4731, Nov 2016.
[10] S. Fiaz and D. Zonetti and R. Ortega and J.M.A. Scherpen and A.J. van
der Schaft, “A port-Hamiltonian approach to power network modeling
and analysis,” European Journal of Control, vol. 19, no. 6, pp. 477 –
485, 2013.
[11] S. Y. Caliskan and P. Tabuada, “Compositional Transient Stability
Analysis of Multimachine Power Networks,” IEEE Transactions on
Control of Network Systems, vol. 1, no. 1, pp. 4–14, March 2014.
[12] F. Dorfler, M. Chertkov, and F. Bullo, “Synchronization in complex
oscillator networks and smart grids,” Proceedings of the National
Academy of Sciences, vol. 110, no. 6, pp. 2005–2010, 2013.
[13] Grant, Michael C. and Boyd, Stephen P. and Ye, Yinyu, “CVX: Matlab
software for disciplined convex programming (web page and software),”
Available at http://cvxr.com/cvx.
[14] I. A. Hiskens, “Power system modeling for inverse problems,” IEEE
Transactions on Circuits and Systems I: Regular Papers, vol. 51, no. 3,
pp. 539–551, March 2004.
arXiv:1201.3325v2 [] 14 Aug 2012
SIMPLICIAL COMPLEXES WITH RIGID DEPTH
ADNAN ASLAM AND VIVIANA ENE
Abstract. We extend a result of Minh and Trung [8] to get criteria for depth I = depth √I, where I is an unmixed monomial ideal of the polynomial ring S = K[x_1, . . . , x_n]. As an application we characterize all the pure simplicial complexes ∆ which have rigid depth, that is, which satisfy the condition that for every unmixed monomial ideal I ⊂ S with √I = I_∆ one has depth(I) = depth(I_∆).
Introduction
Let S = K[x_1, . . . , x_n] be the polynomial ring over a field K and I ⊂ S a monomial ideal. In [5], the authors compare the properties of I with the properties of its radical by using the inequality β_i(I) ≥ β_i(√I). In particular, from the inequality between the Betti numbers, one gets the inequality depth(S/I) ≤ depth(S/√I), which implies, for instance, that S/I is Cohen-Macaulay if S/√I is so. In [8], the authors presented criteria for the Cohen-Macaulayness of a monomial ideal in terms of its primary decomposition. We extend their criteria to characterize the unmixed monomial ideals for which the equality depth(S/I) = depth(S/√I) holds.
We recall that an ideal I ⊂ S is unmixed if the associated prime ideals of S/I are
the minimal prime ideals of I.
Let ∆ be a pure simplicial complex with the facet set denoted, as usual, by F(∆), and let I_∆ = ⋂_{F∈F(∆)} P_F be its Stanley-Reisner ideal. For any subset F ⊂ [n], we denote by P_F the monomial prime ideal generated by the variables x_i with i ∉ F. Let I ⊂ S be an unmixed monomial ideal such that √I = I_∆ and assume that I = ⋂_{F∈F(∆)} I_F where I_F is the P_F-primary component of I. Following [8], for every a ∈ N^n, a = (a_1, . . . , a_n), we set x^a = x_1^{a_1} · · · x_n^{a_n} and denote by ∆_a the simplicial complex on the set [n] with the facet set F(∆_a) = {F ∈ F(∆) | x^a ∉ I_F}.
Moreover, for every simplicial complex Γ with F(Γ) ⊆ F(∆), we set
    L_Γ(I) = { a ∈ N^n | x^a ∈ ⋂_{F∈F(∆)\F(Γ)} I_F \ ⋃_{G∈F(Γ)} I_G }.
In Section 1, we prove the following theorem which is a natural extension of
Theorem 1.6 in [8].
1991 Mathematics Subject Classification. Primary 13C15, Secondary 13F55,13D45.
Key words and phrases. Monomial ideals, Simplicial complexes, Stanley-Reisner rings, Depth.
The second author was supported by the grant UEFISCDI, PN-II-ID-PCE- 2011-3-1023.
Theorem 1. Let ∆ be a pure simplicial complex with depth K[∆] = t. Let I ⊂ S be an unmixed monomial ideal with √I = I_∆. Then the following conditions are equivalent:
(a) depth(S/I) = depth(S/√I),
(b) depth K[∆_a] ≥ t for all a ∈ N^n,
(c) L_Γ(I) = ∅ for every simplicial complex Γ with F(Γ) ⊆ F(∆) and depth K[Γ] < t.
As a main application of the above theorem we study in Section 2 a special class of simplicial complexes. We say that a pure simplicial complex has rigid depth if for every unmixed monomial ideal I ⊂ S with √I = I_∆ one has depth(S/I) = depth(S/I_∆). In Theorem 2.3 which generalizes [5, Theorem 3.2], we give necessary and sufficient conditions for ∆ to have rigid depth. In particular, from this characterization, it follows that if a pure simplicial complex has rigid depth over a field of characteristic 0, then it has rigid depth over any field. In the last part we discuss the behavior of rigid depth in connection to the skeletons of the simplicial complex.
1. Criteria for depth(S/I) = depth(S/√I)
Let S = K[x_1, . . . , x_n] be the polynomial ring over a field K. Let I ⊂ S be an unmixed monomial ideal such that √I = I_∆ where ∆ is a pure simplicial complex with the facet set F(∆). Then I_∆ = ⋂_{F∈F(∆)} P_F, where P_F = (x_i | i ∉ F) for every F ∈ F(∆). Let I = ⋂_{F∈F(∆)} I_F where I_F is the P_F-primary component of I.
In order to prove the main result of this section we need to recall some facts from [8, Section 1]. For a = (a_1, . . . , a_n) ∈ Z^n, let G_a = {i | a_i < 0}. We denote by ∆_a the simplicial complex on [n] of all the sets of the form F \ G_a where G_a ⊂ F ⊂ [n] and such that F satisfies the condition x^a ∉ I S_F where S_F = S[x_i^{−1} | i ∈ F]. It is shown in [8, Section 1] that if ∆_a is non-empty, then ∆_a is a pure subcomplex of ∆ of dim ∆_a = dim ∆ − |G_a|.
For every simplicial subcomplex Γ of ∆ with F(Γ) ⊂ F(∆) we set
    L_Γ(I) = { a ∈ N^n | x^a ∈ ⋂_{F∈F(∆)\F(Γ)} I_F \ ⋃_{G∈F(Γ)} I_G }.
By [8, Lemma 1.5], we have
(1)    ∆_a = Γ if and only if a ∈ L_Γ(I).
For the proof of the next theorem we also need to recall Takayama's formula [9]. For every degree a ∈ Z^n we denote by H^i_m(S/I)_a the a-component of the i-th local cohomology module of S/I with respect to the homogeneous maximal ideal of S. For 1 ≤ j ≤ n, let
    ρ_j(I) = max{ ν_j(u) | u is a minimal generator of I },
where by ν_j(u) we mean the exponent of the variable x_j in u. If x_j does not divide u, then we use the usual convention ν_j(u) = 0.
Theorem 1.1 (Takayama's formula).
    dim_K H^i_m(S/I)_a = dim_K H̃_{i−|G_a|−1}(∆_a, K)  if G_a ∈ ∆ and a_j < ρ_j(I) for 1 ≤ j ≤ n,
    dim_K H^i_m(S/I)_a = 0  otherwise.
The next theorem is a natural extension of [8, Theorem 1.6].
The next theorem is a natural extension of [8, Theorem 1.6].
Theorem 1.2. Let ∆ be a pure simplicial complex with depth K[∆] = t. Let I ⊂ S be an unmixed monomial ideal with √I = I_∆. The following conditions are equivalent:
(a) depth(S/I) = t,
(b) depth K[∆_a] ≥ t for all a ∈ N^n with ∆_a ≠ ∅,
(c) L_Γ(I) = ∅ for every simplicial complex Γ with F(Γ) ⊆ F(∆) and depth K[Γ] < t.
Proof. The proof of this theorem follows closely the ideas of the proof of [8, Theorem 1.6]. For the equivalence (a) ⇔ (b) we need to recall some known facts about local cohomology; see [2, Section A.7]. For any finitely generated graded S-module M we have depth M ≥ t if and only if H^i_m(M) = 0 for all i < t. Therefore, in our hypothesis, and since depth(S/I) ≤ depth(S/√I) = t, we get
(2)    depth(S/I) = t ⇔ H^i_m(S/I) = 0 for i < t.
In addition, for every a ∈ N^n, we get
(3)    depth(K[∆_a]) ≥ t ⇔ H^i_m(K[∆_a]) = 0 for i < t.
For b ∈ Z^n, we set G_b = {i | b_i < 0} and H_b = {i | b_i > 0}. By using [2, Theorem A.7.3], for every b ∈ Z^n, we obtain
    dim_K H^i_m(K[∆_a])_b = dim_K H̃_{i−|G_b|−1}(link_{star H_b} G_b; K).
Here we denoted by star H_b the star of H_b in ∆_a, and by link_{star H_b} G_b the link of G_b in the complex star H_b. We recall that if Γ is a simplicial complex and F is a face of Γ, then star_Γ F = {G | F ∪ G ∈ Γ} and link_Γ F = {G | F ∪ G ∈ Γ and F ∩ G = ∅}. Therefore, the equivalence (3) may be written
(4)    depth K[∆_a] ≥ t ⇔ H̃_{i−|G_b|−1}(link_{star H_b} G_b; K) = 0 for i < t and for every b ∈ Z^n.
Since link_{star H_b} G_b is acyclic for H_b ≠ ∅ and star H_b = ∆_a if H_b = ∅, we get
(5)    depth K[∆_a] ≥ t ⇔ H̃_{i−|G_b|−1}(link_{∆_a} G_b; K) = 0 for i < t and for every b ∈ Z^n.
By Takayama's formula, the equivalence (2) may be rewritten
(6)    depth(S/I) = t ⇔ dim_K H̃_{i−|G_b|−1}(∆_b; K) = 0 for i < t and for every b ∈ Z^n.
Now, the equivalence (a)⇔ (b) follows by relations (5) and (6) if we notice that,
by the proof of (i) ⇒ (ii) in [8, Theorem 1.6], we have link∆a Gb = ∆b for any
Gb ∈ ∆a .
For the rest of the proof we only need to use (1). Indeed, for (b) ⇒ (c), let
us assume that LΓ (I) 6= ∅ for some subcomplex Γ of ∆ with F (Γ) ⊂ F (∆) and
such that depth(K[Γ]) < t. Then there exists a ∈ LΓ (I), hence Γ = ∆a . But this
equality is impossible since depth(K[∆a ]) ≥ t. For (c) ⇒ (b), let us assume that
there exists a ∈ Nn such that depth K[∆a ] < t. Then, for Γ = ∆a we get LΓ (I) 6= ∅,
a contradiction.
Obviously, for t = dim K[∆] in the above theorem we recover Theorem 1.6 in
[8].
The above theorem is especially useful in the situation when I is either an intersection of monomial prime ideal powers or an intersection of irreducible monomial
ideals. The first class of ideals may be studied with completely similar arguments to
those used in [8, Section 1]. In the sequel we discuss ideals which are intersections
of irreducible monomial ideals.
Let F(∆) = {F_1, . . . , F_r} and I = ⋂_{i=1}^r I_{F_i} be an intersection of irreducible monomial ideals, that is, for every 1 ≤ i ≤ r, I_{F_i} = (x_j^{a_{ij}} | j ∉ F_i) for some positive exponents a_{ij}. As a consequence of the above theorem, one may express the condition depth(S/I) = depth(S/√I) in terms of linear inequalities on the exponents a_{ij}.
Proposition 1.3. The set of exponents (a_{ij}) for which the equality depth(S/I) = depth(S/√I) holds consists of all points of positive integer coordinates in a finite union of rational cones in R^{r(n−d)}.
Proof. Let Γ be a subcomplex of ∆ with depth(K[Γ]) < t and F(∆) \ F(Γ) = {F_{i_1}, . . . , F_{i_s}} where 1 ≤ i_1 < · · · < i_s ≤ r. The condition L_Γ(I) = ∅ gives
⋂_{q=1}^s (x_j^{a_{i_q j}} : j ∉ F_{i_q}) ⊆ ⋃_{k ∉ {i_1, . . . , i_s}} I_{F_k}.
This implies that the following conditions must hold:
lcm(x_{j_1}^{a_{i_1 j_1}}, x_{j_2}^{a_{i_2 j_2}}, . . . , x_{j_s}^{a_{i_s j_s}}) ∈ ⋃_{k ∉ {i_1, . . . , i_s}} I_{F_k}
for all s-tuples (j_1, j_2, . . . , j_s) with j_q ∉ F_{i_q} for 1 ≤ q ≤ s. This is equivalent to saying that for every s-tuple (j_1, j_2, . . . , j_s), with j_q ∉ F_{i_q} for 1 ≤ q ≤ s, there exists 1 ≤ q ≤ s such that
a_{i_q j_q} ≥ min{a_{k j_q} : k ≠ i_1, i_2, . . . , i_s}.
In the following example we consider tetrahedral type ideals.
Example 1.4. Let ∆ be the 4-cycle, that is, I_∆ = (x_1, x_2) ∩ (x_1, x_4) ∩ (x_2, x_3) ∩ (x_3, x_4). Note that S/I_∆ is Cohen-Macaulay, hence depth(S/I_∆) = 2. Let I = (x_1^{a_1}, x_2^{a_2}) ∩ (x_1^{a_3}, x_4^{a_4}) ∩ (x_2^{a_5}, x_3^{a_6}) ∩ (x_3^{a_7}, x_4^{a_8}). Then depth(S/I) = depth(S/I_∆), that is, I is a Cohen-Macaulay ideal, if and only if one of the following conditions holds:
(1) a_3 ≤ a_1, a_2 = a_5, a_7 ≤ a_6.
(2) a_2 ≤ a_5, a_6 = a_7, a_4 ≤ a_8.
(3) a_5 ≤ a_2, a_1 = a_3, a_8 ≤ a_4.
(4) a_1 ≤ a_3, a_4 = a_8, a_6 ≤ a_7.
In order to prove the above claim, we first notice that any subcomplex Γ of ∆ which has depth(K[Γ]) < 2 corresponds to a disconnected subgraph of ∆. But ∆ has two disconnected subgraphs, which correspond to the pairs of disjoint edges {{1, 2}, {3, 4}} and {{1, 4}, {2, 3}}. Let Γ be the subgraph {{1, 2}, {3, 4}}. Then the inequalities of the proof of Proposition 1.3 give
(a_1 ≤ a_3 or a_2 ≤ a_5) and (a_1 ≤ a_3 or a_7 ≤ a_6) and (a_8 ≤ a_4 or a_2 ≤ a_5) and (a_8 ≤ a_4 or a_7 ≤ a_6),
which is equivalent to
(7) (a_1 ≤ a_3 and a_8 ≤ a_4) or (a_2 ≤ a_5 and a_7 ≤ a_6).
Now we consider the other disconnected subgraph, which corresponds to the pair of disjoint edges {{1, 4}, {2, 3}}, and get, similarly,
(8) (a_3 ≤ a_1 and a_5 ≤ a_2) or (a_6 ≤ a_7 and a_4 ≤ a_8).
By intersecting conditions (7) and (8), we get the desired relations.
Note that in this example the union of the four rational cones defined by the
set of the linear inequalities (1) − (4) is not a convex set. Indeed, if we take
the exponent vectors a = (3, 5, 1, 3, 5, 9, 7, 9) and a′ = (1, 3, 1, 1, 7, 11, 11, 1), then
the corresponding ideals are both Cohen-Macaulay. However, for the vector b =
(a + a′ )/2 = (2, 4, 1, 2, 6, 10, 9, 5), the corresponding ideal is not Cohen-Macaulay.
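The non-convexity claim can be checked mechanically. The following Python sketch evaluates the four systems of inequalities (1)-(4) above for the three exponent vectors; it is only an illustration of the criterion in this specific example, not of the general algorithm.

```python
# Check which of the four inequality systems (1)-(4) from Example 1.4 hold
# for a given exponent vector (a1, ..., a8); by the example, the ideal I is
# Cohen-Macaulay exactly when at least one system holds.

def is_cohen_macaulay(a):
    a1, a2, a3, a4, a5, a6, a7, a8 = a
    systems = [
        a3 <= a1 and a2 == a5 and a7 <= a6,  # (1)
        a2 <= a5 and a6 == a7 and a4 <= a8,  # (2)
        a5 <= a2 and a1 == a3 and a8 <= a4,  # (3)
        a1 <= a3 and a4 == a8 and a6 <= a7,  # (4)
    ]
    return any(systems)

a  = (3, 5, 1, 3, 5, 9, 7, 9)
ap = (1, 3, 1, 1, 7, 11, 11, 1)
b  = tuple((x + y) // 2 for x, y in zip(a, ap))  # (2, 4, 1, 2, 6, 10, 9, 5)

print(is_cohen_macaulay(a))   # True
print(is_cohen_macaulay(ap))  # True
print(is_cohen_macaulay(b))   # False, so the union of cones is not convex
```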
2. Rigid depth

Definition 2.1. Let ∆ be a pure simplicial complex. We say that ∆ has rigid depth if for every unmixed monomial ideal I ⊂ S with √I = I_∆ one has depth(S/I) = depth(S/I_∆).
For example, any pure simplicial complex ∆ with depth(K[∆]) = 1 has rigid
depth. In this section we characterize all the pure simplicial complexes which have
rigid depth.
In the next theorem we will use the formula given in the following proposition
for computing the depth of a Stanley-Reisner ring. We recall that the ith skeleton
of a simplicial complex ∆ is defined as ∆(i) = {F ∈ ∆ | dim F ≤ i}.
Proposition 2.2. [6] Let ∆ be a simplicial complex of dimension d − 1. Then:
depth(K[∆]) = max{i | ∆(i) is Cohen-Macaulay} + 1.
The following theorem generalizes [5, Theorem 3.2].
Theorem 2.3. Let ∆ be a pure simplicial complex with depth(K[∆]) = t and I_∆ = ⋂_{F ∈ F(∆)} P_F. The following statements are equivalent:
(a) ∆ has rigid depth.
(b) depth(S/I) = t for every ideal I = ⋂_{F ∈ F(∆)} I_F where the I_F are irreducible monomial ideals with √I_F = P_F for all F ∈ F(∆).
(c) depth(S/I) = t for every ideal I = ⋂_{F ∈ F(∆)} P_F^{m_F} where the m_F are positive integers.
(d) depth(K[Γ]) ≥ t for every subcomplex Γ of ∆ with F(Γ) ⊂ F(∆).
(e) For every subcomplex Γ of ∆ with F(Γ) ⊂ F(∆), the skeleton Γ^{(t−1)} is Cohen-Macaulay.
(f) Let F(∆) = {F_1, . . . , F_r}. Then, for every 1 ≤ k ≤ min{r, t} and for any indices 1 ≤ i_1 < · · · < i_k ≤ r, we have |F_{i_1} ∩ · · · ∩ F_{i_k}| ≥ t − k + 1.
Proof. (a) ⇒ (b) and (a) ⇒ (c) are trivial.
(b) ⇒ (d): Let Γ be a subcomplex of ∆ with F(Γ) ⊂ F(∆). We have to show that depth(K[Γ]) ≥ t. For every F ∈ F(Γ), let I_F = (x_i^2 | i ∉ F), and for every F ∈ F(∆) \ F(Γ) let I_F = P_F = (x_i | i ∉ F). Let I = ⋂_{F ∈ F(∆)} I_F. By assumption, depth(S/I) = t. Let S′ ⊂ K[x_1, . . . , x_n, y_1, . . . , y_n] be the polynomial ring over K in all the variables which are needed for the polarization of I, and let I^p ⊂ S′ be the polarization of I. We have I^p = ⋂_{F ∈ F(∆)} I_F^p, where
I_F^p = (x_i y_i | i ∉ F) if F ∈ F(Γ), and I_F^p = P_F if F ∈ F(∆) \ F(Γ).
Then proj dim(S′/I^p) = proj dim(S/I). Let N be the multiplicative set generated by all the variables x_i. Then I^p_N = ⋂_{F ∈ F(Γ)} (y_i | i ∉ F) and
proj dim(S′/I^p)_N ≤ proj dim(S′/I^p) = proj dim(S/I).
This inequality implies that depth(K[Γ]) ≥ depth(S/I) = t.
(d) ⇔ (e) follows immediately by applying the criterion given in Proposition 2.2.
(d) ⇒ (f): We proceed by induction on k. The initial inductive step is trivial.
Let k > 1 and assume that |Fi1 ∩ · · · ∩ Fiℓ | ≥ t − ℓ + 1 for 1 ≤ ℓ < k and for any
1 ≤ i1 < · · · < iℓ ≤ r. Obviously, it is enough to show that |F1 ∩ · · ·∩ Fk | ≥ t − k + 1.
By [3, Theorem 1.1], we have the following exact sequence of S-modules:
(9) 0 → S/⋂_{i=1}^k P_{F_i} → ⊕_{i=1}^k S/P_{F_i} → ⊕_{1≤i<j≤k} S/(P_{F_i} + P_{F_j}) → · · · → S/(P_{F_1} + · · · + P_{F_k}) → 0.
By assumption, depth(S/⋂_{i=1}^k P_{F_i}) ≥ t. We decompose the above sequence in k − 1 short exact sequences as follows:
0 → S/⋂_{i=1}^k P_{F_i} → ⊕_{i=1}^k S/P_{F_i} → U_1 → 0,
0 → U_1 → ⊕_{1≤i<j≤k} S/(P_{F_i} + P_{F_j}) → U_2 → 0,
...
0 → U_{k−2} → ⊕_{1≤j_1<···<j_{k−1}≤k} S/(P_{F_{j_1}} + · · · + P_{F_{j_{k−1}}}) → S/(P_{F_1} + · · · + P_{F_k}) → 0.
Note that, for all ℓ and any 1 ≤ j_1 < · · · < j_ℓ ≤ k, we have
P_{F_{j_1}} + · · · + P_{F_{j_ℓ}} = P_{F_{j_1} ∩ ··· ∩ F_{j_ℓ}}.
In particular, S/(P_{F_{j_1}} + · · · + P_{F_{j_ℓ}}) is Cohen-Macaulay of depth equal to |F_{j_1} ∩ · · · ∩ F_{j_ℓ}|. Therefore,
depth(⊕_{1≤j_1<···<j_ℓ≤k} S/(P_{F_{j_1}} + · · · + P_{F_{j_ℓ}})) ≥ t − ℓ + 1
for every 1 ≤ ℓ < k and any 1 ≤ j_1 < · · · < j_ℓ ≤ k. Now, by using the inductive hypothesis and by applying the Depth Lemma in the first k − 2 short exact sequences above, from top to bottom, step by step, we obtain depth(U_1) ≥ t − 1, depth(U_2) ≥ t − 2, . . . , depth(U_{k−2}) ≥ t − k + 2. Finally, by applying the Depth Lemma in the last short exact sequence, since the depth of the middle term is ≥ t − k + 2, we get
depth(S/(P_{F_1} + · · · + P_{F_k})) = |F_1 ∩ · · · ∩ F_k| ≥ t − k + 1.
(f)⇒(d): Let Γ be a subcomplex of ∆ with F (Γ) = {Fj1 , . . . , Fjk } ⊂ F (∆). We
have to show that depth(K[Γ]) ≥ t. We may obviously assume that k < r and the
facets of Γ are F1 , . . . , Fk . If k ≤ t, then we use the short exact sequences derived
from (9) in the proof of (d) ⇒ (f) and, by applying successively Depth Lemma from
bottom to the top, we get, step by step, depth(Uk−2 ) ≥ t − k + 2, . . . , depth(U2 ) ≥
t− 2, depth(U1 ) ≥ t− 1, and, finally, from the first exact sequence, depth(K[Γ]) ≥ t.
If t < k, we use only the first t short exact sequences, that is, we stop at
0 → U_{t−1} → ⊕_{1≤j_1<···<j_t≤k} S/(P_{F_{j_1}} + · · · + P_{F_{j_t}}) → U_t → 0.
Since the middle term in this short exact sequence has depth ≥ 1, we get depth(U_{t−1}) ≥ 1. Next, by using the same arguments as before, we get depth(U_{t−2}) ≥ 2, . . . , depth(U_1) ≥ t − 1, and, finally, depth(K[Γ]) ≥ t, as desired.
The implication (d) ⇒ (a) follows by Theorem 1.2.
Finally, the implication (c) ⇒ (e) follows similarly to the proof of Corollary 1.9
in [8].
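Condition (f) of Theorem 2.3 is purely combinatorial, so rigid depth can be tested directly from the facet list once t = depth(K[∆]) is known. The following Python sketch is a naive check of condition (f) for a given facet list and given t; the two test cases use the facets of Example 2.9 below and of the 4-cycle from Example 1.4, and are only illustrations.

```python
from itertools import combinations

def has_rigid_depth(facets, t):
    """Check condition (f) of Theorem 2.3 for a pure simplicial complex
    given by its facet list and t = depth(K[Delta])."""
    facets = [frozenset(F) for F in facets]
    r = len(facets)
    for k in range(1, min(r, t) + 1):
        for combo in combinations(facets, k):
            inter = frozenset.intersection(*combo)
            if len(inter) < t - k + 1:
                return False
    return True

# Example 2.9 below: two facets with |F ∩ G| = 2 and depth(K[Delta]) = 3.
print(has_rigid_depth([{1, 2, 3, 4, 5}, {1, 2, 6, 7, 8}], t=3))  # True

# The 4-cycle of Example 1.4 has depth(K[Delta]) = 2 but contains two
# disjoint edges, so condition (f) fails for k = 2: no rigid depth.
print(has_rigid_depth([{1, 2}, {1, 4}, {2, 3}, {3, 4}], t=2))    # False
```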
In order to state the first consequence of the above theorem, we need to know
the behavior of the depth of a Stanley-Reisner ring over a field when passing from
characteristic 0 to characteristic p > 0. We show in the next lemma that the Betti
numbers of the Stanley-Reisner ring can only go up when passing from characteristic
0 to a positive characteristic which, in particular, implies that the depth does not
increase. This result is certainly known. However we include here its proof since we
could not find any precise reference. The argument of the proof was communicated
to the second author by Ezra Miller.
Lemma 2.4. Let ∆ be a simplicial complex on the vertex set [n] and let K, L be
two fields with char K = 0, char L = p > 0. Then βi (K[∆]) ≤ βi (L[∆]) for all i.
Proof. Any field is flat over its prime field. Therefore, since char K = 0, we have β_i(K[∆]) = β_i(Q[∆]) for all i, and since char L = p, we have β_i(L[∆]) = β_i(F_p[∆]) for all i, where F_p is the prime field of characteristic p. In other words, the Betti numbers depend only on the characteristic of the base field. Let Z_p be the local ring of the integers at the prime p. The ring Z_p[X] is *local ([1, Section 1.5]) and the Stanley-Reisner ideal I_∆ ⊂ Z_p[X] is *homogeneous. Let F be a minimal free resolution of Z_p[∆] over Z_p[x_1, . . . , x_n]. Since p is a nonzerodivisor on Z_p[∆], by [7, Lemma 8.27], the quotient F/pF is a minimal free resolution of F_p[∆] over F_p[x_1, . . . , x_n]. On the other hand, the localization F[p^{−1}] obtained by inverting p is a free resolution, not necessarily minimal, of Q[∆] over Q[x_1, . . . , x_n]. Since the modules in F/pF and F[p^{−1}] have the same ranks, it follows that β_i(Q[∆]) ≤ β_i(F_p[∆]) for all i, which leads to the desired inequalities.
Corollary 2.5. Let ∆ be a pure simplicial complex with rigid depth over a field of
characteristic 0. Then ∆ has rigid depth over any field.
Proof. Let K be a field of characteristic 0 and L a field of characteristic p > 0. The above lemma implies that proj dim K[∆] ≤ proj dim L[∆]. By the Auslander-Buchsbaum formula, it follows that depth K[∆] ≥ depth L[∆]. Therefore, the desired statement follows by applying the combinatorial condition (f) of Theorem 2.3.
Example 2.6. Let ∆ be the six-vertex triangulation of the real projective plane;
see [1, Section 5.3]. If char K 6= 2, then ∆ is Cohen-Macaulay over K, hence
depth(K[∆]) = 2, and, by condition (f) of Theorem 2.3, it follows that ∆ does
not have rigid depth over K. But if char K = 2, then depth(K[∆]) = 1, and,
consequently, ∆ has rigid depth over K.
The simplicial complexes with one or two facets have rigid depth.
Lemma 2.7. Let ∆ be a pure simplicial complex with at most two facets. Then ∆
has rigid depth.
Proof. We only need to consider the case of simplicial complexes with two facets
since the other case is obvious. Let dim ∆ = d − 1 and F (∆) = {F, G}. We show
that depth(K[∆]) = t if and only if |F ∩ G| = t − 1. Then the claim follows by
condition (f) in Theorem 2.3. We consider the exact sequence
0 → K[∆] → (S/P_F) ⊕ (S/P_G) → S/(P_F + P_G) ≅ K[x_i | i ∈ F ∩ G] → 0.
As (S/PF ) ⊕ (S/PG ) and S/(PF + PG ) are Cohen-Macaulay of dimensions d and,
respectively, |F ∩ G|, it follows that depth(K[∆]) = t if and only if |F ∩ G| = t − 1.
Example 2.8. Let ∆ and Γ be the simplicial complexes with F(∆) = {{1, 2, 3}, {1, 4, 5}} and F(Γ) = {{1, 2, 3}, {1, 3, 4}}. Obviously, by Lemma 2.7, ∆ is non-Cohen-Macaulay with rigid depth 2, while Γ is Cohen-Macaulay with rigid depth.
In the sequel we investigate whether the rigid depth property is preserved by the
skeletons of the simplicial complexes with rigid depth. The next example shows
that this is not the case.
Example 2.9. Let ∆ be the simplicial complex on the vertex set [8] with F(∆) = {F, G} where F = {1, 2, 3, 4, 5} and G = {1, 2, 6, 7, 8}. Then, by Lemma 2.7 and its proof, it follows that depth(K[∆]) = 3 and ∆ has rigid depth. Let ∆^{(3)} be the 3-dimensional skeleton of ∆ and Γ the subcomplex of ∆^{(3)} with the facets G_1 = {1, 2, 3, 5} and G_2 = {2, 6, 7, 8}. Then, again by the proof of the above lemma, we get depth(K[Γ]) = 2. But depth K[∆^{(3)}] = 3, thus the skeleton ∆^{(3)} of ∆ does not have rigid depth since it does not satisfy condition (d) in Theorem 2.3.
However, as an application of Theorem 2.3, we prove the following
Proposition 2.10. Let ∆ be a pure simplicial complex with rigid depth and let
t = depth(K[∆]). If ∆(i) has rigid depth for some i ≥ t − 1, then ∆(j) has rigid
depth for every j ≥ i.
Proof. By [4], we know that depth(K[∆(i) ]) = t for i ≥ t − 1. It is enough to show
that if ∆(i) has rigid depth for some i ≥ t − 1, then ∆(i+1) has the same property.
Let Γ ⊂ ∆(i+1) be a subcomplex with F (Γ) ⊂ F (∆(i+1) ). Then Γ(i) is a subcomplex of ∆(i) and F (Γ(i) ) ⊂ F (∆(i) ). By our assumption and by using condition
(e) in Theorem 2.3, it follows that Γ(t−1) is Cohen-Macaulay. Therefore, ∆(i+1)
satisfies condition (e) in Theorem 2.3, which ends our proof.
Acknowledgment
We thank Jürgen Herzog for helpful discussions on the subject of this paper and
Ezra Miller for the proof of Lemma 2.4. We would also like to thank the referee for
his valuable suggestions to improve our paper.
References
[1] W. Bruns, J. Herzog, Cohen-Macaulay rings, Revised Ed., Cambridge University Press, 1998.
[2] J. Herzog, T. Hibi, Monomial ideals, Graduate Texts in Mathematics 260, Springer, 2010.
[3] J. Herzog, D. Popescu, M. Vlădoiu, Stanley depth and size of a monomial ideal, to appear
in Proceed. AMS.
[4] J. Herzog, A. Soleyman Jahan, X. Zheng, Skeletons of monomial ideals, Math. Nachr. 283
(2010), 1403–1408.
[5] J. Herzog, Y. Takayama, N. Terai, On the radical of a monomial ideal, Arch. Math. 85 (2005)
397-408.
[6] T. Hibi, Quotient algebras of Stanley-Reisner rings and local cohomology, J. Algebra, 140,
(1991), 336–343.
[7] E. Miller, B. Sturmfels, Combinatorial commutative algebra, Graduate Texts in Mathematics
227, Springer, 2005.
[8] N. C. Minh, N. V. Trung, Cohen-Macaulayness of monomial ideals and symbolic powers of
Stanley-Reisner ideals, Adv. Math. 226 (2011), 1285–1306.
[9] Y. Takayama, Combinatorial characterizations of generalized Cohen-Macaulay monomial
ideals, Bull. Math. Soc. Sci. Math. Roumanie (N.S.) 48 (2005), 327–344.
[10] R. H. Villarreal, Monomial algebras, Marcel Dekker, 2001.
Abdus Salam School of Mathematical Sciences (ASSMS), GC University, Lahore, Pakistan.
E-mail address: adnanaslam15@yahoo.com
Faculty of Mathematics and Computer Science, Ovidius University, Bd. Mamaia 124,
900527 Constanta, Romania
Institute of Mathematics of the Romanian Academy, P.O. Box 1-764, RO-014700, Bucharest,
Romania
E-mail address: vivian@univ-ovidius.ro
Orthogonal Series Density Estimation for Complex Surveys
Shangyuan Ye, Ye Liang and Ibrahim A. Ahmad
arXiv:1709.06588v2 [stat.ME] 26 Sep 2017
Department of Statistics, Oklahoma State University
Abstract
We propose an orthogonal series density estimator for complex surveys, where samples are neither independent nor identically distributed. The proposed estimator is
proved to be design-unbiased and asymptotically design-consistent. The asymptotic
normality is proved under both design and combined spaces. Two data driven estimators are proposed based on the proposed oracle estimator. We show the efficiency of
the proposed estimators in simulation studies. A real survey data example is provided
for an illustration.
Keywords: Nonparametric, asymptotic, survey sampling, orthogonal basis, Horvitz-Thompson estimator, mean integrated squared error.
1
Introduction
Nonparametric methods are popular for density estimations. Most work in the area of
nonparametric density estimation was for independent and identically distributed samples.
However, both assumptions are violated if the samples are from a finite population using
a complex sampling design. Bellhouse and Stafford (1999) and Buskirk (1999) proposed
kernel density estimators (KDE) by incorporating sampling weights, and their asymptotic
properties were studied by Buskirk and Lohr (2005). Kernel methods for clustered samples
and stratified samples were studied in Breunig (2001) and Breunig (2008), respectively.
One disadvantage of the KDE is that all samples are needed to evaluate the estimator.
However, in some circumstances, there is a practical need to evaluate the estimator without
using all samples for confidentiality or storage reasons. For example, many surveys are
routinely conducted and sampling data are constantly collected. Data managers want to
publish exact estimators without releasing all original data. In Section 6, we provide a real
data example from Oklahoma M-SISNet, which is a routinely conducted survey on climate
policies and public views. The orthogonal series estimators are useful alternatives to KDEs,
without needing to release or store all samples.
The basic idea of the orthogonal series method is that any square integrable function f, in our case a density function, can be projected onto an orthogonal basis {ϕ_j}: f(x) = ∑_{j=0}^∞ θ_j ϕ_j(x), where
θ_j = ∫ ϕ_j(x) f(x) dx = E(ϕ_j(X))    (1)
is called the jth Fourier coefficient. Some of the work in orthogonal series density estimation (OSDE) was covered in monographs by Efromovich (1999) and Tarter and Lock
(1993), among others. Efromovich (2010) gave a brief introduction of this method. Walter
(1994) discussed properties of different bases. Donoho et al. (1996) and Efromovich (1996)
studied data driven estimators. Asymptotic properties were studied by Pinsker (1980) and
Efromovich and Pinsker (1982).
In this paper, we study the OSDE for samples from complex surveys. To the best of
our knowledge, no previous work has been done on developing OSDE for finite populations.
We propose a Horvitz-Thompson type of OSDE, incorporating sampling weights from the
complex survey. We show that the proposed OSDE is design-unbiased and asymptotically
design-consistent. We further prove the asymptotic normality of the proposed estimator. We
compare the lower bound of minimax mean integrated squared error (MISE) with the I.I.D.
case in Efromovich and Pinsker (1982). We propose two data driven estimators and show
their efficiency in a simulation study. Finally, we analyze the M-SISNet survey data using
the proposed estimation. All proofs to theorems and corollaries are given in the appendix.
2
Notations
Consider a finite population labeled as U = {1, 2, ..., N}. A survey variable x is associated
with each unit in the finite population. A subset s of size n is selected from U according to
some fixed-size sampling design P̧(·). The first and second order inclusion probabilities from
the sampling design P̧(·) are πi = Pr(i ∈ s) and πij = Pr(i, j ∈ s), respectively. The inverse
of the first order inclusion probability defines the sampling weight di = πi−1 , ∀i ∈ s.
The inference approach used in this paper for complex surveys is the combined designmodel-based approach originated in Hartley and Sielken (1975). This approach accounts
for two sources of variability. The first one is from the fact that the finite population is
a realization from a superpopulation, that is, the units xU = {x1 , x2 , ..., xN } are considered independent random variables with a common distribution function F , whose density
function is f . The second one is from the complex sampling procedure which leads to a
2
sample x = {x1 , x2 , . . . , xn }. Denote w = {w1 , w2 , . . . , wn } design variables that determine
the sampling weights. The sampling design P̧(·) is embedded within a probability space
(S, J̧, PP̧ ). The expectation and variance operator with respect to the sampling design are
denoted by EP̧ (·) = EP̧ (· | xU ) and VarP̧ (·) = VarP̧ (· | xU ), respectively. The superpopulation ξ, from which the finite population is realized, is embedded within a probability space
(Ω, F̧, Pξ ). The sample x and the design variables w are ξ-measurable. The expectation
and variance operator with respect to the model are denoted by Eξ (·) and Varξ (·), respectively. Assume that, given the design variables w, the product space, which couples the
model and the design spaces, is (Ω × S, F̧ × J̧, Pξ × PP̧ ). The combined expectation and
variance operators are denoted by EC (·) and VarC (·), where EC (·) = Eξ [EP̧ (· | xU )] and
VarC (·) = Eξ [VarP̧ (· | xU )] + Varξ [EP̧ (· | xU )].
3
Main Results
Consider a sample s = {x1 , x2 , ..., xn } drawn from a finite population xU using some fixedsize sampling design P̧(·). Our goal is to estimate the hypothetical density function f of the
superpopulation. Equation (1) implies that θj can be estimated using the Horvitz-Thompson
(HT) estimator for the finite population mean
θ̂_j = N^{−1} ∑_{i=1}^n d_i ϕ_j(x_i),    (2)
where N is the finite population size and d_i = π_i^{−1} is the sampling weight for unit i. The
HT estimator is a well known design unbiased estimator (Fuller, 2009). The basis {ϕj } can
be Fourier, polynomial, spline, wavelet, or others. Properties of different bases are discussed
in Efromovich (2010). We consider the cosine basis throughout the paper, which is defined
as {ϕ_0 = 1, ϕ_j = √2 cos(πjx)}, j = 1, 2, · · · , x ∈ [0, 1]. Regarding the compact support [0, 1] for the density, we adopt the argument in Wahba (1981): "it might be preferable to assume the true density has compact support and to scale the data to interior of [0, 1]." Analogous to Efromovich (1999), we propose an orthogonal series estimator in the form
f̂(x) = f̂(x, {w_j}) = 1 + ∑_{j=1}^∞ w_j θ̂_j ϕ_j(x),    (3)
where θ̂_j is the HT estimator for the Fourier coefficient as in (2) and w_j ∈ [0, 1] is a shrinking coefficient. Note that θ_0 = ∫_0^1 f(x)dx = 1. If x_U is known for all units in the finite population,
we can write the population estimator for f (x) as
f_U(x) = f_U(x, {w_j}) = 1 + ∑_{j=1}^∞ w_j θ_{U,j} ϕ_j(x),    (4)
where θ_{U,j} = N^{−1} ∑_{i=1}^N ϕ_j(x_i).
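For concreteness, the estimator in (3) is straightforward to evaluate from a sample, its sampling weights, and a chosen set of shrinking coefficients. The following Python sketch (written for data already rescaled to [0, 1], with hypothetical inputs x, d, N and weights w) is only an illustration of formulas (2) and (3), not of the data-driven choices discussed later.

```python
import numpy as np

def cosine_basis(j, x):
    # phi_0 = 1, phi_j(x) = sqrt(2) * cos(pi * j * x) on [0, 1]
    return np.ones_like(x) if j == 0 else np.sqrt(2.0) * np.cos(np.pi * j * x)

def ht_fourier_coefficient(j, x, d, N):
    # Horvitz-Thompson estimator (2): theta_hat_j = N^{-1} sum_i d_i phi_j(x_i)
    return np.sum(d * cosine_basis(j, x)) / N

def series_estimator(x_grid, x, d, N, w):
    # Oracle estimator (3): f_hat(x) = 1 + sum_j w_j theta_hat_j phi_j(x)
    f = np.ones_like(x_grid)
    for j, wj in enumerate(w, start=1):
        f += wj * ht_fourier_coefficient(j, x, d, N) * cosine_basis(j, x_grid)
    return f

# Hypothetical usage: scaled sample x, weights d = 1/pi_i, population size N.
rng = np.random.default_rng(0)
x = rng.beta(2, 5, size=80)          # stand-in for rescaled survey data
d = np.full_like(x, 1000 / 80)       # stand-in weights for an SRSWOR design
f_hat = series_estimator(np.linspace(0, 1, 201), x, d, N=1000, w=[1.0] * 6)
```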
The following theorems and a corollary show properties of our proposed estimator under
both design and combined spaces. Theorem 1 considers unbiasedness and consistency under
the design space.
Theorem 1 Suppose f ∈ L²(R), δ = N^{−1} ∑∑_{i≠k} π_{ik}/(π_i π_k) − N < ∞ and ∑_{i=1}^∞ w_i² < ∞. Then, the estimator f̂(x, {w_j}) is design-unbiased and asymptotically design-consistent for f_U(x, {w_j}), i.e.,
E_P[f̂(x, {w_j})] = f_U(x, {w_j})  and  Γ_P = Var_P[f̂(x, {w_j})] → 0 as N → ∞.
Theorem 2 shows the asymptotic normality of the proposed estimator fˆ(x, {wj }) under
the design space.
Theorem 2 Suppose that all assumptions in Theorem 1 hold. As N → ∞,
(f̂(x, {w_j}) − f_U(x, {w_j})) / Γ̂_P →^{L_P} N(0, 1),    (5)
where
Γ̂_P = N^{−1} ∑_{j=1}^J w_j² (1 + 2^{−1/2} θ̂_{2j} + δ θ̂_j²)(1 + 2^{−1/2} ϕ_{2j}(x)).
We then show the asymptotic normality of the proposed estimator fˆ(x, {wj }) under the
combined inference. Define a Sobolev class of k-fold differentiable densities as ℱ(k, Q) = {f : f(x) = 1 + ∑_{j=1}^∞ θ_j ϕ_j(x), ∑_{j=1}^∞ (πj)^{2k} θ_j² ≤ Q < ∞}, k ≥ 1. Note that for any f ∈ ℱ(k, Q), f is 1-periodic, f^{(k−1)} is absolutely differentiable and f^{(k)} ∈ L²(R).
Theorem 3 Suppose that f ∈ ℱ(k, Q) and all assumptions in Theorem 2 hold. Then,
(f̂(x, {w_j}) − f(x)) / Var_C[f̂(x, {w_j})] →^{L_C} N(0, 1) as N → ∞,    (6)
where Var_C[f̂(x, {w_j})] = N^{−1} ∑_{j=1}^J w_j² b_j (1 + 2^{−1/2} ϕ_{2j}(x)) and b_j = 2 + 2^{1/2} θ_{2j} + (δ − 1)θ_j² + o_N(1).
The following corollary is a direct result of using Theorem 3 and Efromovich and Pinsker
(1982). It shows the lower bound of the minimax MISE for the proposed estimator fˆ(x, {wj })
under the Sobolev class.
Corollary 1 Let f ∈ ℱ(k, Q) and f̂(x, {w_j}) be the estimator in Theorem 3. The lower bound of the minimax MISE, under the combined inference approach, is given by:
R(ℱ) = inf_{ {w_j} } sup_{ f ∈ ℱ(k,Q) } MISE_C[f̂(x, {w_j})] ≥ N^{−2k/(2k+1)} P(k, Q, b)(1 + o_N(1)),    (7)
where P(k, Q, b) = Q^{1/(2k+1)} {k/(π(k+1)b)}^{2k/(2k+1)} and b = 2.
Remark that this lower bound is of the same form as the I.I.D. case in Efromovich and Pinsker
(1982), but with b = 2 instead of b = 1.
4
Data Driven Estimators
The choice of shrinking coefficients ŵj is not unique. To get a proper data driven estimator,
we start with the oracle estimator (3), and then obtain ŵj by minimizing the MISE for the
oracle estimator. Here, we propose two estimators: a truncated estimator and a smoothed
truncated estimator, mimicking those in the I.I.D. case.
The truncated estimator, denoted by fˆT , is an estimator with ŵj = 1 for j ≤ J, and
ŵj = 0 for j > J. Alternatively we can write ŵj = Ij≤J . Then, only the truncation
parameter J needs to be estimated. Notice that the MISE of this estimator is
MISE_C[f̂(x, {w_j})] = ∑_{j=1}^J [Var_C(θ̂_j) − θ_j²] − ∫ f²(x) dx.
Since ∫ f²(x)dx is fixed and an unbiased estimator for θ_j² is θ̂_j² − N^{−1} b_j, a data-driven estimate for J can be obtained from
Ĵ = arg min_J ∑_{j=1}^J (2N^{−1} b̂_j − θ̂_j²),
where b̂j is the plug-in estimator of bj . In practice, the solution is obtained through a
numerical search. Efromovich (1999) suggests to set the upper bound for Jˆ to be ⌊4+0.5 ln n⌋
for the search. Theoretically, the minimum of the MISE can be approximated in the following
corollary.
Corollary 2 Let f ∈ ℱ(k, Q), k > 1/2. The MISE of f̂_T is minimized when
J ≈ N^{1/(2k+1)} H_1(k, b, c),    (8)
and the minimum is approximately
R(f̂_T) = MISE_C(f̂_T(x, {ŵ_j})) ≈ N^{−2k/(2k+1)} H_2(k, b, c),    (9)
where H_1(k, b, c) = b^{−1/(2k+1)} {(2k+1)/((2k+2)c)}^{−1/(2k+1)}, H_2(k, b, c) = b^{2k/(2k+1)} {(2k+1)/((2k+2)c)}^{−1/(2k+1)}, and c is a constant.
One possible modification for fˆT is to shrink each Fourier coefficient toward zero. We call
this estimator the smoothed truncated estimator, denoted by fˆS . It is constructed similarly
as the truncated estimator, with the first J Fourier coefficients shrunk by multiplying the
optimal smoothing coefficients wj∗, obtained from the proof of Corollary 1. Mathematically,
ŵj = ŵj∗ Ij≤J , where ŵj∗ = (θ̂j2 − N −1 b̂j )/θ̂j2 is a direct plug-in estimator for wj∗ .
A potential problem of the nonparametric density estimation is that the estimator may
not be a valid density function. A simple modification is to define the L2 -projection of fˆT
(or fˆS ) onto a class of non-negative densities, f˜T (x) = max{0, fˆT (x) − const.}, where the
normalizing constant is to make f˜T integrate to 1. It has been proved that the constant
always exists and is unique (Glad et al., 2003).
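A minimal sketch of the two data-driven estimators is given below, reusing the helper functions from the earlier sketch. The search bound ⌊4 + 0.5 ln n⌋, the selection rule for Ĵ, and the plug-in shrinkage ŵ_j* follow the description above; the plug-in b̂_j is left as a user-supplied quantity since it depends on the design, and the final non-negativity step is only a crude stand-in for the exact L2-projection.

```python
import numpy as np

def data_driven_estimators(x_grid, x, d, N, b_hat):
    """Truncated and smoothed truncated OSDE with a data-driven J.
    b_hat[j-1] is a user-supplied plug-in estimate of b_j (design-dependent)."""
    n = len(x)
    J_max = int(np.floor(4 + 0.5 * np.log(n)))
    theta = np.array([ht_fourier_coefficient(j, x, d, N)
                      for j in range(1, J_max + 1)])

    # J minimizes the running sum of (2 b_hat_j / N - theta_hat_j^2).
    penal = 2.0 * b_hat[:J_max] / N - theta**2
    J = int(np.argmin(np.cumsum(penal))) + 1

    f_trunc = np.ones_like(x_grid)
    f_smooth = np.ones_like(x_grid)
    for j in range(1, J + 1):
        phi = cosine_basis(j, x_grid)
        # Plug-in shrinkage (clipped at zero here as a simple stabilization).
        w_star = max(0.0, (theta[j-1]**2 - b_hat[j-1] / N) / theta[j-1]**2)
        f_trunc += theta[j-1] * phi
        f_smooth += w_star * theta[j-1] * phi

    # Crude non-negativity correction; the exact projection also rescales so
    # that the estimate integrates to one (Glad et al., 2003).
    return np.maximum(f_trunc, 0.0), np.maximum(f_smooth, 0.0)
```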
5
Simulation
We compared our proposed estimators with the series estimator that ignores the finite population and sampling designs, through a Monte Carlo simulation study. We considered
estimating density functions for three sampling designs: (1) the simple random sample without replacement (SRSWOR), (2) the stratified sampling and (3) the Poisson sampling. Note
that the Poisson sampling has a random size with units independently sampled and hence
violates our assumption of fixed size sampling.
1. For the SRSWOR, we considered two superpopulations: the standard normal distribution N(0, 1) and a mixture normal distribution 0.4N(−1, 0.5) + 0.6N(1, 1).
2. For the stratified sampling, we considered two superpopulations: a two-component
mixture normal 0.4N(−1, 0.5) + 0.6N(1, 1) and a three-component mixture normal
6
0.3N(−1, 0.15) + 0.4N(0, 0.15) + 0.3N(1, 0.15). We designed two strata for the twocomponent mixture and three strata for the three-component mixture. A proportional
stratified sampling is used.
3. For the Poisson sampling, we considered the same two superpopulations as in (1).
We specified the expected sample size for the Poisson sampling to be n, and generated the first order inclusion probabilities for the Poisson sampling using the function
“inclusionprobabilities” in the R package “sampling” (Till and Matei, 2016).
For all cases, we considered a finite population of size N = 1, 000 drawn from each of the
superpopulations. We repeated drawing the finite population for m1 = 100 times. For each
of the finite population, we drew samples according to the sampling design, with increasing
sample sizes: n = 20, 40, 60 and 80. The replication number for each finite population is
m2 = 10, 000. The performance of estimators is measured by a Monte Carlo approximation
of the MISE:
MISE_MC(f̃) = (1/(m_1 m_2)) ∑_{i=1}^{m_1} ∑_{j=1}^{m_2} ∫ [f̃_{ij}(x) − f(x)]² dx.
The results of the simulation study are shown in Table 1. In general, the I.I.D. series
estimator, which ignores the sampling design, performs the worst in all cases. However,
it is not surprising to see that the improvement for the proposed estimators is much more
in stratified sampling than in SRSWOR or Poisson sampling. It confirms the necessity of
incorporating stratification sampling weights into the series estimator for a complex survey.
Lastly, the smoothed truncated estimator performs better than the truncated estimator in
most scenarios.
6
Oklahoma M-SISNet Survey
The Oklahoma Weather, Society and Government Survey conducted by Meso-Scale Integrated Sociogeographic Network (M-SISNet) measures Oklahomans’ perceptions of weather
in the state, their views on government policies and societal issues and their use of water and
energy. The survey is routinely conducted at the end of each season. Until the end of 2016, 12
waves of survey data have been collected. It is desired that estimates can be obtained without constantly pulling out the original data. The sampling design has two separated phases.
In Phase I, a simple random sample of size n = 1, 500 is selected from statewide households.
In Phase II, a stratified oversample is selected from five special study areas: Payne County,
7
Oklahoma City County, Kiamichi County, Washita County and Canadian County. In each
stratum, the sample size is fixed to be 200. The second phase can be viewed as a stratified
sampling over the entire state with six strata: n1 = · · · = n5 = 200 and n6 = 0, where the
sixth stratum contains households not in the five special study areas. This design with oversampling is not a typical fixed-size complex survey. The first-order inclusion probabilities
are approximately πhi = nh /Nh + n/N, for i = 1, . . . , Nh and h = 1, . . . , 6. Note that for
units not in the five areas, this inclusion probability is simply n/N. We present OSDEs for
two continuous variables for illustration: the monthly electricity bill and the monthly water
bill. Figure 1 shows OSDEs of the two variables for all seasons in 2015.
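As a sketch of how this design enters the estimator, the approximate first-order inclusion probabilities above translate directly into sampling weights; the stratum sizes N_h below are placeholders, not the actual M-SISNet frame counts.

```python
# Approximate inclusion probabilities pi_hi = n_h/N_h + n/N for the two-phase
# design described above, and the corresponding weights d_hi = 1/pi_hi.
N_h = [9_000, 250_000, 4_000, 3_500, 45_000, 1_200_000]  # placeholder stratum sizes
n_h = [200, 200, 200, 200, 200, 0]
N, n = sum(N_h), 1_500

pi = [nh / Nh + n / N for nh, Nh in zip(n_h, N_h)]
d = [1.0 / p for p in pi]
```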
Acknowledgement
This research is partially supported by the National Science Foundation under Grant No. OIA-1301789.
Appendix
Proof of Theorem 1
Proof. We first show that fˆ(x, {wj }) is design-unbiased:
∞
h
i
X
EP̧ fˆ(x, {wj }) = EP̧ 1 +
wj θ̂j ϕj (x)
j=1
= 1+
= 1+
∞
X
j=1
∞
X
wj EP̧ (θ̂j )ϕj (x)
wj θU,j ϕj (x)
j=1
= fU (x, {wj }).
It remains to show that fˆ(x, {wj }) is asymptotically design-consistent, that is, the design-variance
of fˆ(x, {wj }) approaches zero in the limit. We need the simple fact that
√
ϕ2j (x) = [ 2 cos(πjx)]2 = 1 + cos(π2jx) = 1 + 2−1/2 ϕ2j (x).
8
Then, we have
ΓP̧ = VarP̧ 1 +
=
∞
X
∞
X
j=1
wj θ̂j ϕj (x)
wj2 ϕ2j (x)VarP̧ (θ̂j )
j=1
=
∞
X
wj2
j=1
h
#
" n
i
X
di ϕj (xi ) ,
1 + 2−1/2 ϕ2j (x) N −2 VarP̧
i=1
and
VarP̧
"
n
X
i=1
#
di ϕj (xi )
= VarP̧
= EP̧
=
N
X
i=1
−
=
"
"
N
X
i=1
Ii di ϕj (xi )
i=1
N
X
#2
Ii di ϕj (xi )
i=1
−
(
EP̧
EP̧ (Ii2 )d2i EP̧ ϕ2j (xi ) +
(N
X
N
X
#
EP̧ (Ii )di EP̧ [ϕj (xi )]
i=1
"
N
X
#)2
Ii di ϕj (xi )
i=1
XX
πik di dk EP̧ [ϕj (xi )] EP̧ [ϕk (xk )]
i6=k
)2
h
i XX π
ik 2
2
EP̧ 1 + 2−1/2 ϕ2j (xi ) +
θ − N 2 θU,j
πi πk U,j
i6=k
2
= N (1 + 2−1/2 θU,2j + δθU,j
)
≤ N M,
2 ≤ M < ∞ for every j.
where 1 + 2−1/2 θU,2j + δθU,j
P
2
Hence, ΓP̧ ≤ N −1 M ∞
j=1 wj → 0 as N → ∞.
Proof of Theorem 2
Proof. By the definition of θ̂j and θU,j , we have
fˆ(x, {wj }) = 1 +
= 1+
∞
X
wj θ̂j ϕj (x)
j=1
N
X
i=1
9
Ii di
∞
X
j=1
wj ϕj (x)ϕj (xi ),
and
E Ii di
∞
X
j=1
wj ϕj (x)ϕj (xi ) =
=
∞
X
wj ϕj (x)E [ϕj (xi )]
j=1
∞
X
wj θU,j ϕj (x).
j=1
Also, from the proof of Theorem 1, we have
∞
∞
X
X
2
wj2 (1 + 2−1/2 θU,2j + δθU,j
)
wj ϕj (x)ϕj (xi ) =
Var Ii di
j=1
j=1
≤ B
∞
X
j=1
wj2 < ∞ by assumption.
Therefore, by the Lindeberg-Lévy central limit theorem, we have
fˆ(x, {wj }) − fU (x, {wj }) LP̧
−−→ N (0, 1).
ΓP̧
(10)
It remains to show that Γ̂P̧ is consistent for ΓP̧ under design, or equivalently,
P
P̧
|Γ̂P̧ − ΓP̧ | −−→ 0, as n → N.
(11)
Condition (11) can be proved by using the facts that θ̂j is design unbiased and E(θ̂j2 ) = θj2 +
Var(θ̂j ) → θj2 as n → N .
Then, Theorem 2 is proved by using the equations (10) and (11) in conjunction with Slutsky’s
theorem.
Proof of Theorem 3
Proof. Since fU (x, {wj }) is the standard OSDE from an I.I.D. sample which is the finite population,
then
fU (x, {wj }) − f (x) Lξ
−→ N (0, 1).
Varξ [fU (x, {wj })]
10
(12)
The asymptotic distribution of the I.I.D. OSDE under Sobolev class is obtained from Efromovich
(1999), Chapter 7. Also,
J
h
i
h
i
X
VarC wj θ̂j ϕj (x)
VarC fˆ(x, {wj }) =
j=1
=
J
X
wj2 (1 + 2−1/2 ϕ2j (x))VarC (θˆj )
(13)
j=1
Next, we calculate the variance of θ̂j by using Theorem 1:
h
i
h
i
VarC (θ̂j ) = Eξ VarP̧ (θ̂j ) + Varξ EP̧ (θ̂j )
h
i
2
) + Varξ (θU,j )
= Eξ N −1 (1 + 2−1/2 θU,2j + δθU,j
h
i
2
) + Varξ (θU,j )
= N −1 1 + 2−1/2 θ2j + δEξ (θU,j
(14)
2 ) and Var (θ
Then, we evaluate Eξ (θU,j
ξ U,j ) separately. Based on a standard result in the I.I.D. case,
we have
Varξ (θU,j ) = N −1 (1 + 2−1/2 θ2j − θj2 )
(15)
and
2
Eξ (θU,j
) = E2ξ (θU,j ) + Varξ (θU,j )
= N −1 (1 + 2−1/2 θ2j − θj2 ) + θj2 .
Then, plug equations (15) and (16) into (14), we have
h
i
VarC (θ̂j ) = N −1 2 + 21/2 θ2j + (δ − 1)θj2 + oN (1) = N −1 bj .
(16)
(17)
Hence, plug (17) into (13) we can get the variance of fˆ under the combined inference approach.
Finally, apply Theorem 5.1 in Bleuer and Kratina (1999), Theorem 3 is proved.
Proof of Corollary 1
Proof. The proof is similar to Efromovich and Pinsker (1982). We sketch the steps as follows. We
first evaluate the linear minimax MISE for the functions in the Sobolev class defined above. That
is, we optimize wj∗ ’s that minimize MISEC (fˆ). Notice that EC (θ̂j ) = Eξ [EP̧ (θ̂j )] = Eξ (θU,j ) = θj
11
implying that θ̂j is an unbiased estimator of θj . Therefore,
Z
h
i
MISEC fˆ(x, {wj }) = EC
(f − fˆ)2
∞ n
o
i
h
X
wj2 VarC (θ̂j ) + θj2 − 2wj θj2 + θj2 .
=
(18)
j=1
A straightforward calculation yields that
wj∗ =
θj2
θj2 + VarC (θ̂j )
.
(19)
Plug equation (19) into (18),
h
i
sup MISEC fˆ(x, {wj })
{wj } f ∈F̧(k,Q)
∞
X
θj2 VarC (θ̂j )
≥
sup
,
2
f ∈F̧(k,Q) j=1 θj + VarC (θ̂j )
RL (F̧) =
inf
(20)
where VarC (θ̂j ) is of the form (17). Plug (17) into (20), and use the Lagrange multiplier to show
that the maximum of (6) is attained at
θj2 = N −1 (µ/(πj)k − bj )+ ,
where µ is determined by the constraint
obtain
P∞
2k 2
j=1 (πj) θj
(21)
≤ Q. Plug equation (21) back to (20), we
RL (F̧) ≥ N −2k/(2k+1) P (k, Q, b).
Pinsker (1980) shows that for Sobolev ball F̧, the linear minimax risk is asymptotically equal to
the minimax risk, that is, R(F̧) = RL (F̧)(1 + oN (1)). Therefore Corollary 1 is proved.
Proof of Corollary 2
Proof. Let ŵj = Ij≤J . Plug equation (17) into (18), we have
R(fˆT ) = N −1
J
X
j=1
bj +
∞
X
j=J+1
θj2 ≈ N −1 bJ +
12
∞
X
j=J+1
θj2 .
(22)
Notice that for f ∈ F̧(k, Q). By a straightforward calculation, we have θj2 = cj −2(k+1) (Efromovich,
1999). Therefore,
∞
X
j=J+1
θj2
≈c
Z
∞
j −2(k+1) dj =
J
c
J −2k−1 .
2k + 1
(23)
Plug (23) into (22) and optimize J, Corollary 2 is proved.
References
Bellhouse, D. and Stafford, J. (1999), ‘Density estimation from complex surveys’, Statistica
Sinica 9, 407–424.
Bleuer, S. and Kratina, I. (1999), ‘On the two-phase framework for joint model and designbased inference’, The Annals of Statistics 33, 2789–2810.
Breunig, R. (2001), ‘Density estimation for clustered data’, Econometric Reviews 20, 353–
367.
Breunig, R. (2008), ‘Nonparametric density estimation for stratified samples’, Statistics and
Probability Letters 78, 2194–2200.
Buskirk, T. (1999), Using nonparametric methods for density estimation with complex survey
data, Technical report, PhD thesis, Department of Mathematics, Arizona State University.
Buskirk, T. and Lohr, S. (2005), ‘Asymptotic properties of kernel density estimation with
complex survey data’, Journal of Statistical Planning and Inference 128, 165–190.
Donoho, D., Johnstone, I., Kerkyacharian, G. and Picard, D. (1996), ‘Density estimation by
wavelet thresholding’, Annals of Statistics 24, 508–539.
Efromovich, S. (1996), ‘Adaptive orthogonal series density estimation for small samples’,
Computational Statistics and Data Analysis 22, 599–617.
Efromovich, S. (1999), Nonparametric Curve Estimation: Methods, Theorey and Applications, New York: Springer.
Efromovich, S. (2010), ‘Orthogonal series density estimation’, WIREs Comp Stat 2, 467–476.
13
Efromovich, S. and Pinsker, M. (1982), ‘Estimation of square-integrable probability density
of a random variable’, Problems of Information Transmission 18, 19–38.
Fuller, W. (2009), Sampling Statistics, Wiley, New York.
Glad, I., Hjort, N. and Ushakov, N. (2003), ‘Correction of density estimators that are not
densities’, Scandinavian Journal of Statistics 30, 415–427.
Hartley, H. and Sielken, R. (1975), ‘A super-population viewpoint for finite population sampling’, Biometrics 31, 411–422.
Pinsker, M. (1980), ‘Optimal filtration of square-integrable signals in Gaussian noise’, Problems Inform. Transmission 16, 53–68.
Tarter, M. and Lock, M. (1993), Model-Free Curve Estimation, New York: Chapman and
Hall.
Till, Y. and Matei, A. (2016), sampling: Survey Sampling. R package version 2.8.
URL: https://CRAN.R-project.org/package=sampling
Wahba, G. (1981), ‘Data-based optimal smoothing of orthogonal series density estimates’,
The Annals of Statistics 9, 146–156.
Walter, G. (1994), Wavelets and other Orthogonal Systems with Applications, London: CRC
Press.
Table 1: Monte Carlo approximation of MISE for three sampling designs and two superpopulations. The finite population size is N = 1, 000. The replication size of the finite
population is m1 = 100, and the replication size of the sample is m2 = 10, 000. Three
estimators are compared: the truncated estimator, the smoothed estimator and the series
estimator ignoring finite population and sampling design (I.I.D.).
SRSWOR
          Standard Normal                     Mixture Normal
 n    Truncated  Smoothed  I.I.D.     Truncated  Smoothed  I.I.D.
 20    0.0232     0.0220   0.0290      0.0498     0.0480   0.0535
 40    0.0150     0.0140   0.0157      0.0311     0.0318   0.0388
 60    0.0116     0.0109   0.0121      0.0226     0.0234   0.0335
 80    0.0094     0.0089   0.0100      0.0173     0.0180   0.0219

Poisson Sampling
          Standard Normal                     Mixture Normal
 n    Truncated  Smoothed  I.I.D.     Truncated  Smoothed  I.I.D.
 20    0.0497     0.0481   0.0527      0.0580     0.0442   0.0705
 40    0.0281     0.0270   0.0392      0.0344     0.0294   0.0399
 60    0.0241     0.0229   0.0237      0.0283     0.0280   0.0322
 80    0.0201     0.0190   0.0211      0.0235     0.0234   0.0285

Stratified Sampling
          Two Strata                          Three Strata
 n    Truncated  Smoothed  I.I.D.     Truncated  Smoothed  I.I.D.
 20    0.0415     0.0409   0.0739      0.2847     0.2826   0.3106
 40    0.0231     0.0230   0.0688      0.2731     0.2718   0.3309
 60    0.0181     0.0180   0.0672      0.0426     0.0419   0.1132
 80    0.0142     0.0142   0.0675      0.0412     0.0406   0.1175
[Figure 1: OSDEs of the electricity bill and the water bill for seasonal waves in 2015. Panel (a): Water Bill; panel (b): Electricity Bill. Each panel plots Density against Amount (in dollars), with curves for Winter, Spring, Summer, and Fall.]
A Parametric MPC Approach to Balancing the Cost of Abstraction for
Differential-Drive Mobile Robots
arXiv:1802.07199v1 [cs.RO] 20 Feb 2018
Paul Glotfelter and Magnus Egerstedt
Abstract— When designing control strategies for differentialdrive mobile robots, one standard tool is the consideration of a
point at a fixed distance along a line orthogonal to the wheel axis
instead of the full pose of the vehicle. This abstraction supports
replacing the non-holonomic, three-state unicycle model with a
much simpler two-state single-integrator model (i.e., a velocitycontrolled point). Yet this transformation comes at a performance cost, through the robot’s precision and maneuverability.
This work contains derivations for expressions of these precision
and maneuverability costs in terms of the transformation’s
parameters. Furthermore, these costs show that only selecting
the parameter once over the course of an application may
cause an undue loss of precision. Model Predictive Control
(MPC) represents one such method to ameliorate this condition.
However, MPC typically realizes a control signal, rather than
a parameter, so this work also proposes a Parametric Model
Predictive Control (PMPC) method for parameter and sampling
horizon optimization. Experimental results are presented that
demonstrate the effects of the parameterization on the deployment of algorithms developed for the single-integrator model
on actual differential-drive mobile robots.
I. INTRODUCTION
Models are always abstractions in that they capture some
pertinent aspects of the system under consideration whereas
they neglect others. But models only have value inasmuch
as they allow for valid predictions or as generators of design
strategies. For example, in a significant portion of the many
recent, multi-agent robotics algorithms for achieving coordinated objectives, single-integrator models are employed
(e.g., [1], [2], [3], [4]). Arguably, such simple models have
enabled complex control strategies to be developed, yet, at
the end of the day, they have to be deployed on actual
physical robots. This paper formally investigates how to
strike a balance between performance and maneuverability
when mapping single-integrator controllers onto differentialdrive mobile robots.
Due to the single-integrator model’s prevalence as a
design tool, a number of methods have been developed
for mapping from single-integrator models to more complex, non-holonomic models. For example, the authors of
[5] achieve a map from single integrator to unicycle by
leveraging a control structure introduced in [6]. However,
this map does not come with formal guarantees about the
degree to which the unicycle system approximates the singleintegrator system. One effective solution to this problem is
to utilize a so-called Near-Identity Diffeomorphism (NID)
This research was sponsored by Grants No. 1531195 from the U.S.
National Science Foundation.
The authors are with the Institute for Robotics and Intelligent Machines, Georgia Institute of Technology, Atlanta, GA 30332, USA,
{paul.glotfelter,magnus}@gatech.edu.
between single-integrator and unicycle systems, as in [7],
[8], where the basic idea is to perturb the original system
ever-so-slightly (the near-identity part) and then show that
there exists a diffeomorphism between a lower-dimensional
version of the perturbed system’s dynamics and the singleintegrator dynamics. As the size of the perturbation is given
as a design parameter, a bound on how far the original
system may deviate from the single-integrator system follows
automatically.
A concept similar to NIDs from single-integrator to unicycle dynamics appears in the literature in different formats.
For example, [9] utilizes this technique from a kinematics
viewpoint to stabilize a differential-drive-like system. This
"look-ahead" technique also arises in feedback linearization
methods as a mathematical tool to ensure that the differentialdrive system is feedback linearizable (e.g., [10], [11]).
This paper utilizes the ideas in [7], [8] to show that
the NID incurs an abstraction cost, in terms of precision
and maneuverability, that is based on the physical geometry
of the differential-drive robots; in particular, the precision
cost focuses on increasing the degree to which the singleintegrator system matches the unicycle-modeled system, and
the maneuverability cost utilizes physical properties of the
differential-drive systems to limit the maneuverability requirements imposed by the transformation. By striking a
balance between these two costs, a one-parameter family of
abstractions arises. However, the maneuverability cost shows
that only selecting the parameter once over the course of an
experiment may cause a loss of precision.
A potential solution to this issue is to repeatedly optimize
the parameter based on the system’s model and a suitable
cost metric. Model Predictive Control (MPC) represents one
such method. In particular, MPC approaches solve an optimal
control problem over a time interval, utilize a portion of the
controller, and re-solve the problem over the next time interval, effectively producing a state- and time-based controller.
The authors of [12], [13] produce such a Parametric Model
Predictive Control (PMPC) formulation. However, this formulation does not permit the cost metric to influence the
time interval, which has practical performance implications.
Using the formulated precision and maneuverability costs,
this work formulates an appropriate PMPC cost metric and
extends the work in [12], [13] to integrate a sampling horizon
cost directly into the PMPC program.
This paper is organized as follows: Sec. II presents the
system of interest and introduces the inherent trade-off contained in the NID. Sec. III discusses the PMPC formulation.
Sec. IV formulates the cost functions that allow a balanced
selection of the NID’s parameters, with respect to the generated cost functions. To demonstrate and verify the main
results of this work, Sec. V shows data from simulations and
physical experiments, with Sec. VI concluding the paper.
II. FROM UNICYCLES TO SINGLE INTEGRATORS
This article uses the following mathematical notation. The expression ‖·‖ is the usual Euclidean norm. The symbol ∂_x f(x) represents the partial derivative of the function f : R^n → R^m with respect to the variable x, assuming the convention that ∂_x f(x) ∈ R^{m×n}. The symbol R_{≥0} refers to the real numbers that are greater than or equal to zero.
As the focus of the paper is effective abstractions for
controlling differential-drive robots, this section establishes
the Near-Identity Diffeomorphism (NID) that provides a
relationship between single-integrator and unicycle models.
That is, systems whose pose is given by planar positions
T
x̄ = [x1 x2 ] and orientations θ, with the full state given
T T
T
= [x1 x2 θ] . The associated unicycle
by x = x̄ θ
dynamics are given by (dropping the dependence on time t)
R(θ)e1 0 v
ẋ =
,
(1)
0
1 ω
where the control inputs v, ω ∈ R are the linear and
rotational velocities, respectively, 0 is a zero-vector of the
appropriate dimension, and
T
cos(θ) − sin(θ)
e1 = 1 0 , R(θ) =
.
sin(θ) cos(θ)
Letting
ux = v
ω
T
be the collective control input to the unicycle-modeled agent,
the objective becomes to turn this model into a singleintegrator model. To this end, we here recall the developments in [7]. Let xsi ∈ R2 be given by
xsi = Φ(x, l) = x̄ + lR(θ)e1 ,
(2)
where l ∈ (0, ∞) is a constant. The map Φ(x, l) is, in fact,
the NID, as defined in [7]. Geometrically, the point xsi is
simply given by a point at a distance l directly in front of
the unicycle with pose x.
Now, assume that the dynamics of xsi are given by a
controller
ẋsi = usi ,
where usi ∈ R2 is continuously differentiable, and compare
this system to the time-derivative of (2), which yields
cos(θ) −l sin(θ)
ẋsi = usi =
ux = Rl (θ)ux .
(3)
sin(θ) l cos(θ)
Note that the NID maps from three degrees of freedom to
two degrees of freedom. As a consequence, the resulting
unicycle controller cannot explicitly affect the orientation θ
of the unicycle model.
By [7], Rl (θ) is invertible, yielding a relationship between
usi and ux . Consequently, (3) allows the transformation of
linear, single-integrator algorithms into algorithms in terms
of the non-linear, unicycle dynamics. Note that in this paper,
which is different from [7], we let l˙ = 0 over the PMPC
time intervals (i.e., l is a constant value).
The unicycle model in (1) is not directly realizable on
a differential-drive mobile robot. However, the relationship
between the control inputs to the unicycle model and the
differential-drive model is given by
rw
v = (r_w/2)(ω_r + ω_l),    ω = (r_w/l_w)(ω_r − ω_l),    (4)
respectively. The wheel radius rw and base length lw encode
the geometric properties of the robot.
In the discussion above, the parameter l (i.e., the distance
off the wheel axis to the new point) is not canonical.
Moreover, it plays an important role since
‖x̄ − x_si‖ = l.    (5)
The above equation seems to indicate that one should simply
choose l ∈ (0, ∞) to be as small as possible. However, the
following sections show that small values of l induce high
maneuverability costs.
In order to strike a balance between precision and maneuverability, we will, for the remainder of this paper, assume
that the control input to the unicycle model is given by
u_x = R_l(θ)^{−1} u_si,
where usi is the control input supplied by a single-integrator
algorithm. Sec. IV contains the further investigation of the
effects of the parameter l on the precision and maneuverability implications of the transformation in (2).
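As a concrete illustration of this pipeline, the sketch below maps a single-integrator velocity command through R_l(θ)^{−1} to unicycle inputs and then, via (4), to wheel velocities; the numerical values of r_w, l_w, and l are placeholders rather than parameters of any specific robot.

```python
import numpy as np

def si_to_unicycle(u_si, theta, l):
    # u_x = R_l(theta)^{-1} u_si, with R_l(theta) as in (3).
    R_l = np.array([[np.cos(theta), -l * np.sin(theta)],
                    [np.sin(theta),  l * np.cos(theta)]])
    v, omega = np.linalg.solve(R_l, u_si)
    return v, omega

def unicycle_to_wheels(v, omega, r_w, l_w):
    # Invert (4): omega_r + omega_l = 2 v / r_w, omega_r - omega_l = l_w omega / r_w.
    omega_r = (2.0 * v / r_w + l_w * omega / r_w) / 2.0
    omega_l = (2.0 * v / r_w - l_w * omega / r_w) / 2.0
    return omega_r, omega_l

# Placeholder geometry and look-ahead distance l.
v, omega = si_to_unicycle(np.array([0.2, 0.1]), theta=0.3, l=0.05)
wheels = unicycle_to_wheels(v, omega, r_w=0.02, l_w=0.1)
```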
III. A PARAMETRIC MPC FORMULATION
Having introduced the system of interest, this section
contains a derivation of a Parametric Model Predictive Control (PMPC) method with a variable sampling interval for
general, nonlinear systems. Later, Sec. V utilizes a specific
case of these results. In general, MPC methods solve an
optimal control problem over a time interval and use only
a portion of the obtained controller (for a small amount
of time) before resolving the problem, producing a timeand state-based controller. In this case, PMPC optimizes
the parameters of a system. That is, this method finds the
optimal, constant parameters of a system, rather than a timevarying control input, over a time interval. For clarity, this
section specifies dependencies on time t. Let
ẋ(t) = f (x(t), p, t), xt0 = x(t0 ),
where x(t) ∈ Rn , p ∈ Rm , and f (·) is continuously
differentiable in x, measurable in t. The program
arg min_{p ∈ R^m, ∆t ∈ R_{≥0}}  J(p, ∆t) = ∫_{t_0}^{t_0+∆t} L(x(s), p, s) ds + C(∆t)
s.t.  ẋ(t) = f(x(t), p, t),  x(t_0) = x_{t_0},
expresses the PMPC problem of interest, where L(·) is
continuously differentiable in x and p. Note that, in this
case, both ∆t and p are decision variables determined by
the PMPC program.
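Before deriving the optimality conditions, it may help to see the shape of a numerical solution. The sketch below treats (p, ∆t) as decision variables and minimizes the sampled cost with a generic optimizer and a simple Euler integration; it is only an assumed illustration of the problem structure, not the gradient-based method derived next.

```python
import numpy as np
from scipy.optimize import minimize

def pmpc_step(f, L, C, x0, t0, p0, dt0, n_euler=50):
    """Solve min_{p, dt >= 0} int_{t0}^{t0+dt} L(x, p, s) ds + C(dt) by direct search."""
    def cost(z):
        p, dt = z[:-1], max(z[-1], 1e-3)
        x, h = np.array(x0, dtype=float), dt / n_euler
        total = 0.0
        for i in range(n_euler):          # forward Euler + rectangle rule
            s = t0 + i * h
            total += L(x, p, s) * h
            x = x + h * f(x, p, s)
        return total + C(dt)

    z0 = np.concatenate([np.asarray(p0, dtype=float), [dt0]])
    res = minimize(cost, z0, method="Nelder-Mead")
    return res.x[:-1], max(res.x[-1], 1e-3)
```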
A. Optimality Conditions
This section contains the derivation of the necessary,
first-order optimality conditions for the PMPC formulation,
realizing gradients for the proposed cost. In particular, the
derivation proceeds by calculus of variations.
˜ ∆t),
Proposition 1. The augmented cost derivatives ∂p J(p,
˜
∂∆t J(p, ∆t) are
∆t) = 0. Applying the mean value theorem and taking the
limit as → 0 shows that
˜
˜ ∆t)
J(p+γ,
∆t+τ ) − J(p,
=
lim
→0
t +∆t
0Z
∂p L(x(s), p, s)+λ(s)T ∂p f (x(s), p, s)ds γ
t0
+[∂∆t C(∆t)+L(x(t0 +∆t), p, t0 +∆t)] τ,
which is linear in τ and γ, and provides the final expressions
t0Z+∆t
˜ ∆t) =
∂p J(p,
t0Z+∆t
˜ ∆t) =
∂p J(p,
∂p L(x(s), p, s) + λ(s)T ∂p f (x(s), p, s)ds
t0
˜ ∆t) = L(x(t0 + ∆t), p, t0 + ∆t) + ∂∆t C(∆t),
∂∆t J(p,
˜ ∆t) (i.e., J(p, ∆t) augmented
where the augmented cost J(p,
with the dynamics constraint) is given by
˜ ∆t) =
J(p,
t0Z+∆t
L(x(s), p, s)+λ(s)T (f (x(s), p, s)− ẋ(s))ds+C(∆t).
t0
Proof. The proof proceeds by calculus of variations. Perturb
p and ∆t as p 7→ p + γ and ∆t 7→ ∆t + τ , where γ ∈ Rm ,
τ ∈ R. The perturbed augmented cost is
˜
J(p+γ,
∆t+τ ) =
t0 +∆t+τ
Z
L(x(s)+η(s), p+γ, s)
t0
T
λ(s) (f (x(s)+η(s), p+γ, s)− ẋ(s)−η̇(s))ds+
C(∆t+τ )+o().
Performing a Taylor expansion yields that
˜
J(p+γ,
∆t+τ ) =
t0 +∆t+τ
Z
L(x(s), p, s)+∂x L(x(s), p, s)η(s)+∂p L(x(s), p, s)γ
t0
+λT (f (x(s), p, s)+∂x f (x(s), p, s)η(s)
+∂p f (x(s), p, s)γ − ẋ(s)−η̇(s))ds
+C(∆t) + ∂∆t C(∆t)τ +o().
The proof now proceeds with multiple steps. First, the
application of integration by parts to the quantity λ(t)T η̇(t).
˜ + γ, ∆t + τ ) −
Second, the subtraction of the costs J(p
˜ ∆t). Note that, to subtract the costs properly, the inteJ(p,
˜
gral in J(p+γ,
t+τ ) must be broken up into two intervals:
[t, t + ∆t] and [t + ∆t, t + ∆t + τ ]. Furthermore, the costate
assumes the usual definition: λ̇(t) = −∂x L(x(t), p, t)T −
∂x f (x(t), p, t)T λ(t) with the boundary condition λ(t0 +
∂p L(x(s), p, s)+λ(t)T ∂p f (x(s), p, s)ds
t0
˜ ∆t) = ∂∆t C(∆t)+L(x(t0 + ∆t), p, t0 +∆t),
∂∆t J(p,
completing the proof.
Interestingly, both of the usual conditions for free parameters and final time still hold, and the first-order, necessary
optimality conditions for candidate solutions p∗ and ∆t∗ are
that
˜ ∗ , ∆t∗ ) = 0, ∂∆t J(p
˜ ∗ , ∆t∗ ) = 0.
∂p J(p
Furthermore, this formulation becomes amenable to solution by numerical methods for the optimal parameters p∗ and
˜ ∆t) can also
∆t∗ . In such cases, the expression for ∂p J(p,
be expressed as a costate-like variable ξ : [t0 , t0 +∆t] → Rm
with dynamics
˙ = −∂p L(x(t), p, t)T − ∂p f (x(t), p, t)T λ(t)
ξ(t)
ξ(t0 + ∆t) = 0,
where ξ(·) is defined as
t0Z+∆t
∂p L(x(s), p, s)T +∂p f (x(s), p, s)T λ(s)ds.
ξ(t) =
t
In this case, the necessary optimality condition is that
ξ(t0 ) = 0.
B. Numerical Methods
The above expressions allow for applications of typical
gradient descent methods. Many such methods could apply,
and this article presents one simple method in Alg. 1.
Note that this algorithm procures the decision variables
over one sampling interval [t0 , t0 + ∆t]. In practice, one
typically applies this algorithm repeatedly. For example, the
experiments in Sec. V-C consecutively apply this algorithm
to solve the PMPC problem.
IV. P RECISION VS . M ANEUVERABILITY
As already noted in Sec. II, the parameter l is a design parameter. This section discusses the importance and effects of
selecting l and proposes precision and maneuverability costs
that elucidate the selection of this parameter and its impact
on the differential-drive system. These derivations influence
the PMPC cost metric in Sec. V and, for comparison, an
optimal, static parameterization.
Algorithm 1 Gradient Descent Algorithm for PMPC
1:
2:
3:
4:
5:
6:
7:
8:
9:
10:
k←0
pk ← initial guess
∆tk ← initial guess
˜ k , ∆t) > do
˜ k , ∆t) + ∂∆t J(p
while ∂p J(p
Solve forward for x(·) from xt using pk and ∆tk
Solve backward for λ(·), ξ(·) using x(·)
˜ k , ∆t) and ∂∆t J(p
˜ k , ∆t)
Compute gradients ∂p J(p
˜ k , ∆t)T
pk+1 ← pk − γ1 ∂p J(p
˜ k , ∆t)
∆tk+1 ← ∆tk − γ2 ∂∆t J(p
k ←k+1
A. Precision Cost

Seeking to select l, we initially present a cost that incorporates the degree to which the transformed system in (2) represents the original system x_si over an arbitrary time duration T ≥ 0. As such, we model the precision cost by the averaged tracking error

    D1(x̄, x_si) = (1/T) ∫₀ᵀ ‖x̄ − x_si‖ dt.                                            (6)

It immediately follows from (5) that D1(x̄, x_si) can be directly written as a function of l, given by

    D1(x̄, x_si) = (1/T) ∫₀ᵀ ‖x̄ − x_si‖ dt = (1/T) ∫₀ᵀ l dt = l.                        (7)

This immediate result states that the smaller l is, the better the unicycle model tracks the single-integrator model.

B. Maneuverability Cost

In this section, we derive a geometrically influenced maneuverability cost that models the degree to which the selection of l influences the maneuverability requirements of the unicycle-modeled system, with respect to the map defined in (2). That is, we wish to elucidate how the parameter l affects the expressions for the differential-drive agent's forward velocity, wheel difference, and exerted control effort.

To this end, we utilize the differential-drive model in (4). Initially, note that the magnitude of the wheel-velocity difference |ωr − ωl| represents a measure of the complexity of a maneuver that the differential-drive system performs. Using this definition as guidance, we state the following proposition.

Proposition 2. Given that the control-input magnitude ‖u_si‖ is upper-bounded by v̄, the magnitude of the wheel-velocity difference, |ωr − ωl|, is upper bounded by

    |ωr − ωl| ≤ lw v̄ / (rw l).

Proof. Let

    e1 = [1  0]ᵀ,   e2 = [0  1]ᵀ,   T = [1  0; 0  l],   ū_si = [‖u_si‖  0]ᵀ,

and let θ_si be the angle of the vector u_si. From (3), (4) we can retrieve the magnitude of the difference in angular velocities, |ωr − ωl|, as

    |ωr − ωl| = (lw/rw) |ω|
              = (lw/rw) |e2ᵀ Rl(θ)⁻¹ u_si|
              = (lw/rw) |e2ᵀ T⁻¹ R(−θ) R(θ_si) ū_si|
              = (lw/rw) |e2ᵀ T⁻¹ R(θ_si − θ) ū_si|
              = (lw/rw) |[0  1/l] [cos(θ_si − θ)  −sin(θ_si − θ); sin(θ_si − θ)  cos(θ_si − θ)] [‖u_si‖  0]ᵀ|
              = (lw/(rw l)) |[sin(θ_si − θ)  cos(θ_si − θ)] [‖u_si‖  0]ᵀ|
              = (lw ‖u_si‖ / (rw l)) |sin(θ_si − θ)|
              ≤ lw v̄ / (rw l).

Thus, Prop. 2 yields an upper bound on the magnitude of the wheel-velocity difference. Prop. 3 shows a similar result for the forward velocity of the differential-drive agent.

Proposition 3. Given that the control-input magnitude ‖u_si‖ is upper-bounded by v̄, the magnitude of the forward velocity, |ωr + ωl|, is upper bounded by

    |ωr + ωl| ≤ 2v̄ / rw.

Proof. Let e1, T⁻¹, ū_si, and θ_si be defined as in the proof of Prop. 2. Then, we have, through (4), that

    |ωr + ωl| = (2/rw) |v|
              = (2/rw) |e1ᵀ Rl(θ)⁻¹ u_si|
              = (2/rw) |e1ᵀ T⁻¹ R(−θ) R(θ_si) ū_si|
              = (2/rw) |[1  0] R(θ_si − θ) ū_si|
              = (2/rw) |[cos(θ_si − θ)  −sin(θ_si − θ)] [‖u_si‖  0]ᵀ|
              = (2/rw) ‖u_si‖ |cos(θ_si − θ)|
              ≤ 2v̄ / rw.

So Prop. 3 reveals that the forward velocity of the differential-drive system remains independent of the selection of the parameter l.

To elucidate an appropriate maneuverability cost in terms of l, define the average control effort exerted by the differential-drive system over an arbitrary time duration T ≥ 0 as

    (1/T) ∫₀ᵀ |ωr − ωl| + |ωr + ωl| dt.

Directly applying Props. 2, 3 reveals that the above expression is bounded above by

    lw v̄ / (rw l) + 2v̄ / rw.                                                           (8)

The expression in (8) demonstrates an interesting quality of the system. As l grows large, the forward velocity dominates the control effort exerted by the differential-drive system. However, if l becomes small, then the choice of l affects the potentially exerted control effort.

Thus, (8) reveals how l affects the maneuverability requirements imposed by the abstraction. The fact that we always pay the forward-velocity price, regardless of the selection of l, naturally excludes the forward velocity from the soon-to-be-formulated cost, because any selection of l results in the same cost bound; but the choice of l directly affects the cost associated with the wheel difference. Accordingly, the wheel difference must play a role in the final PMPC cost metric. With this conclusion in mind, we define the static maneuverability cost as

    D2(l) = lw v̄ / (rw l),                                                             (9)

which the static parameterization in the following section utilizes.

C. An Optimal, One-Time Selection

Sec. V utilizes the results in Sec. IV to formulate an appropriate cost metric for a PMPC program. To have a baseline comparison, this section formulates an optimal, one-time selection for the parameter l. That is, the selection occurs once over the experiment's duration. This selection should strike a balance between precision and maneuverability. Eqns. (7) and (9) represent each of these facets, respectively, and introduce an inherent trade-off in selecting l. Making l smaller directly reduces the cost in (7). However, consider the relationship in (9); as l decreases, the differential-drive system accumulates a higher maneuverability cost.

As such, the convex combination of (6) and (9) yields a precision and maneuverability cost in terms of l as

    D(l) = α D1(l) + (1 − α) D2(l) = α l + (1 − α) lw v̄ / (rw l),                      (10)

where α ∈ (0, 1). Now, we seek the optimal l such that (10) is minimized. That is,

    l∗ = arg min_l D(l).                                                               (11)

(11) leads to Prop. 4.

Proposition 4. The optimal l∗ is given by

    l∗ = √((1 − α)/α · lw v̄ / rw).

Proof. We have that

    ∂D(l)/∂l = α − (1 − α) lw v̄ / (rw l²).

Setting this equation equal to zero directly yields the minimizer

    l∗ = √((1 − α)/α · lw v̄ / rw).                                                     (12)

Note that the above result utilizes (9), which is an upper bound on the wheel velocity difference. Thus, the PMPC method should outperform this static selection, a suspicion that Sec. V investigates.
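The closed-form minimizer of Prop. 4 is easy to check numerically. The sketch below evaluates (10) on a grid using the GRITSbot constants quoted in Sec. V (rw = 0.005 m, lw = 0.03 m, v̄ = 0.1 m/s) and compares the grid minimizer with (12), recovering l∗ ≈ 0.078 for α = 0.99; the grid resolution is an arbitrary choice.

```python
# Numerical check of Prop. 4 / Eq. (12) with the GRITSbot values from Sec. V.
import numpy as np

def D(l, alpha, lw, rw, v_bar):
    """Combined precision/maneuverability cost of Eq. (10)."""
    return alpha * l + (1 - alpha) * lw * v_bar / (rw * l)

def l_star(alpha, lw, rw, v_bar):
    """Closed-form minimizer of Eq. (12)."""
    return np.sqrt((1 - alpha) / alpha * lw * v_bar / rw)

alpha, lw, rw, v_bar = 0.99, 0.03, 0.005, 0.1
ls = np.linspace(1e-3, 1.0, 100000)
l_grid = ls[np.argmin(D(ls, alpha, lw, rw, v_bar))]
print(l_star(alpha, lw, rw, v_bar), l_grid)   # both close to 0.078
```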
D. PMPC Cost
This section formulates a PMPC cost based on the analysis
in Sec. IV. To increase precision, the parameter l must be
minimized. However, (9) in Sec. IV-B indicates that the
wheel velocity difference must be managed. Thus, precision
and maneuverability are balanced with the cost
    L(x, l, t) = (1 − β)(ωr − ωl)² + βl²
               = (1 − β)((lw/rw) e2ᵀ Rl(θ)⁻¹ usi)² + βl²,

where β ∈ (0, 1). With this cost metric, the PMPC program becomes

    arg min_{l∈R, ∆t∈R≥0}  ∫_{t0}^{t0+∆t} (1 − β)((lw/rw) e2ᵀ Rl(θ)⁻¹ usi)² + βl² ds + C(∆t)

    s.t.  ẋ = [R(θ)e1  0; 0  1] Rl(θ)⁻¹ usi,    x(t0) = x_{t0},                        (13)
Note that the sampling cost C(∆t) and single-integrator
control input usi have yet to be specified.
Fig. 3: Angular velocity (ω) during the simulation. The simulation shows that the static selection (solid line) and PMPC method (dashed line) both generate similar angular velocity values.
Fig. 1: The GRITSbot, which is a small, differential-drive
mobile robot used in the Robotarium. This figure displays
the base length and wheel radius of the GRITSbots.
V. N UMERICAL R ESULTS
To demonstrate the findings in Sec. IV, we conduct two
separate tests: in simulation and on real hardware. The
simulation portion shows the effects of a one-time parameter
selection on the angular velocity versus the PMPC method.
The experimental section contains the same implementation on a real, physical system: the Robotarium (www.robotarium.org). In particular, the experimental results
highlight the practical differences between using a PMPC
approach and a one-time selection.
A. Experiment Setup
This section proposes cost functions based on the results
in Sec. IV and expresses the PMPC problem to be solved
in simulation and on the Robotarium. Furthermore, this
section also statically parameterizes the NID to provide a
baseline comparison to the PMPC strategy. In this case,
the particular setup involves a mobile robot tracking an
ellipsoidal reference signal
    r(t) = [0.4 cos((1/10)t),  0.2 sin((1/10)t)]ᵀ.                                     (14)
For a single-integrator system, the controller
    usi = r − xsi + ṙ
drives the single-integrator system to the reference exponentially quickly. Utilizing the transformation in Sec. IV yields
the controller
ux = Rl (θ)−1 (r − xsi + ṙ)
= Rl (θ)−1 (r − (x̄ + lR(θ)e1 ) + ṙ).
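The sketch below assembles this tracking controller, assuming the NID map x_si = x̄ + l R(θ)e1 and the factorization Rl(θ) = R(θ)·diag(1, l) implied by the proofs in Sec. IV; the pose, parameter value, and evaluation time are illustrative.

```python
# Sketch of the NID-based unicycle controller of Sec. V-A: the single-integrator
# law u_si = r - x_si + r_dot is mapped through Rl(theta)^{-1} = diag(1, 1/l) R(-theta)
# to forward and angular velocity commands.
import numpy as np

def R(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def unicycle_control(x_bar, theta, l, r, r_dot):
    """Return (v, omega) for the unicycle from the single-integrator tracking law."""
    x_si = x_bar + l * R(theta) @ np.array([1.0, 0.0])   # point abstracted by the NID
    u_si = r - x_si + r_dot                              # single-integrator tracking law
    v, omega = np.diag([1.0, 1.0 / l]) @ R(-theta) @ u_si
    return v, omega

# Example: track the ellipsoidal reference of Eq. (14) at t = 0, with l = 0.03.
t = 0.0
r = np.array([0.4 * np.cos(t / 10), 0.2 * np.sin(t / 10)])
r_dot = np.array([-0.04 * np.sin(t / 10), 0.02 * np.cos(t / 10)])
print(unicycle_control(np.zeros(2), 0.0, 0.03, r, r_dot))
```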
The GRITSbots of the Robotarium (shown in Fig. 1) have a
wheel radius and base length of
rw = 0.005 m, lw = 0.03 m.
Furthermore, their maximum forward velocity is
v̄ = 0.1 m/s.
For this problem, we also consider the sampling cost

    C(∆t) = 1/∆t,

which prevents the time horizon from becoming too small (i.e., the cost penalizes small time horizons). Substituting these values into (13), the particular PMPC problem to be solved is

    arg min_{l∈R, ∆t∈R≥0}  ∫_{t0}^{t0+∆t} (1 − β)((lw/rw) e2ᵀ Rl(θ)⁻¹ (r − xsi + ṙ))² + βl² ds + (1/∆t)

    s.t.  ẋ = [R(θ)e1  0; 0  1] Rl(θ)⁻¹ (r − xsi + ṙ),    x(t0) = x_{t0},

where l and ∆t are the decision variables and β = 0.01.

Fig. 2: Parameter (left) and sampling horizon (right) from the PMPC simulation, which oscillate because of the ellipsoidal reference trajectory in (14). Due to the sharp maneuvers required, the time horizon shortens and the parameter increases on the left and right sides of the ellipse. On flatter regions, the PMPC reduces the parameter and increases the sampling time. The zoomed portion displays the discrete nature of the PMPC solution.
Both simulation and experimental results utilize Alg. 1 to solve for the optimal parameters and time horizon online with the step-size values γ1 = 0.001, γ2 = 0.01. Each experiment initially executes Alg. 1 to termination; then, steps are performed each iteration to ensure that the current values stay close to the locally optimal solution realized by Alg. 1. In particular, each iteration takes 0.033 s, which is the Robotarium's sampling interval.
For comparison, the one-time selection method stems directly from the abstraction cost formulated in Sec. IV-C with α = 0.99. This assignment to α in (12) implies that l∗ = 0.078. Note that this value of l∗ is only for the one-time selection. The PMPC method induces different parameter values every 0.033 s.
Fig. 4: Robot during the PMPC experiment. This figure shows that the parameter grows and the sampling horizon shrinks when the robot must perform more complex maneuvers (i.e., on the left and right sides of the ellipse). Over the flatter portions of the ellipse, the parameter decreases and the sampling horizon (solid line) increases, allowing the robot to track the reference (solid circle) more closely.
Fig. 5: Angular velocity of robots for the PMPC method (dashed line) versus the static parameterization (solid line). In this case, both methods generate similar angular velocities, but the PMPC method produces better tracking.
Fig. 6: Parameter (left) and sampling horizon (right) from the PMPC experiment on the Robotarium. The PMPC program reduces the sampling horizon and increases the parameter to cope with the sharp maneuvers required at the left and right sides of the ellipse. On flatter regions, the PMPC decreases the parameter and increases the time horizon, providing better reference tracking. The zoomed portion illustrates the discrete nature of the PMPC solution.
B. Simulation Results
This section contains the simulation results for the method
described in Sec. V-A. In particular, the simulation compares
the proposed PMPC method to the one-time selection process
in Sec. IV-C, showing that the PMPC method can outperform
the one-time selection. Fig. 3 shows the simulated angular
velocities, and Fig. 2 shows the parameter and sampling
horizon evolution. Both methods generate similar control
inputs. However, Fig. 2 demonstrates that the PMPC method
selects smaller parameter values, implying that this method
provides better reference tracking.
Additionally, Fig. 2 also shows that the sampling horizon
shortens and the parameter increases around the left and
right portions of the ellipse, because these regions require
sharper maneuvers and incur a higher maneuverability cost.
Furthermore, the ellipsoidal reference trajectory induces the
oscillations in Fig. 2. Overall, these simulated results show
that the PMPC method can outperform a static parameterization.
C. Experimental Comparison
This section contains the experimental results of the implementation described in Sec. V-A. The physical experiments
for this paper were deployed on the Robotarium and serve
to highlight the efficacy and validity of applying the PMPC
approach on a real system. Additionally, the experiments
display the propriety of the maneuverability cost outlined
in Sec. IV-B.
Figs. 5 and 6 display the angular velocity of the mobile robot and the parameter and sampling-horizon selection, respectively. As in the simulated results, Fig. 5 shows that the static
parameterization and PMPC method produce similar angular
velocities, and Fig. 6 shows that the PMPC method is able
to adaptively adjust the parameter and sampling horizon to
handle variations in the reference signal.
Moreover, on a physical system, the PMPC method still
adjusts the time horizon and parameter to account for maneuverability requirements. For example, on the left and
right sides of the ellipse, the maneuverability cost rises,
because the reference turns sharply. Thus, the parameter
increases and the sampling horizon decreases. Over flat
portions of the ellipse, the maneuverability cost decreases,
permitting the extension of the time horizon and reduction
of the parameter (i.e., better tracking). That is, reductions
of the maneuverability cost permit decreasing the parameter
l, allowing the PMPC strategy to outperform the static
parameterization. Furthermore, the decrease of the sampling
horizon during high-maneuverability regions accelerates the
execution of Alg. 1, which is useful in a practical sense.
VI. C ONCLUSION
This work presented a variable-sampling-horizon Parametric Model Predictive Control (PMPC) method that allows for
optimal parameter and sampling horizon selection with the
application of controlling differential-drive mobile robots. To
formulate an appropriate cost for the PMPC strategy, this
article discussed a class of Near-Identity Diffeomorphisms
(NIDs) that allow the transformation of single-integrator
algorithms to unicycle-modeled systems. Additionally, this
work showed an inherent trade-off induced by the NID and
formulated precision and maneuverability costs that allow
for the optimal parameterization of the NID via a PMPC
program. Furthermore, simulation and experimental results
were produced that illustrated the validity of the proposed
costs and the efficacy of the PMPC method.
Online Model Estimation for Predictive Thermal Control of Buildings
Peter Radecki, Member, IEEE, and Brandon Hencey, Member, IEEE
This work was supported by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program. P. Radecki and B. Hencey were with the Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY, 14850 USA. Currently Radecki is a Transmission Development Engineer at General Motors Powertrain in Detroit, Michigan, email: ppr27@cornell.edu. Currently Hencey is with the Air Force Research Laboratory in Dayton, Ohio, email: bhencey@gmail.com. Copyright © 2015 Peter Radecki. All Rights Reserved.

Abstract—This study proposes a general, scalable method to learn control-oriented thermal models of buildings that could enable wide-scale deployment of cost-effective predictive controls. An Unscented Kalman Filter augmented for parameter and disturbance estimation is shown to accurately learn and predict a building's thermal response. Recent studies of heating, ventilating, and air conditioning (HVAC) systems have shown significant energy savings with advanced model predictive control (MPC). A scalable, cost-effective method to readily acquire accurate, robust models of individual buildings' unique thermal envelopes has historically been elusive and has hindered the widespread deployment of prediction-based control systems. Continuous commissioning and lifetime performance of these thermal models require deployment of on-line data-driven system identification and parameter estimation routines. We propose a novel gray-box approach using an Unscented Kalman Filter based on a multi-zone thermal network and validate it with EnergyPlus simulation data. The filter quickly learns parameters of a thermal network during periods of known or constrained loads and then characterizes unknown loads in order to provide accurate 24+ hour energy predictions. This study extends our initial investigation by formalizing parameter and disturbance estimation routines and demonstrating results across a year-long study.

I. INTRODUCTION

A. Overview
Significant energy savings in buildings' heating, ventilating, and air-conditioning (HVAC) systems could be realized with advanced control systems [1], but deployment of these control systems requires a method to readily acquire low-cost models of buildings' unique thermal envelopes [2], [3]. Previous studies have investigated several methods but generally arrived at non-scalable, specialized solutions [4], [5], [6], [7].

Ideally, a Building Automation System (BAS) would automatically modify set-points and load shedding based on weather, occupancy, and utility pricing predictions [8]. Every building has unique and time-varying thermal dynamics, occupancy, and heat loads which must be characterized accurately if a BAS is to apply model predictive controllers (MPC) to realize energy and monetary savings [3]. Additional considerations include: measured building data often contains low information content; engineering models contain the designer's intent instead of the actual construction; and a building's usage evolves over time [9]. Unfortunately, in practice there has yet to be demonstrated a scalable, low-cost method to readily acquire these much-needed accurate models of individual buildings' unique thermal envelopes.

For continuous commissioning and lifetime adaptability, a low-cost, scalable method to acquire control-oriented building models must: learn both the dynamics and the disturbance patterns quickly, provide stable extrapolation, be adaptable to future changes in building structure or use, and use existing available data. White-box, first-principles, forward modeling approaches are often inaccurate, not robust to changes, and take extensive engineering or research effort to build [2], [10]. Black-box approaches take up to 6 months to train and cannot be safely extrapolated [5], [6]. Recently, gray-box methods have begun to show potential as a scalable option for learning control-oriented building models in some limited, specific studies [11], [12], [13], [14].

We propose a multi-mode Unscented Kalman Filter (UKF) as a generalizable, on-line, gray-box, data-driven method to learn a building's multi-zone thermal dynamics and detect unknown, time-varying thermal loads. By coupling known building information and simple physics models with existing measurable building data, we demonstrate how a probabilistic estimation framework can overcome shortcomings of many previously attempted specialized solutions. Our method adapts over time to continually learn both dynamics and disturbances while providing stable prediction performance.

Continuing our work in [11], this paper aims to generalize our findings and method with the following contributions:
• a literature survey on control-oriented thermal modeling for buildings,
• development of a minimal parameterization for dynamics estimation,
• generalized thermal disturbance pattern estimation,
• a multi-mode heuristic for simultaneous parameter and disturbance estimation.
A comparison of UKF and EKF applied estimation
techniques is included for the benefit of practicing engineers.
The robustness of the UKF estimation technique learning
both parameters and disturbances is demonstrated against a
multi-zone high-fidelity EnergyPlus simulation in a yearlong study.
A short explanation of our proposed method follows.
Using a simple first order heat transfer model with multiple
zones, the UKF estimates model parameters of the thermal
dynamics during periods which have small or well-characterized thermal loads. After learning the dynamics
during low disturbance periods, such as nighttime, the UKF
is augmented to track unknown disturbances while
continuing to improve its dynamics model. The UKF is
simpler than an adaptive control technique to implement
because it internally maintains a covariance quality metric
which only adjusts parameter estimates if the incoming data
provides new thermal information. Fig. 1 shows how the
proposed UKF enables rapid deployment of advanced
predictive controllers for BAS.
The paper is organized as follows. First, a background
section initially examines the scope of the problem and
previous approaches before proposing and analyzing an
extensible on-line data-driven approach. After deriving the
thermal model and parameterization, we formulate and
compare performance of the EKF and UKF. True utility of
the UKF is then demonstrated across a year-long study.
Based on data generated from our simple passive 5-zone
thermal model plus a more complex passive 5-zone
EnergyPlus simulated model, less than 2 weeks of training
data is shown to make reliable 24-hour predictions. Based on
testing and performance, a discussion of how the UKF fits
into the building thermal modeling problem and an
identification of areas of future research conclude the paper.
Despite these significant advances, most modern buildings
have realized only minimal energy savings [3]. BAS model-based predictive controllers are rarely implemented and
typically underperform [15]. Larger buildings’ successes in
realizing energy savings has been generally limited to
automated lighting control, changing nighttime temperature
set-points, and load shedding during peak demand [15]. The
data available from BAS is underutilized—generating trend
plots instead of enabling intelligent energy management
decision and control tasks [4]. It is not uncommon for
buildings to simultaneously run cooling and heating
components throughout all 12 months of the year [9]. Load
shedding, reducing peak usage by duty cycling off portions
of a system, is often based on some arbitrary component
order and can significantly impact occupant comfort—if one
portion of the system that is already at capacity sheds, it may
not recover until nighttime [16]. Certain sites, such as Drury
University and UC Merced have a human operator regularly
check weather forecasts and vary temperature set points
based on personal experience and intuition [16], [17]. These
techniques are labor and expertise intensive. Usually,
however, the cost of the human operator doesn’t justify the
energy savings and limits widespread deployment.
Optimal building thermal control is a multi-objective
optimization problem involving user comfort, air quality,
energy cost, smart grid demand response, and thermal
dynamics, which generally cannot be performed optimally
by a human operator without expertise. Realistic widespread
improvement in building controls requires a scalable method
to accurately learn unique thermal models.
C. Building Controls and Modeling Survey
Many researchers have evaluated and implemented
custom one-off BAS demonstrating energy savings with
MPC [18], but widespread implementation has been illusive
because a scalable method to learn thermal models has not
been available. Both forward and inverse modeling efforts
have traditionally taken months of a researcher’s or
engineer’s time to accurately generate. Given that the
majority of buildings which will be in use by 2030 are
currently over 10 years old, the retrofit problem is
significant. A scalable method must handle new
construction, existing structures, and remodeling projects
and continue to function if a building is repurposed.
White-box, or forward models can be generated by
analyzing blueprints, building materials, and expected use
patterns to generate a sophisticated simulation, but are
generally not cost effective for retrofit or existing buildings.
Furthermore, some studies have shown that predicted and
actual energy consumption can differ by up to 40% using
forward models from the design stage [10].
MPC can perform adequately given simple, accurate first
or second order heat transfer models, so there is viability for
inverse models [9], which use building data recorded over
time to infer the dynamics. Today’s networked sensors and
available computation power make data-driven models
feasible. The inverse modeling paradigm can use either pure
numerical methods in a black-box environment or physics,
first-principles based methods in a gray-box environment.
B. Building Automation Systems Background
Buildings use 39% of the total US energy supply, a
significant fraction of which is used to provide people with a
comfortable indoor working and living environment by
operating HVAC systems. It is estimated that 25% to 30% of
building energy usage or around 10% of the total US energy
consumption could potentially be reduced with component
and controls upgrades [1], [2]. Reducing this energy
consumption would also help significantly reduce CO2
generation [9].
BAS and their connected components advanced
tremendously over the past few decades and have started to
be widely deployed in modern buildings and retrofit into
remodeled buildings. Utilizing technologies such as wireless
sensing, LonTalk, and BACnet, BAS provide networked
infrastructure making it easier for sensor information and
control signals to be distributed throughout and between
buildings. Component advancements include sensors that
can detect occupants, CO2 level and light in addition to
traditional temperature and humidity measurements.
Furnaces, water heaters and air conditioners have all started
to approach their maximum theoretical efficiency. The
computational power necessary to run demand-responsive
and predictive control algorithms is cheaply available.
Fig. 1. Comparison of the existing typical thermal modeling process (left) and the proposed method (right). Current methodology (up to 1 year of time per building): analyze building plans/geometry, manually forward model and simulate or perform a custom study of the building using recorded data, then apply advanced controls. Proposed methodology (weeks of time per building): discover/utilize networked sensors in the building, use recursive online gray-box learning algorithms to determine the dynamics of the building, then apply advanced controls. Picture credits: EnergyPlus: DOE, R. L. Smith Building (center): ThermoAnalytics, and floor plan: [3].
gain wide acceptance because zone interactions affect BAS
control systems [18].
In [11] we demonstrated the first published study of a
scalable modeling and online estimation framework for
multi-zone building states, parameters, and unmodeled
dynamics. Since our study in 2012 several other researchers
have validated our initial claims and highlighted new
challenges.
Studies using real and simulated data demonstrated
individual aspects of the proposed scalable modeling and
estimation framework. Massoumy in 2013 [13]
demonstrated the applicability of gray-box estimation with
real data collected from Michigan Tech’s new Lakeshore
Center building–he used an off-line batch parameter
estimation routine with online state estimates, validating the
results of our EKF versus UKF comparison. Fux [12]
demonstrated the relevance of 1R1C models to individual
rooms or entire zones by generating accurate predictions
from a simple single-zone model of an entire building using
real data in an EKF. The study by Fux [12] also validated the
concept of multi-mode learning and the importance of
characterizing disturbances. Martincevic [14] used simulated
data from IDA-ICE in a year-long study to demonstrate a
50-zone model that learned parameters without disturbances
from a constrained UKF. The model had a prediction error
of less than 1°C RMS error if no unmodeled disturbances
were present.
Other researchers demonstrated the extensibility of the KF
as an online estimation framework and presented important
insight. Studies showed ground coupling was not necessary
in RC models for certain scenarios [8], extended an EKF for
fault diagnosis and monitoring with real-data from a
supermarket [20], used an Ensemble KF to constrain
parameters to physically realistic values [21], and showed
the applicability of RC models as a design tool using offline
parameter estimation [22]. Lin [23] showed the need for
meaningful inter-zone excitation in order to guarantee
satisfactory information content is present in measured data.
Lin’s result corresponded with our follow-up study using an
UKF learned model for MPC [24]. Lin further noted that one
simple plot demonstrating prediction accuracy versus
Many researchers from present day to those who participated
in the 1990’s ASHRAE Energy Predictor Shootouts have
taken data-driven approaches, but their generated methods
and models have typically been too specialized to scale to
other buildings [2], [4], [5], [6]. A brief comparison of
previously tried data-driven methods is provided before
proposing our solution.
Black-box models such as Artificial Neural Networks
(ANN) are often impractical for modeling thermal dynamics
in building systems because of the large amount of training
data (6 months to a year) that must be analyzed before
getting an accurate model [6]. Because ANN create an
arbitrary representation of the system, they are sensitive to
the quality of the data collected, may meld together effects
from loads and dynamics, are not robust to component
failures, and are not adaptable when building use and
configuration patterns change [2], [18]. Pure numerical
methods may have better applications where simple heuristic
and physical models are impractical such as pattern
recognition of occupancy, lighting, or thermal component
loads.
Gray-box methods, which use pre-existing knowledge of
the dynamical structure, have been used off-line with genetic
algorithms (GA) [5] and recursively on-line (in real-time).
Off-line methods may be good for initial model acquisition
but on-line methods are desirable—buildings age and change
physically over time due to deterioration or reconfiguration
and change temporally as occupant usage evolves. On-line
methods can also be dual-purposed as process monitoring,
analysis, and fault detection devices.
Examination of traditional online gray-box techniques in
buildings with adaptive control [2] and Extended Kalman
Filters (EKF) [19] shows the need for further development.
Historically, adaptive control techniques have fallen short
because they require autonomous tuning of a complex
forgetting factor, which requires actively monitoring the
excitation level of incoming data as an information criterion
used to enable or disable thermal model learning [2]. Many
on-line methods [2], [4], and [5] have used single zone
models for demonstrating utility, but in reality multi-zone
simulations are necessary before any modeling method will
measured data is meaningless in that it doesn’t say anything
about the model’s robustness for control.
In summary, recent studies have shown RC models are
applicable for thermal modeling of buildings and Kalman
filtering can learn parameters for models of buildings using
both simulated and real data. Our paper builds upon recent
advancements with an explanation of unique
parameterization and formalizes the multi-mode estimation
technique with a generalized algorithm for learning any
disturbance pattern. We conclude with a year-long study of
simultaneous state, parameter, and disturbance estimation
showing robust, meaningful prediction accuracy.
Thermal radiation does play a significant role in the
heating and cooling of many buildings. For the purposes of
evaluating UKF parameter estimation and thermal load
detection for a passive building, the radiation between
surfaces and zones is linearly approximated and lumped with
convection and conduction [25]. The proposed framework
could readily be augmented for nonlinear radiation effects at
the cost of increasing the number of associated parameters to
learn.
Thermal disturbances significantly affect most buildings
but are often overly complex to model requiring information
about building geometry and neighboring foliage [26]. Solar
gain is treated as an unmodeled external disturbance. This
simplification removes complexities of modeling diffuse and
direct sunlight, shading, and night sky radiation temperature,
and allows for simple disturbance generation in EnergyPlus
by turning on or off environmental radiation transfer. The
solar gain provides us with a specific periodic disturbance to
estimate with patterns. In practice this technique could
estimate any number of disturbances if one has some
information about the disturbance frequency, intensity, or
timing such as dusk and dawn times or building occupancy
times. Common examples amenable to disturbance pattern
estimation include occupant body-heat, equipment,
computers, electrical loads, lighting, and HVAC.
Using the 2-node example, a state space representation Ṫ = A T + q̄ (2) can be derived, where T is a vector of temperatures, A is a matrix of RC values, and q̄ is a vector of additive, independent, time-varying disturbances such as solar
radiation.
II. PARAMETER ESTIMATION FORMULATION
A. Thermal Model
A standard thermal network captures the dominant
convection and conduction heat transfer modes and mass
transfer occurring between zones inside and outside the
building, while solar gain is treated as an unmodeled
disturbance. Internal zone radiation is linearly approximated
and lumped with convection and conduction [25]. The
thermal network matches that commonly used in the
community and provides a simple mechanism to explore
simultaneous
model
and
disturbance
estimation.
Subsequently developed filters use the thermal model but are
not restricted to it. One could select a non-linear model
incorporating HVAC dynamics and use it in the proposed
estimation framework.
Convection, conduction, and mass transfer heat flux q_i (watts) into zone i is contributed from the temperature differential to the connected adjacent node(s) divided by the thermal resistance R_ij (degree/watt), plus an additive term q̇_i (watts) representing disturbances. (Note: unless otherwise mentioned, subscripts denote zones.)

    q_i = Σ_j (T_j − T_i)/R_ij + q̇_i

The heat flux q_i and thermal capacity C_i (joule/degree) affect the time-based temperature rate of change Ṫ_i. Substituting for q_i, the temperature rate of change of node i due to its connection(s) with node(s) j and disturbance becomes

    Ṫ_i = Σ_j (T_j − T_i)/(R_ij C_i) + q̇_i/C_i.                                        (1)

The derived representation for temperature change due to heat transfer is mathematically analogous to voltage change due to current flow in a resistor-capacitor network. For visualization, a simple 2-node example with two capacitances and one resistance is shown in Fig. 2.

Based on [27], an n-node thermal network can be formalized by defining a simple undirected weighted graph with: nodes V := {1, 2, …, n} that are assigned capacitances C_i and temperatures T_i, and edges E ⊂ V × V that connect adjacent nodes and are assigned resistances R_ij for all (i, j) ∈ E. For a general thermal network with n nodes, the A matrix is

    A_ij = 1/(R_ij C_i)                       if (i, j) ∈ E,
    A_ii = −Σ_{j : (i,j) ∈ E} 1/(R_ij C_i),
    A_ij = 0                                  if (i, j) ∉ E, i ≠ j.                    (3)
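As a concrete illustration of (3), the sketch below assembles A from dictionaries of node capacitances and edge resistances; the data structures and the example values are illustrative assumptions, not the implementation used in the study.

```python
# Sketch of assembling the A matrix of Eq. (3) for an RC thermal network
# described by capacitances C_i and edge resistances R_ij.
import numpy as np

def thermal_A(C, R_edges):
    """C: dict node -> capacitance (J/deg); R_edges: dict (i, j) -> resistance (deg/W)."""
    nodes = sorted(C)
    idx = {n: k for k, n in enumerate(nodes)}
    A = np.zeros((len(nodes), len(nodes)))
    for (i, j), R in R_edges.items():
        A[idx[i], idx[j]] += 1.0 / (R * C[i])   # off-diagonal term 1/(R_ij C_i)
        A[idx[j], idx[i]] += 1.0 / (R * C[j])   # same edge seen from node j
        A[idx[i], idx[i]] -= 1.0 / (R * C[i])   # diagonal is minus the row sum
        A[idx[j], idx[j]] -= 1.0 / (R * C[j])
    return A

# Two-node example of Fig. 2: one resistance, two capacitances (hypothetical values).
print(thermal_A({1: 2.0e6, 2: 1.0e6}, {(1, 2): 0.01}))
```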
B. Parameterization
A minimal set of independent parameters must be
specified for filters to enforce the system dynamics during
parameter estimation [28]. Over-parameterization causes
unidentifiable parameter manifolds or extra degrees of
Fig. 2. Two node example thermal network.
freedom and can result in violation of dynamics constraints
and physics laws such as conservation of energy. In machine
learning and system identification, indeterminate degrees of
freedom can cause overfitting where the model learns the
noise instead of the dynamics of interest. In estimation
theory, parameter observability requires that the Fisher
information matrix is invertible—redundant parameters or
over parameterization breaks this observability criterion
resulting in an unobservable subspace [29].
Efficient and reliable parameter estimation requires
estimating a minimal number of parameters [28]. From
Equation 2 there are only two unique parameters required
to describe the A matrix despite it containing three
variables—two resistances and one capacitance. The extra
parameter acts as a scaling factor and can be quantified only
if the heat flux q is provided in addition to the temperature
histories. Without the scaling factor only a time-constant can
be inferred. Suppose the time-constant for our system was 1.
Then
1 and we get the plot in Fig. 3 of possible
values for
and , many of which are violations of
physics first principles such as a negative thermal
capacitance and resistance. Unfortunately, removing the
negative-valued parameter space does not resolve the
ambiguity in selecting the true resistance and capacitance
values for the provided time constant. This ambiguity
generally makes the estimation problem numerically
unstable, theoretically unobservable, or practically
unreliable. Rectifying the ambiguity could be done with
actual heat flux information which is generally unavailable
in practice, so for this study, selecting a minimal set of
parameters mitigates the problem.
Now we present methods based on a careful graph study
to obtain a minimal parameter set for thermal network
estimation. Because diagonal terms in A are linear
combinations of the off-diagonal terms, parameter
estimation is only performed for off-diagonals. RC products
are estimated together in order to reduce the non-linearity of
the estimation problem. Parameterization of trees, graphs
with no cycles of which Fig. 2 is an example, with combined
RC products automatically guarantees a minimal
representation of the system.
Unfortunately this minimal guarantee does not extend to
graphs containing closed cycles. Fig. 4 is an example of a
graph containing a cycle whose state space A matrix is
shown:

    A = [ −(1/(R12C1) + 1/(R13C1))       1/(R12C1)                       1/(R13C1)
            1/(R12C2)                   −(1/(R12C2) + 1/(R23C2))         1/(R23C2)
            1/(R13C3)                     1/(R23C3)                     −(1/(R13C3) + 1/(R23C3)) ]       (4)

We arbitrarily selected R13C1 to show that one of the six RC products is redundant and can be eliminated by multiplying and dividing the other RijCi parameters by each other around the cycle:

    R13C1 = (R12C1)(R23C2)(R13C3) / ((R12C2)(R23C3)).
In a graph, each cycle which uses at least one unique edge
and passes through no nodes with infinite capacitance may
be used to eliminate one redundant RC product from the
estimation problem by multiplying and dividing around the
loop. For any thermal network the total number of unique
parameters should be one less than the sum of the number of
resistances and capacitances.
In general unique edges should be selected for
elimination. Eliminating a shared edge between two cycles
joins the two cycles mathematically through multiplication
in the estimation routine which can negatively impact
numerical stability. Selecting multiple redundant parameters
to prune from a graph estimation problem should be done
such that each redundant RC parameter lies on a globally
unique edge for its respective cycle, and the shortest
available cycle should be chosen for calculation in order to
guarantee minimal parameter cross-sensitivity.
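The cycle-elimination rule can be verified numerically. The sketch below checks, for hypothetical R and C values on the loop of Fig. 4, that R13C1 is recovered exactly from the product-and-quotient of the other five RC products around the cycle.

```python
# Numerical check of the cycle-elimination identity for the loop of Fig. 4.
import math

R12, R13, R23 = 0.02, 0.05, 0.03    # hypothetical resistances (deg/W)
C1, C2, C3 = 1.0e6, 2.0e6, 5.0e5    # hypothetical capacitances (J/deg)

lhs = R13 * C1
rhs = (R12 * C1) * (R23 * C2) * (R13 * C3) / ((R12 * C2) * (R23 * C3))
print(lhs, rhs, math.isclose(lhs, rhs))   # identical up to round-off
```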
Two nodes that have no shared conduction or convection
are considered independent, and any edge directly
connecting them is pruned from the graph to give the
simplest representation. Independent ambient nodes such as
external temperatures have infinite capacitance in the
thermal network. External nodes may have unique update
functions depending on the simulation and weather desired
for the modeling exercise. As an example, look back at Fig. 2: if T2 were an external temperature, setting C2 = ∞ would give the following state space representation:

    [Ṫ1]   [ −1/(R12 C1)   1/(R12 C1) ] [T1]   [ q̇1/C1 ]
    [Ṫ2] = [      0             0     ] [T2] + [   0   ]                               (5)

Fig. 3. Non-minimal parameterization gives rise to estimation ambiguity. The curve satisfying R_ij · C_i = 1 demonstrates the unobservable subspace, which includes physically meaningless negative quantities and cannot be uniquely identified.

Fig. 4. Three-node graph with one loop (nodes C1, C2, C3 connected by R12, R13, R23).
C. Extended Kalman Filter (EKF)

For all tests the state space system is integrated at one minute interval time-steps with Euler integration to allow discrete-time filter implementation. Parameter estimation with the Kalman Filter is achieved by augmenting the temperature states T = [T1, …, Tn] with the unique parameters p̄ (the 1/RC products) and the disturbances q̄ = [q̇1/C1, …, q̇n/Cn] together in the state representation x = [T, p̄, q̄]. For the purposes of estimation, the full discrete-time stochastic system is

    x_{k+1} = f(x_k, u_k) + [w_T,k; w_p,k; w_q,k],        ȳ_k = h(x_k) + v̄_k,          (6)

where w_T represents process noise on the temperature states, w_p represents estimation uncertainty in the RC parameters, w_q represents process noise for the disturbances, and v̄ represents measurement noise. All noise terms are assumed zero mean, Gaussian, white, and stationary. Note that in (6), the matrix A is actually a vector of functions composed from p̄ as defined by the parameterization of the RC terms. This representation results in multiplication and division of estimated parameters through the dynamics function. Specifically, temperature is being multiplied by RC parameters, necessitating non-linear estimation techniques.

For baseline comparisons, an EKF and UKF are formulated. In order to define notation, the prediction and update steps of the discrete EKF are shown in (7), but for proper treatment of the derivation and background of the Kalman Filter, please consult [29], [30], [31]. (Note: For brevity in the following Kalman Filter formulations, notation deviates from the modeling section: subscripts denote time rather than node indices.)

Predict:
    x̂_{k|k−1} = f(x̂_{k−1|k−1}, u_{k−1})                       (state estimate)
    P_{k|k−1} = F_k P_{k−1|k−1} F_kᵀ + Q_k                     (state covariance)
Update:
    ỹ_k = ȳ_k − h(x̂_{k|k−1})                                   (innovation)
    S_k = H_k P_{k|k−1} H_kᵀ + R_k                              (innovation covariance)
    K_k = P_{k|k−1} H_kᵀ S_k⁻¹                                  (optimal Kalman gain)
    x̂_{k|k} = x̂_{k|k−1} + K_k ỹ_k                              (state estimate)
    P_{k|k} = (I − K_k H_k) P_{k|k−1}                            (state covariance)     (7)

Jacobians:
    F_k = ∂f/∂x |_{x̂_{k−1|k−1}, u_{k−1}},        H_k = ∂h/∂x |_{x̂_{k|k−1}}.            (8)

For the filter dynamics function f, the temperature state dynamics follow the previously derived thermal model while the RC and disturbance parameters are modeled as constants. Artificial process noise for the constant parameters, denoted w_p and w_q, allows the filter to change its estimate of these values through time and allows the filter to track the true time-varying disturbance. The set of process noise terms w_T, w_p, and w_q are stacked as defined by the state and drawn from distribution Q. The measurement noise terms v̄ are drawn from distribution R.

The measurement function of the thermal network is linear, so H is simply an n-row identity matrix (provided all temperatures are measured) padded with columns of zeros for the parameters. The update portion of the filter therefore can be written as a simple linear Kalman Filter. However, in the prediction step, calculation of the Jacobian matrix F depends on whether one chooses to estimate RC parameters (Case 1) or 1/RC parameters (Case 2). A comparison of both cases demonstrates that RC parameters would be a poor choice, especially during the initial acquisition stage, because poor parameter estimates will be squared and could easily cause the filter to diverge and blow up. Thus, for numerical stability the EKF must follow Case 2 and estimate 1/RC parameters.

Measurement noise is specified based on the accuracy of the temperature sensors. Process noise is specified for the temperature states based on the level of zone aggregation used, while the RC and disturbance process noise is set to an artificial value greater than zero in order to allow the filter to vary its estimate of these parameters through time. Increasing the process noise level for any parameter indicates that the model isn't confident of its ability to describe the process evolution of that parameter. Disturbances, which by their nature the model is not explicitly capturing, are biased and vary with time. In order to estimate the disturbances over time, their noise level is set to be non-zero. Because the RC values should be fairly constant while the disturbance bias may change throughout the course of a day, the noise level for RC parameters should be much smaller than for disturbances.

D. Unscented Kalman Filter (UKF)

Unlike the EKF, which uses a Jacobian first-order linearization evaluated at the current estimate to propagate a probability distribution through a non-linear transform, the UKF uses the Unscented Transform to pass a distribution through a nonlinear transform. Specifically, the UKF samples (2n+1) points in the distribution, evaluates each point through the non-linear transform and then recombines these points to generate a transformed mean and covariance which is oftentimes more accurate and stable than that obtained from the single point EKF linearizations [32]. The samples, called sigma points, are evenly spaced to capture at least the first and second order moments of the distribution and are weighted such that the covariance and mean of the samples matches that of the original distribution. After being mapped through the non-linear transform the resulting points are multiplied by their assigned weights to determine the transformed mean and covariance. For linear systems both
the EKF and UKF perform identically to a traditional
Kalman Filter but for certain non-linear systems a UKF can
provide higher accuracy with the same order of calculation
complexity.
The UKF was implemented with the same augmented state vector and used the same measurement and process noise values as the EKF. Removing the requirement to design and calculate a Jacobian, the UKF is amenable to either RC or 1/RC parameter representations; both are evaluated in the results section. The standard values of α = 10⁻³, κ = 0, β = 2, typical for a Gaussian distribution, were used to generate the following samples χ and weights W:

    λ = α²(n + κ) − n,
    χ₀ = x̂_{k−1|k−1},
    χᵢ = x̂_{k−1|k−1} + (√((n + λ) P_{k−1|k−1}))ᵢ,          for i = 1, …, n,
    χᵢ = x̂_{k−1|k−1} − (√((n + λ) P_{k−1|k−1}))ᵢ₋ₙ,        for i = n + 1, …, 2n,
    W₀ᵐ = λ/(n + λ),        W₀ᶜ = λ/(n + λ) + (1 − α² + β),
    Wᵢᵐ = Wᵢᶜ = 1/(2(n + λ)),                                for i = 1, …, 2n.

The samples were then recombined to give the a priori state and covariance estimates:

    χ_{i,k|k−1} = f(χᵢ, u_{k−1}),
    x̂_{k|k−1} = Σᵢ Wᵢᵐ χ_{i,k|k−1},
    P_{k|k−1} = Σᵢ Wᵢᶜ (χ_{i,k|k−1} − x̂_{k|k−1})(χ_{i,k|k−1} − x̂_{k|k−1})ᵀ + Q_k.

Note that λ and W can be reused so only the χ terms need to be recalculated each iteration. Because the measurement function is linear, the unscented transform is only used in the prediction step; the measurement step is simply calculated as a linear Kalman filter update.

TABLE I
MONTE CARLO SAMPLED PARAMETER DISTRIBUTIONS
For each run, the table lists the distribution from which every quantity was sampled: the truth values and initial-condition estimates of the two parameters (e.g., U(0, 1) and U(0, 20)), the true and initial temperature values, the true and estimated measurement errors (U(−.5, .5)), and the true, estimated, and initial estimate variances (U = Uniform Distribution, N = Normal Distribution).

E. EKF vs. UKF

In order to compare an EKF and UKF, the 2-node thermal network from Fig. 2 was chosen. Looking back to (5), the system can be parameterized as A_p, with p1 replacing 1/(R12 C1):

    A_p = [ −p1   p1
              0    0 ].                                                                (9)

In order to compare both the parameter estimation and bias detection capabilities, only T1 was sensed; Text was treated as a constant latent variable. A second parameter p2 was used to estimate Text as a linear additive disturbance as shown.

A 10,000 run Monte Carlo simulation was conducted to compare the EKF and UKF performance for the simple two-parameter search problem. Table 1 shows the distributions for all parameter and filter values, which were randomly sampled at the beginning of each test run. The distributions of parameter and temperature values were chosen such that some runs will have high levels of excitation while others will have no excitation, effectively T(0) equal to Text. Results of the simulation, shown in Table 2, demonstrated that the UKF outperformed the EKF for non-linear estimation of parameter p1 but had statistically similar performance for estimation of linear parameter p2. The UKF showed filter stability (resilience to exponential tracking divergence) while 2% of the EKF runs went unstable and did not complete execution. These results agree with published studies comparing the filters' general performance in other application areas [33], [34]. The EKF could estimate linear additive disturbances, which is in agreement with [19], but when estimating coefficient parameters such as weights in a thermal network, the UKF is a more viable solution.

TABLE 2
10,000 RUN MONTE CARLO RESULTS
Test | Statistic                    | Result
EKF  | Numerical Instability Count  | 175
EKF  | p1 Average Estimation Error  | 3.16
EKF  | p2 Average Estimation Error  | 60.5
UKF  | Numerical Instability Count  | 0
UKF  | p1 Average Estimation Error  | 0.18
UKF  | p2 Average Estimation Error  | 58.0

Further testing showed that increasing the number of temperature zones or trying to directly estimate RC instead of its reciprocal, 1/RC, further degraded the EKF stability and performance. However, for the UKF, comparison of estimating RC products and their reciprocals showed that direct RC parameters are more robust. For zones with little thermal connection, the filter estimating RC parameters will continue increasing estimates but will be finitely decreasing the covariance, so the estimate will eventually converge. When 1/RC parameters are utilized for the same zones, the estimate will be driven to zero, but the covariance will not
decrease as quickly, which can result in a zero crossing that
violates conservation of energy by estimating a negative
parameter. As a result, RC parameters were estimated by the
UKF for all remaining tests.
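The sketch below illustrates the sigma-point generation and recombination used in the UKF prediction step described in Sec. II-D; the scaling constants match the values quoted above, while the toy dynamics function, state, and covariances are illustrative assumptions.

```python
# Minimal sketch of the unscented transform for the prediction step,
# using the standard alpha = 1e-3, kappa = 0, beta = 2 scaling.
import numpy as np

def unscented_predict(f, x, P, Q, alpha=1e-3, kappa=0.0, beta=2.0):
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)                    # matrix square root
    chi = np.column_stack([x] + [x + S[:, i] for i in range(n)]
                              + [x - S[:, i] for i in range(n)])
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    Y = np.column_stack([f(chi[:, i]) for i in range(2 * n + 1)])
    x_pred = Y @ Wm                                          # weighted mean
    d = Y - x_pred[:, None]
    P_pred = d @ np.diag(Wc) @ d.T + Q                       # weighted covariance + Q
    return x_pred, P_pred

# Toy usage: propagate a 2-state estimate through a mildly nonlinear map.
f = lambda x: np.array([x[0] + 0.1 * x[1], 0.9 * x[1] * x[0]])
print(unscented_predict(f, np.array([1.0, 2.0]), np.eye(2), 0.01 * np.eye(2)))
```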
F. Disturbance Estimation and Pattern Recognition
Direct sensing of disturbance heat flux is rarely practical in
building systems. However, timing information for
disturbances is typically available, so a practical method is
presented to learn disturbances given only timing
information with no prior disturbance quantification. This
method could readily be augmented if additional disturbance
heat flux data was available.
Looking at thermal model 1 , a change in zone
temperature may be explained away using either connected
zones with their respective temperatures or additive
disturbances. In order to get satisfactory estimation
performance this ambiguity must be considered in the filter
design lest it manifest problems akin to non-minimal
parameterization. The engineering solution selected to
rectify the problem, splits the estimation problem based on
the presence of disturbances and manipulates the process
covariance for estimated disturbance parameters based on
timing of expected unquantified disturbances. From a
control theory perspective the system does not have timeinvariant observability. However, buildings are a timevarying system that have some periodicity. Looking over a
horizon, for example one day for solar disturbances, we can
learn constant parameters when no disturbances are present
and learn disturbances after having estimated the constant
parameters. This partitioning enables time-varying
observability.
The presented method is not claimed to be optimal, rather
it is a practical solution based on engineering judgment of
typical scenarios common in buildings. Typically, a system
can sense if people are using a building but cannot measure
their heat flux or the equipment they use, likewise it can
sense if the HVAC is on but not the exact heat flow
delivered to a specific room. The approach attempts to use
commonly available timing knowledge to quantify and infer
disturbances that are not directly measurable and only
partially predictable.
Learning of disturbances was done in a Markov fashion:
the estimator assumed no knowledge of previous historical
disturbance patterns and estimated a new disturbance value
at each time step based on the previous time step’s
estimate, the dynamic model, and the current measurement.
The disturbance states in the UKF were modeled as
constants which have zero-mean Gaussian additive noise. A
characteristic change in the disturbance such as a heater
turning on or the sun coming up at dawn violates the zeromean assumption causing a bias in the disturbance. In order
to track these sudden bias changes using a simple UKF, the
variance(s) correlating to those specific zone(s) disturbances
were inflated to allow the filter to acquire and track the new
value. This artificial tuning of the covariance is similar to
tuning a forgetting factor in adaptive control frameworks.
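A minimal sketch of this covariance-inflation heuristic is given below; the nominal and inflated variances, event times, and window width are hypothetical tuning values, not the ones used in the study.

```python
# Sketch of the process-noise scheduling described above: the disturbance
# entries of Q are inflated around expected on/off times (here, dawn and dusk)
# and kept small otherwise, so the filter can acquire sudden bias changes.
import numpy as np

def disturbance_process_variance(minute_of_day, q_nominal=1e-6, q_inflated=1e-4,
                                 events=(6 * 60, 18 * 60), window=30):
    """Return the per-zone disturbance process variance at a given minute of day."""
    near_event = any(abs(minute_of_day - t) <= window for t in events)
    return q_inflated if near_event else q_nominal

# Build the disturbance block of Q for a 5-zone filter at 6:15 am.
q = disturbance_process_variance(6 * 60 + 15)
Q_dist = q * np.eye(5)
print(q, Q_dist[0, 0])
```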
In the Matlab-based simulation, heating or cooling was
arbitrarily added to individual zones from 10am to noon, so the covariance was increased around those times. In the EnergyPlus simulation, the primary unmodeled disturbance was solar radiation, so the covariance was increased around dusk and dawn. This variance tuning is visually depicted in Fig. 5.

Fig. 5. Disturbance Parameter Process Variance Tuning (R.C. Disturbance and Solar Gain panels; disturbance estimates and inflated process variances versus hour of day).
Because the UKF can explain temperature swings by either
tuning RC or disturbance values, a multi-mode approach was
chosen as a uniform method to split the estimation problem.
In order to acquire good estimates, the UKF was operated in
two modes: A) Acquisition Mode: initially only RC products
were estimated by running the filter at night when solar
gains were at a minimum, and B) Monitoring Mode: both
RC products and disturbances were estimated
simultaneously after RC product estimates had started to
converge to constant values. This splitting proved critical to
obtaining good estimates from EnergyPlus data but
unnecessary for the simple Matlab based simulation due to
its consistently high level of external temperature excitation.
Disturbance estimates from the entire multi-day learning
period were heuristically combined in order to generate a 24-hour pattern. This disturbance pattern was then used when
predicting the building’s thermal response.
The heuristic pattern recognition algorithm was a simple
weighted average. For thermal network simulated data the
true disturbances were identical every day. Thus the bias
values learned at each time step were equally averaged to
generate one 24-hour pattern for each bias term.
Mathematically this can be written as a summation over D days of learning with minute time steps to generate a disturbance pattern P_i correlating to zone i based on the UKF estimated bias q̂_i, as shown, where the value inside brackets notates the minute-based time step index:

    P_i[k] = (1/D) Σ_{d=1}^{D} q̂_i[k + (d − 1) · 24 · 60],    ∀ k ∈ [1, 24 · 60].      (10)
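The equal-weight averaging in (10) reduces to a reshape-and-mean operation over the stored bias history; the sketch below assumes minute-resolution bias estimates held in a single array and uses synthetic data for illustration.

```python
# Sketch of the equal-weight daily averaging of Eq. (10): minute-resolution
# bias estimates from D days are folded into a single 24-hour pattern.
import numpy as np

def daily_pattern(bias_history):
    """bias_history: 1-D array of shape (D * 1440,) of UKF bias estimates."""
    minutes_per_day = 24 * 60
    days = bias_history.size // minutes_per_day
    return (bias_history[:days * minutes_per_day]
            .reshape(days, minutes_per_day)
            .mean(axis=0))

# Toy usage: four days of a repeating midday bump plus noise.
k = np.arange(4 * 1440)
q_hat = np.maximum(0.0, np.sin((k % 1440) * np.pi / 1440)) + 0.05 * np.random.randn(k.size)
print(daily_pattern(q_hat).shape)   # (1440,)
```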
The predominant unmodeled disturbance from EnergyPlus
data was radiation from the sun, which is dependent on the
cloud cover, time of day, season of year and other factors.
For the purpose of engineering a robust simple solution we
made a realistic assumption that we had a measurement of
the average solar intensity for morning and afternoon and
used this for both pattern recognition and simulation
predictions. The solar intensity reading was calculated as the
summation of the Environment Direct Solar and
Environment Diffuse Solar variables from EnergyPlus.
Algorithm 1 contains the weighted average and prediction
steps used for EnergyPlus simulations.
ALGORITHM 1
♦ INITIALIZATION
- Max solar radiation: S_max
- Indices: zone i, day d, time of day k
- Dawn, midday, and dusk times: k_dawn, k_mid, k_dusk
- Morning & afternoon avg solar radiation: S_am[d], S_pm[d]
- Estimated bias per zone i: q̂_i[d, k]
- Measured solar radiation intensity: S[d, k]
♦ WEIGHTED DISTURBANCE PATTERN
- Disturbance Pattern: P_i[k] ← 0, valid days D_valid ← 0
for d = 1 to D
    if (S_am[d] + S_pm[d])/2 ≥ 0.35 · S_max then
        D_valid ← D_valid + 1
        for k = 1 to 24·60
            if k < k_dawn then P_i[k] ← P_i[k]
            elseif k < k_mid then P_i[k] ← P_i[k] + q̂_i[d, k] · S_max/S_am[d]
            elseif k < k_dusk then P_i[k] ← P_i[k] + q̂_i[d, k] · S_max/S_pm[d]
            else P_i[k] ← P_i[k]
            end
        end
    end
end
P_i[k] ← P_i[k] / D_valid
♦ PREDICTED DISTURBANCE
- Bias for prediction: q̃_i[d, k]
for d = 1 to …
    for k = 1 to 24·60
        if k < k_dawn then q̃_i[d, k] ← 0
        elseif k < k_mid then q̃_i[d, k] ← P_i[k] · S_am[d]/S_max
        elseif k < k_dusk then q̃_i[d, k] ← P_i[k] · S_pm[d]/S_max
        else q̃_i[d, k] ← 0
        end
    end
end
III. SIMULATION RESULTS
A. UKF 5-room Simulated Performance
A six-node thermal network, corresponding to five internal
zones and one external temperature shown in Fig. 6, was
used to evaluate the UKF parameter estimation and thermal
disturbance detection. For this first evaluation, measurement
data was generated from a model whose dynamics were
structurally identical to the dynamics used in the UKF. The
5-room models shown here are based on that in [11], but
feature extended explanations, derivations, and simulation
results.
Given five finite capacitances and thirteen resistances,
there were a total of seventeen unique RC products for the
UKF to estimate in this model. A first test was run in
acquisition mode, so disturbance estimation was disabled.
The external temperature forcing function was composed
from the sum of a 40 degree peak-to-peak sinusoid with
period of one day, a 10 degree peak-to-peak sinusoid with
period of 4 hours, and random noise which would allow the
temperature to drift from day to day. Resistance and
capacitance values were chosen such that thermal lag in the
simulation would be of a similar order to the thermal lag in a small
to medium sized building. Temperature states were
initialized with less than a degree of error while RC
estimates were all arbitrarily initialized to 1000 with a
standard deviation of 500. Using four days of recorded data,
RC parameters were learned by the UKF. At the end of the
four day simulation, a 48 hour prediction of the five zones’
temperatures was made using the acquired RC parameters.
The true 48 hour external temperature profile was provided
to both the true dynamics and UKF dynamics in order to
establish a fair comparison baseline for evaluation of the
UKF. In a real system inaccuracies in the weather forecast
would degrade the prediction quality.
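For context, the sketch below shows the kind of lumped thermal-network state transition such a filter propagates, with the inverse RC products carried in the state as random-walk parameters so they can be estimated jointly with the temperatures. The directed-edge parameterization, state layout, and numbers are our assumptions, not the authors' exact 5-room model.

import numpy as np

def thermal_step(state, t_ext, dt, edges, n_zones):
    """state = [T_1..T_n, k_1..k_m]; k_e = 1/(R_e * C_i) for the directed edge
    e = (i, j): heat flowing into zone i from node j (j == -1 is the exterior)."""
    T = state[:n_zones]
    k = state[n_zones:]
    dT = np.zeros(n_zones)
    for e, (i, j) in enumerate(edges):
        Tj = t_ext if j < 0 else T[j]
        dT[i] += k[e] * (Tj - T[i])                # heat exchange along edge e
    return np.concatenate([T + dt * dT, k])        # RC products: random walk

# Toy 2-zone network: zones coupled to each other and to the exterior.
edges = [(0, 1), (1, 0), (0, -1), (1, -1)]
x = np.array([70.0, 68.0, 1e-3, 1e-3, 5e-4, 5e-4])
x = thermal_step(x, t_ext=40.0, dt=60.0, edges=edges, n_zones=2)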
Fig. 6. Left: Five‐room building used
for simulations, Right: Top view of node labeled internal thermal
network representation. White lines represent building internal walls.
A sixth unlabeled node acts as an external temperature
Fig. 7. Example estimation of two parameters from 5-room building using RC-thermal model data.
Fig. 8. Example 48-hour prediction window in a disturbance free environment based on RC estimates. External temperature forcing function was composed from two sinusoids plus white noise. (Deg F)
Fig. 9. The actual disturbances introduced into the system for the two zones shown in Fig. 10 are labeled above as "Truth" and were repeated on a 24 hour cycle. Dashed lines represent the UKF 4-day average estimate of the disturbances which was used for predictions in Fig. 10. (Deg F)
Running the described simulation showed that some
parameters are estimated very well while others are not; two
characteristic examples of this are shown in Fig. 7. The
external temperature does not fully excite all of the node-to-node thermal connections, which limits the filter's ability to
precisely determine all of the parameters. Despite this
numerical estimation error, the 48-hour prediction
demonstrated excellent matching between the UKF
estimated model and the true model. From a poorly
performing filter one would expect predictions to have
increasing divergence or lag as the prediction window is
increased. Characteristic temperature predictions of the
highest and lowest capacitance rooms are shown in Fig. 8 for
visualization. In our chosen model the poorly estimated
parameters correlate with higher-order dynamics that need not be accurately estimated for good model fitting. For
practical applications this is analogous to having multiple
zones that are always excited together such that their relative
interaction need not be known for useful predictions.
For a second evaluation daily repeating disturbances were
introduced uniquely into each zone of the house. The
disturbance states in the UKF were modeled as constants
which have zero-mean Gaussian additive noise; in order to
track a sudden bias change using a simple UKF, the variance
correlating to that specific bias must be inflated to allow the
filter to acquire and track the new value. This inflation must
occur anytime the disturbance changes significantly enough
that the zero mean Gaussian assumption is severely violated,
such as when an HVAC system turns on or at dusk and dawn
due to solar radiation. For the combined RC and disturbance
estimation, 4 days of data were again used for training.
Disturbances turned on at 10:00 and off at 12:00 each day,
so the estimate covariance P was boosted at those times and
the process variance was also temporarily increased when
the disturbance was cycled on and off as previously shown
in Fig. 5. With this variance tuning method, the filter had
excellent disturbance tracking and similar RC estimation
accuracy. Example tracking of two disturbances is shown in
Fig. 9. Using the described simple heuristic pattern
recognition with a 24-hour period shown in Equation (10),
the estimated biases from all four days were averaged to
generate a single day estimate of the repeating disturbance
pattern. A 48-hour prediction was then made using the
average disturbance and final RC estimates along with the
exact external temperature profile. Again, excellent predictions were made with the estimated model.
Fig. 10. Example 48-hour prediction derived from RC estimates and estimated daily cyclic disturbances. Gray boxes denote times where external disturbances were present in the true system. An external temperature forcing function was composed from two sinusoids plus low amplitude white noise. (Deg F)
Fig. 10
plots temperatures of two zones comparing predictions from
the RC only estimation model and the RC plus disturbance
estimation model to the truth model. This evaluation
provides good indication of the applicability of the UKF to
thermal network parameter and disturbance estimation.
B. UKF EnergyPlus Performance
Given the excellent performance of the UKF on data
generated by the thermal network model, the UKF was
tested on data generated from an EnergyPlus simulation.
Individual data traces show typical estimation performance
which is validated in an aggregated year-long demonstration.
This more realistic EnergyPlus model, shown in Fig. 6, had
five rooms correlating to the five zones; realistic data for the
floor, wall, and ceiling composition; and windows on the
four exterior walls, which pointed in the cardinal compass
directions. The structure was simulated with weather and
solar radiation for Elmira, NY. In order to acquire good
estimates, the UKF was operated in two modes: A)
Acquisition Mode: night-only estimation of RC products,
and B) Monitoring Mode: simultaneous estimation of both RC products and disturbances.
Fig. 11. Screenshot from Sustain showing afternoon sun on West Wall.
Distinguishing learning
modes was less of a concern for the thermal network
simulated data because the disturbances were only on for
short periods of time and the external temperature had a
consistently high level of variation to excite the system.
Because the primary unmodeled thermal disturbance was
solar radiation, covariance tuning was done for the
Monitoring Mode by increasing the bias states’ process
variance for two hours starting at dawn and another two
hours ending 20 minutes after dusk as previously shown in
Fig. 5.
Fig. 11 shows the path of the sun on March 9th and the
variability of the sun path over the course of the year that
will be simulated by EnergyPlus. The graphic was generated
by Sustain, a front-end for EnergyPlus developed by
researchers at Cornell University Program of Computer
Graphics [35]. Due to the axis-inclination of the Earth, the
sun’s path and solar gain varies over the year. Fig. 13 shows
the resulting average disturbances from a four day test where
biases were only estimated for the last two days and then
combined into a 24-hour pattern. Notice how the East room
is heated in the morning, the West room is heated in the
afternoon, and the South room is heated all day, which
correlates nicely with expected heat from solar radiation.
Plots of the 48-hour predictions are shown in Fig. 12.
Predictions which utilize solar disturbances have much
higher accuracy than the RC only predictions. The accuracy
of these predictions supports the assumptions made in the thermal network formulation and, more importantly, demonstrates the
utility of the UKF for system identification of buildings’
thermal envelope.
Fig. 12. Top: South Zone, Bottom: West Zone of 48-hour EnergyPlus predictions. (Deg C)
Fig. 13. Estimated 24-hour cyclic disturbances from EnergyPlus data for the East, North, West, and South zones. (Deg C)
C. UKF EnergyPlus Year Study
Further analysis of the EnergyPlus generated data was conducted by analyzing a total year of data. A unique UKF instance was initialized each day on the first 357 days of the year and run for 7 days, 3 days in acquisition mode and 4 days in monitoring mode. Then the UKF was used to predict the building's response on the eighth day, which was compared against the building's actual response from EnergyPlus. This generated 357 sets of learned parameters, bias estimates, and 24-hour prediction simulations.
Of the total set, 43 simulations resulted in estimation routine errors such as negative parameter estimates or covariance shrinking to zero causing a UKF matrix inverse calculation failure. From further analysis, simple estimation monitoring by a human or the addition of heuristic rules to the existing framework would fix all 43 estimation routine failures. For example, over 10 of the failures occurred because 4 consecutive days had less than 35% solar radiation, causing no disturbance pattern to be learned. Fixing these sorts of numerical issues to guarantee 100% reliability in an automated algorithm is outside the scope of the current study. Results suggested the algorithm could easily be matured for practical application by adding a number of heuristics.
Using the 314 successful estimation runs, we compared the 24-hour prediction simulations against the buildings' truth simulation from EnergyPlus and found good accuracy: the models have enough fidelity to be used for control. Over a 12 hour prediction horizon the root mean square (RMS) temperature prediction error was 1.16 °C, and over 24 hours the RMS temperature prediction error was 1.48 °C. ASHRAE standards mandate that vertical temperature stratification in an occupied zone should be less than 5.4 °F (3 °C) [36]. Home and office thermostats often use a dead-band of 4 °F (2.2 °C) to 8 °F (4.4 °C). The model's prediction errors are well within these design bounds for the 24 hour prediction horizon. In Fig. 14 the RMS error for the prediction is shown over time, demonstrating good performance. Increased learning periods of 14 and 28 days further reduced prediction error but did not drive it to zero, likely because the simple RC model could not capture the entire fidelity of the EnergyPlus truth simulation.
Additionally, Fig. 15 shows a month-long stable prediction of the East and West zones based on a model learned from 7 days of data. This long prediction used the same data types as that in Fig. 12: correct zone initial temperature conditions, correct external temperatures over the horizon, and half-day average solar intensity values over the horizon. The long-horizon prediction accuracy demonstrates the learned model is unbiased, stable, and robust. To the authors' knowledge, this is the first year-long study of an online UKF estimating disturbances with parameters and states for a building.
Fig. 14. Root Mean Square error of temperature predictions of all 5 zones over the year for different lengths of estimation learning time. (Deg C)
Fig. 15. Month prediction of the East and West zones using a model learned from 1 week of data.
IV. DISCUSSION
Results of the UKF estimation and model prediction
capabilities have demonstrated the method as a powerful
tool for thermal modeling of building systems. The
simplicity with which a thermal network can be described
combined with the numerical stability and robustness of the
UKF are important factors which could enable its
deployment as a scalable system identification routine for
buildings' thermal envelopes.
No physics constraints were applied to ensure RC
parameters were positive, or to inform the bias and
disturbance estimation. Realistic estimation of values was
solely dependent on the quality of the chosen thermal model
representation, estimation technique, and measured data. The
authors expect that good results obtained from this paper’s
simulations would reflect realistic expectations of good
performance in real-world applications. Accurate bias tracking was achieved through covariance tuning, but this
might not be scalable to certain buildings where disturbances
occur on erratic schedules, so multiple hypothesis estimation
or constraints may be augmented with a UKF to provide a
more powerful solution.
Further investigation into model selection and fidelity could lead to performance improvements for the UKF, depending on the target application and available computation and sensing hardware. For example, Dobbs [37] compared accuracy of thermal models across different levels of RC zone aggregation; leveraging such a tool may aid control-oriented model creation for the UKF. Extensions to the UKF may offer new opportunities for fault detection and monitoring [38].
One outstanding challenge remaining with this online estimation technique is a demonstration of the learned models' performance with model predictive controllers in practice. Some studies [13], [24], [23] have begun investigating how the quality of the learned model affects the performance of predictive controllers that use the model. The consensus to date is that intra-zone excitation is necessary in order to learn a building's internal coupling. Before controlling the building in a novel way to maximize energy savings, a building's internal thermal coupling must be known. Deriving methods to monitor the quality of the measured data and better learn the building's thermal dynamics on-demand, by experimentally exciting the building, are the subject of a future paper by the authors.
V. CONCLUSION
A multi-mode implementation of a multi-zone UKF was
presented as a scalable and rapidly deployable system
identification routine for building thermal dynamics. Using a
passive 5-room model, the UKF demonstrated the ability to
learn both dynamics parameters for a thermal network and
unknown disturbances. 24-Hour predictions from UKF
estimated parameters yielded accurate results which were
validated with EnergyPlus simulations using a full year of
data. The UKF, a data-driven, model-based approach,
amenable to augmentation with numerical methods, provides
a promising step towards a scalable framework to realize
advanced BAS predictive controllers.
ACKNOWLEDGMENT
We would like to thank Dave Bosworth for his help
generating an EnergyPlus 5-room building dataset for
evaluation of parameter estimation methods.
REFERENCES
[15] S. Kiliccote, M. A. Piette and D. Hansen, "Advanced
controls and communications for demand response and
energy efficiency in commercial buildings," in Second
Carnegie Mellon Conference in Electric Power
Systems, Pittsburgh, PA, 2006.
[16] B. Gammill, Interviewee, HVAC Controls Manager,
Drury University. [Interview]. December 2010.
[17] B. Coffey, P. Haves, B. Hencey, Y. Ma, F. Borrelli and
S. Bengea, "Development and Testing of Model
Predictive Control for a Campus Chilled Water Plant
with Thermal Storage," in ACEEE Summer Study on
Energy Efficiency in Buildings, Asilomar, CA, 2010.
[18] V. M. Zavala, D. Skow, T. Celinski and P. Dickinson,
"Techno-Economic Evaluation of a Next-Generation
Building Energy Management System," ANL/MCS-TM-313, 2011.
[19] Z. O'Neill, S. Narayanan and R. Brahme, "Model-Based
Thermal Load Estimation in Buildings," in SimBuild:
4th Nat. Conf. IBPSA-USA, New York City, NY, 2010.
[20] Z. O'Neill and S. Narayanan, "Model-based estimation
of cold room temperatures in a supermarket
refrigeration system," Applied Thermal Engineering,
vol. 73, pp. 819-830, 2014.
[21] B. Huchuk, C. A. Cruickshank, W. O'Brein and H.
Gunay, "Recursive thermal building model training
using Ensemble Kalman Filters," in eSim, Ottowa,
Canada, 2014.
[22] O. Ogunsola and L. Song, "Application of a simplified
thermal network model for real-time thermal load
estimation," Energy and Buildings, vol. 96, pp. 309318, 2015.
[23] Y. Lin, T. Middelkoop and P. Barooah, "Issues in
identification of control-oriented thermal models of
zones in multi-zone buildings," in IEEE Conference on
Decision and Control, Hawaii, 2012.
[24] P. Radecki and B. Hencey, "Online Thermal
Estimation, Control, and Self-Excitation of Buildings,"
in IEEE Conference on Decision and Control, Florence,
IT, 2013.
[25] M. Wetter and C. Haugstetter, "Modelica versus Trnsys
- A comparison between an equation-based and a
procedural modeling language for building energy
simulation," in Proceedings of SimBuild, 2nd National
Conference of IBPSA-USA, International Building
Performance Simulation Association, Cambridge, MA,
2006.
[26] N. L. Jones and D. P. Greenberg, "Fast computation of
incident solar radiation from preliminary to final
building design," in 12th International Conference of
the International Building Performance Simulation
Association, Sydney, Australia, 2011.
[27] K. Deng, P. Barooah, P. G. Mehta and S. P. Meyn,
"Building Thermal Model Reduction via Aggregation
of States," in American Control Conference, Baltimore,
[1] R. Brown, "U . S . Building-Sector Energy Efficiency
Potential," Lawrence Berkeley National Laboratory,
Berkeley, CA, 2008.
[2] T. Y. Chen and A. K. Athienitis, "Investigation of
practical issues in building thermal parameter
estimation," Building and Environment, vol. 38, no. 8,
pp. 1027-1038, Aug 2003.
[3] X. Li and J. Wen, "Review of building energy modeling
for control and operation," Renewable and Sustainable
Energy Reviews, vol. 37, pp. 517-537, 2014.
[4] J. Wen and T. F. Smith, "Development and Validation
of Online Parameter Estimation for HVAC Systems,"
Journal of Solar Energy Engineering, vol. 125, no. 3,
pp. 324-330, 2003.
[5] S. Wang and X. Xinhua, "Parameter estimation of
internal thermal mass of building dynamic models
using genetic algorithm," Energy Conversion and
Management, vol. 47, no. 13-14, pp. 1927-1941, Aug
2006.
[6] S. Karatasou, M. Santamouris and V. Geros, "Modeling
and predicting building's energy use with artificial
neural networks: Methods and results," Energy and
Buildings, vol. 38, no. 8, pp. 949-958, Aug 2006.
[7] H.-x. Zhao and F. Magoules, "A review on the
prediction of building energy consumption," Renewable
and Sustainable Energy Reviews, vol. 16, pp. 3586-3592, 2012.
[8] I. Hazyuk, C. Ghiaus and D. Penhouet, "Optimal
temperature control of intermittently heated buildings
using Model Predictive Control: Part I - Building
modeling," Building and Environment, vol. 51, pp. 379387, 2011.
[9] "NSF CMMI Workshop on Building Systems," Urbana,
IL, 2010.
[10] M. Trčka and J. L. Hensen, "Overview of HVAC
system simulation," Automation in Construction, vol.
19, no. 2, pp. 93-99, Mar 2010.
[11] P. Radecki and B. Hencey, "Online Building Thermal
Parameter Estimation via Unscented Kalman Filtering,"
in American Control Conference, Montreal, Canada,
2012.
[12] S. F. Fux, A. Ashouri, M. J. Benz and L. Guzzella,
"EKF based self-adaptive thermal model for a passive
house," Energy and Buildings, vol. 68, pp. 811-817,
2014.
[13] M. Maasoumy, B. Moridian, M. Razmara, M.
Shahbakhti and A. Sangiovani-Vincentelli, "Online
Simultaneous State Estimation and Parameter
Adaptation for Building Predictive Control," in
Dynamic System and Control Conference, Stanford,
CA, 2013.
[14] A. Martincevic, A. Starcic and M. Vasak, "Parameter
estimation for low-order models of complex buildings,"
in 5th IEEE PES Innovative Smart Grid Technologies Europe, Istanbul, 2014.
[28] S.-K. Lin, "Minimal Linear Combinations of the Inertia
Parameters of a Manipulator," IEEE Transactions on
Robotics and Automation, vol. 11, no. 3, pp. 360-373,
1995.
[29] Y. Bar-Shalom, X. R. Li and T. Kirubarajan, Estimation
with Applications to Tracking and Navigation, 2001,
pp. 381--394, 476--484.
[30] G. Welch and G. Bishop, "An Introduction to the
Kalman Filter," Department of Computer Science,
University of North Carolina, Tech. Rep. TR 95-041,
Chapel Hill, NC, 2003.
[31] R. E. Kalman, "A New Approach to Linear Filtering
and Prediction Problems," ASME Transactions Journal of Basic Engineering, vol. 82, no. Series D, pp.
35-45, 1960.
[32] S. J. Julier and J. K. Uhlmann, "A New Extension of
the Kalman Filter to Nonlinear Systems," in SPIE: The
Proceedings of AeroSense: The 11th International
Symposium on Aerospace/Defense Sensing, Simulation
and Controls, Orlando, FL, 1997.
[33] C. C. Qu and J. Hahn, "Process monitoring and
parameter estimation via unscented Kalman filtering,"
Journal of Loss Prevention in the Process Industries,
vol. 22, no. 6, pp. 703-709, Nov 2009.
[34] J. L. Crassidis and F. L. Markley, "Unscented Filtering
for Spacecraft Attitude Estimation," AIAA Journal on
Guidance, Control and Dynamics, vol. 26, no. 4, pp.
536-542, 2003.
[35] D. Greenberg, K. Pratt, B. Hencey, N. Jones, L.
Schumann, J. Dobbs, Z. Dong, D. Bosworth and B.
Walter, "Sustain: An experimental test bed for building
energy simulation," Energy and Buildings, vol. 58, pp.
44-57, 2013.
[36] American Society of Heating, Refrigerating and Air
Conditioning Engineers, "ASHRAE Standard 55,
Thermal Environmental Conditions for Human
Occupancy," www.ashrae.org, 2010.
[37] J. Dobbs and B. Hencey, "A Comparison of Thermal
Zone Aggregation Methods," in 51st IEEE Conference
on Decision and Control, Hawaii, 2012.
[38] N. Tudoroiu, M. Zaheeruddin, E.-r. Tudoroiu and V.
Jeflea, "Fault Detection and Diagnosis ( FDD ) in
Heating Ventilation Air Conditioning Systems ( HVAC
) Using an Interactive Multiple Model Augmented
Unscented Kalman Filter ( IMMAUKF )," in HSI,
Krakow, Poland, 2008.
arXiv:1711.10687v1 [] 29 Nov 2017
Chance-Constrained Day-Ahead Hourly Scheduling in Distribution System Operation
Yi Gu1, Student Member, IEEE, Huaiguang Jiang2, Member, IEEE, Jun Jason Zhang1, Senior Member, IEEE, Yingchen Zhang2, Senior Member, IEEE, Eduard Muljadi2, Fellow, IEEE, and Francisco J. Solis3, Senior Member
1 Dept. of Electrical and Computer Engineering, University of Denver, Denver, CO, 80210
2 National Renewable Energy Laboratory, Golden, CO, 80401
3 School of Mathematical and Natural Sciences, Arizona State University, Glendale, AZ, 85306
Abstract—This paper aims to propose a two-step approach for
day-ahead hourly scheduling in a distribution system operation,
which contains two operation costs, the operation cost at substation level and feeder level. In the first step, the objective is to
minimize the electric power purchase from the day-ahead market
with stochastic optimization. The historical data of day-ahead hourly electric power consumption is used to provide the
forecast results with the forecasting error, which is presented by
a chance constraint and formulated into a deterministic form by
Gaussian mixture model (GMM). In the second step, the objective
is to minimize the system loss. Considering the nonconvexity
of the three-phase balanced AC optimal power flow problem
in distribution systems, the second-order cone program (SOCP)
is used to relax the problem. Then, a distributed optimization
approach is built based on the alternating direction method of
multiplier (ADMM). The results show the validity and effectiveness of the proposed method.
Index terms— Renewable energy integration, second-order cone program, Gaussian mixture model, optimal power flow, stochastic optimization, alternating direction method of multiplier
NOMENCLATURE
C: Total cost of the two steps.
f1: Operation cost at the substation level.
f2: System loss at the feeder level.
β: The system loss weight.
cDA, cRT, cPV: The unit prices of the day-ahead market, real-time market and renewable generation.
cs: The price to resell the redundant generated power to the electric market.
GDA: The electric power obtained from the day-ahead market.
GRT: The real-time power consumption.
GR: The renewable generation.
λt: An occurrence probability that helps to decide if the system needs to buy electric power at time t.
G_t^DL: The demand load at period t.
Pij: The loss of the branch line from bus i to j.
zij: The impedance from bus i to j.
Gerr: Forecasting error model of the renewable energy.
Vi: The voltage at bus i.
n: The index of the clusters in the model, n = 1, · · · , N.
ǫ: The weight of each cluster in the GMM.
x: x = [x1, x2, · · · , xq]^T is the q-dimensional data vector.
Ui, Ci: The parent node and the children nodes of node i, i ∈ NB.
pi, qi: Active and reactive power at bus i.
Vi, Ii: The voltage and the current.
si: Injection power at bus i.
Ω: Complex impedance.
Gf: The forecasted renewable generation.
I. BACKGROUND AND MOTIVATION
In general power system operation, day-ahead hourly scheduling [1] lets customers play a more active role. It can provide benefits such as lower system operation cost, improved system reliability, and lower volatility at the hourly level [2]. High penetration of renewable energy [3] can lower the net load power consumption drawn from the bulk power system; however, it brings increasingly stochastic deviations in the net load profiles [4]. Compared with transmission systems, distribution systems often operate in an unbalanced polyphase state because of the asynchrony, asymmetry, and diversity of the load, which poses a bigger challenge for distribution system operation. In this paper, a stochastic optimization based two-step approach for day-ahead scheduling is proposed to minimize the distribution system operation cost, which consists of the cost of net load consumption and the system loss.
At the substation level, high penetration of renewable energy can reduce the electric power [5] purchased from the day-ahead market by the distribution systems [6]. As shown in [7]–[9], stochastic programming optimization (SPO) can provide many potential benefits to transmission systems. With renewable energies, this paper focuses in the first step on minimizing the electric power purchased from the day-ahead market at the substation level. The historical data of the day-ahead electric power purchase is used to generate the forecast results and the forecasting errors, which can be formulated as a chance constraint for the SPO in distribution systems. According to the distribution of the forecasting errors, the chance constraint can be formulated into a deterministic form by the Gaussian mixture model (GMM) [10], and the global minimum of the convex problem can be determined.
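As a minimal sketch of that idea (assuming scikit-learn is available; the error data here are synthetic stand-ins, not the paper's dataset), one can fit a GMM to historical day-ahead forecasting errors and use the fitted mixture to check a chance constraint on the error:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for historical day-ahead forecasting errors (one column vector);
# in practice these would come from the forecast archive.
errors = rng.normal(loc=0.0, scale=0.05, size=(500, 1))

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(errors)

# Approximate Pr(error <= bound) > alpha by sampling the fitted mixture and
# checking the empirical probability of the error bound.
alpha, bound = 0.95, 0.08
samples, _ = gmm.sample(100_000)
prob = float(np.mean(samples[:, 0] <= bound))
print(f"P(error <= {bound}) ~ {prob:.3f}; constraint satisfied: {prob > alpha}")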
In [2], the variability of the renewable energy [11] is managed by hourly demand response in day-ahead scheduling, without considering the stochastic net load deviation within an hour, which dramatically impacts the operation cost of system loss. However, a single-line model is used to compute the system loss, which ignores the three-phase balanced configuration of distribution systems. In the second step, this paper therefore focuses on minimizing the cost of the system loss at the feeder level with the three-phase balanced model. The nonconvexity of the three-phase balanced AC optimal power flow is a critical problem. Heuristic methods are often applied to solve it, but they can hardly avoid falling into local minimums [12]. Based on the alternating direction method of multipliers (ADMM), a distributed method is provided to solve the AC optimal power flow problem.
This paper is organized as follows: the proposed approach is described in Section II. In Section III, the operation cost at the substation level is analyzed. In Section IV, the operation cost at the feeder level is analyzed. In Section V, the numerical results are presented on IEEE standard distribution systems. The conclusion is given in Section VI.

II. ARCHITECTURE AND SUMMARY OF THE PROPOSED APPROACH

The total cost is calculated as (1), the sum of the operation cost at the substation level and at the feeder level:

C = min(f1 + β f2)    (1)

where C is the total cost, f1 is the purchased electric power from the day-ahead market, which equals the operation cost at the substation level, f2 is the system loss of the distribution system, which equals the operation cost at the feeder level, and β is used to limit the system loss weight in the optimization.
In Fig. 1, the proposed method consists of two steps: the stochastic optimization of the electric power purchased from the day-ahead market [13] at the substation level, and the system loss optimization for the three-phase balanced AC optimal power flow [14], [15] at the feeder level [16].
The left side of Fig. 1 shows the first step: the historical data of the electric power purchased from the day-ahead market is employed to generate the forecast results and the distribution of the forecast errors. Then, combined with the day-ahead hourly scheduling model, an objective formulation is built with a chance constraint. A GMM based approach is used to convert the chance constraint into a deterministic problem. Finally, the day-ahead hourly optimal operation cost is obtained in the first step.
On the right side of Fig. 1, a three-phase balanced AC optimal power flow is formulated to compute the distribution system loss at the feeder level. After the day-ahead hourly purchased electric power is determined, the three-phase balanced distribution system model is built to minimize the system loss. Considering the nonconvexity of the AC optimal power flow, an inequality constraint is built based on SOCP to relax the problem into a convex problem. After that, the objective function of system loss can be derived with ADMM. Finally, the three-phase balanced AC optimal power flow is used to minimize the system loss successfully.

Fig. 1. The flowchart of proposed approach.

III. OPERATION COST AT THE SUBSTATION LEVEL

The consumption of the load cost model at the substation level is simulated as:

f1 = Σ_{t=1}^{NT} [ (cDA G_t^DA + cPV G_t^PV) + (λt · c_t^RT G_t^RT) + (1 − λt) · cs (G_t^DA + G_t^PV − G_t^DL) ]    (2)

where the time intervals are defined as t = 1, 2, ..., NT. cDA, cRT and cPV describe the unit price in the day-ahead market, real-time market and solar generation, respectively. cs is the established price to sell the redundant power generated by the distribution system to the electric market, G^DA and G^RT are the electric power amounts purchased from the day-ahead and real-time market, and G^PV is the amount of solar power generation in the distribution system. λt is an occurrence probability which decides if the distribution system needs to buy electric power from the real-time market at time t. G_t^DL is the demand load at time t. cDA G_t^DA + cPV G_t^PV is the base-case generation cost, consisting of the time-dependent power consumption by day-ahead scheduling and the renewable energy generation cost. λt · c_t^RT G_t^RT presents the deviation power purchased from the real-time market for compensation. And (1 − λt) · cs (G_t^DA + G_t^PV − G_t^DL) means the redundant energy can be resold to the market at a lower price.
Several months of hourly power forecasting data in day-ahead scheduling and actual net load power consumption are used to provide the forecast results [4], [17], [18]. The available hourly forecasting power in the day-ahead market G^DA(NT + 1) can now be described as follows (Gerr is the day-ahead forecasting error) [19]:

G^DA(NT + 1) = G^DA(NT) + Gerr(NT) · G^DA(NT)    (3)

Subject to:

G^DA + λ G^RT + G^PV = G^DL    (4a)
cs < cDA < c_t^RT    (4b)
G^RT,min ≤ G^RT ≤ G^RT,max    (4c)
G^DA,min ≤ G^DA ≤ G^DA,max    (4d)
G^PV,min ≤ G^PV ≤ G^PV,max    (4e)
λt = 1 if G_t^DA + G_t^PV < G_t^DL, and λt = 0 if G_t^DA + G_t^PV ≥ G_t^DL    (4f)
Pr(f(Gerr) ≤ 0) > α    (4g)

where (4g) indicates that the day-ahead forecasting error [20] should be fulfilled with the probability α, and the chance constraint [21] can be converted into a deterministic formulation with the GMM.
A. GMM with Expectation Maximization

Compared with the regular GMM, a definition of the expectation maximization (EM) based GMM is described here. It is used to model the forecasting error of the renewable generation. The GMM is a particular form of the finite mixture model. In (5), more than one component is combined as a sum with different weights ǫ:

p(x|Θ) = Σ_{n=1}^{N} ǫ_n p(x|θ_n)    (5)

ǫ in (5) is calculated in [22] and has been shown to be non-negative, with the weights summing to 1. Each component in the GMM is a normal distribution and obeys θ_n = (µ, Σ), which indicates the mean vector and the covariance. According to [23], the parameters of the mixture model are computed with expectation maximization (EM); the expectation step (E-step) and maximization step (M-step) are described below.
1) E-Step: The algorithm is ended when the function in (6) reaches convergence.

R(θ|θ^t) = E_{x,θ^t}[log L(θ; x)]    (6)

where L(θ; x) = p(x|θ), x is the set of observed data from the given statistical model, and θ is the unknown parameter along with the likelihood function in (6).
2) M-Step: In the M-step, the parameters are recalculated and estimated to maximize the quantity of the expectation in (7).

θ^(t+1) = arg max_θ R(θ|θ^t)    (7)

The EM based GMM can decide the number of clusters based on the minimum description length (MDL) [24], [25]. The MDL criterion is frequently used for model selection [26], because the improved method is less sensitive to the initialization.

IV. OPERATION COST AT THE FEEDER LEVEL

When the substation level cost is determined, the objective of the three-phase balanced AC optimal power flow is to minimize the system loss. As the influence of the three-phase distribution system is small enough, second-order cone programming (SOCP) is used to calculate the system loss here [27], which can be defined as follows [28], [29]:

F = Σ_{(i,j)∈E} P_ij    (8)

where P_ij is the branch loss from bus i to j, P_ij = |I_ij|^2 r_ij, I_ij is the complex current from bus i to j, and z_ij is the complex impedance z_ij = r_ij + i x_ij. E is the branch set of the distribution system, which can be represented with the set of buses and branches: G = [V, E].
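To make the branch-loss objective (8) and the branch-flow relaxation used below concrete, here is a toy single-phase cvxpy sketch; the network data, limits, and variable names are invented for illustration and this is not the paper's three-phase implementation.

import cvxpy as cp

branches = [(0, 1), (1, 2)]                 # radial feeder: 0 -> 1 -> 2
r = {(0, 1): 0.01, (1, 2): 0.02}            # per-unit resistances (hypothetical)
xr = {(0, 1): 0.03, (1, 2): 0.04}           # per-unit reactances (hypothetical)
load = {1: 0.5 + 0.2j, 2: 0.8 + 0.3j}       # complex bus loads (hypothetical)

v = cp.Variable(3)                          # v_i = |V_i|^2
l = {b: cp.Variable(nonneg=True) for b in branches}   # l_ij = |I_ij|^2
P = {b: cp.Variable() for b in branches}    # active branch flow
Q = {b: cp.Variable() for b in branches}    # reactive branch flow

cons = [v[0] == 1.0, v >= 0.9 ** 2, v <= 1.1 ** 2]
for (i, j) in branches:
    rij, xij = r[(i, j)], xr[(i, j)]
    down = [b for b in branches if b[0] == j]          # branches leaving bus j
    cons += [
        # power balance at bus j: inflow minus loss feeds load and downstream
        P[(i, j)] - rij * l[(i, j)] == load[j].real + sum(P[b] for b in down),
        Q[(i, j)] - xij * l[(i, j)] == load[j].imag + sum(Q[b] for b in down),
        # voltage drop along the branch
        v[j] == v[i] - 2 * (rij * P[(i, j)] + xij * Q[(i, j)])
                + (rij ** 2 + xij ** 2) * l[(i, j)],
        # SOCP relaxation of l_ij >= (P_ij^2 + Q_ij^2) / v_i
        cp.quad_over_lin(cp.vstack([P[(i, j)], Q[(i, j)]]), v[i]) <= l[(i, j)],
    ]

loss = sum(r[b] * l[b] for b in branches)   # objective (8): sum of |I|^2 r
prob = cp.Problem(cp.Minimize(loss), cons)
prob.solve()
print("total loss:", loss.value)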
Based on the branch flow model, the SOCP relaxation inequality can be represented as follows:

|S_ij|^2 / v_i ≤ l_ij    (9)

where S_ij indicates the complex power flow S_ij = P_ij + i Q_ij, l_ij := |I_ij|^2, and v_i := |V_i|^2. The basic physical constraints can be defined as follows:

V_i,min ≤ V_i ≤ V_i,max    (10a)
I_ij ≤ I_ij,max    (10b)

The standard optimization problem of ADMM is defined as follows:

min_{x,z} f(x) + g(z)    (11a)
s.t. x ∈ K_x, z ∈ K_z, Ax + Bz = c    (11b)

Then, a dual problem is built based on the objective function (8) and (11a) with ADMM and solved in parallel.
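As a minimal illustration of the splitting in (11a)-(11b) (a generic sketch, not the paper's distributed OPF solver), the scaled-form ADMM iterations for a toy consensus least-squares problem are:

import numpy as np

def admm(A1, b1, A2, b2, rho=1.0, iters=200, tol=1e-6):
    """Scaled-form ADMM for min (1/2)||A1 x - b1||^2 + (1/2)||A2 z - b2||^2
    subject to x - z = 0 (a special case of Ax + Bz = c with A = I, B = -I)."""
    n = A1.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)          # u: scaled dual
    H1 = A1.T @ A1 + rho * np.eye(n)
    H2 = A2.T @ A2 + rho * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(H1, A1.T @ b1 + rho * (z - u))   # x-update
        z_old = z
        z = np.linalg.solve(H2, A2.T @ b2 + rho * (x + u))   # z-update
        u = u + x - z                                        # dual update
        primal = np.linalg.norm(x - z)                       # primal residual
        dual = rho * np.linalg.norm(z - z_old)               # dual residual
        if primal < tol and dual < tol:
            break
    return x, z

rng = np.random.default_rng(0)
A1, A2 = rng.normal(size=(20, 5)), rng.normal(size=(20, 5))
b1, b2 = rng.normal(size=20), rng.normal(size=20)
x_opt, z_opt = admm(A1, b1, A2, b2)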
V. THE NUMERICAL RESULTS

A. The test bench

As shown in Fig. 2(a), the IEEE 123-bus distribution system contains 118 basic branches, 85 unbalanced loads, 4 capacitors, and 11 three-phase switches (6 initially closed and 5 initially opened), which is the topology taken for the preliminary results. The simulation platform is based on a computer server with a Xeon processor and 32 GB RAM. The programming languages are Python and Matlab.

Fig. 2. (a) The IEEE 123-bus distribution system. (b) The residual error curves of the three-phase balanced AC optimal power flow.

B. The performance of the proposed approach

It is assumed that the renewable energy resources are located at buses 7, 23, 29, 35, 47, 49, 65, 76, 83, and 99 [30]. The result of the three-phase balanced AC optimal power flow is shown in Fig. 2(b). The errors of the primal residual and the dual residual are less than 0.5 × 10^-3 after 5 iterations. After 30 iterations (less than 0.2 second), the curves of the primal residual and the dual residual coincide and are stable, which demonstrates the high speed and effectiveness of the convergence of the proposed approach.
In Fig. 3, a comparison is made with different α in (4g) during 24 hours. The results show that the total operation cost of the system is higher with a lower α, which indicates that a higher accuracy of the error forecasting model can help to save more money.

Fig. 3. The total cost with different Alpha.

C. Performance comparison

As shown in Table I, the different algorithms are applied to three individual power systems (IEEE 13-, 34-, and 123-bus). The proposed approach obtains the shortest computation time on all three power systems, which demonstrates that our method works more efficiently than the others.

TABLE I
TIME CONSUMPTION OF DIFFERENT ALGORITHMS BASED ON THREE INDIVIDUAL SYSTEMS
Method | IEEE 13-bus | IEEE 34-bus | IEEE 123-bus
Proposed Method | 1.78 s | 5.23 s | 14.66 s
Simulated annealing | 189.43 s | 221.34 s | 459.21 s
Interior-Point | 7.20 s | 13.16 s | 25.58 s
VI. CONCLUSION
In this paper, a stochastic optimization based approach
is proposed for the chance-constrained day-ahead hourly
scheduling problem in distribution system operation. The
operation cost is divided into two parts, the operation cost
at substation level and feeder level. In the operation cost at
substation level, the proposed approach minimizes the electric
power purchase from the day-ahead market with a stochastic
optimization. In the operation cost at feeder level, the system
loss is presented with the three-phase balanced AC optimal
power flow [31]. In our work, the detailed flowchart and the description of the stochastic optimization with the forecasting errors are improved, which helps to describe our approach in detail. A detailed description of the derivation from the chance constraint to the deterministic form with the GMM is also provided. The SOCP relaxation is used to solve the problem for the three-phase balanced distribution system.
In the future, we will focus on developing an advanced approach for distribution system optimization based on a three-phase unbalanced system [32]–[35].
REFERENCES
[16] Yi Gu, Huaiguang Jiang, Yingchen Zhang, Jun Jason Zhang, Tianlu Gao,
and Eduard Muljadi, “Knowledge discovery for smart grid operation,
control, and situation awarenessa big data visualization platform,” in
North American Power Symposium (NAPS), 2016. IEEE, 2016, pp. 1–6.
[17] Rui Yang, Huaiguang Jiang, and Yingchen Zhang, “Short-term state
forecasting-based optimal voltage regulation in distribution systems:
Preprint,” Tech. Rep., NREL (National Renewable Energy Laboratory
(NREL), Golden, CO (United States)), 2017.
[18] Huaiguang Jiang, Fei Ding, Yingchen Zhang, Huaiguang Jiang, Fei
Ding, and Yingchen Zhang, “Short-term load forecasting based automatic distribution network reconfiguration: Preprint,” Tech. Rep.,
National Renewable Energy Laboratory (NREL), Golden, CO (United
States), 2017.
[19] YM Atwa, EF El-Saadany, MMA Salama, and R Seethapathy, “Optimal
renewable resources mix for distribution system energy loss minimization,” IEEE Transactions on Power Systems, vol. 25, no. 1, pp. 360–370,
2010.
[20] James W Taylor, Lilian M De Menezes, and Patrick E McSharry, “A
comparison of univariate methods for forecasting electricity demand up
to a day ahead,” International Journal of Forecasting, vol. 22, no. 1,
pp. 1–16, 2006.
[21] RK Jana* and MP Biswal, “Stochastic simulation-based genetic algorithm for chance constraint programming problems with continuous
random variables,” International Journal of Computer Mathematics, vol.
81, no. 9, pp. 1069–1076, 2004.
[22] Weishi Peng, “Model selection for gaussian mixture model based on
desirability level criterion,” Optik-International Journal for Light and
Electron Optics, vol. 130, pp. 797–805, 2017.
[23] Todd K Moon, “The expectation-maximization algorithm,” IEEE Signal
processing magazine, vol. 13, no. 6, pp. 47–60, 1996.
[24] Hiroshi Tenmoto, Mineichi Kudo, and Masaru Shimbo, “Mdl-based
selection of the number of components in mixture models for pattern
classification,” Advances in Pattern Recognition, pp. 831–836, 1998.
[25] Zhengrong Liang, Ronald J Jaszczak, and R Edward Coleman, “Parameter estimation of finite mixtures using the em algorithm and
information criteria with application to medical image processing,” IEEE
Transactions on Nuclear Science, vol. 39, no. 4, pp. 1126–1133, 1992.
[26] Geoffrey McLachlan and David Peel, Finite mixture models, John Wiley
& Sons, 2004.
[27] Rabih A Jabr, “Radial distribution load flow using conic programming,”
IEEE transactions on power systems, vol. 21, no. 3, pp. 1458–1459,
2006.
[28] Qiuyu Peng and Steven H Low, “Distributed algorithm for optimal power
flow on a radial network,” in 2014 IEEE 53rd Annual Conference on
Decision and Control (CDC). IEEE, 2014, pp. 167–172.
[29] Steven H Low, “Convex relaxation of optimal power flowpart i:
Formulations and equivalence,” IEEE Transactions on Control of
Network Systems, vol. 1, no. 1, pp. 15–27, 2014.
[30] VH Méndez Quezada, J Rivier Abbad, and T Gomez San Roman,
“Assessment of energy distribution losses for increasing penetration of
distributed generation,” IEEE Transactions on power systems, vol. 21,
no. 2, pp. 533–540, 2006.
[31] Mahesh K Banavar, Jun J Zhang, Bhavana Chakraborty, Homin
Kwon, Ying Li, Huaiguang Jiang, Andreas Spanias, Cihan Tepedelenlioglu, Chaitali Chakrabarti, and Antonia Papandreou-Suppappola, “An
overview of recent advances on distributed and agile sensing algorithms
and implementation,” Digital Signal Processing, vol. 39, pp. 1–14, 2015.
[32] Whei-Min Lin, Yuh-Sheng Su, Hong-Chan Chin, and Jen-Hao Teng,
“Three-phase unbalanced distribution power flow solutions with minimum data preparation,” IEEE Transactions on power Systems, vol. 14,
no. 3, pp. 1178–1183, 1999.
[33] William H Kersting, “Radial distribution test feeders,” in Power
Engineering Society Winter Meeting, 2001. IEEE. IEEE, 2001, vol. 2,
pp. 908–912.
[34] Sarika Khushalani, Jignesh M Solanki, and Noel N Schulz, “Development of three-phase unbalanced power flow using pv and pq models
for distributed generation and study of the impact of dg models,” IEEE
Transactions on Power Systems, vol. 22, no. 3, pp. 1019–1025, 2007.
[35] Huaiguang Jiang, Yan Li, Yingchen Zhang, Jun Jason Zhang,
David Wenzhong Gao, Eduard Muljadi, and Yi Gu, “Big data-based
approach to detect, locate, and enhance the stability of an unplanned
microgrid islanding,” Journal of Energy Engineering, vol. 143, no. 5,
pp. 04017045, 2017.
[1] Nima Amjady, “Day-ahead price forecasting of electricity markets by a
new fuzzy neural network,” IEEE Transactions on power systems, vol.
21, no. 2, pp. 887–896, 2006.
[2] Hongyu Wu, Mohammad Shahidehpour, and Ahmed Al-Abdulwahab,
“Hourly demand response in day-ahead scheduling for managing the
variability of renewable energy,” IET Generation, Transmission &
Distribution, vol. 7, no. 3, pp. 226–234, 2013.
[3] Yi Gu, Huaiguang Jiang, Yingchen Zhang, and David Wenzhong Gao,
“Statistical scheduling of economic dispatch and energy reserves of
hybrid power systems with high renewable energy penetration,” in
Signals, Systems and Computers, 2014 48th Asilomar Conference on.
IEEE, 2014, pp. 530–534.
[4] Huaiguang Jiang, Yingchen Zhang, Eduard Muljadi, Jun Zhang, and
Wenzhong Gao, “A short-term and high-resolution distribution system
load forecasting approach using support vector regression with hybrid
parameters optimization,” IEEE Transactions on Smart Grid, 2016.
[5] Juan Manuel Carrasco, Leopoldo Garcia Franquelo, Jan T Bialasiewicz,
Eduardo Galván, Ramón Carlos PortilloGuisado, MA Martin Prats,
José Ignacio León, and Narciso Moreno-Alfonso, “Power-electronic
systems for the grid integration of renewable energy sources: A survey,”
IEEE Transactions on industrial electronics, vol. 53, no. 4, pp. 1002–
1016, 2006.
[6] Hongyu Wu, Mohammad Shahidehpour, Zuyi Li, and Wei Tian,
“Chance-constrained day-ahead scheduling in stochastic power system
operation,” IEEE Transactions on Power Systems, vol. 29, no. 4, pp.
1583–1591, 2014.
[7] Kyri Baker, Gabriela Hug, and Xin Li, “Energy storage sizing taking into
account forecast uncertainties and receding horizon operation,” IEEE
Transactions on Sustainable Energy, vol. 8, no. 1, pp. 331–340, 2017.
[8] Stein W Wallace and William T Ziemba, Applications of stochastic
programming, SIAM, 2005.
[9] Andrzej Ruszczyński, “Parallel decomposition of multistage stochastic
programming problems,” Mathematical programming, vol. 58, no. 1,
pp. 201–228, 1993.
[10] Yonghong Huang, Kevin B Englehart, Bernard Hudgins, and Adrian DC
Chan, “A gaussian mixture model based classification scheme for myoelectric control of powered upper limb prostheses,” IEEE Transactions
on Biomedical Engineering, vol. 52, no. 11, pp. 1801–1811, 2005.
[11] Huaiguang Jiang, Jun Jason Zhang, David Wenzhong Gao, Yingchen
Zhang, and Eduard Muljadi, “Synchrophasor based auxiliary controller
to enhance power system transient voltage stability in a high penetration
renewable energy scenario,” in Power Electronics and Machines for
Wind and Water Applications (PEMWA), 2014 IEEE Symposium. IEEE,
2014, pp. 1–7.
[12] Zaiyong Tang and Kallol Kumar Bagchi, “Globally convergent particle
swarm optimization via branch-and-bound,” Computer and Information
Science, vol. 3, no. 4, pp. 60–71, 2010.
[13] Antonio J Conejo, Miguel A Plazas, Rosa Espinola, and Ana B Molina,
“Day-ahead electricity price forecasting using the wavelet transform and
arima models,” IEEE transactions on power systems, vol. 20, no. 2, pp.
1035–1042, 2005.
[14] Carol S Cheng and Dariush Shirmohammadi, “A three-phase power flow
method for real-time distribution system analysis,” IEEE Transactions
on Power Systems, vol. 10, no. 2, pp. 671–679, 1995.
[15] SM Moghaddas-Tafreshi and Elahe Mashhour, “Distributed generation
modeling for power flow studies and a three-phase unbalanced power
flow solution for radial distribution systems considering distributed
generation,” Electric Power Systems Research, vol. 79, no. 4, pp. 680–
686, 2009.
POLYHEDRAL PRODUCTS AND COMMUTATOR SUBGROUPS
OF RIGHT-ANGLED ARTIN AND COXETER GROUPS
TARAS PANOV AND YAKOV VERYOVKIN
arXiv:1603.06902v2 [] 1 Sep 2016
To the memory of Rainer Vogt
Abstract. We construct and study polyhedral product models for classifying
spaces of right-angled Artin and Coxeter groups, general graph product groups
and their commutator subgroups. By way of application, we give a criterion of
freeness for the commutator subgroup of a graph product group, and provide
an explicit minimal set of generators for the commutator subgroup of a rightangled Coxeter group.
1. Introduction
Right-angled Artin and Coxeter groups are familiar objects in geometric group
theory [Da2]. From the abstract categorical viewpoint, they are particular cases of
graph product groups, corresponding to a sequence of m groups G = (G1 , . . . , Gm )
and a graph Γ on m vertices. Informally, the graph product group G Γ consists of
words with letters from G1 , . . . , Gm in which the elements of Gi and Gj with i 6= j
commute whenever {i, j} is an edge of Γ. The graph product group G Γ interpolates
between the free product G1 ⋆ · · · ⋆ Gm (corresponding to a graph consisting of
m disjoint vertices) and the cartesian product G1 × · · · × Gm (corresponding to a
complete graph). Right-angled Artin and Coxeter groups RAΓ and RC Γ correspond
to the cases Gi = Z and Gi = Z2 , respectively.
The polyhedral product is a functorial combinatorial-topological construction
assigning a topological space (X , A)K to a sequence of m pairs of topological
spaces (X , A) = {(X1 , A1 ), . . . , (Xm , Am )} and a simplicial complex K on m vertices [BP1, BBCG, BP2]. It generalises the notion of a moment-angle complex
ZK = (D2 , S 1 )K , which is a key object of study in toric topology. Polyhedral products also provide a unifying framework for several constructions of classifying spaces
for right-angled Artin and Coxeter groups, their commutator subgroups, as well as
general graph products groups. The description of the classifying spaces of graph
product groups and their commutator subgroups was implicit in [PRV], where the
canonical homotopy fibration
m
Y
(EG, G)K −→ (BG)K −→
BGk .
k=1
of polyhedral products was introduced and studied.
To each graph Γ without loops and double edges one can assign a flag simplicial
complex K, whose simplices are the vertex sets of complete subgraphs (or cliques)
of Γ. For any flag complex K the polyhedral product (BG)K is the classifying
2010 Mathematics Subject Classification. 20F65, 20F12, 57M07.
Key words and phrases. Right-angled Artin group, right-angled Coxeter group, graph product,
commutator subgroup, polyhedral product.
The research of the first author was carried out at the IITP RAS and supported by the Russian
Science Foundation grant no. 14-50-00150. The research of the second author was supported by
the Russian Foundation for Basic Research grant no. 14-01-00537.
space for the corresponding graph product group G K = G Γ , while (EG, G)K is
the classifying space for the commutator subgroup of G K . In the case of right-angled
Artin group RAΓ = RAK , each BGi = BZ is a circle, so we obtain as (BG)K the
subcomplex (S 1 )K in an m-torus introduced by Kim and Roush in [KR]. In the case
of right-angled Coxeter group RC K , each BGi = BZ2 is an infinite real projective
space RP ∞ , so the classifying space for RC K is a similarly defined subcomplex
(RP ∞ )K in the m-fold product of RP ∞ . The classifying space for the commutator
subgroup of RC K is a finite cubic subcomplex RK in an m-dimensional cube, while
the classifying space for the commutator subgroup of RAK is an infinite cubic
subcomplex LK in the m-dimensional cubic lattice. All these facts are summarised
in Theorem 3.2 and Corollaries 3.3 and 3.4.
The emphasis of [PRV] was on properties of graph products of topological (rather
than discrete) groups, as part of the homotopy-theoretical study of toric spaces and
their loop spaces. In the present work we concentrate on the study of the commutator subgroups for discrete graph product groups. Apart from a purely algebraic
interest, our motivation lies in the fact that the commutator subgroups of graph
products are the fundamental groups of very interesting aspherical spaces. From this
topological perspective, right-angled Coxeter groups RC K are the most interesting.
The commutator subgroup RC ′K is π1 (RK ) for a finite-dimensional aspherical complex RK , which turns out to be a manifold when K is a simplicial subdivision of
a sphere. When K is a cycle (the boundary of a polygon) or a triangulated 2-sphere,
one obtains as RC ′K a surface group or a 3-manifold group respectively. These
groups have attracted much attention recently in geometric group theory and lowdimensional topology. The manifolds RK corresponding to (the dual complexes of)
higher-dimensional permutahedra and graph-associahedra also feature as the universal realisators in the works of Gaifullin [Ga1], [Ga2] on the problem of realisation
of homology classes by manifolds.
In Theorem 4.3 we give a simple criterion for the commutator subgroup of a graph
product group to be free. In the case of right-angled Artin groups this result was
obtained by Servatius, Droms and Servatius in [SDS]. In Theorem 4.5 we provide
an explicit minimal generator set for the finitely generated commutator subgroup
of a right-angled Coxeter group RC K . This generator set consists of nested iterated
commutators of the canonical generators of RC K which appear in a special order
determined by the combinatorics of K.
Theorems 4.3 and Theorem 4.5 parallel the corresponding results obtained
in [GPTW] for the loop homology algebras and rational homotopy Lie algebras
of moment-angle complexes. Algebraically, these results of [GPTW] can be interpreted as a description of the commutator subalgebra in a special graph product
graded Lie algebra (see Theorem 4.6). The results of Section 4 in the current paper
constitute a group-theoretic analogue of the results of [GPTW] for graded associative and Lie algebras.
We dedicate this article to the memory of Rainer Vogt, who shared his great
knowledge and ideas with T. P. during insightful collaboration in the 2000s.
The authors are grateful to Alexander Gaifullin for his invaluable comments and
suggestions, which much helped in making the text more accessible.
2. Preliminaries
We consider a finite ordered set [m] = {1, 2, . . . , m} and its subsets I =
{i1 , . . . , ik } ⊂ [m], where I can be empty or the whole of [m].
Let K be an (abstract) simplicial complex on [m], i. e. K is a collection of subsets
of [m] such that for any I ∈ K all subsets of I also belong to K. We always assume
that the empty set ∅ and all one-element subsets {i} ⊂ [m] belong to K. We
refer to I ∈ K as a simplex (or a face) of K. One-element faces are vertices, and
two-element faces are edges. Every abstract simplicial complex K has a geometric
realisation |K|, which is a polyhedron in a Euclidean space (a union of convex
geometric simplices). In all subsequent constructions it will be useful to keep in
mind the geometric object |K| alongside with the abstract collection K.
We recall the construction of the polyhedral product (see [BP1, BBCG, BP2]).
Construction 2.1 (polyhedral product). Let K be a simplicial complex on [m]
and let
(X , A) = {(X1 , A1 ), . . . , (Xm , Am )}
be a sequence of m pairs of pointed topological spaces, pt ∈ Ai ⊂ Xi , where pt
denotes the basepoint. For each subset I ⊂ [m] we set
(2.1)   $(X,A)^I = \bigl\{(x_1,\ldots,x_m)\in \prod_{k=1}^{m} X_k \colon x_k\in A_k \ \text{for } k\notin I\bigr\}$

and define the polyhedral product of (X , A) corresponding to K as
$$(X,A)^K=\bigcup_{I\in K}(X,A)^I=\bigcup_{I\in K}\Bigl(\prod_{i\in I}X_i\times\prod_{i\notin I}A_i\Bigr).$$
In the case when all pairs (Xi , Ai ) are the same, i. e. Xi = X and Ai = A for
i = 1, . . . , m, we use the notation (X, A)K for (X , A)K . Also, if each Ai = pt , then
we use the abbreviated notation X K for (X , pt )K , and X K for (X, pt )K .
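To make Construction 2.1 concrete for small complexes, the following Python sketch (ours, not part of the paper) lists, for each simplex I ∈ K, which coordinates of the ambient product carry the factor Xi and which carry Ai.

import itertools

def simplicial_closure(maximal_faces):
    """Generate K from its maximal faces: all subsets of each face, including {}."""
    faces = set()
    for F in maximal_faces:
        for k in range(len(F) + 1):
            faces.update(frozenset(c) for c in itertools.combinations(sorted(F), k))
    return faces

def polyhedral_product_pieces(maximal_faces, m):
    """For each I in K, record which of the m coordinates carry X_i and which A_i."""
    K = simplicial_closure(maximal_faces)
    pieces = []
    for I in sorted(K, key=lambda s: (len(s), sorted(s))):
        factors = " x ".join("X" if i in I else "A" for i in range(1, m + 1))
        pieces.append((tuple(sorted(I)), factors))
    return pieces

# K = boundary of a triangle on {1, 2, 3}: the maximal faces are the three edges.
for I, product in polyhedral_product_pieces([{1, 2}, {2, 3}, {1, 3}], m=3):
    print(I, "->", product)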
This construction of the polyhedral product has the following categorical interpretation. Consider the face category cat(K), whose objects are simplices I ∈ K
and morphisms are inclusions I ⊂ J. Let top denote the category of topological spaces. Define a cat(K)-diagram (a covariant functor from the small category
cat(K) to the “large” category top)
(2.2)    D_K(X, A) : cat(K) → top,    I ↦ (X, A)^I,
which maps the morphism I ⊂ J of cat(K) to the inclusion of spaces (X , A)I ⊂
(X , A)J . Then we have
(2.3)    (X, A)^K = colim D_K(X, A) = colim_{I∈K} (X, A)^I.
Here colim denotes the colimit functor (also known as the direct limit functor )
from the category of cat(K)-diagrams of topological spaces to the category top.
By definition, colim is the left adjoint to the constant diagram functor. The details
of these constructions can be found, e. g., in [BP2, Appendix C].
Given a subset J ⊂ [m], consider the restriction of K to J:
KJ = {I ∈ K : I ⊂ J},
which is also known as a full subcomplex of K. Recall that a subspace Y ⊂ X is
called a retract of X if there exists a continuous map r : X → Y such that the composition Y ↪ X → Y, the second map being r, is the identity. We record the following simple property
of the polyhedral product.
Proposition 2.2. (X, A)KJ is a retract of (X, A)K whenever KJ ⊂ K is a full
subcomplex.
Proof. We have
(X, A)^K = ⋃_{I∈K} ( ∏_{i∈I} X_i × ∏_{i∈[m]\I} A_i ),
(X, A)^{K_J} = ⋃_{I∈K, I⊂J} ( ∏_{i∈I} X_i × ∏_{i∈J\I} A_i ).
Since each A_i is a pointed space, there is a canonical inclusion (X, A)^{K_J} ↪ (X, A)^K. Furthermore, for each I ∈ K there is a projection
r_I : ∏_{i∈I} X_i × ∏_{i∈[m]\I} A_i −→ ∏_{i∈I∩J} X_i × ∏_{i∈J\I} A_i.
Since K_J is a full subcomplex, the image of r_I belongs to (X, A)^{K_J}. The projections r_I patch together to give a retraction r = ⋃_{I∈K} r_I : (X, A)^K → (X, A)^{K_J}.
The following examples of polyhedral products feature throughout the paper.
Example 2.3.
1. Let (X, A) = (S 1 , pt ), where S 1 is a circle. The corresponding polyhedral
product (S 1 )K is a subcomplex in the m-torus (S 1 )m :
(2.4)    (S^1)^K = ⋃_{I∈K} (S^1)^I ⊂ (S^1)^m.
In particular, when K = {∅, {1}, . . . , {m}} (which is m disjoint points geometrically), the polyhedral product (S 1 )K is the wedge (S 1 )∨m of m circles.
When K consists of all proper subsets of [m] (which geometrically corresponds
to the boundary ∂∆m−1 of an (m − 1)-dimensional simplex), (S 1 )K is known as the
fat wedge of m circles; it is obtained by removing the top-dimensional cell from the
standard cell decomposition of an m-torus (S 1 )m .
For a general K on m vertices, (S 1 )K sits between the m-fold wedge (S 1 )∨m and
the m-fold cartesian product (S 1 )m .
2. Let (X, A) = (R, Z), where Z is the set of integer points on a real line R. We
denote the corresponding polyhedral product by LK :
(2.5)    L_K = (R, Z)^K = ⋃_{I∈K} (R, Z)^I ⊂ R^m.
When K consists of m disjoint points, LK is a grid in m-dimensional space Rm
consisting of all lines parallel to one of the coordinate axes and passing through
integer points. When K = ∂∆m−1 , the complex LK is the union of all integer
hyperplanes parallel to coordinate hyperplanes.
3. Let (X, A) = (RP ∞ , pt ), where RP ∞ is an infinite-dimensional real projective
space, which is also the classifying space BZ2 for the 2-element cyclic group Z2 .
Consider the polyhedral product
(2.6)    (RP^∞)^K = ⋃_{I∈K} (RP^∞)^I ⊂ (RP^∞)^m.
Similarly to the first example above, (RP ∞ )K sits between the m-fold wedge
(RP ∞ )∨m (corresponding to K consisting of m points) and the m-fold cartesian
product (RP ∞ )m (corresponding to K = ∆m−1 ).
4. Let (X, A) = (D1 , S 0 ), where D1 is a closed interval (a convenient model is the
segment [−1, 1]) and S 0 is its boundary, consisting of two points. The polyhedral
product (D1 , S 0 )K is known as the real moment-angle complex [BP1, §3.5], [BP2]
and is denoted by RK :
(2.7)    R_K = (D^1, S^0)^K = ⋃_{I∈K} (D^1, S^0)^I.
It is a cubic subcomplex in the m-cube (D1 )m = [−1, 1]m . When K consists of
m disjoint points, RK is the 1-dimensional skeleton of the cube [−1, 1]m . When
K = ∂∆m−1 , RK is the boundary of the cube [−1, 1]m . In general, if {i1 , . . . , ik }
is a face of K, then RK contains 2m−k cubic faces of dimension k which lie in the
k-dimensional planes parallel to the {i1 , . . . , ik }th coordinate plane.
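To make the cubical structure concrete, here is a small illustrative Python sketch (not taken from the paper; the face-list encoding and the helper name cubical_faces are our own choices): it enumerates the cubical faces of R_K inside [−1, 1]^m, one face for each I ∈ K and each choice of signs on the frozen coordinates, and confirms the count of 2^{m−k} cubes per k-face of K for the boundary of a square.

# Sketch: a cubical face of R_K is a pair (I, signs), with coordinates in I left free
# and the remaining coordinates frozen at +1 or -1.
from itertools import product

def cubical_faces(K, m):
    """K is a list of faces (tuples of vertices from range(m)), including ()."""
    faces = []
    for I in K:
        fixed = [v for v in range(m) if v not in I]
        for signs in product((-1, 1), repeat=len(fixed)):
            faces.append((tuple(sorted(I)), dict(zip(fixed, signs))))
    return faces

# K = boundary of a square (4-cycle): the empty face, 4 vertices, 4 edges.
K = [(), (0,), (1,), (2,), (3,), (0, 1), (1, 2), (2, 3), (0, 3)]
by_dim = {}
for I, _ in cubical_faces(K, 4):
    by_dim[len(I)] = by_dim.get(len(I), 0) + 1
print(by_dim)          # {0: 16, 1: 32, 2: 16}: 2^(m-k) cubes for each k-face of K
print(16 - 32 + 16)    # Euler characteristic 0, consistent with Example 3.5 below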
The space RK was introduced and studied in the works of Davis [Da1] and
Davis–Januszkiewicz [DJ], although their construction was different. When |K| is
homeomorphic to a sphere, RK is a topological manifold (this follows from the
results of [Da1], see also [BP2] and [Ca, Theorem 2.3]). Furthermore, the manifold
RK has a smooth structure when |K| is the boundary of a convex polytope. In this
case RK is the universal abelian cover of the dual simple polytope P [DJ, §4.1].
The four polyhedral products above are related by the two homotopy fibrations [PRV], [BP2, §4.3]
(2.8)    L_K → (S^1)^K → (S^1)^m,
(2.9)    R_K → (RP^∞)^K → (RP^∞)^m.
Construction 2.4 (right-angled Artin and Coxeter group). Let Γ be a graph on m
vertices. We write {i, j} ∈ Γ when {i, j} is an edge. Denote by F (g1 , . . . , gm ) a free
group with m generators corresponding to the vertices of Γ. The right-angled Artin
group RAΓ corresponding to Γ is defined by generators and relations as follows:
(2.10)    RA_Γ = F(g_1, …, g_m) / (g_i g_j = g_j g_i for {i, j} ∈ Γ).
When Γ is a complete graph we have RAΓ = Zm , while when Γ has no edges we
obtain the free group.
The right-angled Coxeter group RC Γ is defined as
(2.11)    RC_Γ = F(g_1, …, g_m) / (g_i^2 = 1, g_i g_j = g_j g_i for {i, j} ∈ Γ).
Both right-angled Artin and Coxeter groups have a categorical interpretation
similar to that of polyhedral products (see (2.3)). Namely, consider the following
cat(K)-diagrams, this time in the category grp of groups:
D_K(Z) : cat(K) → grp,    I ↦ Z^I,
D_K(Z_2) : cat(K) → grp,    I ↦ Z_2^I,
where Z^I = ∏_{i∈I} Z and Z_2^I = ∏_{i∈I} Z_2. A morphism I ⊂ J of cat(K) is mapped to the monomorphism of groups Z^I → Z^J and Z_2^I → Z_2^J respectively. Then
(2.12)    RA_{K^1} = colim^{grp} D_K(Z) = colim^{grp}_{I∈K} Z^I,
          RC_{K^1} = colim^{grp} D_K(Z_2) = colim^{grp}_{I∈K} Z_2^I,
where K1 denotes the 1-skeleton of K, which is a graph. Here colimgrp denotes the
colimit functor in grp.
A missing face (or a minimal non-face) of K is a subset I ⊂ [m] such that I is
not a simplex of K, but every proper subset of I is a simplex of K. A simplicial
complex K is called a flag complex if each of its missing faces consists of two vertices.
Equivalently, K is flag if any set of vertices of K which are pairwise connected by
edges spans a simplex.
A clique (or a complete subgraph) of a graph Γ is a subset I of vertices such that
every two vertices in I are connected by an edge. Each flag complex K is the clique
complex of its one-skeleton Γ = K1 , that is, the simplicial complex formed by filling
in each clique of Γ by a face.
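As a quick illustration of these notions, flagness can be tested by comparing K with the clique complex of its own 1-skeleton. The following Python sketch is ours (the face-list encoding and the helpers clique_complex and is_flag are not from the paper):

# Sketch: build the clique complex of a graph and test whether a complex is flag.
from itertools import combinations

def clique_complex(vertices, edges):
    edges = {frozenset(e) for e in edges}
    K = [()]
    for k in range(1, len(vertices) + 1):
        for S in combinations(vertices, k):
            if all(frozenset(p) in edges for p in combinations(S, 2)):
                K.append(S)
    return K

def is_flag(K):
    """K: list of faces (tuples). Flag iff K equals the clique complex of its 1-skeleton."""
    vertices = sorted({v for f in K for v in f})
    edges = [f for f in K if len(f) == 2]
    return set(map(tuple, map(sorted, clique_complex(vertices, edges)))) == \
           set(map(tuple, map(sorted, K)))

# The boundary of a triangle is not flag: its missing face {1,2,3} has three vertices.
K = [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3)]
print(is_flag(K))                     # False
print(is_flag(K + [(1, 2, 3)]))       # True: the full triangle is a clique complex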
Note that the colimits in (2.12), being the corresponding right-angled groups,
depend only on the 1-skeleton of K and do not depend on missing faces with more
than 2 vertices. For example, the colimits of the diagrams of groups D∆2 (Z) and
D∂∆2 (Z) are both Z3 . This reflects the lack of “higher” commutativity in the category of groups: when generators gi commute pairwise, they commute altogether.
This phenomenon is studied in more detail in [PRV] and [PR].
For these reasons we denote the right-angled Artin and Coxeter groups corresponding to the 1-skeleton of K simply by RAK and RC K respectively.
By analogy with the polyhedral product of spaces X K = colimI∈K X I , we may
consider the following more general construction of a discrete group.
Construction 2.5 (graph product). Let K be a simplicial complex on [m] and
let G = (G1 , . . . , Gm ) be a sequence of m groups, which we think of as discrete
topological groups. We also assume that none of G_i is trivial, i. e. G_i ≠ {1}. For
each subset I ⊂ [m] we set
G^I = { (g_1, …, g_m) ∈ ∏_{k=1}^m G_k : g_k = 1 for k ∉ I }.
Then consider the following cat(K)-diagram of groups:
D_K(G) : cat(K) → grp,    I ↦ G^I,
which maps a morphism I ⊂ J to the canonical monomorphism of groups G^I → G^J. Define the group
(2.13)    G^K = colim^{grp} D_K(G) = colim^{grp}_{I∈K} G^I.
The group G K depends only on the graph K1 and is called the graph product of the
groups G1 , . . . , Gm . We have canonical homomorphisms G I → G K , I ∈ K, which
can be shown to be injective.
As in the case of right-angled Artin and Coxeter groups (corresponding to Gi = Z
and Gi = Z2 respectively), one readily deduces the following more explicit description from the universal property of the colimit:
Proposition 2.6. There is an isomorphism of groups
G^K ≅ ( ⋆_{k=1}^m G_k ) / (g_i g_j = g_j g_i for g_i ∈ G_i, g_j ∈ G_j, {i, j} ∈ K),
where ⋆_{k=1}^m G_k denotes the free product of the groups G_k.
Remark. We use the symbol ⋆ to denote the free product of groups, instead of the
more common ∗; the latter is reserved for the join of topological spaces.
3. Classifying spaces
Here we collect the information about the classifying spaces for graph product
groups. The results of this section are not new, but as they are spread across the
literature we find it convenient to collect everything in one place. The corresponding
references are given below.
Recall that a path-connected space X is aspherical if π_i(X) = 0 for i ≥ 2. An
aspherical space X is an Eilenberg–Mac Lane space K(π, 1) with π = π1 (X).
Given a (discrete) group G, there is a universal G-covering EG → BG whose
total space EG is contractible and the base BG, known as the classifying space
for G, has the homotopy type K(G, 1) (i. e. π_1(BG) = G and π_i(BG) = 0 for i ≥ 2). We shall therefore switch between the notation BG and K(G, 1) freely. Note that BZ ≃ S^1 and BZ_2 ≃ RP^∞, with the universal coverings R → S^1 and S^∞ → RP^∞ respectively.
Now we use the notation from Construction 2.5. The classifying space BG I is
the product of BGi over i ∈ I. We therefore have the polyhedral product (BG)K
corresponding to the sequence of pairs (BG, pt ) = {(BG1 , pt ), . . . , (BGm , pt )}.
Similarly, we have the polyhedral product (EG, G)K corresponding to the sequence
of pairs (EG, G) = {(EG1 , G1 ), . . . , (EGm , Gm )}. Here each Gi is included in EGi
as the fibre of the covering EGi → BGi over the basepoint.
The homotopy fibrations (2.8) and (2.9) can be generalised as follows.
Proposition 3.1. The sequence of canonical maps
(EG, G)^K −→ (BG)^K −→ ∏_{k=1}^m BG_k
is a homotopy fibration.
When each Gk is Z, we obtain the fibration (2.8), as the pair (EZ, Z) is homotopy
equivalent to (R, Z). Similarly, when each Gk is Z2 , we obtain (2.9), as the pair
(EZ2 , Z2 ) is homotopy equivalent to (D1 , S 0 ).
Proof of Proposition 3.1. We denote ∏_{k=1}^m BG_k by BG^{[m]}; this is compatible with
the notation BG I . According to [PRV, Proposition 5.1], the homotopy fibre of
the inclusion (BG)K → BG [m] can be identified with the homotopy colimit
hocolimI∈K G [m] /G I of the cat(K)-diagram in top given on the objects by I 7→
G [m] /G I (where the latter is the quotient group, viewed as a discrete space) and
sending a morphism I ⊂ J to the canonical projection G [m] /G I → G [m] /G J of the
quotients. This diagram is not Reedy cofibrant, e. g. because G [m] /G I → G [m] /G J
is not a cofibration of spaces. The latter map is homotopy equivalent to the closed
cofibration (EG, G)I → (EG, G)J , which is a morphism in the cat(K)-diagram
DK (EG, G), see (2.2). The diagram DK (EG, G) is Reedy cofibrant, see [BP2,
Proposition 8.1.1]. Therefore, the homotopy fibre of the inclusion (BG)K → BG [m]
is given by
hocolimI∈K G [m] /G I ≃ colimI∈K (EG, G)I = (EG, G)K .
Now we state the following group-theoretic consequence of the homotopy fibration in Proposition 3.1.
Theorem 3.2. Let K be a simplicial complex on m vertices, and let GK be a graph
product group given by (2.13).
(a) π_1((BG)^K) ≅ G^K.
(b) Both spaces (BG)^K and (EG, G)^K are aspherical if and only if K is flag. Hence, B(G^K) = (BG)^K whenever K is flag.
(c) π_i((BG)^K) ≅ π_i((EG, G)^K) for i ≥ 2.
(d) π_1((EG, G)^K) is isomorphic to the kernel of the canonical projection G^K → ∏_{k=1}^m G_k.
Proof. To prove (a) we proceed inductively by adding simplices to K one by one
and use van Kampen’s Theorem. The base of the induction is K consisting of m
disjoint points. Then (BG)K is the wedge BG1 ∨ · · · ∨ BGm , and π1 ((BG)K ) is the
free product G1 ⋆ · · · ⋆ Gm . This is precisely G K , so (a) holds. Assume now that K′
is obtained from K by adding a single 1-dimensional simplex {i, j}. Then, by the
definition of the polyhedral product,
(BG)^{K′} = (BG)^K ∪ (BG_i × BG_j),
where the two pieces are glued along BG_i ∨ BG_j. By van Kampen’s Theorem, π_1((BG)^{K′}) is the amalgamated free product π_1((BG)^K) ⋆_{(G_i ⋆ G_j)} (G_i × G_j). The latter group is obtained from π_1((BG)^K) by adding all relations of the form g_i g_j = g_j g_i for g_i ∈ G_i, g_j ∈ G_j. By the inductive assumption, this is precisely G^{K′}. Adding simplices of dimension ≥ 2 to K does not change G^K and results in adding cells of dimension ≥ 3 to (BG)^K, which does not change the fundamental group
π1 ((BG)K ). The inductive step is therefore complete, proving (a).
Now we prove (b). The canonical homomorphisms G I → G K give rise to the
maps of classifying spaces BG I → B(G K ). These define a morphism from the
cat(K)-diagram DK (BG, pt ) to the constant diagram B(G K ), and hence a map
(3.1)    colim_{I∈K} BG^I = (BG)^K → B(G^K).
According to [PRV, Proposition 5.1], the homotopy fibre of the map (3.1) can be
identified with the homotopy colimit hocolimI∈K G K /G I of the cat(K)-diagram
in top given on the objects by I 7→ G K /G I (where the latter is the right coset,
viewed as a discrete space) and sending a morphism I ⊂ J to the canonical projection G K /G I → G K /G J of cosets. By [PRV, Corollary 5.4], the homotopy colimit
hocolimI∈K G K /G I is homeomorphic to the identification space
(3.2)    ( Bcat(K) × G^K ) / ∼.
Here Bcat(K) is the classifying space of cat(K), which is homeomorphic to the
cone on |K|. The equivalence relation ∼ is defined as follows: (x, gh) ∼(x, g) whenever h ∈ G I and x ∈ B(I ↓ cat(K)), where I ↓ cat(K) is the undercategory, whose
objects are J ∈ K such that I ⊂ J, and B(I ↓ cat(K)) is homeomorphic to the star
of I in K. When K is a flag complex, the identification space (3.2) is contractible
by [PRV, Proposition 6.1]. Therefore, the map (3.1) is a homotopy equivalence,
which implies that (BG)K is aspherical when K is flag.
Assume now that K is not flag. Choose a missing face J = {j1 , . . . , jk } ⊂ [m] with
k ≥ 3 vertices and consider the corresponding full subcomplex K_J. Then (BG)^{K_J} is
the fat wedge of the spaces {BGj , j ∈ J} (see Example 2.3.1), and it is a retract of
(BG)K by Proposition 2.2. Hence, in order to see that (BG)K is not aspherical, it
is enough to check that (BG)KJ is not aspherical. Let FW (X1 , . . . , Xk ) denote the
fat wedge of spaces X1 , . . . , Xk . According to a result of Porter [Po], the homotopy
fibre of the inclusion
FW(X_1, …, X_k) ↪ ∏_{i=1}^k X_i
is Σ^{k−1} ΩX_1 ∧ ⋯ ∧ ΩX_k, where Σ denotes the suspension and Ω the loop space functor. In our case we obtain that the homotopy fibre of the inclusion (BG)^{K_J} → ∏_{j∈J} BG_j is Σ^{k−1} G_{j_1} ∧ ⋯ ∧ G_{j_k}. Since each G_j is a discrete space, the latter suspension is a wedge of (k − 1)-dimensional spheres. It has nontrivial homotopy group π_{k−1}. Since ∏_{j∈J} BG_j is a K(π, 1)-space, the homotopy exact sequence implies that π_{k−1}((BG)^{K_J}) ≠ 0 for some k ≥ 3. Hence, (BG)^{K_J} and (BG)^K are non-aspherical.
Asphericity of (EG, G)^K and statements (c) and (d) follow from the homotopy exact sequence of the fibration in Proposition 3.1, as π_i(∏_{k=1}^m BG_k) = 0 for i ≥ 2.
Specialising to the cases G_k = Z and G_k = Z_2 respectively, we obtain the following results about right-angled Artin and Coxeter groups. Note that in these two cases the groups G_k are abelian, so G^K → ∏_{k=1}^m G_k is the abelianisation homomorphism, and its kernel is the commutator subgroup (G^K)′.
Corollary 3.3. Let K be a simplicial complex on m vertices, let (S 1 )K and LK be
the polyhedral products given by (2.4) and (2.5) respectively, and let RAK be the
corresponding right-angled Artin group.
(a) π_1((S^1)^K) ≅ RA_K.
(b) Both (S^1)^K and L_K are aspherical if and only if K is flag.
(c) π_i((S^1)^K) ≅ π_i(L_K) for i ≥ 2.
(d) π_1(L_K) is isomorphic to the commutator subgroup RA′_K.
Corollary 3.4. Let K be a simplicial complex on m vertices, let (RP ∞ )K and RK
be the polyhedral products given by (2.6) and (2.7) respectively, and let RC K be the
corresponding right-angled Coxeter group.
(a) π_1((RP^∞)^K) ≅ RC_K.
(b) Both (RP^∞)^K and R_K are aspherical if and only if K is flag.
(c) π_i((RP^∞)^K) ≅ π_i(R_K) for i ≥ 2.
(d) π_1(R_K) is isomorphic to the commutator subgroup RC′_K.
Remark. All ingredients in the proof of Theorem 3.2 are contained in [PRV]. The
fact that the polyhedral product (BG)K is the classifying space for the graph
product group G K whenever K is a flag complex implies that the classifying space
functor converts the colimit of groups (defining the graph product) to the colimit
of topological spaces (defining the polyhedral product). This is not the case when
K is not flag because of the presence of higher Whitehead and Samelson products
(see [PRV, PR, GT]), but the situation can be remedied by replacing colimits with
homotopy colimits. All these facts were proved in [PRV] for arbitrary well-pointed
topological groups.
Statements (a) and (b) of Corollary 3.3, implying a homotopy equivalence
(S 1 )K ≃ K(RAK , 1) for flag K, were obtained by Kim and Roush [KR, Theorem 10]. Statements (a) and (b) of Corollary 3.4, implying a homotopy equivalence
(RP ∞ )K ≃ K(RC K , 1) for flag K, are implicit in the works of Davis [Da1] and
Davis–Januszkiewicz [DJ, p. 437]. In particular, contractibility of the space (3.2)
(which is the crucial step in the proof of Theorem 3.2 (b)) in the case of right-angled
Coxeter group RC K follows from [Da1, Theorem 13.5]. The isomorphism between
π1 (RK ) and the commutator subgroup RC ′K was also obtained in the work of
Droms [Dr] (his cubic complex is the 2-dimensional skeleton of our complex RK ,
and therefore has the same fundamental group).
In the case of a general graph product G K , the result that both spaces (BG)K
and (EG, G)K are aspherical if and only if K is flag appeared in the work of
Stafa [St, Theorem 1.1].
Example 3.5. Let K be an m-cycle (the boundary of an m-gon). A simple argument with Euler characteristic shows that R_K is homeomorphic to a closed orientable surface of genus (m − 4)2^{m−3} + 1 (this observation goes back to a 1938 work
of Coxeter, see [BP2, Proposition 4.1.8]). Therefore, the commutator subgroup of
the corresponding right-angled Coxeter group RC K is a surface group. This example
was studied in [SDS] and [Dr].
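The Euler characteristic argument is easy to check numerically. A quick Python sketch (ours, not part of the paper's argument), using the count of 2^{m−k} cubical k-faces of R_K per k-face of K from Example 2.3.4, computes χ(R_K) = 2^m − m·2^{m−1} + m·2^{m−2} for an m-cycle and compares the resulting genus with the closed form above:

for m in range(4, 9):
    chi = 2**m - m * 2**(m - 1) + m * 2**(m - 2)
    genus = 1 - chi // 2                      # genus of a closed orientable surface
    print(m, genus, (m - 4) * 2**(m - 3) + 1)  # the two genus values agree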
Similarly, when |K| ≅ S^2 (which is equivalent to K being the boundary of a
3-dimensional simplicial polytope), RK is a 3-dimensional manifold. Therefore, the
commutator subgroup of the corresponding RC K is a 3-manifold group. The fact
that 3-manifold groups appear as subgroups in right-angled Artin and Coxeter
groups has attracted much attention in the recent literature.
All homology groups are considered with integer coefficients. The homology of
RK is described by the following result. For the particular case of flag K it gives a
description of the homology of the commutator subgroup RC ′K .
Theorem 3.6 ([BP1], [BP2, §4.5]). For any k ≥ 0, there is an isomorphism
H_k(R_K) ≅ ⊕_{J⊂[m]} H̃_{k−1}(K_J),
where H̃_{k−1}(K_J) is the reduced simplicial homology group of K_J.
The cohomology ring structure of H ∗ (RK ) is described in [Ca].
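For k = 1 the theorem is easy to test by machine, since rank H̃_0(K_J) is the number of connected components of K_J minus one. A small illustrative Python sketch (the face-list encoding and helper names are ours, not from the paper):

from itertools import combinations

def components(vertices, edges):
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in vertices})

def rank_H1_RK(K, m):
    """Sum of (number of components of K_J) - 1 over all nonempty J, per Theorem 3.6."""
    total = 0
    for r in range(1, m + 1):
        for J in combinations(range(m), r):
            edges = [f for f in K if len(f) == 2 and set(f) <= set(J)]
            total += components(J, edges) - 1
    return total

# K = 4-cycle: R_K is a torus, so rank H_1(R_K) should be 2.
K = [(), (0,), (1,), (2,), (3,), (0, 1), (1, 2), (2, 3), (0, 3)]
print(rank_H1_RK(K, 4))   # 2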
4. The structure of the commutator subgroups
By Theorem 3.2,
Ker( G^K → ∏_{k=1}^m G_k ) = π_1((EG, G)^K).
In the case of right-angled Artin or Coxeter groups (or, more generally, when each G_k is abelian), the group above is the commutator subgroup (G^K)′. We want to
study the group π1 ((EG, G)K ), identify the class of simplicial complexes K for
which this group is free, and describe a minimal generator set.
We shall need the following modification of a result of Grbić and Theriault [GT]:
Proposition 4.1. Let K = K1 ∪I K2 be a simplicial complex obtained by gluing
K1 and K2 along a common face I, which may be empty. If the polyhedral products (EG, G)K1 and (EG, G)K2 are homotopy equivalent to wedges of circles, then
(EG, G)K is also homotopy equivalent to a wedge of circles.
Proof. We may assume that K has the vertex set [m] = {1, . . . , m}, K1 is the full
subcomplex of K on the first m1 vertices {1, . . . , m1 }, K2 is the full subcomplex of
K on the last m2 vertices {m − m2 + 1, . . . , m}, and the common face I is on the
k vertices {m1 − k + 1, . . . , m1 }, where m1 < m, m2 < m and m = m1 + m2 − k.
Consider the polyhedral product (CX , X )K corresponding to a sequence of pairs
(CX , X ) = {(CX1 , X1 ), . . . , (CXm , Xm )}, where CXi denotes the cone on Xi .
According to [GT, Theorem 6.12],
(4.1)    (CX, X)^K ≃ (M_1 ∗ M_2) ∨ ((CX, X)^{K_1} ⋊ M_2) ∨ (M_1 ⋉ (CX, X)^{K_2}),
where M_1 = ∏_{i=1}^{m_1} X_i, M_2 = ∏_{i=m−m_2+1}^{m} X_i, M_1 ∗ M_2 denotes the join of M_1 and M_2, X ⋊ Y denotes the right half-smash X × Y/pt × Y of two pointed spaces X, Y, and X ⋉ Y denotes their left half-smash X × Y/X × pt.
In our case, each X_i = G_i is a discrete space, the pair (EG_i, G_i) is homotopy equivalent to (CG_i, G_i), and each of M_1, M_2 in (4.1) is a discrete space. Hence, each of the three wedge summands in (4.1) is a wedge of circles, and so is (EG, G)^K.
A graph Γ is called chordal (or triangulated) if each of its cycles with ≥ 4 vertices
has a chord (an edge joining two vertices that are not adjacent in the cycle).
The following result gives an alternative characterisation of chordal graphs.
Theorem 4.2 (Fulkerson–Gross [FG]). A graph is chordal if and only if its vertices
can be ordered in such a way that, for each vertex i, the lesser neighbours of i form
a clique.
Such an ordering of vertices is called a perfect elimination ordering.
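For concreteness, chordality can be tested by the standard greedy elimination of simplicial vertices (a vertex whose live neighbours form a clique). The following Python sketch is ours, with an assumed edge-list input; it returns an elimination order when the graph is chordal (read in reverse, it matches the ordering of Theorem 4.2) and None otherwise.

def perfect_elimination_order(vertices, edges):
    adj = {v: set() for v in vertices}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    order = []
    remaining = set(vertices)
    while remaining:
        simplicial = next((v for v in remaining
                           if all(u in adj[w] for u in adj[v] & remaining
                                  for w in adj[v] & remaining if u != w)), None)
        if simplicial is None:
            return None            # no simplicial vertex left: the graph is not chordal
        order.append(simplicial)
        remaining.remove(simplicial)
    return order

print(perfect_elimination_order([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (1, 4)]))           # None: a 4-cycle
print(perfect_elimination_order([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (1, 4), (1, 3)]))   # some valid order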
Theorem 4.3. Let K be a flag simplicial complex on m vertices, let G =
(G1 , . . . , Gm ) be a sequence of m nontrivial groups, and let GK be the graph product
group given by (2.13). The following conditions are equivalent:
(a) Ker(G^K → ∏_{k=1}^m G_k) is a free group;
(b) (EG, G)K is homotopy equivalent to a wedge of circles;
(c) Γ = K1 is a chordal graph.
Proof. (b)⇒(a) This follows from Theorem 3.2 (d) and the fact that the fundamental group of a wedge of circles is free.
(c)⇒(b) Here we use the argument from [GPTW, Theorem 4.6]. However, that
argument contained an inaccuracy, which was pointed out by A. Gaifullin and
corrected in the argument below.
Assume that the vertices of K are in perfect elimination order. We assign to each
vertex i the clique I_i consisting of i and the lesser neighbours of i. Since K is a flag complex, each clique I_i is a face. All maximal faces are among I_1, …, I_m, so we have ⋃_{i=1}^m I_i = K. Furthermore, for each k = 1, …, m the perfect elimination ordering on K induces such an ordering on the full subcomplex K_{1,…,k−1}, so we have ⋃_{i=1}^{k−1} I_i = K_{1,…,k−1}. In particular, the simplicial complex ⋃_{i=1}^{k−1} I_i is flag as a full subcomplex in a flag complex. The intersection I_k ∩ ⋃_{i=1}^{k−1} I_i is a clique, so it is a face of ⋃_{i=1}^{k−1} I_i. An inductive argument using Proposition 4.1 then shows that (EG, G)^K is a wedge of circles.
(a)⇒(c) Let Ker(G^K → ∏_{k=1}^m G_k) be a free group. Suppose that the graph
Γ = K^1 is not chordal, and choose a chordless cycle J with |J| ≥ 4. Then the full
subcomplex KJ is the same cycle (the boundary of a |J|-gon).
We first consider the case when each Gk is Z2 , so that (EG, G)K is RK . Then
R_{K_J} is homeomorphic to a closed orientable surface of genus (|J| − 4)2^{|J|−3} + 1
by [BP2, Proposition 4.1.8]. In particular, the fundamental group π1 (RKJ ) is not
free. On the other hand, RKJ is a retract of RK by Proposition 2.2, so π1 (RKJ ) is
a subgroup of the free group π1 (RK ) = Ker(RC K → (Z2 )m ). A contradiction.
Now consider the general case. Note that the pair (EGk , Gk ) is homotopy equivalent to (CGk , Gk ), so we can consider (CG, G)K instead of (EG, G)K . Since each
G_k is discrete and nontrivial, we may fix an inclusion of a pair of points S^0 ↪ G_k;
then there is a retraction Gk → S 0 (it does not have to be a homomorphism
of groups). It extends to a retraction of cones, so we have a retraction of pairs
(CGk , Gk ) → (D1 , S 0 ). These retractions give rise to a retraction of polyhedral
products (CG, G)K → (D1 , S 0 )K = RK . Hence, we have a composite retraction
(CG, G)^K → R_K → R_{K_J}, so π_1(R_{K_J}) includes as a subgroup in the free group π_1((EG, G)^K) = Ker(G^K → ∏_{k=1}^m G_k). On the other hand, if K contains a chordless cycle J with |J| ≥ 4, then π_1(R_{K_J}) is the fundamental group of a surface of positive genus, so it is not free. A contradiction.
Corollary 4.4. Let RAK and RC K be the right-angled Artin and Coxeter groups
corresponding to a simplicial complex K.
(a) The commutator subgroup RA′K is free if and only if K1 is a chordal graph.
(b) The commutator subgroup RC ′K is free if and only if K1 is a chordal graph.
Part (a) of Corollary 4.4 is the result of Servatius, Droms and Servatius [SDS].
The difference between parts (a) and (b) is that the commutator subgroup RA′K
is infinitely generated, unless RAK = Zm , while the commutator subgroup RC ′K is
finitely generated. We elaborate on this in the next theorem.
Let (g, h) = g −1 h−1 gh denote the group commutator of two elements g, h.
Theorem 4.5. Let RC_K be the right-angled Coxeter group corresponding to a simplicial complex K on m vertices. The commutator subgroup RC′_K has a finite minimal generator set consisting of Σ_{J⊂[m]} rank H̃_0(K_J) iterated commutators
(4.2)    (g_j, g_i),   (g_{k_1}, (g_j, g_i)),   …,   (g_{k_1}, (g_{k_2}, ⋯ (g_{k_{m−2}}, (g_j, g_i)) ⋯ )),
where k_1 < k_2 < ⋯ < k_{ℓ−2} < j > i, k_s ≠ i for any s, and i is the smallest vertex in a connected component not containing j of the subcomplex K_{k_1,…,k_{ℓ−2},j,i}.
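The set (4.2) is easy to enumerate by machine: for every subset J of the vertex set with at least two elements, take j = max(J); every connected component of K_J not containing j contributes one commutator, with i its smallest vertex and k_1 < ⋯ < k_{ℓ−2} the remaining elements of J. A Python sketch under this assumed face-list encoding (the helper generators is ours, not from the paper):

from itertools import combinations

def generators(K, m):
    gens = []
    edges = [f for f in K if len(f) == 2]
    for r in range(2, m + 1):
        for J in combinations(range(1, m + 1), r):
            j = max(J)
            comp = {v: v for v in J}
            def find(v):
                while comp[v] != v:
                    v = comp[v]
                return v
            for a, b in edges:
                if a in J and b in J:
                    comp[find(a)] = find(b)
            for c in {find(v) for v in J} - {find(j)}:
                i = min(v for v in J if find(v) == c)
                ks = [v for v in J if v not in (i, j)]
                word = "(g%d, g%d)" % (j, i)
                for k in reversed(ks):
                    word = "(g%d, %s)" % (k, word)
                gens.append(word)
    return gens

# For a 4-cycle (edges {1,2},{2,3},{3,4},{1,4}) this outputs exactly (g3, g1) and (g4, g2),
# matching Example 4.8.2 below.
K = [(), (1,), (2,), (3,), (4,), (1, 2), (2, 3), (3, 4), (1, 4)]
print(generators(K, 4))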
Theorem 4.5 is similar to a result of [GPTW] describing the commutator subalgebra of the graded Lie algebra given by
(4.3)    L_K = FL⟨u_1, …, u_m⟩ / ([u_i, u_i] = 0, [u_i, u_j] = 0 for {i, j} ∈ K),
where FL⟨u_1, …, u_m⟩ is the free graded Lie algebra on generators u_i of degree one, and [a, b] = −(−1)^{|a||b|}[b, a] denotes the graded Lie bracket. The commutator
subalgebra is the kernel of the Lie algebra homomorphism L_K → CL⟨u_1, …, u_m⟩
to the commutative (trivial) Lie algebra.
The graded Lie algebra (4.3) is a graph product similar to the right-angled
Coxeter group RC K . It has a colimit decomposition similar to (2.13), with each
G_i replaced by the trivial Lie algebra CL⟨u⟩ = FL⟨u⟩/([u, u] = 0) and the colimit
taken in the category of graded Lie algebras.
Theorem 4.6 ([GPTW, Theorem 4.3]). The commutator subalgebra of the graded Lie algebra L_K has a finite minimal generator set consisting of Σ_{J⊂[m]} rank H̃_0(K_J) iterated commutators
[u_j, u_i],   [u_{k_1}, [u_j, u_i]],   …,   [u_{k_1}, [u_{k_2}, ⋯ [u_{k_{m−2}}, [u_j, u_i]] ⋯ ]],
where k_1 < k_2 < ⋯ < k_{ℓ−2} < j > i, k_s ≠ i for any s, and i is the smallest vertex in a connected component not containing j of the subcomplex K_{k_1,…,k_{ℓ−2},j,i}.
Although the scheme of the proof of Theorem 4.5 is similar to that for Theorem 4.6, more specific techniques are required to work with group commutators, as
opposed to Lie algebra brackets. Nevertheless, most of these techniques are quite
standard, and can be extracted from the classical texts like [MKS].
Proof of Theorem 4.5. The first part is a standard argument applicable to the commutator subgroup of an arbitrary group. An element of RC ′K is a product of commutators (a, b) with a, b ∈ RC K . Writing each of a, b as a word in the generators
g1 , . . . , gm and using the Hall identities
(4.4)    (a, bc) = (a, c)(a, b)((a, b), c),    (ab, c) = (a, c)((a, c), b)(b, c),
we express each element of RC′_K in terms of iterated commutators (g_{i_1}^{n_{i_1}}, …, g_{i_ℓ}^{n_{i_ℓ}}) with n_{i_k} ∈ Z and arbitrary bracketing. Since we have relations g_i^2 = 1 in RC_K, we may assume that each n_{i_k} is 1. We refer to ℓ ≥ 2 as the length of an iterated
commutator. If an iterated commutator (gi1 , . . . , giℓ ) contains a commutator (a, b)
where each of a, b is itself a commutator, then we can remove such (gi1 , . . . , giℓ )
from the list of generators by writing (a, b) as a word in shorter commutators a, b
and using (4.4) iteratively. We therefore obtain a generator set for RC′_K consisting
only of nested iterated commutators, i. e. those not containing (a, b) where both a, b
are commutators. The next step is to use the identity
((a, b), c) = (b, a)(c, (b, a))(a, b)
and the identities (4.4) to express each nested commutator in terms of canonical
nested commutators (gi1 , (gi2 , · · · (giℓ−2 , (giℓ−1 , giℓ )) · · · )).
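The identities (4.4) and the identity above are identities in the free group, so they can be sanity-checked by brute force in any finite group. A small illustrative Python check in the symmetric group S_4 (ours, not part of the argument; permutations are stored as tuples and multiplied left to right):

from itertools import permutations

def mul(p, q):          # (p*q)(i) = q(p(i)): apply p first, then q
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def comm(a, b):         # (a, b) = a^{-1} b^{-1} a b
    return mul(mul(inv(a), inv(b)), mul(a, b))

S4 = list(permutations(range(4)))
for a in S4:
    for b in S4:
        for c in S4:
            assert comm(a, mul(b, c)) == mul(mul(comm(a, c), comm(a, b)), comm(comm(a, b), c))
            assert comm(mul(a, b), c) == mul(mul(comm(a, c), comm(comm(a, c), b)), comm(b, c))
            assert comm(comm(a, b), c) == mul(mul(comm(b, a), comm(c, comm(b, a))), comm(a, b))
print("all identities hold in S_4")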
The most important part of the proof is to express each canonical nested commutator in terms of canonical nested commutators in which the generators gi appear
in a specific order. This will be done by a combination of algebraic and topological
arguments and use the specifics of the group RC K .
We first prove a particular case of the statement, corresponding to K consisting
of m disjoint points. The group RC K is then a free product of m copies of Z2 .
Lemma 4.7. Let G be a free product of m copies of Z2 , given by the presentation
G = F(g_1, …, g_m) / (g_i^2 = 1, i = 1, …, m).
Then the commutator subgroup G′ is a free group freely generated by the iterated
commutators of the form
(gj , gi ),
(gk1 , (gj , gi )),
...,
(gk1 , (gk2 , · · · (gkm−2 , (gj , gi )) · · · )),
where k_1 < k_2 < ⋯ < k_{ℓ−2} < j > i and k_s ≠ i for any s. Here, the number of commutators of length ℓ is equal to (ℓ − 1)\binom{m}{ℓ}.
Proof. The statement is clear for m = 1 (then G = Z2 ) and for m = 2 (then
G = Z_2 ⋆ Z_2 and G′ ≅ Z with generator (g_2, g_1)). For m = 3, the lemma says that
the commutator subgroup of G = Z2 ⋆ Z2 ⋆ Z2 is freely generated by
(g2 , g1 ), (g3 , g1 ), (g3 , g2 ), (g1 , (g3 , g2 )), (g2 , (g3 , g1 )).
This is easy to see geometrically, by identifying RC ′K with π1 (RK ). In our case
RK is the 1-skeleton of the 3-cube (see Example 2.3.4). We have (g1 , (g3 , g2 )) =
g1 (g2 , g3 )g1 (g3 , g2 ), (g2 , (g3 , g1 )) = g2 (g1 , g3 )g2 (g3 , g1 ), and the elements (g2 , g1 ),
(g3 , g1 ), (g3 , g2 ), g1 (g2 , g3 )g1 , g2 (g1 , g3 )g2 correspond to the loops around five different faces of RK , which freely generate its fundamental group.
The general statement for arbitrary m can be proved by a similar topological
argument, by identifying G′ with the fundamental group of the 1-skeleton of the
m-dimensional cube (see [St, Proposition 3.6]). However, we record an algebraic
argument for subsequent use. We have the commutator identity
(4.5) (gq , (gp , x)) = (gq , x)(x, (gp , gq ))(gq , gp )(x, gp )(gp , (gq , x))(x, gq )(gp , gq )(gp , x),
which can be deduced from the Hall–Witt identity, or checked directly. Note that
if x is a canonical nested commutator, then the factor (x, (gp , gq )) can be expressed
via nested commutators as in the beginning of the proof of Theorem 4.5. In this case
we can use (4.5) to swap gp and gq in the commutator (gq , (gp , x)), by expressing
it through (gp , (gq , x)) and canonical nested commutators of lesser length.
In the subsequent arguments, we shall swap elements in an iterated commutator.
Such a swap will change the element of the group represented by the commutator,
but the two elements will always differ by a product of commutators of lesser
length, as in the case of (gp , (gq , x)) and (gq , (gp , x)) in the argument above. If we
want to swap elements gp and gq inside a canonical nested commutator of the form
(· · · , (gp , (gq , x)) · · · ), where x is a smaller canonical nested commutator, then we
need to use the first identity of (4.4) alongside with (4.5). Note that if both b and
c in the identity (a, bc) = (a, c)(a, b)((a, b), c) are canonical nested commutators,
then (a, c) and (a, b) are also canonical nested commutators, while ((a, b), c) =
(b, a)c−1 (a, b)c is a product of nested commutators of lesser length.
Therefore, using (4.5) together with the identities (4.4) we can change the order of any two generators in a commutator (gi1 , · · · (giℓ−2 , (giℓ−1 , giℓ )) · · · ) within
the positions i_1 to i_{ℓ−2}. We first use this observation to eliminate commutators (g_{i_1}, ⋯ (g_{i_{ℓ−2}}, (g_{i_{ℓ−1}}, g_{i_ℓ})) ⋯ ) which contain a pair of repeating generators g_i (i. e. have i_p = i_q for some p ≠ q). Namely, if the repeating pair occurs within the positions i_1 to i_{ℓ−2}, then we use (4.5) to reduce the commutator to the form (⋯ (g_i, (g_i, x)) ⋯ ), where x = (g_{i_{ℓ−1}}, g_{i_ℓ}), and use the relation
(gi , (gi , x)) = (gi , x)(gi , x) to reduce the commutator to a product of commutators of lesser length. (Note that here we use the relation gi2 = 1 in G.) If
one of the repeating generators is on the position iℓ−1 or iℓ , then we use (4.5)
to reduce the commutator to the form (· · · (gi , (gi , gj )) · · · ) and use the relation
(gi , (gi , gj )) = (gi , gj )(gi , gj ). As a result, we obtain a generator set for G′ consisting of commutators (gi1 , · · · (giℓ−2 , (giℓ−1 , giℓ )) · · · ) with all different gi . This already
shows that G′ is finitely generated.
Now we use (4.5) to put the generators gi in (gi1 , · · · (giℓ−2 , (giℓ−1 , giℓ )) · · · ) in an
order. Choose the generator gik with the largest index ik . If it is not within the last
three positions, then we use (4.5) to move it to the third-to-last position. The case
m = 3 considered above shows that the commutator (gj , (gi , gk )) can be expressed
through (gi , (gj , gk ), (gk , (gj , gi )) and commutators of lesser length. This allows us
to move the generator gik with the largest index ik in (gi1 , · · · (giℓ−2 , (giℓ−1 , giℓ )) · · · )
to the second-to-last position, and set j = ik . Then we use (4.5) and (4.4) to put
the first ℓ − 2 generators in the commutator in an increasing order, and redefine
their indices as k1 < · · · < kℓ−2 . As a result, we obtain a generator set for G′
consisting of commutators of the required form (gk1 , (gk2 , · · · (gkℓ−2 , (gj , gi )) · · · ))
where k_1 < k_2 < ⋯ < k_{ℓ−2} < j > i and k_s ≠ i for any s.
It remains to show that the constructed generating set of G′ is free. This generating set consists of N = Σ_{ℓ=2}^{m} (ℓ − 1)\binom{m}{ℓ} = (m − 2)2^{m−1} + 1 commutators. On the other hand, G′ ≅ π_1(R_K), where R_K is the 1-skeleton of the m-cube. Then R_K is homotopy equivalent to a wedge of N circles (as is easy to see inductively or by computing the Euler characteristic), and π_1(R_K) is a free group of rank N. We therefore have a system of N generators in a free group of rank N. This system must be free, by the classical result that a free group of finite rank cannot be isomorphic to its proper quotient group, see [MKS, Theorem 2.13].
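The count N can also be checked numerically; the following quick Python sketch (ours, not part of the proof) compares the sum with the closed form and with 1 − χ of the 1-skeleton of the m-cube, which has 2^m vertices and m·2^{m−1} edges:

from math import comb
for m in range(2, 10):
    N = sum((l - 1) * comb(m, l) for l in range(2, m + 1))
    print(N == (m - 2) * 2**(m - 1) + 1, N == 1 - 2**m + m * 2**(m - 1))   # True True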
Now we resume the proof of Theorem 4.5. We shall exclude some commutators
(gk1 , (gk2 , · · · (gkℓ−2 , (gj , gi )) · · · )) from the generating set using the new commutativity relations.
First assume that j and i are in the same connected component of the
complex K{k1 ,...,kℓ−2 ,j,i} . We shall show that the corresponding commutator
(gk1 , (gk2 , · · · (gkℓ−2 , (gj , gi )) · · · )) can be excluded from the generating set. We
choose a path from i to j, i. e. choose vertices i1 , . . . , in from k1 , . . . , kℓ−2 with
the property that the edges {i, i1 }, {i1, i2 }, . . . , {in−1 , in }, {in , j} are all in K. We
proceed by induction on the length of the path. The induction starts from the
commutator (gj , gi ) = 1 corresponding to a one-edge path {i, j} ∈ K. Now assume that the path consists of n + 1 edges. Using the relation (4.5) we can move
the elements gi1 , gi2 , . . . , gin in (gk1 , (gk2 , · · · (gkℓ−2 , (gj , gi )) · · · )) to the right and
restrict ourselves to the commutator (gi1 , (gi2 , · · · (gin , (gj , gi )) · · · )). Observe that
in the presence of the commutation relation (gp , gq ) = 1 the identity (4.5) does
not contain the factor (x, (gp , gq )) and therefore it allows us to change the order
of gp and gq without assuming x to be a commutator. We therefore can convert the commutator (gi1 , (gi2 , · · · (gin , (gj , gi )) · · · )) (with {in , j} ∈ K) to the
commutator (gj , (gi1 , · · · (gin−1 , (gin , gi )) · · · )). The latter contains a commutator
(gi1 , · · · (gin−1 , (gin , gi )) · · · ) corresponding to a shorter path {i, i1 , . . . , in }. By inductive hypothesis, it can be expressed through commutators of lesser length, and
therefore excluded from the set of generators.
We therefore obtain a generator set for RC ′K consisting of nested commutators
(gk1 , · · · (gkℓ−2 , (gj , gi )) · · · ) with j and i in different connected components of the
complex K_{k_1,…,k_{ℓ−2},j,i}. Consider commutators (g_{k_1}, ⋯ (g_{k_{ℓ−2}}, (g_j, g_{i_1})) ⋯ ) and (g_{k′_1}, ⋯ (g_{k′_{ℓ−2}}, (g_j, g_{i_2})) ⋯ ) such that {k_1, …, k_{ℓ−2}, j, i_1} = {k′_1, …, k′_{ℓ−2}, j, i_2} and i_1, i_2 lie in the same connected component of K_{k_1,…,k_{ℓ−2},j,i_1} which is different from the connected component containing j. We claim that one of these commutators can be expressed through the other and commutators of lesser length. To see this, we argue as in the previous paragraph, by considering a path in K_{k_1,…,k_{ℓ−2},j,i_1} between i_1 and i_2, and then reducing it inductively to a one-edge path. This leaves us with the pair of commutators (g_{i_2}, (g_j, g_{i_1})) and (g_{i_1}, (g_j, g_{i_2})) where {i_1, i_2} ∈ K, {i_1, j} ∉ K, {i_2, j} ∉ K. The claim then follows easily from the relation (g_{i_1}, g_{i_2}) = 1
(compare the case m = 3 of Lemma 4.7).
Thus, to enumerate independent commutators, we use the convention of writing
(gk1 , · · · (gkℓ−2 , (gj , gi )) · · · ) where i is the smallest vertex in its connected component within K{k1 ,...,kℓ−2 ,j,i} . This leaves us with precisely the set of commutators (4.2). It remains to show that this generating set is minimal. For this we once
again recall that RC ′K = π1 (RK ). The first homology group H1 (RK ) is isomorphic
to RC ′K /RC ′′K , where RC ′′K is the commutator subgroup of RC ′K . On the other
hand, we have
H_1(R_K) ≅ ⊕_{J⊂[m]} H̃_0(K_J)
by Theorem 3.6. Hence, the number of generators in the abelian group H_1(R_K) ≅ RC′_K/RC′′_K is Σ_{J⊂[m]} rank H̃_0(K_J). The latter number agrees with the number of iterated commutators in the set (4.2), as rank H̃_0(K) is one less than the number of connected components of K.
Example 4.8.
1. Let K be the simplicial complex on four vertices whose edges are {1, 2} and {2, 3} (a path 1–2–3 together with the isolated vertex 4). Then the commutator subgroup RC′_K is free, and Theorem 4.5 gives the following free generators:
(g3 , g1 ), (g4 , g1 ), (g4 , g2 ), (g4 , g3 ),
(g2 , (g4 , g1 )), (g3 , (g4 , g1 )), (g1 , (g4 , g3 )), (g3 , (g4 , g2 )),
(g2 , (g3 , (g4 , g1 ))).
2. Let K be an m-cycle with m ≥ 4 vertices. Then K^1 is not a chordal graph, so the group RC′_K is not free. One can see that R_K is an orientable surface of genus (m − 4)2^{m−3} + 1 (see Example 3.5), so RC′_K ≅ π_1(R_K) is a one-relator group. When m = 4, we get a 2-torus as R_K, and Theorem 4.5 gives the generators a_1 = (g_3, g_1) and b_1 = (g_4, g_2). The single relation is obviously (a_1, b_1) = 1. For m ≥ 5 we do not know the explicit form of the single relation in the surface group RC′_K ≅ π_1(R_K) in terms of the generators provided by Theorem 4.5. Compare [Ve],
where the corresponding problem is studied for the commutator subalgebra of the
graded Lie algebra from Theorem 4.6.
References
[BBCG] Anthony Bahri, Martin Bendersky, Frederic R. Cohen, and Samuel Gitler. The polyhedral product functor: a method of computation for moment-angle complexes, arrangements and related spaces. Adv. Math. 225 (2010), no. 3, 1634–1668.
[BP1] Victor Buchstaber and Taras Panov. Torus actions, combinatorial topology and homological algebra. Uspekhi Mat. Nauk 55 (2000), no. 5, 3–106 (Russian); Russian Math. Surveys 55 (2000), no. 5, 825–921 (English).
[BP2] Victor Buchstaber and Taras Panov. Toric Topology. Math. Surv. and Monogr., 204, Amer. Math. Soc., Providence, RI, 2015.
[Ca] Li Cai. On products in a real moment-angle manifold. Journal of the Mathematical Society of Japan, to appear; arXiv:1410.5543.
[Da1] Michael W. Davis. Groups generated by reflections and aspherical manifolds not covered by Euclidean space. Ann. of Math. (2) 117 (1983), no. 2, 293–324.
[Da2] Michael W. Davis. The geometry and topology of Coxeter groups. London Math. Soc. Monographs Series, 32. Princeton Univ. Press, Princeton, NJ, 2008.
[DJ] Michael W. Davis and Tadeusz Januszkiewicz. Convex polytopes, Coxeter orbifolds and torus actions. Duke Math. J. 62 (1991), no. 2, 417–451.
[Dr] Carl Droms. A complex for right-angled Coxeter groups. Proc. Amer. Math. Soc. 131 (2003), no. 8, 2305–2311.
[FG] D. R. Fulkerson and O. A. Gross. Incidence matrices and interval graphs. Pacific J. Math. 15 (1965), 835–855.
[Ga1] Alexander Gaifullin. Universal realisators for homology classes. Geom. Topol. 17 (2013), no. 3, 1745–1772.
[Ga2] Alexander Gaifullin. Small covers of graph-associahedra and realisation of cycles. Mat. Sbornik 207 (2016) (Russian); Sbornik Math. 207 (2016) (English translation, in this volume).
[GPTW] Jelena Grbić, Taras Panov, Stephen Theriault and Jie Wu. Homotopy types of moment-angle complexes for flag complexes. Trans. Amer. Math. Soc. 368 (2016), no. 9, 6663–6682.
[GT] Jelena Grbić and Stephen Theriault. Homotopy theory in toric topology. Russian Math. Surveys 71 (2016), no. 2.
[KR] Ki Hang Kim and Fred W. Roush. Homology of certain algebras defined by graphs. J. Pure Appl. Algebra 17 (1980), no. 2, 179–186.
[MKS] Wilhelm Magnus, Abraham Karrass and Donald Solitar. Combinatorial group theory. Presentations of groups in terms of generators and relations. Second revised edition. Dover Publications, Inc., New York, 1976.
[PR] Taras Panov and Nigel Ray. Categorical aspects of toric topology. In: Toric Topology, M. Harada et al., eds. Contemp. Math., 460. Amer. Math. Soc., Providence, RI, 2008, pp. 293–322.
[PRV] Taras Panov, Nigel Ray and Rainer Vogt. Colimits, Stanley–Reisner algebras, and loop spaces. In: Categorical decomposition techniques in algebraic topology (Isle of Skye, 2001). Progress in Math., 215, Birkhäuser, Basel, 2004, pp. 261–291.
[Po] Gerald J. Porter. The homotopy groups of wedges of suspensions. Amer. J. Math. 88 (1966), 655–663.
[SDS] Herman Servatius, Carl Droms and Brigitte Servatius. Surface subgroups of graph groups. Proc. Amer. Math. Soc. 106 (1989), no. 3, 573–578.
[St] Mentor Stafa. On the fundamental group of certain polyhedral products. J. Pure Appl. Algebra 219 (2015), no. 6, 2279–2299.
[Ve] Yakov Veryovkin. Pontryagin algebras of some moment-angle-complexes. Dal'nevost. Mat. Zh. (2016), no. 1, 9–23 (in Russian); arXiv:1512.00283.
Department of Mathematics and Mechanics, Moscow State University, Leninskie
Gory, 119991 Moscow, Russia,
Institute for Theoretical and Experimental Physics, Moscow, Russia and
Institute for Information Transmission Problems, Russian Academy of Sciences
E-mail address: tpanov@mech.math.msu.su
URL: http://higeom.math.msu.su/people/taras/
Department of Mathematics and Mechanics, Moscow State University, Leninskie
Gory, 119991 Moscow, Russia,
Steklov Mathematical Institute, Russian Academy of Sciences
E-mail address: verevkin j.a@mail.ru
IDEAL-ADIC COMPLETION OF QUASI-EXCELLENT RINGS (AFTER
GABBER)
arXiv:1609.09246v1 [] 29 Sep 2016
KAZUHIKO KURANO AND KAZUMA SHIMOMOTO
Abstract. In this paper, we give a detailed proof to a result of Gabber (unpublished)
on the ideal-adic completion of quasi-excellent rings, extending the previous work on
Nishimura-Nishimura. As a corollary, we establish that an ideal-adic completion of an
excellent ring is excellent.
1. Introduction
Throughout this paper, we assume that all rings are commutative and possess an identity. The aim of this article is to give a detailed proof to the following theorem (see
Theorem 5.1).
Main Theorem 1 (Nishimura-Nishimura, Gabber). Let A be a Noetherian ring, and I
an ideal of A. Assume that A is I-adically complete. Then, if A/I is quasi-excellent, so is
A.
This result was proved in characteristic 0 by Nishimura-Nishimura in [12], using the
resolution of singularities. More recently, the general case was settled by Gabber, using
his theorem of local uniformization of quasi-excellent schemes. The idea of his proof is
sketched in a letter [16] from Gabber to Laszlo. The above theorem is a special and
difficult case of the Lifting Problem, which was formulated by Grothendieck [5, Remarque
(7.4.8)]. For the precise definition of lifting problem as well as its variations, we refer the
reader to Definition 2.4. As an important corollary, we obtain the following theorem (see
Corollary 5.5).
Main Theorem 2. Let A be an excellent ring with an ideal I ⊂ A. Then the I-adic
completion of A is an excellent ring. In particular, if A is excellent, then the formal power
series ring A[[x1 , . . . , xn ]] is excellent.
Key words and phrases: ideal-adic completion, lifting problem, local uniformization, Nagata ring, quasi-excellent ring.
2010 Mathematics Subject Classification: 13B35, 13F25, 13F40.
Here is an outline of the paper.
In § 2, we fix notation and definitions concerning excellent rings for which we refer the
reader to [9] and [10]. Then we introduce terminology related to the class of ring theoretic
properties preserved under ideal-adic completion and make a table on the known results.
In § 3, we begin with the notions of quasi-excellent schemes and alteration coverings.
Then we recall the recent result of Gabber on the existence of local uniformization of a
quasi-excellent Noetherian scheme as an alteration covering.
In § 4, using Gabber’s local uniformization theorem, we give a proof to the classical
theorem of Brodmann and Rotthaus in the full generality. This result is an important
step to prove the main result of this paper.
In § 5, we finish the proof of the main result after proving a number of intermediate
lemmas based on the ideas explained in [12] and [16].
2. Notation and conventions
We use the following notation. Let I be an ideal of a ring A. Then denote by V (I) the
set of all prime ideals of A containing I. Let D(I) := Spec A \ V (I). Let A be a ring with
an ideal I. We denote by Â^I the I-adic completion lim←_n A/I^n. If (A, m) is a local ring, we simply denote by Â or A^∧ the m-adic completion lim←_n A/m^n. Let A be an integral domain. Denote by Q(A) the field of fractions of A.
Now let P be a ring theoretic property of local Noetherian rings (e.g., regular, normal,
reduced,. . .). Let A be a Noetherian algebra over a field K. We say that A has geometrically P over K, if every local ring of A ⊗K L is P for every finite field extension L/K.
Let k(p) denote the residue field of A at p ∈ Spec A. For the following definition, we refer
the reader to [5, (7.3.1)].
Definition 2.1. Let P be a ring theoretic property of local Noetherian rings.
(1) A homomorphism of Noetherian rings ψ : A → B is said to be a P-homomorphism,
if ψ is flat and the fiber B ⊗A k(p) is geometrically P over k(p) for any p ∈ Spec A.
A homomorphism of Noetherian rings is said to be a regular homomorphism, if
it is a P-homomorphism with P being regular.
(2) A Noetherian ring A is said to be a P-ring, if for any p ∈ Spec A, the natural
cp is a P-homomorphism.
homomorphism Ap → A
For the following definition, we refer the reader to [9] and [10].
Definition 2.2. Let A be a Noetherian ring.
(1) We say that A is a G-ring (resp. Z-ring), if A is a P-ring with P being regular
(resp. normal).
(2) We say that A is a J2 -ring, if the regular locus of every finitely generated A-algebra
is Zariski open.
(3) We say that A is catenary if, for any pair of prime ideals p ⊂ q of A, whenever p = p_0 ⊂ p_1 ⊂ ⋯ ⊂ p_m = q and p = p′_0 ⊂ p′_1 ⊂ ⋯ ⊂ p′_n = q are saturated strictly increasing chains of prime ideals between p and q, we have m = n. We say that A is
universally catenary, if every finitely generated A-algebra is catenary.
A Noetherian ring is called quasi-excellent, if it is a G-ring and a J2 -ring. A Noetherian
ring is called excellent, if it is a G-ring, a J2 -ring and universally catenary.
Any field, the ring of integers and complete local rings are excellent rings. Polynomial
rings over a G-ring are also G-rings in [9, Theorem 77]. One can prove the same for Z-rings
in the same way. Therefore, the category of G-rings (that of Z-rings, that of J2 -rings or
that of universally catenary rings) is closed under polynomial extensions, homomorphic
images and localizations.
It is known that any semi-local G-ring is a J2-ring (cf. [9, Theorem 76]). Hence semi-local G-rings are quasi-excellent.
Definition 2.3. Let A be a Noetherian ring.
(1) We say that A is an N2 -ring (or Japanese ring), if A is an integral domain and
the integral closure of A in any finite field extension of Q(A) is module-finite over
A, where Q(A) is the field of fractions of A.
(2) We say that A is a Nagata ring (or universally Japanese ring), if for any P ∈ Spec A
the integral domain A/P is an N2 -ring.
It is known that quasi-excellent rings are Nagata rings (cf. [9, Theorem 78]). The
category of Nagata rings is closed under polynomial extensions, homomorphic images and
localizations (cf. [9, Theorem 72]).
Definition 2.4. Let P′ be a ring theoretic property of Noetherian rings.
(1) We say that the Lifting Property (LP for short) holds for P′ , if the following
condition holds. For any Noetherian ring A with an ideal I such that A is Iadically complete, if A/I is P′ , then A is P′ .
(2) We say that the Local Lifting Property (LLP for short) holds for P′ , if the lifting
property holds for any semi-local ring A.
(3) We say that the Power Series Extension Property (P SEP for short) holds for P′ ,
if the following condition holds. For any Noetherian ring A, if A is P′ , then the
formal power series ring A[[x]] is P′ .
(4) We say that the Ideal-adic Completion Property (ICP for short) holds for P′ , if
the following condition holds. For any Noetherian ring A with an ideal I, if A is
P′ , then the I-adic completion of A is P′ .
For any property P′ , it is easy to see the following implications (see Corollary 5.5 and
its proof):
LLP ⇐ LP ⇒ PSEP.
If the category of Noetherian rings having the property P′ is closed under homomorphic images, then the implication
PSEP ⇒ ICP
holds.
If the category of Noetherian rings having the property P′ is closed under polynomial extension, then the implication
PSEP ⇐ ICP
holds.
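For instance, the first of these two implications can be seen as follows (a standard sketch, not spelled out at this point of the paper): if A is Noetherian and I = (a_1, …, a_n), then the A-algebra map A[[x_1, …, x_n]] → Â^I sending x_i to a_i is surjective, so that
Â^I ≅ A[[x_1, …, x_n]]/(x_1 − a_1, …, x_n − a_n).
Thus the I-adic completion Â^I is a homomorphic image of an iterated power series extension of A, and PSEP together with closure under homomorphic images yields ICP.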
In this article, we consider P′ as one of the following properties:
G-ring, quasi-excellent, excellent, universally catenary, Nagata, Nagata Z-ring.
Let us remark that the category of Noetherian rings with P′ as above is closed under
polynomial extensions and homomorphic images.
In 1970, Seydi [15] proved that PSEP holds for P′ = universally catenary. In 1975,
Marot [8] proved that LP holds for P′ = Nagata. In 1979, Rotthaus [13] proved that LLP
holds for P′ = G-ring. Hence, LLP holds for P′ = quasi-excellent. In 1981, Nishimura [11]
found an example, and proved that ICP does not hold for P′ = G-ring. In 1982, Greco [4]
found an example, and proved that LLP does not hold for both P′ = universally catenary
and P′ = excellent. In 1987, Nishimura-Nishimura ([12, Theorem A]) proved that LP
holds for P′ = Nagata Z-ring. They also proved that LP holds for P′ = quasi-excellent,
if the ring contains a field of characteristic 0 ([12, Theorem B]). Recently, Gabber [16]
proved that LP holds for P′ = quasi-excellent in general. The aim of this paper is to
complete the following table by giving the details of Gabber’s theorem.
                        LLP    LP    PSEP    ICP
 G-ring                  ✓      ×      ×      ×
 quasi-excellent         ✓      ✓      ✓      ✓
 excellent               ×      ×      ✓      ✓
 universally catenary    ×      ×      ✓      ✓
 Nagata                  ✓      ✓      ✓      ✓
 Nagata Z-ring           ✓      ✓      ✓      ✓
Here, remark that PSEP and ICP hold for P′ = excellent, since they hold for both P′ =
quasi-excellent and P′ = universally catenary.
3. Gabber’s local uniformization theorem for quasi-excellent schemes
In this section, let us recall a recent result on the existence of local uniformizations for
quasi-excellent schemes, due to Gabber.
Definition 3.1. A Noetherian scheme X is quasi-excellent (resp. excellent), if X admits
an open affine covering, each of which is the spectrum of a quasi-excellent ring (resp. an
excellent ring). Once this condition holds, then any other open affine covering has the
same property.
We introduce the notion of alteration covering of schemes.
Definition 3.2. In this definition, we assume that all schemes are Noetherian, and all
morphisms are dominant, generically finite morphisms of finite type between Noetherian
schemes.
Let Y be a Noetherian integral scheme. We say that a finite family of scheme maps
{φi : Xi → Y }i=1,...,m is an alteration covering of Y , if there exist a proper morphism
f : V → Y, a Zariski open covering V = ⋃_{i=1}^m V_i, together with a family of scheme maps {ψ_i : V_i → X_i}_{i=1,…,m} such that the following diagram commutes for each i = 1, …, m:
(3.1)
        V_i ────→ V
     ψ_i │         │ f
         ↓         ↓
        X_i ──φ_i─→ Y
where Vi → V is the natural open immersion.
If Xi is a regular integral scheme for each i = 1, . . . , m, we say that {φi : Xi → Y }i=1,...,m
is a regular alteration covering of Y .
We refer the reader to [7] for alteration coverings or the alteration topology in the
general situation. The definition adopted in [7] looks slightly different from the above
one. However, we may resort to [7, Théorème 3.2.1; EXPOSÉ II]. For the convenience
of readers, notice that the alteration topology is the same as Voevodsky’s h-topology
in the Noetherian case (see [2] for the proof of this fact). Let us state Gabber’s local
uniformization theorem for which we refer the reader to [7].
Theorem 3.3 (Gabber). Assume that Y is a quasi-excellent Noetherian integral scheme.
Then there exists a regular alteration covering of Y .
Using the valuative criterion for proper maps and Gabber’s theorem, we obtain the following corollary. That is the reason why the above theorem is called “local uniformization
theorem” for quasi-excellent schemes.
Corollary 3.4. Assume that A is a quasi-excellent domain. Then there exists a finite
field extension K/Q(A) such that if R is a valuation domain satisfying Q(R) = K and
A ⊂ R, then there exists a regular domain B such that A ⊂ B ⊂ R and B is a finitely
generated A-algebra.
4. A generalization of a theorem of Brodmann and Rotthaus
The purpose of this section is to prove the following theorem. It was proved by Brodmann and Rotthaus [1] for rings containing a field of characteristic zero, using the resolution of singularities by Hironaka. Let rad(A) be the Jacobson radical of a ring A.
Theorem 4.1. Let A be a Noetherian ring with an ideal I ⊂ rad(A). Assume that A/I
is quasi-excellent and A is a G-ring. Then A is a J2-ring. In other words, A is quasi-excellent.
We need to prove a number of lemmas before proving this theorem. Let X be a Noetherian scheme. We denote by Reg(X) the regular locus of X, and put Sing(X) = X \Reg(X).
Let us recall that a subset of a Noetherian scheme is open if and only if it is constructible
and stable under generalization of points [6, II. Ex. 3.17, 3.18].
Lemma 4.2. Let A be a Noetherian ring with an ideal I ⊂ A and let π : X → Spec A be
a scheme map of finite type. Assume that A/I is a J2-ring. Then Reg(X) ∩ π^{−1}(V(I)) is open in π^{−1}(V(I)).
Proof. For the proof, we may assume that X is an affine scheme. Let B be an A-algebra
of finite type and put X = Spec B. First, let us prove the following lemma:
Claim 4.3. Assume that Z ⊂ V (IB) is a closed subset. Then Reg(X) ∩ Z is constructible
in Z.
Proof of the claim. We prove it by Noetherian induction. So let us suppose that any
proper closed subset of Z satisfies the conclusion of the claim. We may assume that
Z = V (q) for some prime ideal q ⊃ IB. Since A/I is a J2 -ring, we have
(4.1)
Reg Spec(B/q) is a non-empty open set in Spec(B/q).
Assume that q ∈
/ Reg(X). Then we have Reg(X) ∩ Z = ∅ and this is evidently con-
structible. Next, assume that q ∈ Reg(X). Then since Bq is a regular local ring, the
maximal ideal qBq is generated by a regular sequence. This together with (4.1) implies
the following:
- There exists an element f ∈ B \ q such that qB[f^{-1}] is generated by a B[f^{-1}]-regular sequence and B[f^{-1}]/qB[f^{-1}] is a regular ring.
Hence if p ∈ Spec B is taken such that p ∈ V(q) and p ∉ V(q + fB), then B_p is regular.
We have the decomposition:

    Reg(X) ∩ V(q) = (Reg(X) ∩ V(q + fB)) ∪ (Reg(X) ∩ (V(q) \ V(q + fB))).

By the Noetherian induction hypothesis, we see that Reg(X) ∩ V(q + fB) is constructible.
On the other hand, we have Reg(X) ∩ (V(q) \ V(q + fB)) = V(q) \ V(q + fB) and this
is clearly open in V(q). We conclude that Reg(X) ∩ V(q) is constructible, which finishes
the proof of the claim.
We can finish the proof of the lemma in the following way. It is easy to show that
Reg(X) ∩ π^{-1}(V(I)) is closed under taking generalizations of points inside π^{-1}(V(I)).
Combining this with Claim 4.3, we conclude that Reg(X) ∩ π^{-1}(V(I)) is open in π^{-1}(V(I)).
Using this lemma, we can prove the following crucial fact.
Lemma 4.4. Let B be a Noetherian domain and let us choose q ∈ Spec B and an ideal
J ⊂ B. Assume that Bq is a G-ring and B/J is a J2 -ring. Then there exists b ∈ B \ q
together with an alteration covering {φ_{b,i} : X_{b,i} → Spec B[b^{-1}]}_i such that

(4.2)    φ_{b,i}^{-1}( Spec( B[b^{-1}]/JB[b^{-1}] ) ) ⊂ Reg(X_{b,i})   for all i.
Proof. Since B_q is a quasi-excellent local domain by assumption, there exist a regular
alteration covering {φ_i : X_i → Spec B_q}_{i=1,...,m}, together with a proper map f : V → Spec B_q
with V := ∪_{i=1}^m V_i and {ψ_i : V_i → X_i}_i, by Theorem 3.3. By Chow's lemma, we
may assume that f : V → Spec B_q is projective. Then we can find an element b̃ ∈ B \ q and
an alteration covering {φ_{b̃,i} : X_{b̃,i} → Spec B[b̃^{-1}]}_{i=1,...,m} such that φ_i = φ_{b̃,i} ⊗_{B[b̃^{-1}]} B_q.
Remark that

(4.3)    if φ_{b̃,i}(s) is a generalization of qB[b̃^{-1}], then we have s ∈ X_i; in particular, O_{X_{b̃,i},s} is regular.

Let us put

    Z_{b̃} := Spec( B[b̃^{-1}]/JB[b̃^{-1}] ),

which is a closed subset of Spec B[b̃^{-1}]. Then we find that

    W_{b̃,i} := φ_{b̃,i}^{-1}(Z_{b̃}) \ Reg(X_{b̃,i}) is closed in φ_{b̃,i}^{-1}(Z_{b̃})
by Lemma 4.2. Hence W_{b̃,i} is a closed subset of X_{b̃,i} and thus, φ_{b̃,i}(W_{b̃,i}) is a constructible
subset of Spec B[b̃^{-1}] by Chevalley's theorem (cf. [9, Theorem 6]). From this, it follows
that the Zariski closure \overline{φ_{b̃,i}(W_{b̃,i})} of φ_{b̃,i}(W_{b̃,i}) is the set of all points of Spec B[b̃^{-1}] that
are obtained as a specialization of a point of φ_{b̃,i}(W_{b̃,i}).
Assume that we have q ∈ \overline{φ_{b̃,i}(W_{b̃,i})}. Then there is a point s ∈ W_{b̃,i} such that
q ∈ \overline{{φ_{b̃,i}(s)}}. By (4.3), the local ring O_{X_{b̃,i},s} is regular. Hence s ∈ Reg(X_{b̃,i}), which is a
contradiction to s ∈ W_{b̃,i}. Thus, we must get q ∉ \overline{φ_{b̃,i}(W_{b̃,i})}.
Let us choose an element 0 ≠ b ∈ B such that q ∈ Spec B[b^{-1}] ⊂ Spec B[b̃^{-1}] and
Spec B[b^{-1}] ∩ \overline{φ_{b̃,i}(W_{b̃,i})} = ∅ for all i. Consider the fiber square:

    X_{b,i}   −−→(φ_{b,i})   Spec B[b^{-1}]
      ↓                        ↓
    X_{b̃,i}  −−→(φ_{b̃,i})   Spec B[b̃^{-1}]

Then we have W_{b̃,i} ∩ X_{b,i} = ∅. Let us put

    Z_b := Spec( B[b^{-1}]/JB[b^{-1}] )   and   W_{b,i} := φ_{b,i}^{-1}(Z_b) \ Reg(X_{b,i}).

Since Z_b = Z_{b̃} ∩ Spec B[b^{-1}], we get

    W_{b,i} = φ_{b̃,i}^{-1}(Z_{b̃}) ∩ X_{b,i} \ Reg(X_{b,i}) = φ_{b̃,i}^{-1}(Z_{b̃}) ∩ X_{b,i} \ Reg(X_{b̃,i}) = W_{b̃,i} ∩ X_{b,i} = ∅.

This proves the assertion (4.2).
Lemma 4.5. Let {φ_i : X_i → Y}_{i=1,...,m} be an alteration covering and let y_1, ..., y_l be a
sequence of points in Y such that y_{j+1} ∈ \overline{{y_j}} for j = 1, ..., l − 1, where \overline{{y_j}} denotes the
Zariski closure of {y_j} in Y. Then there exist i and a sequence of points x_1, ..., x_l in X_i
such that φ_i(x_j) = y_j for j = 1, ..., l and x_{j+1} ∈ \overline{{x_j}} for j = 1, ..., l − 1.
Proof. By assumption, for i = 1, ..., m, there is a commutative diagram:

        V_i  −−→  V
        ↓ ψ_i     ↓ f
        X_i  −−→  Y
             φ_i

where V = ∪_{i=1}^m V_i is a Zariski open covering and f : V → Y is a proper surjective map. Let
us find a sequence v_1, ..., v_l in V such that v_j maps to y_j for j = 1, ..., l and v_{j+1} ∈ \overline{{v_j}}
for j = 1, ..., l − 1. First, lift y_1 to a point v_1 ∈ V via f. Suppose that a sequence
v_1, ..., v_t in V has been found such that f(v_j) = y_j for j = 1, ..., t and v_{j+1} ∈ \overline{{v_j}} for
j = 1, ..., t − 1. So let us find v_{t+1} ∈ V with the required condition. Since f is a proper
map, f(\overline{{v_t}}) is equal to \overline{{y_t}}. Hence y_{t+1} ∈ f(\overline{{v_t}}) and there is a lift v_{t+1} of y_{t+1} such
that v_{t+1} ∈ \overline{{v_t}}. Here, we have v_l ∈ V_i for some i. Since V_i is closed under generalizations,
v_1, ..., v_l are contained in V_i. Then we see that the sequence x_1 := ψ_i(v_1), ..., x_l := ψ_i(v_l)
in X_i satisfies the required conditions.
Proof of Theorem 4.1. In order to show that A is a J2 -ring, it is enough to prove that the
regular locus of any finite A-algebra is open (cf. [9, Theorem 73]). Let B be a Noetherian
domain that is finitely generated as an A-module. It suffices to prove that Reg(B) contains
a non-empty open subset of Spec B by Nagata’s topological criterion (cf. [10, Theorem
24.4]). Since A → B is module-finite, we have IB ⊂ rad(B) and B/IB is quasi-excellent.
Since B is a G-ring, B_q is quasi-excellent for any q ∈ Spec B by [9, Theorem 76]. By Lemma
4.4, there exist b_q ∈ B \ q and an alteration covering {φ_{b_q,i} : X_{b_q,i} → Spec B[b_q^{-1}]}_i such that

(4.4)    φ_{b_q,i}^{-1}( Spec( B[b_q^{-1}]/IB[b_q^{-1}] ) ) ⊂ Reg(X_{b_q,i})   for all i.
There exists a family of elements b_1, ..., b_s ∈ {b_q | q ∈ Spec B} such that

(4.5)    Spec B = Spec B[b_1^{-1}] ∪ · · · ∪ Spec B[b_s^{-1}].

There exists a finite family of alteration coverings {φ_{b_j,i} : X_{b_j,i} → Spec B[b_j^{-1}]}_i with the
same property as (4.4) by letting b_j = b_q. By the lemma of generic flatness (cf. [9, (22.A)]),
we can find an element 0 ≠ c ∈ B such that the induced map

    φ_{b_j,i}^{-1}( Spec B[b_j^{-1} c^{-1}] ) → Spec B[b_j^{-1} c^{-1}]   is flat for all i and j.

If we can show that B[c^{-1}] is regular, then the proof is finished.
Let us pick p ∈ Spec B such that c ∉ p. Then we want to prove that B_p is regular. Let
us choose a maximal ideal m ∈ Spec B such that p ⊂ m. Since IB ⊂ rad(B), it follows
that IB ⊂ m. By (4.5), there exists j such that p, m ∈ Spec B[b_j^{-1}]. By Lemma 4.5,
there exist x_1, x_2 ∈ X_{b_j,i} for some i such that x_2 ∈ \overline{{x_1}}, φ_{b_j,i}(x_1) = p and φ_{b_j,i}(x_2) = m.
By (4.4) together with the fact IB ⊂ m, the local ring O_{X_{b_j,i},x_2} is regular. Since x_1 is a
generalization of x_2, O_{X_{b_j,i},x_1} is also a regular local ring. Here, B_p → O_{X_{b_j,i},x_1} is flat, as
c ∉ p. Therefore, B_p is regular, as desired.
5. Lifting problem for quasi-excellent rings
In this section, we shall prove the main theorem:
Theorem 5.1 (Nishimura-Nishimura, Gabber). Let A be a Noetherian ring, and I an
ideal of A. Assume that A is I-adically complete. Then, if A/I is quasi-excellent, so is A.
Proof. Assume the contrary. Let A be a Noetherian ring, and I an ideal of A. Suppose
that A is I-adically complete, A/I is quasi-excellent, but A is not quasi-excellent.
Step 1. We shall reduce this problem to a simpler case as long as possible.
(1-1) The following are well-known facts.
• Let R be a Noetherian ring and I1 , I2 be ideals of R such that I1 ⊃ I2 . If R is
I1 -adically complete, then R is I2 -adically complete.
• Let R be a Noetherian ring and J1 , J2 be ideals of R. If R is J1 -adically complete,
then R/J2 is ((J1 + J2 )/J2 )-adically complete.
We shall use these facts without proving them.
Suppose I = (a1 , . . . , at ). Put Ii = (a1 , . . . , ai ) for i = 1, . . . , t and I0 = (0). Remark
that A/Ii is (Ii+1 /Ii )-adically complete, and
(A/Ii )/(Ii+1 /Ii ) = A/Ii+1
for i = 0, 1, . . . , t − 1. Here, Ii+1 /Ii is a principal ideal of A/Ii . Remember that A/It is
quasi-excellent, but A/I_0 is not so. Therefore, there exists i such that A/I_{i+1} is quasi-excellent, but A/I_i is not so.
Replacing A/Ii and Ii+1 /Ii with A and I respectively, we may assume that
(A1) I is the principal ideal generated by some x ≠ 0, that is, I = (x).
(1-2) We put

    F = {J | A/J is not quasi-excellent}.

Since F contains (0), the set F is not empty and there exists a maximal element J_0 in F.
Replacing A/J_0 by A, we may assume that
(A2) if J ≠ (0), then A/J is quasi-excellent.
(1-3) By Theorem 4.1, A is not a G-ring. There exist prime ideals P and Q of A such
that P ⊃ Q, and the generic fiber of
AP /QAP −→ (AP /QAP )∧
is not geometrically regular. On the other hand, if Q ≠ (0), the above map is a regular
homomorphism by (A2). Therefore we know Q = (0), that is, A is an integral domain.
Since quasi-excellent rings are Nagata [9, Theorem 78], we know that A/xA is a Nagata
ring. Since the lifting property holds for Nagata rings by Marot [8], A is a Nagata domain.
Let Ā be the integral closure of A in Q(A). Then Ā is module-finite over A.
By Greco's theorem [3, Theorem 3.1], Ā is not quasi-excellent, since A is not so. Here
Ā is xĀ-adically complete and Ā satisfies (A2). Replacing A with Ā, we further assume
that
(A3) A is a Nagata normal domain.
(1-4) Since A/xA is quasi-excellent, A/xA is a Nagata Z-ring. Since the lifting property
holds for Nagata Z-rings by Nishimura-Nishimura [12, Theorem A], A is also a Nagata
Z-ring. Since A is a normal Z-ring, Â_P is a local normal domain for any prime ideal P of
A. Then by [10, Theorem 31.6], A is universally catenary. Thus we know
(A4) A is a Z-ring and universally catenary.
Step 2. We assume (A1), (A2), (A3) and (A4).
By Theorem 4.1, A is not a G-ring. By [9, Theorem 75], there exists a maximal ideal
m of A such that the homomorphism A_m → Â_m is not regular. Consider the fibers of
A_m → Â_m. By (A2), the fibers except for the generic fiber are geometrically regular. So
we concentrate on the generic fiber.
Let L be a finite algebraic extension of Q(A), where Q(A) is the field of fractions of A.
Let BL be the integral closure of A in L. Remark that BL is a finite A-module by (A3).
Consider the following fiber squares:

    L    =    L             −→  L ⊗_A Â_m      =  ∏_n L ⊗_{B_L} (B_L)_n^∧
    ↑         ↑                  ↑
    B_L  −→  B_L ⊗_A A_m   −→  B_L ⊗_A Â_m    =  ∏_n (B_L)_n^∧
    ↑         ↑                  ↑
    A    −→  A_m           −→  Â_m

Here n runs over all the maximal ideals of B_L lying over m.
We want to discuss whether L ⊗_{B_L} (B_L)_n^∧ is regular or not for L and n.
Recall that (B_L)_n/x(B_L)_n is a G-ring since A/xA is quasi-excellent. Let (B_L)_n^* be the
x(B_L)_n-adic completion of (B_L)_n. By the local lifting property for G-rings (Rotthaus [13]),

(5.1)    (B_L)_n^* is a G-ring.

Hence, the homomorphism (B_L)_n^* → ((B_L)_n^*)^∧ = (B_L)_n^∧ is regular and Sing((B_L)_n^*) is a closed
subset of Spec (B_L)_n^*. Therefore, L ⊗_{B_L} (B_L)_n^* is regular if and only if L ⊗_{B_L} (B_L)_n^∧ is
regular. For a finite algebraic extension L of Q(A), we put
    S_L = { q_n^* | q_n^* is a minimal prime ideal of Sing((B_L)_n^*) such that q_n^* ∩ B_L = (0), where n is a maximal ideal of B_L }.
Here, L ⊗BL (BL )∗n is not regular for some L and n, since A is not a G-ring. Therefore,
SL is not empty for some L. We put
(5.2)    h_0 = min{ ht Q | Q ∈ S_L for some L }.
Remember that A is a Z-ring by (A4). Since B_L is a finite A-module, B_L is also a
Z-ring. It is easy to see that (B_L)_n^∧ is the completion of both (B_L)_n and (B_L)_n^*. Since
(B_L)_n and (B_L)_n^* are Z-rings (cf. (5.1)),

(5.3)    (B_L)_n, (B_L)_n^* and (B_L)_n^∧ are local normal domains.

Therefore we know

(5.4)    h_0 ≥ 2.
We shall prove the following claim in the rest of Step 2.
Claim 5.2. Let p be a prime ideal of A. If ht p ≤ h0 , then Ap is excellent.
Let p be a prime ideal of A such that 0 < ht p ≤ h_0. By (A4), A is universally catenary.
It is enough to prove that A_p is a G-ring. By (A2) and [9, Theorem 75], it is enough to
show that the generic fiber of A_p → Â_p is geometrically regular. Let L be a finite algebraic
extension of Q(A), and B_L be the integral closure of A in L. Consider the following fiber
squares.
    L    =    L             −→  L ⊗_A Â_p      =  ∏_q L ⊗_{B_L} (B_L)_q^∧
    ↑         ↑                  ↑
    B_L  −→  B_L ⊗_A A_p   −→  B_L ⊗_A Â_p    =  ∏_q (B_L)_q^∧
    ↑         ↑                  ↑
    A    −→  A_p           −→  Â_p

Here q runs over all the prime ideals of B_L lying over p. Since A is normal, we have

(5.5)    0 < ht q = ht p ≤ h_0.
It is enough to show that L ⊗_{B_L} (B_L)_q^∧ is regular. Let n be a maximal ideal of B_L such
that n ⊃ q. Let q^* be a minimal prime ideal of q(B_L)_n^*. Since (B_L)_n → (B_L)_n^* is flat, we
have q^* ∩ B_L = q. Consider the commutative diagram:

    (B_L)_n^*  −→  ((B_L)_n^*)_{q^*}  −−→(β)  (((B_L)_n^*)_{q^*})^∧
        ↑               ↑ α                        ↑ α̂
    B_L  −→  (B_L)_n  −→  (B_L)_q      −→      (B_L)_q^∧

The map α as above is a flat local homomorphism. Since the closed fiber of α is of
dimension 0, we have dim (B_L)_q = dim ((B_L)_n^*)_{q^*}. In particular

(5.6)    ht q = ht q^*.
By (5.1), β is a regular homomorphism and Sing((B_L)_n^*) is a closed subset of Spec (B_L)_n^*.
Let c_n^* be the defining ideal of Sing((B_L)_n^*) satisfying c_n^* = √(c_n^*). We put T =
(((B_L)_n^*)_{q^*})^∧. Since β is a regular homomorphism, c_n^* T is a defining ideal of Sing(T).
Suppose

    c_n^* = q_{n,1}^* ∩ · · · ∩ q_{n,s}^*,

where the q_{n,i}^*'s are prime ideals of (B_L)_n^* such that q_{n,i}^* ⊅ q_{n,j}^* if i ≠ j.
One of the following three cases occurs:
Case 1. If none of q∗n,i ’s is contained in q∗ , then c∗n T = T .
Case 2. If q∗n,i = q∗ for some i, then c∗n T = q∗ T .
Case 3. Suppose that q∗n,1 , . . . , q∗n,t are properly contained in q∗ , and q∗n,t+1 , . . . , q∗n,s are
not contained in q∗ for some t satisfying 1 ≤ t ≤ s. Then, c∗n T = q∗n,1 T ∩ · · · ∩ q∗n,t T .
In any case as above, we can verify c_n^* T ∩ B_L ≠ (0) as follows. In Case 1, we have
c_n^* T ∩ B_L = B_L ≠ (0). In Case 2, we have q^* T ∩ B_L = q^* T ∩ (B_L)_n^* ∩ B_L = q^* ∩ B_L =
q ≠ (0) by (5.5). In Case 3, suppose that q_{n,i}^* is properly contained in q^*. Then by (5.5)
and (5.6), we have ht q_{n,i}^* < h_0. Since q_{n,i}^* is in Sing((B_L)_n^*), we have q_{n,i}^* ∩ B_L ≠ (0) by
the minimality of h_0.
Take 0 ≠ b ∈ c_n^* T ∩ B_L. Since c_n^* T is the defining ideal of Sing(T), T ⊗_{B_L} B_L[b^{-1}] is
regular. Since

    (B_L)_q^∧ ⊗_{B_L} B_L[b^{-1}]  −−→(α̂ ⊗ 1)  T ⊗_{B_L} B_L[b^{-1}]

is faithfully flat, (B_L)_q^∧ ⊗_{B_L} B_L[b^{-1}] is regular. Hence, (B_L)_q^∧ ⊗_{B_L} L is regular. We have
completed the proof of Claim 5.2.
Step 3. Here, we shall complete the proof of Theorem 5.1.
First of all, remember the following Rotthaus’ Hilfssatz (cf. [12, Theorem 1.9 and
Proposition 1.18] or originally [14]).
Theorem 5.3 (Rotthaus' Hilfssatz). Let B be a Noetherian ring and n ∈ Max B. Assume
that B is an xB-adically complete Nagata ring. We put

    Γ(n) = {γ | n ∈ γ ⊂ Max B, #γ < ∞}.

For γ ∈ Γ(n), we put S_γ = B \ ∪_{a∈γ} a and B_γ = S_γ^{-1} B. Consider the homomorphism
B_γ^* → B_n^* induced by B_γ → B_n, where ( )^* denotes the (x)-adic completion.
Let q_n^* be a minimal prime ideal of Sing(B_n^*). For each γ ∈ Γ(n), we put q_γ^* := q_n^* ∩ B_γ^*.
We define

    ∆_γ(x) := { Q ∩ B | Q ∈ Min_{\overline{(B_γ^*/q_γ^*)}}( \overline{(B_γ^*/q_γ^*)}/(x) ) },    ∆(x) := ∪_{γ∈Γ(n)} ∆_γ(x),

where \overline{(B_γ^*/q_γ^*)} is the normalization of B_γ^*/q_γ^* in its field of fractions.
Assume the following two conditions:
(i) for each γ ∈ Γ(n), ht q_γ^* > 0,
(ii) #∆(x) < ∞.
Then q_n^* ∩ B ≠ (0) is satisfied.
We refer the reader to [12] for the proof of this theorem; it is used in an essential way in our proof.
Now, we start to prove Theorem 5.1.
Let L be a finite algebraic extension of Q(A). Let B be the integral closure of A in L.
Let n be a maximal ideal of B. Suppose that q∗n is a minimal prime ideal of Sing(Bn∗ ) such
that
(5.7)
q∗n ∩ B = (0)
and ht q∗n = h0 . We remark that such L, B, n, q∗n certainly exist by the definition of h0
(see (5.2)).
We define Bn∗ , Bγ , Bγ∗ , q∗γ , ∆γ (x), ∆(x) as in Theorem 5.3. Recall that Bγ is a semi-local
ring satisfying Max(Bγ ) = {aBγ | a ∈ γ}. Since B is xB-adically complete, any maximal
ideal of B contains x. Since B_γ/xB_γ is isomorphic to B_γ^*/xB_γ^*, there exists a one-to-one
correspondence between γ and the set of maximal ideals of B_γ^*. Let n^* be the maximal
ideal of Bγ∗ corresponding to n, that is, n∗ = nBγ∗ .
In the rest of this proof, we shall prove the conditions (i) and (ii) in Theorem 5.3. Then,
it contradicts (5.7) and this completes the proof of Theorem 5.1.
Put C_γ = B_γ^*/q_γ^*. Since A is a Nagata ring, C_γ is a Nagata ring, too. Therefore, the
normalization \overline{C_γ} is a finite C_γ-module. Note that x ∉ q_γ^* since q_n^* ∩ B = (0). Take
Q ∈ Min_{\overline{C_γ}}( \overline{C_γ}/x\overline{C_γ} ). Since C_γ is a universally catenary Nagata Z-ring*, we find that
Q ∩ C_γ is a minimal prime ideal of xC_γ. Therefore, ∆_γ(x) defined in Theorem 5.3 coincides
with

    { Q̃ ∩ B | Q̃ ∈ Min_{B_γ^*}( B_γ^*/(q_γ^* + xB_γ^*) ) }.

* Assume that C is a universally catenary Nagata Z-domain. Let \overline{C} be the normalization of C. Let Q
(resp. P) be a prime ideal of \overline{C} (resp. C). It is easy to see that if Q ∩ C = P, then ht Q = ht P.
Here, we shall prove the following claim.
Claim 5.4.
(1) For each γ ∈ Γ(n), ht q∗γ = h0 .
(2) For any Q ∈ △(x), ht Q = h0 + 1.
First, we shall prove (1). Consider the following homomorphisms:

(5.8)    (B_γ^*)_{n^*}  −−→(f)  B_n^*  −−→(g)  B_n^∧,

where n^* = nB_γ^*. By the local lifting property for G-rings, B_γ^* is a G-ring. Since
B_n^∧ = ((B_γ^*)_{n^*})^∧, gf is a regular homomorphism. Since g is faithfully flat, f is a regular
homomorphism by [9, (33.B)] or [10, Theorem 32.1]. Since q_γ^*(B_γ^*)_{n^*} = q_n^* ∩ (B_γ^*)_{n^*},

(5.9)    q_γ^*(B_γ^*)_{n^*} is a minimal prime ideal of Sing((B_γ^*)_{n^*}).

Furthermore, q_n^* is a minimal prime ideal of q_γ^* B_n^*. Then, we have

    ht q_γ^* = ht q_γ^*(B_γ^*)_{n^*} = ht q_n^* = h_0.
The assertion (1) has thus been proved. Since h0 ≥ 2 as in (5.4), the condition (i) in
Theorem 5.3 follows from the above assertion (1).
Next, we prove (2). Take Q ∈ ∆_γ(x) for some γ ∈ Γ(n).
We shall prove that B_γ^* is normal. For a ∈ γ, let a^* denote the maximal ideal aB_γ^* of
B_γ^*. Here B is a normal Z-ring, since A is. Therefore, B_a^∧ is a normal local domain. By the
local lifting property for G-rings, (B_γ^*)_{a^*} is a G-ring. Therefore, (B_γ^*)_{a^*} → ((B_γ^*)_{a^*})^∧ = B_a^∧ is
a regular homomorphism. Thus, (B_γ^*)_{a^*} is normal. Hence, we know that B_γ^* is normal.
By definition, there exists Q̃ ∈ Min_{B_γ^*}( B_γ^*/(q_γ^* + xB_γ^*) ) such that Q = Q̃ ∩ B. Since
B_γ → B_γ^* is flat and QB_γ^* = Q̃, we have

(5.10)    ht Q̃ = dim (B_γ^*)_{Q̃} = dim (B_γ)_Q = ht Q.

Since B_γ^* is a Noetherian normal ring, B_γ^* is the direct product of finitely many integrally
closed domains. Remember that B_γ^* is universally catenary. Then we have

(5.11)    ht Q̃ = ht q_γ^* + 1.

By (5.10), (5.11) and the assertion (1) together, we obtain ht Q = h_0 + 1.
We have completed the proof of Claim 5.4.
Let q be a prime ideal of B such that ht q ≤ h_0. Since A is normal, ht(q ∩ A) = ht q ≤ h_0.
Hence, A_{(q∩A)} is excellent by Claim 5.2. Therefore, B_q is excellent. Let us remember that
B/xB is quasi-excellent. Then by Lemma 4.4, there exists b_q ∈ B \ q such that there exists
an alteration covering

(5.12)    { X_{b_q,i}  −−→(φ_{b_q,i})  Spec(B[b_q^{-1}]) }_i

such that

(5.13)    φ_{b_q,i}^{-1}( Spec( B[b_q^{-1}]/xB[b_q^{-1}] ) ) ⊂ Reg(X_{b_q,i})

for each i. We put

    Ω = ∪_q Spec(B[b_q^{-1}]) ⊂ Spec(B).
By definition, Ω is an open set that contains all the prime ideals of B of height less than or
equal to h0 . Hence, the complement Ωc contains only finitely many prime ideals of height
h_0 + 1. If ∆(x) is contained in Ω^c, then ∆(x) must be a finite set by Claim 5.4 (2).
Thus, it suffices to prove

    ∆(x) ⊂ Ω^c.

Assume the contrary. Take Q ∈ ∆(x) ∩ Ω. Since Q ∈ ∆(x), there exists Q̃ ∈
Min_{B_γ^*}( B_γ^*/(q_γ^* + xB_γ^*) ) such that Q̃ ∩ B = Q for some γ ∈ Γ(n). Since Q ∈ Ω, we
find that Q ∈ Spec(B[b_q^{-1}]) for some q with ht q ≤ h_0.
Since (5.12) is an alteration covering, we have a proper surjective generically finite
map π : V → Spec(B[b_q^{-1}]), together with an open covering V = ∪_i V_i and a morphism
ψ_i : V_i → X_{b_q,i} for each i with commutative diagrams as in (3.1). Consider the following
diagram:

    V_i′         ⊂              V′
    ↓ ψ_i′                       ↓ f′
    X′_{b_q,i}  −−→(φ′_{b_q,i})  Spec(B_γ^*[b_q^{-1}])  −→  Spec(B_γ^*)
    ↓ g                          ↓ h                        ↓
    X_{b_q,i}   −−→(φ_{b_q,i})   Spec(B[b_q^{-1}])      −→  Spec(B)

We put ( )′ = ( ) ×_{Spec B} Spec B_γ^*. Then, both Q̃ and q_γ^* are contained in Spec(B_γ^*[b_q^{-1}]).
Since f′ : V′ → Spec(B_γ^*[b_q^{-1}]) is proper surjective, there exist ξ_1, ξ_2 ∈ V′ such that
f′(ξ_1) = Q̃, f′(ξ_2) = q_γ^*, and ξ_1 is a specialization of ξ_2. Since V′ = ∪_i V_i′ is a Zariski open
covering, we have ξ_1 ∈ V_i′ for some i. Since V_i′ is closed under generalization, both ξ_1 and
ξ_2 are contained in V_i′. Here, we put η_1 = ψ_i′(ξ_1), η_2 = ψ_i′(ξ_2) and ζ_1 = g(ψ_i′(ξ_1)).
Since x ∈ Q and φ_{b_q,i}(ζ_1) = Q, we know that O_{X_{b_q,i},ζ_1} is a regular local ring by (5.13).
Since B → B_γ^* is flat, O_{X_{b_q,i},ζ_1} → O_{X′_{b_q,i},η_1} is a flat local homomorphism. Its closed fiber
is the identity since the maximal ideal of O_{X_{b_q,i},ζ_1} contains x. Thus, O_{X′_{b_q,i},η_1} is a regular
local ring. Since η_2 is a generalization of η_1, O_{X′_{b_q,i},η_2} is also a regular local ring. Here,
(B_γ^*)_{q_γ^*} → O_{X′_{b_q,i},η_2} is flat, since h(q_γ^*) = (0) by (5.7). Hence (B_γ^*)_{q_γ^*} is a regular local
ring. It contradicts (5.9). The condition (ii) in Theorem 5.3 has been proved.
We have completed the proof of Theorem 5.1.
Now we obtain the following corollary.
Corollary 5.5. Let A be an excellent ring with an ideal I ⊂ A. Then the I-adic completion
of A is an excellent ring. In particular, if A is excellent, then the formal power series ring
A[[x1 , . . . , xn ]] is excellent.
Proof. Let Â^I denote the I-adic completion of A. We want to prove that Â^I is excellent.
As A is excellent by assumption, A/I ≅ Â^I/IÂ^I is also excellent. In particular, it is
quasi-excellent. By Theorem 5.1, Â^I is quasi-excellent. So it suffices to prove that Â^I is
universally catenary. For this, let I = (t_1, ..., t_m) with t_i ∈ A. By [15, Théorème 1.12], the
formal power series ring A[[T_1, ..., T_m]] is universally catenary. As there is an isomorphism
[10, Theorem 8.12]

    Â^I ≅ A[[T_1, ..., T_m]]/(T_1 − t_1, ..., T_m − t_m),

we see that Â^I is universally catenary and hence excellent.
Finally, assume that A is excellent. Then the polynomial algebra A[x1 , . . . , xn ] is excel-
lent, and the (x1 , . . . , xn )-adic completion of A[x1 , . . . , xn ] is A[[x1 , . . . , xn ]]. Hence it is
excellent. This proves the corollary.
Acknowledgement. The authors are grateful to Professor O. Gabber for permitting us to
write this paper. The authors are also grateful to Professor J. Nishimura for listening to
the proof of the main theorem and for kindly providing useful suggestions.
References
[1] M. Brodmann and C. Rotthaus, Über den regulären Ort in ausgezeichneten Ringen, Math. Z. 175
(1980), 81–85.
[2] T. G. Goodwillie and S. Lichtenbaum, A cohomological bound for the h-topology, Amer. J. Math. 123
(2001) 425–443.
[3] S. Greco, Two theorems on excellent rings, Nagoya Math. J. 60 (1976), 139–149.
[4] S. Greco, A note on universally catenary rings, Nagoya Math. J. 87 (1982), 95–100.
[5] A. Grothendieck, Élements de Géométrie Algébrique IV, Publications Math. I.H.E.S. 24 (1965).
[6] R. Hartshorne, Algebraic Geometry, Springer-Verlag (1977).
[7] L. Illusie, Y. Laszlo, and F. Orgogozo, Travaux de Gabber sur l’uniformisation locale et la cohomologie
étale des schémas quasi-excellents, Astérisque 363-364 (2014).
[8] J. Marot, Sur les anneaux universellement japonais, Bull. Soc. Math. France 103 (1975), 103–111.
[9] H. Matsumura, Commutative algebra, Second edition, Mathematics Lecture Note Series 56, Benjamin/Cummings Publishing Co., Inc., Reading, Mass., 1980.
[10] H. Matsumura, Commutative ring theory, Cambridge Studies in Advanced Mathematics 8, Cambridge
University Press, Cambridge (1986).
[11] J. Nishimura, Ideal-adic completion of Noetherian rings, J. Math. Kyoto Univ. 21 (1981), 153–169.
[12] J. Nishimura and T. Nishimura, Ideal-adic completion of Noetherian rings II, Algebraic Geometry
and Commutative Algebra in Honor of Masayoshi Nagata (1987) 453–467.
[13] C. Rotthaus, Komplettierung semilokaler quasiausgezeichneter Ringe, Nagoya Math. J. 76 (1979),
173–180.
[14] C. Rotthaus, Zur Komplettierung ausgezeichneter Ringe, Math. Ann. 253 (1980), 213–226.
[15] H. Seydi, Anneaux henséliens et conditions de chaînes, Bull. Soc. Math. France 98 (1970), 9–31.
[16] A letter from O. Gabber to Y. Laszlo (2007).
Department of Mathematics, School of Science and Technology, Meiji University, Higashimata 1-1-1, Tama-ku, Kawasaki 214-8571, Japan
E-mail address: kurano@isc.meiji.ac.jp
Department of Mathematics, College of Humanities and Sciences, Nihon University,
Setagaya-ku, Tokyo 156-8550, Japan
E-mail address: shimomotokazuma@gmail.com
Automated Speed and Lane Change Decision
Making using Deep Reinforcement Learning
Carl-Johan Hoel∗† , Krister Wolff∗ , Leo Laine∗†
arXiv:1803.10056v1 [cs.RO] 14 Mar 2018
∗ Chalmers University of Technology, 412 96 Göteborg, Sweden
† Volvo Group Trucks Technology, 405 08 Göteborg, Sweden
Email: {carl-johan.hoel, krister.wolff, leo.laine}@chalmers.se
Abstract—This paper introduces a method, based on deep
reinforcement learning, for automatically generating a general
purpose decision making function. A Deep Q-Network agent was
trained in a simulated environment to handle speed and lane
change decisions for a truck-trailer combination. In a highway
driving case, it is shown that the method produced an agent
that matched or surpassed the performance of a commonly used
reference model. To demonstrate the generality of the method, the
exact same algorithm was also tested by training it for an overtaking case on a road with oncoming traffic. Furthermore, a novel
way of applying a convolutional neural network to high level
input that represents interchangeable objects is also introduced.
I. INTRODUCTION
Automating heavy vehicles has the potential to significantly increase productivity, see e.g. [1]. One of the challenges
in developing autonomous vehicles is that they need to make
decisions in complex environments, ranging from highway
driving to less structured areas inside cities. To predict all
possible traffic situations, and code how to handle them, would
be a time consuming and error prone work, if at all feasible.
Therefore, a method that can learn a suitable behavior from its
own experiences would be desirable. Ideally, such a method
should be applicable to all possible environments. This paper
introduces how a specific machine learning algorithm can be
applied to automated driving, here tested on a highway driving
case and an overtaking case.
Traditionally, rule-based gap acceptance models have been common for making lane changing decisions, see for example [2] or
[3]. More recent methods often consider the utility of a potential lane change. Either the utility of changing to a specific lane
is estimated, see [4] or [5], or the total utility (also called the
expected return) over a time horizon is maximized by solving a
partially observable Markov decision process (POMDP), see
[6] or [7]. Two commonly used models for speed control and
to decide when to change lanes are the Intelligent driver model
(IDM) [8] and the Minimize overall braking induced by lane
changes (MOBIL) model [9]. The combination of these two
models was used as a baseline when evaluating the method
presented in this paper.
A common problem with most existing methods for autonomous driving is that they target one specific driving case.
For example, the ones mentioned above are designed for
highway driving, but if a different case is considered, such as
driving on a road with oncoming traffic, a completely different
method is required. In an attempt to overcome this issue, we
introduced a more general approach in [10]. This method is
based on a genetic algorithm, which is used to automatically
train a general-purpose driver model that can handle different
cases. However, the method still requires some features to be
defined manually, in order to adapt its rules and actions to
different driving cases.
During the last years, the field of deep learning has made
revolutionary progress in many areas, see e.g. [11] or [12]. By
combining deep neural networks with reinforcement learning,
artificial intelligence has evolved in different domains, from
playing Atari games [13], to continuous control [14], reaching
a super human performance in the game of Go [15] and beating
the best chess computers [16]. Deep reinforcement learning
has also successfully been used for some special applications
in the field of autonomous driving, see e.g. [17] and [18].
This paper introduces a method based on a Deep Q-Network
(DQN) agent [13] that, from training in a simulated environment, automatically generates a decision making function. To
the best of the authors’ knowledge, this method has not
previously been applied to this problem. The main benefit of
the presented method is that it is general, i.e. not limited to
a specific driving case. For highway driving, it is shown that
it can generate an agent that performs better than the combination of the IDM and MOBIL model. Furthermore, with no
tuning, the same method can be applied to a different setting,
in this case driving on a road with oncoming traffic. Two
important differences compared to our previous approach in
[10] is that the method presented in this paper does not need
any hand crafted features and that the training is significantly
faster. Moreover, this paper introduces a novel way of using
a convolutional neural network architecture by applying it to
high level sensor data, representing interchangeable objects,
which improves and speeds up the learning process.
This paper is organized as follows: The DQN algorithm
and how it was implemented is described in Sect. II. Next,
Sect. III gives an overview of the IDM and the MOBIL model,
and describes how the simulations were set up. In Sect. IV,
the results are presented, followed by a discussion in Sect. V.
Finally the conclusions are given in Sect. VI.
II. SPEED AND LANE CHANGE DECISION MAKING
In this paper, the task of deciding when to change lanes
and to control the speed of the vehicle under consideration
(henceforth referred to as the ego vehicle) is viewed as a
reinforcement learning problem. A Deep Q-Network (DQN)
agent [13] is used to learn the Q-function, which describes how
beneficial different actions are in a given state. The state of the
surrounding vehicles and the available lanes are known to the
agent, and its objective is to choose which action to take, which
for example could be to change lanes, brake or accelerate. The
details of the procedure are described in this section.
A. Reinforcement learning
Reinforcement learning is a branch of machine learning,
where an agent acts in an environment and tries to learn a
policy, π, that maximizes a cumulative reward function. The
policy defines which action, a, to take, given a state, s. The
state of the environment will then change to a new state, s0 ,
and return a reward, r. The reinforcement learning problem is
often modeled as a Markov Decision Process (MDP), which
is defined as the tuple hS, A, T, R, γi, where S is the set of
states, A is the set of actions, T : S × A → S is the state
transition probability function, R : S × A × S → R is the
reward function and γ ∈ [0, 1] is a discount factor. An MDP
satisfies the Markov property, which means that the probability
distribution of the future states depends only on the current
state and action, and not on the history of previous states. At
every time step, t, the goal of the agent is to maximize the
future discounted return, defined as

    R_t = Σ_{k=0}^{∞} γ^k r_{t+k},   (1)
where rt+k is the reward given at step t + k. See [19] for
a comprehensive introduction to reinforcement learning and
MDPs.
B. Deep Q-Network
In the reinforcement learning algorithm called Q-learning
[20], the agent tries to learn the optimal action value function,
Q∗ (s, a). This function is defined as the maximum expected
return when being in a state, s, taking some action, a, and
then following the optimal policy, π ∗ . This is described by
    Q*(s, a) = max_π E[ R_t | s_t = s, a_t = a, π ].   (2)

The optimal action value function follows the Bellman equation, see [20],

    Q*(s, a) = E[ r + γ max_{a'} Q*(s', a') | s, a ],   (3)

which is based on the intuition that if the values of Q*(s', a')
are known, the optimal policy is to select an action, a', that
maximizes the expected value of Q*(s', a').
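As an aside, this update idea can be illustrated by a minimal tabular Q-learning step before any function approximation is introduced (this sketch is not part of the method studied in this paper, which uses a neural network approximator; the learning rate alpha is an illustrative choice):

import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # One tabular Q-learning step: move Q[s, a] towards r + gamma * max_a' Q[s', a'].
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Usage on a toy table with 5 states and 3 actions.
Q = np.zeros((5, 3))
Q = q_learning_update(Q, s=0, a=2, r=1.0, s_next=1)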
In the DQN algorithm [13], Q-learning is combined with
deep learning. A deep neural network with weights θ is used
as a function approximator of the optimal value function,
i.e. Q(s, a; θ) ≈ Q∗ (s, a). The network is then trained by
adjusting its parameters, θi , at every iteration, i, to minimize
the error in the Bellman equation. This is typically done with
stochastic gradient descent, where mini-batches with size M of
experiences, described by the tuple et = (st , at , rt , st+1 ), are
drawn from an experience replay memory. The loss function
at iteration i is defined as
    L_i(θ_i) = E_M[ (r + γ max_{a'} Q(s', a'; θ_i^−) − Q(s, a; θ_i))^2 ].   (4)
Here, θi− are the network parameters used to calculate the
target at iteration i. In order to make the learning process
more stable, these parameters are held fixed for a number of
iterations and then periodically updated with the latest version
of the trained parameters, θi . The trade off between exploration
and exploitation is handled by following an -greedy policy.
This means that a random action is selected with probability
, and otherwise the action with the highest value is chosen.
For further details on the DQN algorithm, see [13].
Q-learning and the DQN algorithm are known to overestimate the action value function under some conditions. A
further development is the Double DQN algorithm [21], which
aims to decouple the action selection and action evaluation.
This is done by updating Eq. 4 to
    L_i(θ_i) = E_M[ (r + γ Q(s', arg max_a Q(s', a; θ_i); θ_i^−) − Q(s, a; θ_i))^2 ].   (5)
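To make the decoupling in Eq. 5 concrete, the sketch below (our own illustration in Python/PyTorch, not the authors' code; handling of terminal transitions is omitted) computes the Double DQN target for a mini-batch: the online network selects the action, the target network evaluates it.

import torch

def double_dqn_targets(online_net, target_net, rewards, next_states, gamma=0.99):
    # Action selection with the online parameters, evaluation with the target parameters.
    with torch.no_grad():
        best_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_values = target_net(next_states).gather(1, best_actions).squeeze(1)
    return rewards + gamma * next_values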
C. Implementation
The Double DQN algorithm, outlined above, was applied to
control a vehicle in the two test cases, described in Sect. III-B.
The details of the implementation are presented below.
1) MDP formulation: Since the intention of other road
users cannot be observed, the speed and lane change decision
making problem can be modeled as a partially observable
Markov decision process (POMDP) [22]. To address the partial
observability, the POMDP can be approximated by an MDP
with a k-Markov approximation, where the state consists of
the last k observations, st = (ot−k+1 , ot−k+2 , . . . , ot ) [13].
However, for the method presented in this paper, it proved
sufficient to set k = 1, i.e. to simply use the last observation.
Two different agents were investigated in this study. They
both used the same state input, s, defined as a vector with 27
elements, which contained information on the ego vehicle’s
speed, available lanes and states of the 8 surrounding vehicles.
Table I shows the configuration of the state (see Sect. III for
details on how the traffic environment was simulated).
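For illustration, a state vector following Table I could be assembled as in the sketch below (our own helper function; the argument names are hypothetical, and the surrounding-vehicle data is assumed to be given as relative position, relative speed and signed lane offset):

def build_state(ego_speed, v_ego_max, lane_left, lane_right, vehicles,
                delta_s_max, v_max):
    # vehicles: list of 8 tuples (rel_pos, rel_speed, lane_offset), where lane_offset
    # is the signed number of lanes between vehicle i and the ego vehicle.
    state = [ego_speed / v_ego_max,
             1.0 if lane_left else 0.0,
             1.0 if lane_right else 0.0]
    for rel_pos, rel_speed, lane_offset in vehicles:
        state += [rel_pos / delta_s_max,
                  rel_speed / v_max,
                  0.5 * lane_offset]  # gives -1, -0.5, 0, 0.5, 1 as in Table I
    return state                      # 3 + 3*8 = 27 elements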
The first agent only controlled the lane changing decisions,
whereas the speed was automatically controlled by the IDM.
This gave a direct comparison to the lane change decisions
taken by the MOBIL model, in which the speed also was controlled by the IDM (see Sect. III-A for details). The other agent
controlled both the lane changing decisions and the speed.
Here, the speed was changed by choosing between four different acceleration options: full brake (−9 m/s2 ), medium brake
(−2 m/s2 ), maintain speed (0 m/s2 ) and accelerate (+2 m/s2 ).
The action spaces of the two agents are given in Table II. When
a decision to change lanes was taken, the intended lane of the
lateral control model, described in Sect. III-B, was changed.
Both agents took decisions at an interval of ∆t = 1 s.
A simple reward function was used. Normally, at every time
step, a positive reward was given, equal to the normalized distance driven during that time step. This reward was calculated
as ∆d/∆d_max, where ∆d_max = ∆t · v_max^ego, and v_max^ego was the
maximum possible speed of the ego vehicle. However, if a
collision occurred, or the ego vehicle drove out of the road (it
TABLE I
State input vector used by the agents. s_1, s_2 and s_3 describe the state of the ego vehicle and the available lanes, whereas s_{3i+1}, s_{3i+2} and s_{3i+3}, for i = 1, 2, ..., 8, represent the state of the surrounding vehicles.

s_1       Normalized ego vehicle speed, v_ego / v_ego^max
s_2       1, if there is a lane to the left; 0, otherwise
s_3       1, if there is a lane to the right; 0, otherwise
s_{3i+1}  Normalized relative position of vehicle i, ∆s_i / ∆s_max
s_{3i+2}  Normalized relative speed of vehicle i, ∆v_i / v_max
s_{3i+3}  −1, if vehicle i is two lanes to the right of the ego vehicle;
          −0.5, if one lane to the right; 0, if in the same lane;
          0.5, if one lane to the left; 1, if two lanes to the left
TABLE II
Action spaces of the two agents.

Agent1
a_1  Stay in current lane
a_2  Change lanes to the left
a_3  Change lanes to the right

Agent2
a_1  Stay in current lane, keep current speed
a_2  Stay in current lane, accelerate with −2 m/s²
a_3  Stay in current lane, accelerate with −9 m/s²
a_4  Stay in current lane, accelerate with 2 m/s²
a_5  Change lanes to the left, keep current speed
a_6  Change lanes to the right, keep current speed
could choose to change lanes to one that did not exist), a penalizing reward of −10 was given and the episode was terminated.
If the ego vehicle ended up in a near collision, defined as being
one vehicle length (4.8 m) from another vehicle, a reward
of −10 was also given, but the episode was not terminated.
Finally, to limit the number of lane changes, a reward of −1
was given when a lane changing action was chosen.
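A sketch of this reward logic is given below (our own reading of the rules above; in particular, whether the −10 and −1 terms replace or add to the distance term is not fully specified in the text, so this is one possible interpretation):

def step_reward(delta_d, delta_d_max, collided, off_road, near_collision, lane_change):
    # Collisions and driving off the road give -10; the caller terminates the episode.
    if collided or off_road:
        return -10.0
    # A near collision (within one vehicle length) also gives -10, without termination.
    if near_collision:
        return -10.0
    reward = delta_d / delta_d_max    # normalized distance driven during the time step
    if lane_change:
        reward -= 1.0                 # penalty that limits the number of lane changes
    return reward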
2) Neural network design: Two different neural network
architectures were investigated in this study. Both had 27 input
neurons, for the state described above. The final output layer
had 3 output neurons for Agent 1 and 6 output neurons for
Agent 2, where the value of neuron ni represented the value
function when choosing action ai , i.e. Q(s, ai ).
The first architecture was a standard fully connected neural
network (FCNN), with two hidden layers. Each layer consisted
of nhidden neurons, set to 512, and rectified linear units (ReLUs) were used as activation functions [23]. The final output
layer used a linear activation function.
The second architecture introduces a new way of applying
temporal convolutional neural networks (CNNs). CNNs are inspired by the structure of the visual cortex in animals. By their
architecture and weight sharing properties, they create a space
and shift invariance, and reduce the number of parameters to
be optimized. This has made them successful in the field of
computer vision, where they have been applied directly to low
level input, consisting of pixel values. For further details on
[Figure 1: network diagram, 27×1 input → convolutional layers (32 filters of size 3×1, stride 3, and 32 filters of size 1×32, stride 1) → max pooling → merge to 35×1 → fully connected 64×1 → output 3×1 or 6×1.]
Fig. 1. The second network architecture, which used convolutional neural
networks and max pooling to create translational invariance between the input
from different surrounding vehicles. See the main text for further explanations.
CNNs, see e.g. [12].
In this study, a CNN architecture was applied to a high level
input, which described the state of identical, interchangeable
objects, see Fig. 1. Two convolutional layers were applied
to the part of the state vector that represented the relative
position, speed and lane of the surrounding vehicles. The first
layer had nconv1 filters, set to 32, with filter size 3, stride 3 and
ReLU activation functions. This structure created an output of
8 × 32 signals. Since there were 3 neighbouring input neurons
that described the properties of each of the 8 surrounding
vehicles, by setting the filter size and stride to 3, each row of
the output only depended on one vehicle. The second layer
had nconv2 filters, set to 32, with filter size 1, stride 1 and
ReLU activation functions. This further aggregated knowledge
about each vehicle in every row of the 8 × 32 output signal.
After the second convolutional layer, a max pooling layer was
added. This structure created a translational invariance of the
input that described the relative state of the different vehicles,
i.e. the result would be the same if e.g. the input describing
vehicle 3 and vehicle 4 switched position in the input vector.
This translational invariance, in combination with the reduced
number of optimizable parameters, simplified and sped up the
training of the network. See Sect. V for a further discussion
on why a CNN architecture was beneficial in this setting.
The output of the max pooling layer was then concatenated
with the rest of the input vector. A fully connected layer
with nfull units, here set to 64, and ReLu activation functions
followed. Finally, the output layer had 3 or 6 neurons, both
with linear activation functions.
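A minimal PyTorch sketch of this architecture is given below (our own reconstruction from the description and Fig. 1; class and variable names are illustrative, and details such as weight initialization are not specified in the paper):

import torch
import torch.nn as nn

class ConvQNet(nn.Module):
    # Sketch of the convolutional Q-network: 27 inputs, conv layers over the
    # 8 surrounding vehicles, max pooling, merge with the 3 ego/lane inputs.
    def __init__(self, n_actions: int = 6):
        super().__init__()
        self.conv1 = nn.Conv1d(1, 32, kernel_size=3, stride=3)  # one filter position per vehicle
        self.conv2 = nn.Conv1d(32, 32, kernel_size=1, stride=1)
        self.fc = nn.Linear(32 + 3, 64)   # merged vehicle features + ego/lane inputs
        self.out = nn.Linear(64, n_actions)

    def forward(self, s):                # s: (batch, 27)
        ego = s[:, :3]                   # ego speed and lane availability
        veh = s[:, 3:].unsqueeze(1)      # (batch, 1, 24): 3 values per surrounding vehicle
        h = torch.relu(self.conv1(veh))  # (batch, 32, 8)
        h = torch.relu(self.conv2(h))    # (batch, 32, 8)
        h = torch.max(h, dim=2).values   # max pooling over the 8 vehicles -> (batch, 32)
        h = torch.relu(self.fc(torch.cat([h, ego], dim=1)))
        return self.out(h)               # Q(s, a) for each action, linear output

q_net = ConvQNet(n_actions=6)
q_values = q_net(torch.rand(4, 27))      # shape (4, 6)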
3) Training details: The network was trained by using
the Double DQN algorithm, described in Sect. II-B. During
training, the policy followed an -greedy behavior, where
decreased linearly from start to end over N−end iterations.
A discount factor, γ, was used for future rewards. The target
network was updated every Nupdate iterations by cloning the
online parameters, i.e. setting θi− = θi , at the updating step.
Learning started after Nstart iterations and a replay memory of
size Mreplay was used. Mini-batches of training samples with
size Mmini were uniformly drawn from the replay memory
and the network was updated using the RMSProp algorithm
[24], with a learning rate of η. In order to improve the
stability, error clipping was used by limiting the error term
r + γ Q(s', arg max_a Q(s', a; θ_i); θ_i^−) − Q(s, a; θ_i) to [−1, 1].
TABLE III
Hyperparameters used to train the DQN agents.

Discount factor, γ                          0.99
Learning start iteration, N_start           50,000
Replay memory size, M_replay                500,000
Initial exploration constant, ε_start       1
Final exploration constant, ε_end           0.1
Final exploration iteration, N_ε-end        500,000
Learning rate, η                            0.00025
Mini-batch size, M_mini                     32
Target network update frequency, N_update   30,000
The hyperparameters of the training are summarized in Table III. Due to the computational complexity, a systematic grid
search was not performed. Instead, the hyperparameter values
were selected from an informal search, based upon the values
given in [13] and [21].
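The linear annealing of the exploration constant can be sketched as follows (our own helper; whether the schedule is counted from iteration 0 or from the learning start iteration is an assumption here):

def epsilon(iteration, eps_start=1.0, eps_end=0.1, n_eps_end=500000):
    # Linear decrease from eps_start to eps_end over n_eps_end iterations (Table III).
    if iteration >= n_eps_end:
        return eps_end
    return eps_start + (iteration / n_eps_end) * (eps_end - eps_start)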
The state space, described above, did not provide any information on where in an episode the agent was at a given
time step, e.g. if it was in the beginning or close to the end
(Sect. III-B describes how an episode was defined). The reason
for this choice was that the goal was to train an agent that performed well in highway driving of infinite length. Therefore,
the longitudinal position was irrelevant. However, at the end of
a successful episode, the future discounted return, Rend , was
0. To avoid that the agent learned this, the last experience eend
was not stored in the experience replay memory. Thereby, the
agent was tricked to believe that the episode continued forever.
III. SIMULATION SETUP
A highway case was used as the main way to test the
algorithm outlined above. To evaluate the performance of the
agent, a reference model, consisting of the IDM and MOBIL
model, was used. This section briefly summarizes the reference
model, describes how the simulations were set up and how
the performance was measured. Moreover, in order to show
the versatility of the proposed method, it was further tested in
a secondary overtaking case with oncoming traffic, which is
also described here.
A. Reference model
The IDM [8] is widely used in transportation research to
model the longitudinal dynamics of a vehicle. With this model,
the speed of the ego vehicle, v, varies according to
    v̇ = a [ 1 − (v/v_0)^δ − (d*(v, ∆v)/d)^2 ],   (6)

    d*(v, ∆v) = d_0 + vT + v∆v/(2√(ab)).   (7)
The vehicle’s speed depends on the distance to the vehicle in
front, d, and the speed difference (approach rate), ∆v. Table IV
shows the parameters that are used to tune the model. The
values were taken from the original paper [8].
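A direct Python implementation of Eqs. (6)-(7) could look as follows (a sketch with the Table IV values as defaults; the desired speed v0 must be supplied by the caller):

import math

def idm_acceleration(v, d, delta_v, v0, a=0.7, b=1.7, d0=2.0, T=1.6, delta=4):
    # v: ego speed, d: gap to the vehicle in front, delta_v: approach rate, v0: desired speed.
    d_star = d0 + v * T + v * delta_v / (2.0 * math.sqrt(a * b))   # Eq. (7)
    return a * (1.0 - (v / v0) ** delta - (d_star / d) ** 2)       # Eq. (6)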
The MOBIL model [9] makes decisions on when to change
lanes by maximizing the acceleration of the vehicle in consideration and the surrounding vehicles. For a lane change to be
allowed, the induced acceleration of the following car in the
new lane, an , must fulfill a safety criterion, an > −bsafe . To
TABLE IV
IDM and MOBIL model parameters.

Minimum gap distance, s_0            2 m
Safe time headway, T                 1.6 s
Maximal acceleration, a              0.7 m/s²
Desired deceleration, b              1.7 m/s²
Acceleration exponent, δ             4
Politeness factor, p                 0
Changing threshold, a_th             0.1 m/s²
Maximum safe deceleration, b_safe    4 m/s²
predict the acceleration of the ego and surrounding vehicles,
the IDM model is used. If the safety criterion is met, MOBIL
changes lanes if
    ã_e − a_e + p((ã_n − a_n) + (ã_o − a_o)) > a_th,   (8)
where ae , an and ao are the accelerations of the ego vehicle,
the trailing vehicle in the target lane, and the trailing vehicle
in the current lane, respectively, assuming that the ego vehicle
stays in its lane. Furthermore, ãe , ãn and ão are the corresponding accelerations if the lane change is carried out. The
politeness factor, p, controls how the effect on other vehicles
is valued. To perform a lane change, the collective acceleration
gain must be higher than a threshold, ∆ath . If there are lanes
available both to the left and to the right, the same criterion is
applied to both options. If both criteria are fulfilled, the option
with the highest acceleration gain is chosen. The parameter
values of the MOBIL model are shown in Table IV. They were
taken from the original paper [9], except for the politeness
factor, here set to 0. This setting provided a more fair comparison to the DQN agent, since then neither method considered
possible acceleration losses of the surrounding vehicles.
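The MOBIL decision for one candidate lane can be sketched as follows (our own helper; the accelerations are assumed to be predicted with the IDM, and choosing between a left and a right candidate is left to the caller):

def mobil_lane_change_ok(a_e, a_e_new, a_n, a_n_new, a_o, a_o_new,
                         p=0.0, a_th=0.1, b_safe=4.0):
    # Safety criterion for the new follower, then the acceleration-gain criterion of Eq. (8).
    if a_n_new <= -b_safe:
        return False
    gain = (a_e_new - a_e) + p * ((a_n_new - a_n) + (a_o_new - a_o))
    return gain > a_th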
B. Traffic simulation
1) Highway case: A highway case was used as the main
way to test the method presented in this paper. This case
was similar to the one used in the previous study [10]. For
completeness, it is summarized below.
A three-lane highway was used, where the ego vehicle to be
controlled was surrounded by 8 other vehicles. The ego vehicle
consisted of a 16.5 m long truck-semitrailer combination and
the surrounding vehicles were normal 4.8 m long passenger
cars. These surrounding vehicles stayed in their initial lanes
and followed the IDM model longitudinally. Overtaking was
allowed both on the left and the right side of another vehicle.
An example of an initial traffic situation is shown in Fig. 2a.
Although normal highway driving mostly consists of traffic
with rather constant speeds and small accelerations, occasionally vehicles brake hard, or even at the maximum of their
capability to avoid collisions. Drivers can also decide to suddenly increase their speed rapidly. Therefore, in order for the
agent to learn to keep a safe inter-vehicle distance, such quick
speed changes need to be included in the training process.
The surrounding vehicles in the simulations were assigned
different desired speed trajectories. To speed up the training
of the agent, these trajectories contain frequent speed changes,
[Figure 2, panels (a) and (b).]
Fig. 2. (a) Example of an initial traffic situation for the highway case, which was used as the main way to test the algorithm. (b) Example of a traffic situation
for a secondary overtaking case with oncoming traffic, showing the situation 10 seconds from the initial state. In both cases, the ego vehicle (truck-trailer
combination) is shown in green and black. The arrows represent the velocities of the vehicles.
[Figure 3: speed (m/s) versus position (m) for the randomly generated speed trajectories.]
Fig. 3. Example of six different randomly generated speed trajectories, defined
for different positions along the highway. The solid lines are fast trajectories,
applied to vehicles starting behind the ego vehicle, whereas the dashed lines
are slow trajectories, applied to vehicles starting in front of the ego vehicle.
TABLE V
Parameters of the simulated highway case.

Maximum initial vehicle spread, d_long          200 m
Minimum initial inter-vehicle distance, d_∆     25 m
Front vehicle minimum speed, v_min^+            16.7 m/s (60 km/h)
Front vehicle maximum speed, v_max^+            23.6 m/s (85 km/h)
Rear vehicle minimum speed, v_min^−             26.4 m/s (95 km/h)
Rear vehicle maximum speed, v_max^−             33.3 m/s (120 km/h)
Initial ego vehicle speed, v_init^ego           25 m/s (90 km/h)
Maximum ego vehicle speed, v_max^ego            25 m/s (90 km/h)
Episode length, d_max                           800 m
which occurred more often than during normal highway driving. Some examples are shown in Fig. 3.
The ego vehicle initially started in the middle lane, surrounded by 8 other vehicles. These were randomly positioned
in the lanes, within dlong longitudinally and with a minimum inter-vehicle distance d∆ . The initial and maximum ego
ego
ego
vehicle speed was vinit
and vmax
respectively. Vehicles that
were positioned in front of the ego vehicle were assigned
+
+
slower speed trajectories, in the range [vmin
, vmax
], whereas
vehicles placed behind the ego vehicle were assigned faster
−
−
speed trajectories, in the range [vmin
, vmax
]. Episodes where
two vehicles were placed too close together with a large speed
difference, thus causing an unavoidable collision, were deleted.
Each episode was dmax long. The values of the mentioned
parameters are presented in Table V. Further details on the
setup of the simulations, and how the speed trajectories were
generated, are described in [10].
2) Overtaking case: In order to illustrate the generality of
the method presented in this paper, a secondary overtaking
case, including two-way traffic, was also tested. Fig. 2b shows
an example of this case. The ego vehicle started in the right
ego
lane, with an initial speed of vinit
, set to 25 m/s. Another
vehicle, which followed a random slow speed profile (defined
above), was placed 50 m in front of the ego vehicle. Two
oncoming vehicles, also following slow speed profiles, were
placed in the left, oncoming lane, at a random distance between 300 and 1100 m in front of the ego vehicle.
3) Vehicle dynamics and lateral control: In both the highway and the overtaking case, the dynamics of the vehicles
were simulated by using simple kinematic models. A lane
following two-point visual control model [25] was used to
control the vehicles laterally. As mentioned in Sect. II-C, when
the agent decided to change lanes, the setpoint of this model
was changed to the new desired lane. The same procedure was
used if the MOBIL model decided to change lanes.
C. Performance index
In order to evaluate how the DQN agent performed compared to the reference driver model (presented in Sect. III-A)
in a specific episode of the highway case, a performance index
p̃ was defined as
    p̃ = (d/d_max)(v̄/v̄_ref).   (9)
Here, d is the distance driven by the ego vehicle (limited by a
collision or the episode length), dmax is the episode length, v̄
is the average speed of the ego vehicle and v̄ref is the average
speed when the reference model controlled the ego vehicle
through the episode. With this definition, the distance driven
by the ego vehicle was the dominant limiting factor when a
collision occurred. However, if the agent managed to complete
the episode without collisions, the average speed determined
the performance index.
For the overtaking case, the reference model described
above cannot be used. Instead, the performance index was
simply defined as p̃_o = (d/d_max)(v̄/v̄_refIDM). Here, v̄_refIDM
was the mean speed of the ego vehicle when it was controlled
by the IDM through the same episode, i.e. when it did not
overtake the preceding vehicle.
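Both indexes reduce to the same simple computation (a sketch; for the overtaking case, v_mean_ref is the mean speed obtained when the IDM controls the ego vehicle through the same episode):

def performance_index(d, d_max, v_mean, v_mean_ref):
    # Eq. (9): distance ratio times mean-speed ratio relative to the reference.
    return (d / d_max) * (v_mean / v_mean_ref)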
IV. RESULTS
This section focuses on the results that were obtained for
the highway case, described in Sect. III-B, which was the main
way of testing the presented method. It also briefly explains
and discusses some characteristics of the results, whereas a
TABLE VI
Summary of the results of the different agents for the highway case and the overtaking case.

              Highway case                      Overtaking case
              Collision free  Performance       Collision free  Performance
              episodes        index, p̃          episodes        index, p̃_o
Agent1CNN     100%            1.01              100%            1.06
Agent2CNN     100%            1.10              100%            1.11
Agent1FCNN    98%             0.98              -               -
Agent2FCNN    86%             0.96              -               -
more general discussion follows in Sect. V. The results regarding the overtaking case are collected in Sect. IV-C.
As described in Sect. II, two agents with different action
spaces were investigated. Agent1 only decided when to change
lanes, whereas Agent2 decided both the speed and when to
change lanes. Furthermore, two different neural network architectures were used. In summary, the four variants were
Agent1FCNN , Agent1CNN , Agent2FCNN and Agent2CNN .
Five different runs were carried out for the four agent variants, where each run had different random seeds for the DQN
and the traffic simulation. The networks were trained for 2 million iterations (3 million for Agent2FCNN ), and at every 50,000
iterations, they were evaluated over 1,000 random episodes.
Note that these evaluation episodes were randomly generated,
and not presented to the agents during training. During the
evaluation runs, the performance index described in Sect. III-C
was used to compare the agents’ and the reference model’s
behaviour. The results are shown in Fig. 4, which presents the
average proportion, p̂, of successfully completed, i.e. collision
free, evaluation episodes of the four agent variants, and in
Fig. 5, which shows their average performance index, p̃. The
final performance of the fully trained agents is summarized in
Table VI.
A. Agents using a CNN
In Fig. 4, it can be seen that Agent1CNN solved all the
episodes already after 100,000 iterations, which is the first
evaluation after the training started at 50,000 iterations.
At this point it had learned to always stay in its lane, in
order to avoid collisions. Since it often got blocked by slower
vehicles, its average performance index was therefore lower
than 1 at this point, see Fig. 5. However, after around 600,000
iterations, Agent1CNN had learned to carry out lane changes
when necessary, and performed similar to the reference model.
Fig. 4 shows that Agent2CNN quickly figured out how to
change lanes and increase its speed to solve most of the
episodes. Its performance index was on par with the reference model early on during the training, at around 250,000
iterations, see Fig. 5. Then, at 400,000 iterations, it solved all
the evaluation episodes without collisions. With more training, there were still no collisions, but the performance index
increased and stabilized at 1.1.
Fig. 6 shows a histogram of the performance index for
1,000 evaluation episodes, which were run by the final trained
version of Agent1CNN and Agent2CNN . Since all the episodes
were completed without collisions, the performance index was
[Figure 4: proportion of solved episodes vs. training iteration for Agent1CNN, Agent2CNN, Agent1FCNN and Agent2FCNN.]
Fig. 4. Proportion of episodes solved without collisions by the different agents
during training.
[Figure 5: performance index vs. training iteration for the four agents and the reference model.]
Fig. 5. Performance index of the different agents during training.
[Figure 6: histograms of the performance index, with means 1.01 (Agent1CNN) and 1.10 (Agent2CNN).]
Fig. 6. Histogram of the performance index at the end of the training for
Agent1CNN (left) and Agent2CNN (right).
simply the speed ratio v̄/v̄ref . In the figure, it can be seen that
most often there was a small difference between the average
speed of the agents and the reference model. There were also
some outliers, which were both faster and slower than the
reference model. The explanation for these is that the episodes
were randomly generated, which meant that even a reasonable
action could get the ego vehicle into a situation where it got
locked in and could not overtake the surrounding vehicles.
Therefore, a small difference in behaviour could lead to such
situations for both the trained agents and the reference model,
which explains the outliers. Furthermore, the peak at index 1
for Agent2CNN is explained by that there were some episodes
when the lane in front of the ego vehicle was free from the
start. Then both the reference model and the agents drove at
the maximum speed through the whole episode.
To further illustrate the properties of the agents, and how
they developed during training, the percentage of chosen actions is shown in Fig. 7. For Agent1CNN , it can be seen that it
quickly figured out that changing lanes can lead to collisions,
and therefore it chose to stay in its lane almost 100% of
the time in the beginning. This explains why it completed
all the episodes already from the first evaluation point after
its training started. However, as training proceeded, it figured
out when it safely could change lanes, and thereby perform
better. At the end of its training, it chose to change lanes
[Figure 8: proportion of overtaking episodes solved without collisions vs. training iteration for Agent1CNN and Agent2CNN.]
Fig. 8. Proportion of overtaking episodes solved without collisions by the
different agents during training.
[Figure 7: proportion of chosen actions vs. training iteration for Agent1CNN (top) and Agent2CNN (bottom).]
Fig. 7. Top: proportion of actions chosen by Agent1CNN during training. Due
to the scale difference, a_1, i.e. stay in the current lane, is here left out. Bottom:
proportion of actions chosen by Agent2CNN during training. Both plots start
at 100,000 iterations, since that is the first evaluation point after that training
started at 50,000 iterations.
around 1% of the time. Agent2CNN first learned a short sighted
strategy, where it accelerated most of the time to obtain a
high immediate reward. This naturally led to many rear end
collisions. However, when its training proceeded, it learned to
control its speed by braking or idling, and to change lanes
when necessary. Reassuringly, both agents learned to change
lanes to the left and right equally often.
B. Agents using a FCNN
Both Agent1FCNN and Agent2FCNN failed to complete all the
evaluation episodes without collisions, see Fig. 4 and Table VI.
Naturally, Agent1FCNN solved a significantly higher fraction of
the episodes and performed better than Agent2FCNN , since it
only needed to decide when to change lanes, and not control
the speed. In the beginning, it learned to always stay in its
lane, and thereby solved all episodes without collisions, but
reached a lower performance index than the reference model,
see Fig. 5. With more training, it started to change lanes
and performed reasonably well, but sometimes caused collisions. Agent2FCNN performed significantly worse and collided
in 14% of the episodes by the end of its training. A longer
training run was carried out for Agent1FCNN and Agent2FCNN ,
but after 20 million iterations, the results were the same.
C. Overtaking case
In order to demonstrate the generality of the method presented in this paper, the same algorithm was applied to an
overtaking situation, described in Sect. III-B. Fig. 8, Fig. 9 and
Table VI show the proportion of successfully completed evaluation episodes, p̂, and the modified performance index, p̃o , of
Agent1CNN and Agent2CNN . By the end of the training, both
agents solved all episodes without collisions. Furthermore, in
all the episodes, the ego vehicle overtook the slower vehicle,
resulting in performance indexes above 1.
Fig. 9. Performance index of the different agents during training on the
overtaking case.
V. D ISCUSSION
In Table VI, it can be seen that both Agent1 and Agent2 with
the convolutional neural network architecture solved all the
episodes without collisions. The performance of Agent1CNN
was on par with the reference model. Since they both used the
IDM to control the speed, this result indicates that the trained
agent and the MOBIL model took lane changing decisions
with similar quality. However, when adding the possibility
for the agent to also control its speed, as in Agent2CNN , the
trained agent had the freedom to find better strategies and
could therefore outperform the reference model. This result
illustrates that for a better performance, lateral and longitudinal
decisions should not be completely separated.
As expected, using a CNN architecture resulted in a significantly better performance than a FCNN architecture, see e.g.
Table VI. The reason for this is, as mentioned in Sect. II-C, that
the CNN architecture creates a translational invariance of the
input that describes the relative state of the different vehicles.
This is reasonable, since it is desirable that the agent reacts
the same way to other vehicles’ behaviour, independently of
where they are positioned in the input vector. Furthermore,
since CNNs share weights, the complexity of the network is
reduced, which in itself speeds up the learning process. This
way of using CNNs can be compared to how they previously
were introduced and applied to low level input, often on pixels
in an image, where they provide a spatial invariance when
identifying features, see e.g. [26]. The results of this paper
show that it can also be beneficial to apply CNNs to high level
input of interchangeable objects, such as the state description
shown in Sect. II-C.
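To make the weight-sharing argument concrete, the sketch below applies a 1D
convolution over per-vehicle feature columns. It is a minimal PyTorch
illustration and not the network of Sect. II-C; the number of vehicles, the
number of features per vehicle and the hidden size are assumptions.

# Sketch: shared weights over interchangeable objects. Each surrounding
# vehicle contributes one column of features; a kernel-size-1 convolution
# applies the same weights to every column, and max-pooling makes the output
# invariant to where a given vehicle appears in the input.
import torch
import torch.nn as nn

N_VEHICLES = 10     # assumed number of surrounding vehicles in the state
N_FEATURES = 4      # assumed features per vehicle (rel. position, speed, lane, ...)

class SharedVehicleEncoder(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(N_FEATURES, hidden, kernel_size=1)  # shared weights
        self.pool = nn.AdaptiveMaxPool1d(1)                       # order-invariant

    def forward(self, x):            # x: (batch, N_FEATURES, N_VEHICLES)
        h = torch.relu(self.conv(x))
        return self.pool(h).squeeze(-1)                           # (batch, hidden)

encoder = SharedVehicleEncoder()
print(encoder(torch.randn(8, N_FEATURES, N_VEHICLES)).shape)      # torch.Size([8, 32])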
As mentioned in Sect. II-C, a simple reward function was
used. Naturally, the choice of reward function strongly affects
the resulting behaviour. For example, when no penalty was
given for a lane change, the agent found solutions where
it constantly demanded lane changes in opposite directions,
which made the vehicle drive in between two lanes. In this
study, a simple reward function worked well, but for other
cases a more careful design may be required. One way to
determine a reward function that mimics human preferences
is to use inverse reinforcement learning [27].
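As a concrete illustration of the kind of reward function discussed here, the
sketch below combines a speed term with a small lane-change penalty. The
weights and the normalization are assumptions made for this example and are
not the values used in the paper.

# Sketch: simple reward with a speed term and a lane-change penalty. Without
# the penalty, a policy can exploit the reward by constantly requesting lane
# changes in opposite directions, as described above.
def reward(ego_speed, max_speed, lane_change, collision,
           w_lane_change=0.1, w_collision=10.0):
    r = ego_speed / max_speed        # close to 1 when driving near the speed limit
    if lane_change:
        r -= w_lane_change
    if collision:
        r -= w_collision
    return r

print(reward(ego_speed=20.0, max_speed=25.0, lane_change=True, collision=False))
# approximately 0.7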
In a previous paper, [10], we presented a different method,
based on a genetic algorithm, that automatically can generate
a driving model for similar cases as described here. That
method is also general and it was shown that it is applicable to
different cases, but it requires some hand crafted features when
designing the structure of its rules. However, the method presented in this paper requires no such hand crafted features, and
instead uses the measured state, described in Table I, directly
as input. Furthermore, the method in [10] achieved a similar
performance when it comes to safety and average speed, but
the number of necessary training episodes was between one
and two orders of magnitude higher than for the method that
was investigated in this study. Therefore, the new method is
clearly advantageous compared to the previous one.
An important remark is that when training an agent by
using the method presented in this paper, the agent will only
be able to solve the type of situations that it is exposed to
in the simulations. It is therefore important that the design
of the simulated traffic environment covers the intended case.
Furthermore, when using machine learning to produce a decision making function, it is hard to guarantee functional safety.
Therefore, it is common to use an underlying safety layer,
which verifies the safety of a planned trajectory before it is
executed by the vehicle control system, see e.g. [28].
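A minimal sketch of such a safety layer is given below, assuming a discrete
action set and a simple gap check in the target lane; the thresholds and the
action encoding are illustrative assumptions and are not taken from [28].

# Sketch: veto a lane-change action if the gaps in the target lane are too
# small, otherwise pass the agent's action through unchanged.
def safety_filter(action, gap_ahead, gap_behind,
                  min_gap_ahead=10.0, min_gap_behind=15.0):
    if action in ("change_left", "change_right"):
        if gap_ahead < min_gap_ahead or gap_behind < min_gap_behind:
            return "stay_in_lane"
    return action

print(safety_filter("change_left", gap_ahead=8.0, gap_behind=30.0))  # stay_in_lane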
VI. C ONCLUSION AND FUTURE WORK
The main results of this paper show that a Deep Q-Network
agent can be trained to make decisions in autonomous driving,
without the need of any hand crafted features. In a highway
case, the DQN agents performed on par with, or better than, a
reference model based on the IDM and MOBIL model. Furthermore, the generality of the method was demonstrated by
applying it to a case with oncoming traffic. In both cases, the
trained agents handled all episodes without collisions. Another
important conclusion is that, for the presented method, applying a CNN to high level input that represents interchangeable
objects can both speed up the learning process and increase
the performance of the trained agent.
Topics for future work include further analyzing the generality of this method by applying it to other cases, such as crossings and roundabouts, and systematically investigating the impact of different parameters and network architectures. Moreover, it would be interesting to apply prioritized experience
replay [29], which is a method where important experiences
are repeated more frequently during the training process. This
could potentially improve and speed up the learning process.
ACKNOWLEDGMENT
This work was partially supported by the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program
(WASP), funded by Knut and Alice Wallenberg Foundation,
and partially by Vinnova FFI.
R EFERENCES
[1] D. J. Fagnant and K. Kockelman, “Preparing a nation for autonomous vehicles: opportunities, barriers and policy recommendations,” Transportation Research Part A: Policy and Practice, vol. 77, pp. 167 – 181, 2015.
[2] P. Gipps, “A model for the structure of lane-changing decisions,” Transportation Research Part B: Methodological, vol. 20, no. 5, pp. 403 –
414, 1986.
[3] K. I. Ahmed, “Modeling drivers’ acceleration and lane changing behavior,” Ph.D. dissertation, Massachusetts Institute of Technology, 1999.
[4] J. Eggert and F. Damerow, “Complex lane change behavior in the foresighted driver model,” in 2015 IEEE 18th International Conference on
Intelligent Transportation Systems, 2015, pp. 1747–1754.
[5] J. Nilsson et al., “If, when, and how to perform lane change maneuvers
on highways,” IEEE Intelligent Transportation Systems Magazine, vol. 8,
no. 4, pp. 68–78, 2016.
[6] S. Ulbrich and M. Maurer, “Towards tactical lane change behavior
planning for automated vehicles,” in 2015 IEEE 18th International Conference on Intelligent Transportation Systems, 2015, pp. 989–995.
[7] Z. N. Sunberg, C. J. Ho, and M. J. Kochenderfer, “The value of inferring
the internal state of traffic participants for autonomous freeway driving,”
in 2017 American Control Conference (ACC), 2017, pp. 3004–3010.
[8] M. Treiber, A. Hennecke, and D. Helbing, “Congested Traffic States
in Empirical Observations and Microscopic Simulations,” Phys. Rev. E,
vol. 62, pp. 1805–1824, 2000.
[9] A. Kesting, M. Treiber, and D. Helbing, “General lane-changing model
mobil for car-following models,” Transportation Research Record, vol.
1999, pp. 86–94, 2007.
[10] C. J. Hoel, M. Wahde, and K. Wolff, “An evolutionary approach to
general-purpose automated speed and lane change behavior,” in 2017
16th IEEE International Conference on Machine Learning and Applications (ICMLA), 2017, pp. 743–748.
[11] J. Schmidhuber, “Deep learning in neural networks: An overview,” Neural Networks, vol. 61, pp. 85 – 117, 2015.
[12] Y. LeCun, Y. Bengio, and G. E. Hinton, “Deep learning,” Nature, vol.
521, no. 7553, pp. 436–444, 2015.
[13] V. Mnih et al., “Human-level control through deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.
[14] T. P. Lillicrap et al., “Continuous control with deep reinforcement learning,” CoRR, vol. abs/1509.02971, 2015.
[15] D. Silver et al., “Mastering the game of go without human knowledge,”
Nature, vol. 550, pp. 354–359, 2017.
[16] D. Silver et al., “Mastering chess and shogi by self-play with a general
reinforcement learning algorithm,” CoRR, vol. abs/1712.01815, 2017.
[17] S. Shalev-Shwartz, S. Shammah, and A. Shashua, “Safe, multiagent, reinforcement learning for autonomous driving,” CoRR, vol.
abs/1610.03295, 2016.
[18] A. E. Sallab, M. Abdou, E. Perot, and S. Yogamani, “Deep reinforcement
learning framework for autonomous driving,” Electronic Imaging, vol.
2017, no. 19, pp. 70–76, 2017.
[19] R. S. Sutton and A. G. Barto, Introduction to Reinforcement Learning.
MIT Press, 1998.
[20] C. J. C. H. Watkins and P. Dayan, “Q-learning,” Machine Learning,
vol. 8, no. 3, pp. 279–292, 1992.
[21] H. v. Hasselt, A. Guez, and D. Silver, “Deep reinforcement learning with
double q-learning,” in Proceedings of the Thirtieth AAAI Conference on
Artificial Intelligence, 2016, pp. 2094–2100.
[22] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra, “Planning and
acting in partially observable stochastic domains,” Artif. Intell., vol. 101,
no. 1-2, pp. 99–134, 1998.
[23] V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in Proceedings of the 27th International Conference
on International Conference on Machine Learning, 2010, pp. 807–814.
[24] T. Tieleman and G. Hinton, “Lecture 6.5—RmsProp: Divide the gradient by a running average of its recent magnitude,” Coursera: Neural
Networks for Machine Learning, 2012.
[25] D. D. Salvucci and R. Gray, “A two-point visual control model of
steering,” Perception, vol. 33, no. 10, pp. 1233–1248, 2004.
[26] Y. LeCun et al., “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[27] S. Zhifei and E. M. Joo, “A review of inverse reinforcement learning
theory and recent advances,” in 2012 IEEE Congress on Evolutionary
Computation, 2012, pp. 1–8.
[28] S. Underwood et al., Truck Automation: Testing and Trusting the Virtual
Driver. Springer International Publishing, 2016, pp. 91–109.
[29] T. Schaul et al., “Prioritized experience replay,” CoRR, vol.
abs/1511.05952, 2015.
arXiv:1712.08397v1 [math.RA] 22 Dec 2017
KÄHLER–POISSON ALGEBRAS
JOAKIM ARNLIND AND AHMED AL-SHUJARY
Abstract. We introduce Kähler–Poisson algebras as analogues of algebras of
smooth functions on Kähler manifolds, and prove that they share several properties with their classical counterparts on an algebraic level. For instance, the
module of inner derivations of a Kähler–Poisson algebra is a finitely generated
projective module, and allows for a unique metric and torsion-free connection
whose curvature enjoys all the classical symmetries. Moreover, starting from
a large class of Poisson algebras, we show that every algebra has an associated
Kähler–Poisson algebra constructed as a localization. At the end, detailed
examples are provided in order to illustrate the novel concepts.
1. Introduction
Poisson manifolds and their geometry have been of great interest over the last
decades. Besides from being important from a purely mathematical point of view,
they are also fundamental to areas in mathematical and theoretical physics. Many
authors have studied the geometric and algebraic properties of symplectic and Poisson manifolds together in relation to concepts such as connections, local structure
and cohomology (see e.g. [Lic77, Wei83, Bry88, Hue90]). Moreover, there is a well
developed field of deformations of Poisson structures, perhaps most famous through
Kontsevich’s result on the existence of formal deformations [Kon03]. The ring of
smooth functions on a Poisson manifold is a Poisson algebra and it seems quite natural to ask to what extent geometric properties and concepts may be introduced in
an arbitrary Poisson algebra, without making reference to an underlying manifold.
The methods of algebraic geometry can readily be extended to Poisson algebras
(see e.g. [Ber79]); however, this will not be directly relevant to us as we shall
start by focusing on metric aspects. Our work is mainly motivated by the results in
[AHH12, AH14], where it is shown that one may reformulate the Riemannian geometry of an embedded Kähler manifold M entirely in terms of the Poisson structure
on the algebra of smooth functions on M . Let us also mention that the starting point
of our approach is quite similar to that of [Hue90] (although metric aspects were
not considered there).
In this note, we show that any Poisson algebra, fulfilling an “almost Kähler condition”, enjoys many properties similar to those of the algebra of smooth functions
on an almost Kähler manifold, opening up for a more metric treatment of Poisson
algebras. Such algebras will be called “Kähler–Poisson algebras”, and we show
that one may associate a Kähler–Poisson algebra to every algebra in a large class
of Poisson algebras. In particular, we prove the existence of a unique Levi-Civita
connection on the module generated by the inner derivations, and show that the
curvature operator has all the classical symmetries. As our approach is quite close
to the theory of Lie-Rinehart algebras, we start by introducing metric Lie-Rinehart
algebras and recall a few results on the Levi-Civita connection and the corresponding curvature.
In physics, the dynamics of quantum systems are found by using a correspondence between Poisson brackets of functions on the classical manifold, and the commutator of operators in the quantum system. Thus, understanding how properties
of the underlying manifold may be expressed in Poisson algebraic terms enables
both interpretation and definition of quantum mechanical quantities. For instance,
this has been used in the context of matrix models to identify emergent geometry
(cf. [BS10, AHH12]).
Let us briefly outline the contents of the paper. In Section 2 we recall a few
of the results from [AH14], in order to motivate and understand the introduction of a
Kähler type condition for Poisson algebras, and Section 3 explains how the theory
of Lie-Rinehart algebras can be extended to include metric aspects. In Section 4,
we define Kähler–Poisson algebras and investigate their basic properties as well as
showing that one may associate a Kähler–Poisson algebra to an arbitrary Poisson
algebra in a large class of algebras. In Section 5 we derive a compact formula for the
Levi-Civita connection as well as introducing Ricci and scalar curvature. Section 6
presents a number of examples together with a few detailed computations.
Remark 1.1. We have become aware of the fact that the terminology Kähler–
Poisson structure (resp. Kähler–Poisson manifold ) is used for certain Poisson
structures on a complex manifold where the Poisson bivector is of type (1, 1) (see
e.g. [Kar02]), but we hope that this will not be a source of confusion for the reader.
2. Poisson algebraic formulation of almost Kähler manifolds
In [AH14] it was shown that the geometry of embedded almost Kähler manifolds
can be reformulated entirely in the Poisson algebra of smooth functions. As we shall
develop an algebraic analogue of this fact, let us briefly recall the main construction.
Let (Σ, ω) denote an n-dimensional symplectic manifold and let g be a metric on
Σ. Furthermore, let us assume that x : (Σ, g) → (Rm , ḡ) is an isometric embedding
of Σ into Rm (with the metric ḡ), and write
$$p \mapsto x(p) = \bigl(x^1(p), x^2(p), \ldots, x^m(p)\bigr).$$
The results in [AH14] state that the Riemannian geometry of Σ may be formulated
in terms of the Poisson algebra generated by the embedding coordinates x1 , . . . , xm .
These results hold true as long as there exists a non-zero function γ ∈ C ∞ (Σ) such
that
$$(2.1)\qquad \gamma^2 g^{ab} = \theta^{ap}\theta^{bq} g_{pq}$$
where θab and gab denote the components of the Poisson bivector and the metric
in local coordinates {ua }na=1 , respectively. If (Σ, ω, g) is an almost Kähler manifold
then it follows from the compatibility condition ω(X, Y ) = g(X, J(Y )) (where J
denotes the almost complex structure on Σ) that relation (2.1) holds with γ = 1.
In local coordinates, the isometric embedding is characterized by
$$g_{ab} = \bar g_{ij}\,\partial_a x^i\,\partial_b x^j,$$
and the Poisson bracket is computed as
$$\{f, h\} = \theta^{ab}(\partial_a f)(\partial_b h).$$
Note that in the above and following formulas, indices i, j, k, . . . run from 1 to m
and indices a, b, c, . . . run from 1 to n.
Defining D : Tp Rm → Tp Rm as
$$D(X) \equiv D^i{}_j X^j \partial_i = \frac{1}{\gamma^2}\{x^i, x^k\}\bar g_{kl}\{x^j, x^l\}\bar g_{jm} X^m \partial_i$$
for X = X i ∂i ∈ Tp Rm , one computes
$$D(X)^i = \frac{1}{\gamma^2}\theta^{ab}(\partial_a x^i)(\partial_b x^k)\bar g_{kl}\,\theta^{pq}(\partial_p x^j)(\partial_q x^l)\bar g_{jm} X^m
= \frac{1}{\gamma^2}\theta^{ab}\theta^{pq} g_{bq}(\partial_a x^i)(\partial_p x^j)\bar g_{jm} X^m = g^{ap}(\partial_a x^i)(\partial_p x^j)\bar g_{jm} X^m,$$
by using (2.1). Hence, the map D is identified as the orthogonal projection onto
Tp Σ, seen as a subspace of Tp Rm . Having the projection operator at hand, one
may directly proceed to develop the theory of submanifolds. For instance, the
Levi-Civita connection ∇ on Σ is given by
$$\nabla_X Y = D\bigl(\bar\nabla_X Y\bigr),$$
where X, Y ∈ Γ(T Σ) and $\bar\nabla$ is the Levi-Civita connection on (Rm , ḡ). In the
particular case (but generically applicable, by Nash’s theorem [Nas56]) when ḡ is
the Euclidean metric, the above formula reduces to
$$\bigl(\nabla_X Y\bigr)^i = \frac{1}{\gamma^4}\sum_{j,k,l,n=1}^{m}\{x^i, x^k\}\{x^j, x^k\}\,X^l\{x^l, x^n\}\{Y^j, x^n\}.$$
As we intend to develop an analogous theory for Poisson algebras, without any
reference to a manifold, we would like to reformulate (2.1) in terms of Poisson algebraic expressions. Using that $g_{ab} = \bar g_{ij}(\partial_a x^i)(\partial_b x^j)$ and $\{x^i, x^k\} = \theta^{ab}(\partial_a x^i)(\partial_b x^k)$,
one derives
$$\gamma^2 g^{ab} = \theta^{ap}\theta^{bq} g_{pq}
\;\Rightarrow\; \gamma^2\delta^a_c = \theta^{ap}\theta^{bq} g_{pq} g_{bc}
\;\Rightarrow\; \gamma^2\theta^{ar} = \theta^{ap}\theta^{bq} g_{pq} g_{bc}\,\theta^{cr}$$
$$\;\Rightarrow\; \gamma^2\{x^i, x^j\} = (\partial_a x^i)(\partial_r x^j)\,\theta^{ap}\theta^{bq}\theta^{cr}\,\bar g_{kl}(\partial_p x^k)(\partial_q x^l)\,\bar g_{mn}(\partial_b x^m)(\partial_c x^n)$$
$$\;\Rightarrow\; \gamma^2\{x^i, x^j\} = -\{x^i, x^k\}\bar g_{kl}\{x^l, x^n\}\bar g_{nm}\{x^m, x^j\},$$
which is equivalent to the statement that
$$(2.2)\qquad \gamma^2\{f, h\} = -\{f, x^i\}\bar g_{ij}\{x^j, x^k\}\bar g_{kl}\{x^l, h\}$$
for all f, h ∈ C ∞ (Σ). Given γ 2 , ḡij and x1 , . . . , xm , the above equation makes
sense in an arbitrary Poisson algebra. The main purpose of this paper is to study
algebras which satisfy such a relation.
3. Metric Lie-Rinehart algebras
The idea of modeling the algebraic structures of differential geometry in a commutative algebra is quite old. We shall follow a pedestrian approach, where we assume
that a (commutative) algebra A is given (corresponding to the algebra of functions),
together with an A-module g (corresponding to the module of vector fields) which
is also a Lie algebra and has an action on A as derivations. Under appropriate assumptions on the ingoing objects, such systems have been studied by many authors
over the years, see e.g. [Her53, Koz60, Pal61, Rin63, Nel67, Hue90]. Our starting
point is the definition given by G. Rinehart [Rin63]. In the following, we let the
field K denote either R or C.
Definition 3.1 (Lie-Rinehart algebra). Let A be a commutative K-algebra and let
g be an A-module which is also a Lie algebra over K. Given a map ω : g → Der(A),
the pair (A, g) is called a Lie-Rinehart algebra if
(3.1)
ω(aα)(b) = a ω(α)(b)
(3.2)
[α, aβ] = a[α, β] + ω(α)(a) β,
for α, β ∈ g and a, b ∈ A. (In most cases, we will leave out ω and write α(a) instead
of ω(α)(a).)
Let us point out some immediate examples of Lie-Rinehart algebras.
Example 3.2. Let A be an algebra and let g = Der(A) be the A-module of derivations of A. It is easy to check that Der(A) is a Lie algebra with respect to composition of derivations, i.e.
[α, β](a) = α(β(a)) − β(α(a)).
The pair (A, Der(A)) is a Lie-Rinehart algebra with respect to the action of elements
of Der(A) as derivations.
Example 3.3. Let A = C ∞ (M ) be the algebra (over R) of smooth functions on
a manifold M , and let g = X (A) be the A-module of vector fields on M . With
respect to the standard action of a vector field as a derivation of C ∞ (M ), the pair
(C ∞ (M ), X (A)) is a Lie-Rinehart algebra.
Morphisms of Lie-Rinehart algebras are defined as follows.
Definition 3.4. Let (A1 , g1 ) and (A2 , g2 ) be Lie-Rinehart algebras. A morphism
of Lie-Rinehart algebras is a pair of maps (φ, ψ), with φ : A1 → A2 an algebra
homomorphism and ψ : g1 → g2 a Lie algebra homomorphism, such that
ψ(aα) = φ(a)ψ(α) and φ α(a) = ψ(α) φ(a) ,
for all a ∈ A1 and α ∈ g1 .
A lot of attention has been given to the cohomology of the Chevalley–Eilenberg
complex consisting of alternating A-multilinear maps with values in a module
M . Namely, defining C k (g, M ) to be the A-module of alternating maps from gk
to an (A, g)-module M , one introduces the standard differential d : C k (g, M ) →
C k+1 (g, M ) as
$$(3.3)\qquad d\tau(\alpha_1, \ldots, \alpha_{k+1}) = \sum_{i=1}^{k+1}(-1)^{i+1}\alpha_i\bigl(\tau(\alpha_1, \ldots, \hat\alpha_i, \ldots, \alpha_{k+1})\bigr)
+ \sum_{i<j}^{k+1}(-1)^{i+j}\tau\bigl([\alpha_i, \alpha_j], \alpha_1, \ldots, \hat\alpha_i, \ldots, \hat\alpha_j, \ldots, \alpha_{k+1}\bigr),$$
where α̂i indicates that αi is not present among the arguments. The fact that
d ◦ d = 0 implies that one can construct the cohomology of this complex in analogy
with de Rahm cohomology of smooth manifolds. However, as we shall be more
interested in Riemannian aspects, it is natural to study the case when there exists
a metric on the module g. More precisely, we make the following definition.
Definition 3.5. Let (A, g) be a Lie-Rinehart algebra and let M be an A-module.
An A-bilinear form g : M × M → A is called a metric on M if it holds that
(1) g(m1 , m2 ) = g(m2 , m1 ) for all m1 , m2 ∈ M ,
(2) the map ĝ : M → M ∗ , given by ĝ(m1 ) (m2 ) = g(m1 , m2 ), is an A-module
isomorphism,
where M ∗ denotes the dual of M . We shall often refer to property (2) as the metric
being non-degenerate.
Definition 3.6. A metric Lie-Rinehart algebra (A, g, g) is a Lie-Rinehart algebra
(A, g) together with a metric g : g × g → A.
Let us introduce morphisms of metric Lie-Rinehart algebras as morphisms of Lie-Rinehart algebras that preserve the metric.
Definition 3.7. Let (A1 , g1 , g1 ) and (A2 , g2 , g2 ) be metric Lie-Rinehart algebras.
A morphism of metric Lie-Rinehart algebras is a morphism of Lie-Rinehart algebras
(φ, ψ) : (A1 , g1 ) → (A2 , g2 ) such that
φ g1 (α, β) = g2 ψ(α), ψ(β)
for all α, β ∈ g1 .
The theory of affine connections can readily be introduced, together with torsion-freeness and metric compatibility.
Definition 3.8. Let (A, g) be a Lie-Rinehart algebra and let M be an A-module.
A connection ∇ on M is a map ∇ : g → EndK (M ), written as α → ∇α , such that
(1) ∇aα+β = a∇α + ∇β
(2) ∇α (am) = a∇α m + α(a)m
for all a ∈ A, α, β ∈ g and m ∈ M .
Definition 3.9. Let (A, g) be a Lie-Rinehart algebra and let M be an A-module
with connection ∇ and metric g. The connection is called metric if
(3.4)
α g(m1 , m2 ) = g(∇α m1 , m2 ) + g(m1 , ∇α m2 )
for all α ∈ g and m1 , m2 ∈ M .
Definition 3.10. Let (A, g) be a Lie-Rinehart algebra and let ∇ be a connection
on g. The connection is called torsion-free if
∇α β − ∇β α − [α, β] = 0
for all α, β ∈ g.
As in differential geometry, one can show that there exists a unique torsion-free
and metric connection associated to the Riemannian metric. The first step involves
proving Kozul’s formula.
Proposition 3.11. Let (A, g, g) be a metric Lie-Rinehart algebra. If ∇ is a metric
and torsion-free connection on g then it holds that
$$(3.5)\qquad 2g\bigl(\nabla_\alpha\beta, \gamma\bigr) = \alpha\bigl(g(\beta, \gamma)\bigr) + \beta\bigl(g(\gamma, \alpha)\bigr) - \gamma\bigl(g(\alpha, \beta)\bigr)
+ g(\beta, [\gamma, \alpha]) + g(\gamma, [\alpha, \beta]) - g(\alpha, [\beta, \gamma])$$
for all α, β, γ ∈ g.
Proof. Starting from the right-hand-side of (3.5) and using the metric condition to
rewrite the first three terms as
α g(β, γ) = g(∇α β, γ) + g(β, ∇α γ),
together with the torsion-free condition to rewrite the last three terms as
g(β, [γ, α]) = g(β, ∇γ α) − g(β, ∇α γ),
immediately gives 2g(∇α β, γ).
By using Proposition 3.11 together with the fact that the metric is non-degenerate,
one obtains the following result.
Proposition 3.12. Let (A, g, g) be a metric Lie-Rinehart algebra. Then there
exists a unique metric and torsion-free connection on g.
Remark 3.13. The unique connection in Proposition 3.12 will be referred to as the
Levi-Civita connection of a metric Lie-Rinehart algebra.
Proof. For every α, β ∈ g, the right-hand-side of (3.5) defines a linear form ω ∈ g∗ via
$$\omega(\gamma) = \tfrac{1}{2}\alpha\bigl(g(\beta, \gamma)\bigr) + \tfrac{1}{2}\beta\bigl(g(\gamma, \alpha)\bigr) - \tfrac{1}{2}\gamma\bigl(g(\alpha, \beta)\bigr)
+ \tfrac{1}{2}g(\beta, [\gamma, \alpha]) + \tfrac{1}{2}g(\gamma, [\alpha, \beta]) - \tfrac{1}{2}g(\alpha, [\beta, \gamma]).$$
By assumption (see Definition 3.5), the metric induces an isomorphism map ĝ :
g → g∗ , which implies that there exists an element ∇α β = ĝ −1 (ω) ∈ g such
that g(∇α β, γ) = ω(γ). This shows that ∇α β exists for all α, β ∈ g such that
relation (3.5) is satisfied. Next, let us show that ∇ defines a connection on g, which
amounts to checking the four properties in Definition 3.8. This is a straight-forward
computation using (3.5) and the fact that, for instance,
g(∇aα β, γ) = g(a∇α β, γ)
for all γ ∈ g
implies that ∇aα β = a∇α β since the metric is non-degenerate. Let us illustrate
the computation with the following example. From (3.5) it follows that
2g(∇aα β, γ) = aα g(β, γ) + β g(γ, aα) − γ g(aα, β)
+ g(β, [γ, aα]) + g(γ, [aα, β]) − g(aα, [β, γ])
= aα g(β, γ) + aβ g(γ, α) + β(a)g(γ, α) − aγ g(α, β) − γ(a)g(α, β)
+ g(β, γ(a)α + a[γ, α]) + g(γ, −β(a)α + a[α, β]) − ag(α, [β, γ])
= 2ag(∇α β, γ) + β(a)g(γ, α) − γ(a)g(α, β) + γ(a)g(β, α) − β(a)g(γ, α)
= 2ag(∇α β, γ).
The remaining properties of a connection are proved in an analogous way. To show
that ∇ is metric, one again uses (3.5) to substitute g(∇α β, γ) and g(β, ∇α γ) and
find that
α g(β, γ) − g(∇α β, γ) − g(β, ∇α γ) = 0.
That the torsion-free condition holds follows from
g(∇α β, γ) − g(∇β α, γ) − g([α, β], γ) = 0,
which can be seen using (3.5). Hence, we conclude that there exists a metric and
torsion-free affine connection satisfying (3.5). Moreover, since the metric is nondegenerate, such a connection is unique. Finally, as every metric and torsion-free
connection on g satisfies (3.5) (by Proposition 3.11) we conclude that there exists
a unique metric and torsion-free connection on g.
In what follows, we shall recall some of the properties satisfied by a metric and
torsion-free connection. The differential geometric proofs go through with only
a change in notation needed, but we provide them here for easy reference, and
to adapt the formulation to our particular situation. We refer to [Koz60, Nel67]
for a nice overview of differential geometric constructions in modules over general
commutative algebras.
Following the usual definitions, we introduce the curvature as
(3.6)
R(α, β)γ = ∇α ∇β γ − ∇β ∇α γ − ∇[α,β] γ
as well as
R(α, β, γ) = R(α, β)γ
R(α, β, γ, δ) = g(α, R(γ, δ)β).
Let us also consider the extension of ∇ to multilinear maps T : g^k → A,
$$(\nabla_\beta T)(\alpha_1, \ldots, \alpha_k) = \beta\bigl(T(\alpha_1, \ldots, \alpha_k)\bigr) - \sum_{i=1}^{k} T\bigl(\alpha_1, \ldots, \nabla_\beta\alpha_i, \ldots, \alpha_k\bigr),$$
as well as to g-valued multilinear maps T : g^k → g,
$$(\nabla_\beta T)(\alpha_1, \ldots, \alpha_k) = \nabla_\beta\bigl(T(\alpha_1, \ldots, \alpha_k)\bigr) - \sum_{i=1}^{k} T\bigl(\alpha_1, \ldots, \nabla_\beta\alpha_i, \ldots, \alpha_k\bigr).$$
As in classical geometry, one proceeds to derive the Bianchi identities.
Proposition 3.14. Let ∇ be the Levi-Civita connection of a metric Lie-Rinehart
algebra (A, g, g) and let R denote the corresponding curvature. Then it holds that
(3.7)
(3.8)
R(α, β, γ) + R(γ, α, β) + R(β, γ, α) = 0,
∇α R (β, γ, δ) + ∇β R (γ, α, δ) + ∇γ R (α, β, δ) = 0,
for all α, β, γ, δ ∈ g.
Proof. The first Bianchi identity (3.7) is proven by acting with ∇γ on the torsion
free condition ∇α β − ∇β α − [α, β] = 0, and then summing over cyclic permutations
of α, β, γ. Since [[α, β], γ] + [[β, γ], α] + [[γ, α], β] = 0, the desired result follows.
The second identity
is obtained by a cyclic permutation (of α, β, γ) in R ∇α β −
∇β α − [α, β], γ, δ = 0. One has
0 = R ∇α β − ∇β α − [α, β], γ, δ + cycl.
= R(∇γ α, β, δ) + R(α, ∇γ β, δ) − R([α, β], γ, δ) + cycl.
On the other hand, one has
(∇γ R)(α, β, δ) = ∇γ R(α, β, δ) − R(∇γ α, β, δ)
− R(α, ∇γ β, δ) − R(α, β, ∇γ δ),
and substituting this into the previous equation yields
0 = ∇γ R(α, β, δ) − ∇γ R (α, β, δ) − R(α, β, ∇γ δ) − R([α, β], γ, δ) + cycl.
After inserting the definition of R, and using that [[α, β], γ] + cycl. = 0, the second
Bianchi identity follows.
Finally, one is able to derive the classical symmetries of the curvature tensor.
Proposition 3.15. Let ∇ be the Levi-Civita connection of a metric Lie-Rinehart
algebra (A, g, g) and let R denote the corresponding curvature. Then it holds that
(3.9)
R(α, β, γ, δ) = −R(β, α, γ, δ) = −R(α, β, δ, γ).
(3.10)
R(α, β, γ, δ) = R(δ, γ, α, β),
for all α, β, γ, δ ∈ g.
Proof. The identity R(α, β, γ, δ) = −R(α, β, δ, γ) follows immediately from the definition of R. Let us now prove that R(α, β, γ, δ) = −R(β, α, γ, δ). Starting from
γ(δ(a)) − δ(γ(a)) − [γ, δ](a) = 0 and letting a = g(α, β) yields
$$\gamma\Bigl[g(\nabla_\delta\alpha, \beta) + g(\alpha, \nabla_\delta\beta)\Bigr] - \delta\Bigl[g(\nabla_\gamma\alpha, \beta) + g(\alpha, \nabla_\gamma\beta)\Bigr]
- g(\nabla_{[\gamma,\delta]}\alpha, \beta) - g(\alpha, \nabla_{[\gamma,\delta]}\beta) = 0,$$
when using that ∇ is a metric connection; i.e τ (g(α, β)) = g(∇τ α, β) + g(α, ∇τ β)
for τ = γ, δ, [γ, δ]. A further expansion using the metric property gives
g(∇γ ∇δ α, β) + g(α, ∇γ ∇δ β) − g(∇δ ∇γ α, β) − g(α, ∇δ ∇γ β)
− g(∇[γ,δ] α, β) − g(α, ∇[γ,δ] β) = 0,
which is equivalent to
g(R(γ, δ)α, β) = −g(R(γ, δ)β, α).
Next, one can make use of equation (3.7) in Proposition 3.14, from which it follows
that
(3.11)
R(α, β, γ, δ) + R(α, δ, β, γ) + R(α, γ, δ, β) = 0.
It is a standard algebraic result that any quadri-linear map satisfying (3.9) and
(3.11) also satisfies (3.10) (see e.g. [Hel01]).
4. Kähler–Poisson algebras
In this section, we shall introduce a type of Poisson algebras, that resembles the
smooth functions on an (isometrically) embedded almost Kähler manifold, in such
a way that an analogue of Riemannian geometry may be developed. Namely, let
us consider a unital Poisson algebra (A, {·, ·}) and let {x1 , . . . , xm } be a set of
distinguished elements of A, corresponding to functions providing an embedding
into Rm , in the geometrical case. One may also consider the setting of algebraic
(Poisson) varieties where A is a finitely generated Poisson algebra and {x1 , . . . , xm }
denotes a set of generators. Our aim is to introduce equation (2.2) in A and
investigate just how far one may take the analogy with Riemannian geometry. After
introducing Kähler–Poisson algebras below, we will show that they are, in a natural
way, metric Lie-Rinehart algebras, which implies that the results of Section 3 can
be applied; in particular, there exists a unique torsion-free metric connection on
every Kähler–Poisson algebra. Note that Lie-Rinehart algebras related to Poisson
algebras have extensively been studied by Huebschmann (see e.g. [Hue90, Hue99]).
In Section 2 it was shown that the following identity holds on an almost Kähler
manifold:
$$(2.2)\qquad \gamma^2\{f, h\} = -\{f, x^i\}\bar g_{ij}\{x^j, x^k\}\bar g_{kl}\{x^l, h\}.$$
This equation is well-defined in a Poisson algebra, and we shall use it to define the
main object of our investigation.
Definition 4.1. Let A be a Poisson algebra over K and let {x1 , . . . , xm } ⊆ A.
Given gij ∈ A, for i, j = 1, . . . , m, such that gij = gji , we say that the triple
K = A, {x1 , . . . , xm }, g is a Kähler–Poisson-algebra if there exists η ∈ A such
that
$$(4.1)\qquad \sum_{i,j,k,l=1}^{m} \eta\,\{a, x^i\}\, g_{ij}\{x^j, x^k\}\, g_{kl}\{x^l, b\} = -\{a, b\}$$
for all a, b ∈ A.
Remark 4.2. From now on, we shall use the differential geometric convention that
repeated indices are summed over from 1 to m, and omit explicit summation symbols.
Given a Kähler–Poisson-algebra K, we let g denote the A-module generated by all
inner derivations, i.e.
g = {a1 {c1 , ·} + · · · + aN {cN , ·} : ai , ci ∈ A and N ∈ N}.
It is a standard fact that g is a Lie algebra over K with respect to
[α, β](a) = α β(a) − β α(a) .
The matrix g induces a bilinear symmetric form on g, defined by
(4.2)
g(α, β) = α(xi )gij β(xj ),
and we refer to g as the metric on g. To the metric g one may associate the map
ĝ : g → g∗ defined as
ĝ(α)(β) = g(α, β).
Proposition 4.3. If K = A, {x1 , . . . , xm }, g is a Kähler–Poisson-algebra then
the metric g is non-degenerate; i.e. the map ĝ : g → g∗ is a module isomorphism.
Proof. Let us first show that ĝ is injective; i.e. we will show that ĝ(α)(β) = 0,
for all β ∈ g, implies that α = 0. Thus, write α = αi {xi , ·}, and assume that
g(α, β) = 0 for all β ∈ g. In particular, we can choose β = η{c, xk }gkm {·, xm }, for
arbitrary c ∈ A, which implies that
0 = g(α, β) = ηαk {xk , xi }gij {c, xk }gkm {xj , xm }
= −αk η{xk , xi }gij {xj , xm }gmk {xk , c}.
Using the relation (4.1), one obtains
αk {xk , c} = 0
for all c ∈ A, which is equivalent to α = 0. This shows that ĝ is injective. Let us
now show that ĝ is surjective. Thus, let ω ∈ g∗ and set
α = ηω({xi , ·})gij {xj , ·} ∈ g,
which gives
ĝ(α)(ak {bk , ·}) = ηω({xi , ·})gij {xj , xl }glm ak {bk , xm }
= −ηak {bk , xm }gml {xl , xj }gji ω({xi , ·}).
Since ω is a module homomorphism one obtains
ĝ(α)(ak {bk , ·}) = ω(−ηak {bk , xm }gml {xl , xj }gji {xi , ·})
= ω(ak {bk , ·}),
by using (4.1), which proves that every element of g∗ is in the image of ĝ. We
conclude that ĝ is a module isomorphism.
Corollary 4.4. If (A, {x1 , . . . , xm }, g) is a Kähler–Poisson algebra then (A, g, g)
is a metric Lie-Rinehart algebra.
Proof. It is easy to check that (A, g) satisfies the conditions of a Lie-Rinehart
algebra, and Proposition 4.3 implies that the metric is non-degenerate. Hence,
(A, g, g) is a metric Lie-Rinehart algebra.
Let us now introduce some notation for Kähler–Poisson algebras. Thus, we set
P ij = {xi , xj }
P i (a) = {xi , a},
for a ∈ A, as well as
Dij = ηP i k P jk = η{xi , xl }glk {xj , xk }
Di (a) = ηP k (a)Pk i = η{xk , a}gkl {xl , xi },
and note that Dij = Dji . With respect to this notation, (4.1) can be stated as
Di (a)Pi (b) = {a, b}.
(4.3)
The metric will be used to lower indices in analogy with differential geometry. E.g.
P i j = P ik gkj
Di j = Dik gkj .
Furthermore, one immediately derives the following useful identities
(4.4)
Dij Pj (a) = P i (a),
P ij Dj (a) = P i (a) and Di j Djk = Dik .
by using (4.1).
There is a natural embedding ι : g → Am , given by
ι(ai {bi , ·}) = ai {bi , xk }ek ,
where $\{e_k\}_{k=1}^m$ denotes the canonical basis of the free module $A^m$. Moreover, g
defines a bilinear form on $A^m$ via
g(X, Y ) = X i gij Y j
for X = X i ei ∈ Am and Y = Y i ei ∈ Am , and we introduce the map D : Am → Am
by setting
D(X) = Di j X j ei
for X = X i ei ∈ Am .
Proposition 4.5. The map D : Am → Am is an orthogonal projection; i.e.
D2 (X) = D(X) and g(D(X), Y ) = g(X, D(Y ))
for all X, Y ∈ Am .
Proof. First, it is clear that D is an endomorphism of Am . It follows immediately
from (4.4) that
D2 (X) = Di j Dj k X k ei = Di j Djl glk X k ei = Dil glk X k ei = Di k X k ei = D(X).
Furthermore, using that Dij = Dji one finds that
g D(X), Y = Di j X j gik Y k = X j Dil glj gik Y k = X j glj Dli gik Y k
= X j gjl Dl k Y k = g X, D(Y ) ,
which completes the proof.
From Proposition 4.5 we conclude that
T A = im(D)
is a finitely generated projective module. As a corollary, we prove that g is a finitely
generated projective module by showing that g is isomorphic to T A.
Proposition 4.6. The map ι : g → Am is an isomorphism from g to T A.
Proof. First, it is clear from the definition that ι is a module homomorphism.
Considered as a submodule of Am , elements of T A can be characterized by the fact
that D(X) = X for all X ∈ T A. Thus, by showing that
$$D\bigl(\iota(a_k\{b^k, \cdot\})\bigr) = D^i{}_j\, a_k\{b^k, x^j\}\, e_i = -a_k D^i{}_j P^j(b^k)\, e_i = -a_k P^i(b^k)\, e_i = \iota(a_k\{b^k, \cdot\})$$
it follows that ι(ak {bk , ·}) ∈ T A. Let us now show that ι is injective; assume that
ι(ak {bk , ·}) = 0, which implies that
ak {bk , xi } = 0
for i = 1, . . . , m.
Next, for arbitrary c ∈ A, we write
ak {bk , c} = −ηak {bk , xi }gij P jl glm {xm , c},
by using (4.1). Since ak {bk , xi } = 0, one obtains ak {bk , c} = 0 for all c ∈ A.
To prove that ι is surjective, we start from an arbitrary X = X i ei ∈ T A, and
note that
ι X i gij Di (·) = X i gij Dik ek = D(X) = X
by using that D(X) = X for all X ∈ T A. Hence, we may conclude that ι is an
isomorphism from g to T A.
Corollary 4.7. g is a finitely generated projective module.
Note that the above result is clearly not dependent on whether or not the underlying Poisson algebra has the structure of a Kähler–Poisson algebra, as the definition
of g involves only inner derivations. Hence, as soon as the Poisson algebra admits the structure of a Kähler–Poisson algebra, it follows that the module of inner
derivations is projective. Furthermore, the fact that g is a projective module has
several implications for the underlying Lie-Rinehart algebra [Rin63, Hue90]. Next,
let us show that the derivations Di generate g as an A-module.
Proposition 4.8. The A-module g is generated by {D1 , . . . , Dm }.
Proof. First of all, it is clear that every element in the module generated by Di ,
written as
α(c) = αi Di (c) = ηαi {xi , xj }gjk {c, xk },
is an element of g. Conversely, let α ∈ g be an arbitrary element written as
$$\alpha(c) = \sum_N a_N\{b^N, c\}$$
for c ∈ A. Using the Kähler–Poisson condition (4.1) one may write this as
$$\alpha(c) = \sum_N a_N\{b^N, c\} = -\sum_N \eta\, a_N\{b^N, x^i\}\, g_{ij}\{x^j, x^k\}\, g_{kl}\{x^l, c\}
= \sum_N a_N\{b^N, x^i\}\, g_{ij}\, D^j(c),$$
which clearly lies in the module generated by {D1 , . . . , Dm }.
Thus, every α ∈ g may be written as α = αi Di for some αi ∈ A. It turns out that
this is a very convenient way of writing elements of g, which shall be extensively
used in the following. Note that if the Kähler–Poisson algebra comes from an almost
Kähler manifold M , then Di is quite close to a partial derivative on M in the sense
that (∂a xi )gik Dk (f ) = ∂a f , for f ∈ C ∞ (M ).
4.1. The trace of linear maps. As we shall be interested in both Ricci and scalar
curvature, which are defined using traces of linear maps, we introduce
$$(4.5)\qquad \operatorname{tr}(L) = g\bigl(L(D^i), D^j\bigr) D_{ij}$$
for an A-linear map L : g → g. This trace coincides with the ordinary trace on
$g^* \otimes_A g$; namely, consider
$$L = \sum_N \omega_N \otimes_A \alpha_N \in g^* \otimes_A g$$
as a linear map L : g → g in the standard way via
$$L(\beta) = \sum_N \omega_N(\beta)\,\alpha_N,$$
together with
$$\operatorname{tr}(L) = \sum_N \omega_N(\alpha_N).$$
Writing $\alpha_N = \alpha^N_i D^i$ one finds that
$$g\bigl(L(D^i), D^j\bigr) D_{ij} = \sum_N g\bigl(\omega_N(D^i)\,\alpha^N_k D^k, D^j\bigr) D_{ij} = \sum_N \omega_N(D^i)\,\alpha^N_k D^{kj} D_{ij}
= \sum_N \omega_N\bigl(\alpha^N_k D_i{}^{k} D^i\bigr) = \sum_N \omega_N\bigl(\alpha^N_k D^k\bigr) = \sum_N \omega_N(\alpha_N).$$
In particular, this implies that the trace defined via (4.5) is independent of the
Kähler–Poisson structure.
4.2. Morphisms of Kähler–Poisson algebras. As Kähler–Poisson algebras are
also metric Lie-Rinehart algebras, we shall require that a morphism of Kähler–
Poisson algebras is also a morphism of metric Lie-Rinehart algebras (as defined
in Section 3). However, as the definition of a Kähler–Poisson also involves the
choice of a set of distinguished elements, we will require a morphism to respect
the subalgebra generated by these elements. To this end, we start by making the
following definition.
Definition 4.9. Given a Kähler–Poisson algebra (A, {x1 , . . . , xm }, g), let Afin ⊆ A
denote the subalgebra generated by {x1 , . . . , xm }.
Equipped with this definition, we introduce morphisms of Kähler–Poisson algebras
in the following way.
Definition 4.10. Let K = (A, {x1 , . . . , xm }, g) and K′ = (A′ , {y 1 , . . . , y m′ }, g ′ ) be
Kähler–Poisson algebras together with their corresponding modules of derivations
g and g′ , respectively. A morphism of Kähler–Poisson algebras is a pair of maps
(φ, ψ), with φ : A → A′ and ψ : g → g′ , such that (φ, ψ) is a morphism of the
metric Lie-Rinehart algebras (A, g, g) and (A, g′ , g ′ ) and φ is a Poisson algebra
homomorphism such that φ(Afin ) ⊆ A′fin .
Note that if the algebras are finitely generated such that A = Afin and A′ = A′fin
(which is the case in many examples), the condition φ(Afin ) ⊆ A′fin is automatically
satisfied. Although a morphism of Kähler–Poisson algebras is given by a choice of
two maps φ and ψ, it is often the case that φ determines ψ in the following sense.
Proposition 4.11. Let (φ, ψ) : (A, {x1 , . . . , xm }, g) → (A′ , {y 1 , . . . , y m′ }, g ′ ) be a
morphism of Kähler–Poisson algebras such that for all α′ ∈ g′
α′ φ(a) = 0 ∀ a ∈ A ⇒ α′ = 0
then
$$\psi\bigl(a\{b, \cdot\}_A\bigr) = \phi(a)\{\phi(b), \cdot\}_{A'}.$$
Proof. Let (φ, ψ) : (A, {x1 , . . . , xm }, g) → (A′ , {y 1 , . . . , y m′ }, g ′ )
be a morphism of Kähler–Poisson algebras fulfilling the assumption above. Since φ
is a Poisson algebra homomorphism, one obtains for α = a{b, ·}A
$$\phi\bigl(\alpha(c)\bigr) = \phi\bigl(a\{b, c\}_A\bigr) = \phi(a)\{\phi(b), \phi(c)\}_{A'}$$
for all c ∈ A. By the definition of a Lie-Rinehart morphism, this has to equal
ψ(α)(φ(c)); i.e.
ψ(α)(φ(c)) = φ(a){φ(b), φ(c)}A′ .
Thus, ψ(α) agrees with φ(a){φ(b), ·}A′ on the image of φ, which implies that
ψ(α) = φ(a){φ(b), ·}A′
since any derivation is determined by its action on the image of φ by assumption.
For instance, the requirements in Proposition 4.11 are clearly satisfied if φ is surjective.
4.3. Construction of Kähler–Poisson algebras. Given a Poisson algebra (A, {·, ·})
one may ask if there exist {x1 , . . . , xm } and gij such that (A, {x1 , . . . , xm }, g) is a
Kähler–Poisson algebra? Let us consider the case when A is a finitely generated
algebra, and let {x1 , . . . , xm } be an arbitrary set of generators. If we denote by P
the matrix with entries {xi , xj } and by g the matrix with entries gij , the Kähler–
Poisson condition (4.1) may be written in matrix notation as
ηPgPgP = −P.
Given an arbitrary antisymmetric matrix P, we shall find g by first writing P in
a block diagonal form, with antisymmetric 2 × 2 matrices on the diagonal. This is
a well known result in linear algebra, in which case the eigenvalues appear in the
diagonal blocks. For an antisymmetric matrix with entries in a commutative ring,
a similar result holds.
Lemma 4.12. Let MN (R) denote the set of N × N matrices with entries in R. For
N ≥ 2, let P ∈ MN (R) be an antisymmetric matrix. Then there exists V ∈ MN (R),
an antisymmetric Q ∈ MN −2 (R) and λ ∈ R such that
$$V^T P V = \begin{pmatrix} 0 & \lambda & 0 \\ -\lambda & 0 & 0 \\ 0 & 0 & Q \end{pmatrix}.$$
Proof. We shall construct the matrix V by using elementary row and column operations. Note that if a matrix E represents an elementary row operation, then
E T PE is obtained by applying the elementary operation to both the row and the
corresponding column. Denoting the matrix elements of P by pij , we start by constructing a matrix Vk such that (VkT PVk )k1 = (VkT PVk )k2 = 0 (which necessarily
implies that also the (1k) and (2k) matrix elements are zero). To this end, let Vk1
denote the matrix representing the elementary row operation that multiplies the
k’th row by p12 , and let Vk2 represent the operation that adds the first row, multiplied by −pk2 , to the k’th row. Furthermore, Vk3 represents the operation of adding
the second row, multiplied by pk1 , to the k’th row. Setting Vk = Vk1 Vk2 Vk3 it is easy
to see that VkT PVk is an antisymmetric matrix where the (1k), (2k), (k1) and (k2)
matrix elements are zero. Consequently, we set V = V3 V4 · · · VN and conclude that
V T PV is of the desired form.
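The elementary-operation construction in the proof can be checked symbolically.
The following SymPy sketch is an illustration and not part of the paper: it
builds the matrices V_k for a generic antisymmetric 4 × 4 matrix with symbolic
entries and verifies that the (1, k) and (2, k) entries of V^T P V vanish for
k = 3, 4.

# Sketch: the V_k construction of Lemma 4.12 for a generic antisymmetric 4x4
# matrix. The congruence V^T P V is checked to have vanishing (1,k) and (2,k)
# entries for k >= 3.
import sympy as sp

N = 4
sym = {(i, j): sp.Symbol(f'p{i+1}{j+1}') for i in range(N) for j in range(i + 1, N)}
P = sp.zeros(N, N)
for (i, j), s in sym.items():
    P[i, j], P[j, i] = s, -s

def scale(k, factor):        # scales row and column k under E^T P E
    E = sp.eye(N); E[k, k] = factor; return E

def add(k, src, factor):     # adds factor*(row/column src) to row/column k
    E = sp.eye(N); E[src, k] = factor; return E

V = sp.eye(N)
for k in range(2, N):        # k = 3, ..., N in the notation of the lemma
    V = V * scale(k, P[0, 1]) * add(k, 0, -P[k, 1]) * add(k, 1, P[k, 0])

Q = (V.T * P * V).expand()
print([Q[i, k] for i in (0, 1) for k in range(2, N)])   # [0, 0, 0, 0]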
Proposition 4.13. Let P ∈ MN (R) be an antisymmetric matrix, and let N̂ denote
the integer part of N/2. Then there exists V ∈ MN (R) and λ1 , . . . , λN̂ ∈ R such
that
$$V^T P V = \operatorname{diag}(\Lambda_1, \ldots, \Lambda_{\hat N}) \qquad\text{if } N \text{ is even},$$
$$V^T P V = \operatorname{diag}(\Lambda_1, \ldots, \Lambda_{\hat N}, 0) \qquad\text{if } N \text{ is odd},$$
where
$$\Lambda_k = \begin{pmatrix} 0 & \lambda_k \\ -\lambda_k & 0 \end{pmatrix}.$$
Proof. Let us prove the statement by using induction together with Lemma 4.12.
Thus, assume that there exists V ∈ MN (R) such that
V T PV = diag(Λ1 , . . . , Λk , Qk+1 )
where Qk+1 ∈ MN −2k is an antisymmetric matrix. Clearly, by Lemma 4.12, this
holds true for k = 1. Next, assume that N − 2k ≥ 2. Applying Lemma 4.12 to
T
Qk+1 we conclude that there exists Vk+1 ∈ MN −2k (R) such that Vk+1
Qk+1 Vk+1 =
diag(Λk+1 , Qk+2 ). Furthermore, defining Wk+1 ∈ MN (R) by Wk+1 = diag(12k , Vk+1 )
one finds that
(V Wk+1 )T P(V Wk+1 ) = diag(Λ1 , . . . , Λk+1 , Qk+2 ).
By induction, it follows that one may repeat this procedure until N − 2k < 2. If N
is even, then N − 2k = 0 and the statement follows. If N is odd, then N − 2k = 1
and, since V T PV is antisymmetric, it follows that the (N N ) matrix element is
zero, giving the stated result.
Returning to the case of a Poisson algebra generated by x1 , . . . , xm , assume for the
moment that m = 2N for a positive integer N . By Proposition 4.13, there exists a
matrix V such that
$$V^T P V = P_0,$$
where P0 is a block diagonal matrix of the form
$$P_0 = \operatorname{diag}(\Lambda_1, \ldots, \Lambda_N) \qquad\text{with}\qquad \Lambda_k = \begin{pmatrix} 0 & \lambda_k \\ -\lambda_k & 0 \end{pmatrix}.$$
In the same way, defining $g_0 = \operatorname{diag}(g_1, \ldots, g_N)$ with
$$g_k = \frac{\lambda}{\lambda_k}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \lambda = \lambda_1\cdots\lambda_N,$$
we set g = V g0 V T . Noting that
$$P_0\, g_0\, P_0\, g_0\, P_0 = -\lambda^2 P_0,$$
one finds
$$0 = P_0 g_0 P_0 g_0 P_0 + \lambda^2 P_0 = V^T P V g_0 V^T P V g_0 V^T P V + \lambda^2 V^T P V
= V^T\bigl(P g P g P + \lambda^2 P\bigr)V.$$
It is a general fact that for an arbitrary matrix V there exists a matrix Ṽ such that
Ṽ V = V Ṽ = (det V )1. Multiplying the above equation from the left by Ṽ T and
from the right by Ṽ yields
$$(4.6)\qquad \det(V)^2\bigl(P g P g P + \lambda^2 P\bigr) = 0.$$
As long as det(V ) is not a zero divisor, this implies that
PgPgP = −λ2 P.
Thus, given a finitely generated Poisson algebra A, the above procedure gives a
rather general way to associate a localization A[λ−1 ] and a metric g to A, such
that (A[λ−1 ], {x1 , . . . , xm }, g) is a Kähler–Poisson algebra. Note that the above
argument, with only slight notational changes, also applies to the case when m is
odd, in which case an extra block of 0 will appear in P0 .
5. The Levi-Civita connection
Since every Kähler–Poisson algebra is also a metric Lie-Rinehart algebra, the results
of Section 3 immediately applies. In particular, there exists a unique torsion-free
and metric connection on the module g. In this section, we shall derive an explicit
expression for the Levi-Civita connection of an arbitrary Kähler–Poisson algebra.
It turns out to be convenient to formulate the results in terms of the generators
{D1 , . . . , Dm }. Kozul’s formula gives the connection as
$$(5.1)\qquad 2g\bigl(\nabla_{D^i} D^j, D^k\bigr) = D^i\bigl(g(D^j, D^k)\bigr) + D^j\bigl(g(D^k, D^i)\bigr) - D^k\bigl(g(D^i, D^j)\bigr)
- g\bigl([D^j, D^k], D^i\bigr) + g\bigl([D^k, D^i], D^j\bigr) + g\bigl([D^i, D^j], D^k\bigr),$$
and one notes that an element α = a{b, ·} ∈ g may be recovered from g(α, Di ) as
g(α, Di )Di (f ) = a{b, xk }Di k Di (f ) = a{b, xk }Dk (f ) = a{b, f } = α(f ).
Thus, one immediately obtains ∇Di Dj = g(∇Di Dj , Dk )Dk . However, it turns out
that one can obtain a more compact formula for the connection. Let us start by
proving the following result.
Lemma 5.1. g([Di , Dj ], Dk ) = Di Djk − Dj Dik .
Proof. For convenience, let us introduce the notation P̂ ij = η{xi , xj } and, consequently, P̂ i j = P̂ ik gkj . In this notation, one finds Di (a) = P̂ i j {a, xj }. Thus, one
obtains
g([Di , Dj ], Dk ) = [Di , Dj ](xl )Dk l = P̂ i m {Djl , xm }Dk l − P̂ j n {Dil , xn }Dk l
= P̂ i m {P̂ j n {xl , xn }, xm } − P̂ j n {P̂ i m {xl , xm }, xn } Dk l
= P̂ i m P̂ j n − {{xn , xl }, xm } − {{xl , xm }, xn } Dk l
+ P̂ i m {P̂ j n , xm }{xl , xn } − P̂ j n {P̂ i m , xn }{xl , xm } Dk l
= P̂ i m P̂ j n {{xm , xn }, xk } + P̂ i m {P̂ j n , xm }{xk , xn }
− P̂ j n {P̂ i m , xn }{xk , xm },
by using the Jacobi identity together with {a, xi }Dk i = {a, xk }. Furthermore, in
the second and third term, one uses Leibniz’s rule to obtain
g([Di , Dj ], Dk ) = P̂ i m P̂ j n {{xm , xn }, xk } + P̂ i m {P̂ j n {xk , xn }, xm }
− P̂ i m P̂ j n {{xk , xn }, xm } − P̂ j n {P̂ i m {xk , xm }, xn } + P̂ j n P̂ i m {{xk , xm }, xn }
= P̂ i m P̂ j n {{xm , xn }, xk } + {{xn , xk }, xm } + {{xk , xm }, xn }
+ Di Djk − Dj (Dik ) = Di Djk − Dj (Dik ),
by again using the Jacobi identity.
The above result allows for the following formulation of the Levi-Civita connection
for a Kähler–Poisson algebra.
Proposition 5.2. If ∇ denotes the Levi-Civita connection of a Kähler–Poisson
algebra K then
$$(5.2)\qquad \nabla_{D^i} D^j = \tfrac12 D^i(D^{jk}) D_k - \tfrac12 D^j(D^{ik}) D_k + \tfrac12 D_k(D^{ij}) D^k,$$
or, equivalently, $\nabla_{D^i} D^j = \Gamma^{ij}{}_k D^k$ where
$$(5.3)\qquad \Gamma^{ij}{}_k = \tfrac12 D^i(D^{jl}) D_{lk} - \tfrac12 D^j(D^{il}) D_{lk} + \tfrac12 D_k(D^{ij}).$$
Proof. Since g(Di , Dj ) = Dij , Kozul’s formula (5.1) together with Lemma 5.1 gives
2g(∇Di Dj , Dk ) = Di (Djk ) + Dj (Dki ) − Dk (Dij ) − Dj (Dki ) + Dk (Dji )
+ Dk (Dij ) − Di (Dkj ) + Di (Djk ) − Dj (Dik )
= Di (Djk ) − Dj (Dki ) + Dk (Dij ),
which proves (5.2). The fact that one may write the connection as ∇Di Dj = Γij k Dk
follows from $D^{ij}D_j = D^i$ and $D^k(a)D_k(b) = D_k(a)D^k(b)$.
Thus, for arbitrary elements of g, one obtains
∇α β = α(βi )Di + Γij k αi βj Dk
(5.4)
where α = αi Di and β = βi Di , and curvature is readily introduced as
R(α, β)γ = ∇α ∇β γ − ∇β ∇α γ − ∇[α,β] γ.
Ricci curvature is defined as
Ric(α, β) = tr γ → R(γ, α)β
and using the trace from Section 4.1, one obtains
Ric(α, β) = g(R(Di , α)β, Dj )Dij .
To define the scalar curvature, one considers the Ricci curvature as a linear map
Ric : g → g
with
Ric(α) = Ric(α, Di )Di ,
giving
S = tr α → Ric(α) = g R(Di , Dk )Dl , Dj Dij Dkl .
Note that since the metric is nondegenerate, there exists a unique element ∇f ∈ g
such that g(∇f, α) = α(f ) for all α ∈ g; we call ∇f the gradient of f . Now, it is
easy to see that
∇f = Di (f )Di
since
g(Di (f )Di , αj Dj ) = Di (f )αj Dij = αj Dj (f ) = α(f ).
The divergence of an element α ∈ g is defined as
div(α) = tr(β → ∇β α),
and, finally, the Laplacian
∆(f ) = div(∇f ).
6. Examples
As shown in Section 2, the algebra of smooth functions on an almost Kähler manifold M becomes a Kähler–Poisson algebra when choosing x1 , . . . , xm to be embedding coordinates, providing an isometric embedding into Rm , endowed with
the standard Euclidean metric. (Recall that, by Nash’s theorem [Nas56], such an
embedding always exists.) In this section, we shall present examples of a more
algebraic nature to illustrate the fact that algebras of smooth functions are not the
only examples of Kähler–Poisson algebras.
Keeping in mind the general construction procedure in Section 4.3, we consider
finitely generated Poisson algebras with a low number of generators.
6.1. Poisson algebras generated by two elements. Let A be a unital Poisson
algebra generated by the two elements x1 = x ∈ A and x2 = y ∈ A, and set
$$P = \begin{pmatrix} 0 & \{x, y\} \\ -\{x, y\} & 0 \end{pmatrix}.$$
It is easy to check that for an arbitrary symmetric matrix g
PgPgP = −{x, y}2 det(g)P.
Thus, as long as {x, y}2 det(g) is not a zero-divisor, one may localize to obtain a
Kähler–Poisson algebra
K = (A[({x, y}2 det(g))−1 ], {x, y}, g).
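The identity PgPgP = −{x, y}² det(g)P can be verified symbolically; the SymPy
snippet below is a small illustration (not part of the paper) that treats
p = {x, y} and the entries of g as commuting symbols.

# Sketch: verify P g P g P = -{x,y}^2 det(g) P for two generators and an
# arbitrary symmetric 2x2 matrix g, with p standing for the bracket {x, y}.
import sympy as sp

p, g11, g12, g22 = sp.symbols('p g11 g12 g22')
P = sp.Matrix([[0, p], [-p, 0]])
g = sp.Matrix([[g11, g12], [g12, g22]])
print(((P * g * P * g * P) + p**2 * g.det() * P).expand())  # zero matrix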
For the sake of illustrating the concepts and formulas we have developed so far,
let us explicitly work out an example based on an algebra A0 , generated by two
elements. Let us start by choosing an element λ ∈ A0 for which the localization
A = A0 [p−1 , λ−1 ] exists, and then defining the metric as
$$g = \frac{1}{\lambda}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.$$
From the above considerations, we know that (A, {x, y}, g) is a Kähler–Poisson
algebra with $\eta = \lambda^2/p^2$, where p = {x, y}. For convenience we also introduce
γ = p/λ such that $\eta = 1/\gamma^2$. Let us start by computing the derivations $D^x = D^1$
and $D^y = D^2$, which generate the module g:
$$D^x = \eta\{x, x^i\}\, g_{ij}\{\cdot, x^j\} = \frac{\lambda}{p}\{\cdot, y\} = -\frac{1}{\gamma}\{y, \cdot\},$$
$$D^y = \eta\{y, x^i\}\, g_{ij}\{\cdot, x^j\} = -\frac{\lambda}{p}\{\cdot, x\} = \frac{1}{\gamma}\{x, \cdot\},$$
as well as
$$D_x = g_{1k} D^k = \frac{1}{\lambda} D^x \qquad\text{and}\qquad D_y = g_{2k} D^k = \frac{1}{\lambda} D^y.$$
Moreover, they provide an orthogonal set of generators since
$$g(D^x, D^x) = \frac{1}{\gamma^2}\{y, x^i\}\, g_{ij}\{y, x^j\} = \frac{1}{\gamma^2}\,\frac{p^2}{\lambda} = \lambda,
\qquad g(D^y, D^y) = \lambda, \qquad g(D^x, D^y) = 0,$$
and one obtains
$$(D^{ij}) = g(D^i, D^j) = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix}.$$
Note that g is a free module with basis {Dx , Dy } since
$$a D^x + b D^y = 0 \;\Rightarrow\; \begin{cases} a D^x(x) + b D^y(x) = 0 \\ a D^x(y) + b D^y(y) = 0 \end{cases}
\;\Rightarrow\; \begin{cases} -a\,\tfrac{1}{\gamma}\{y, x\} = 0 \\ b\,\tfrac{1}{\gamma}\{x, y\} = 0 \end{cases}
\;\Rightarrow\; \begin{cases} a = 0 \\ b = 0 \end{cases}$$
by using that λ is invertible.
Let us introduce the derivation $D_\lambda = \gamma^{-1}\{\lambda, \cdot\}$ and note that
$$D_\lambda = [D^x, D^y] = \frac{1}{\lambda} D^x(\lambda)\, D^y - \frac{1}{\lambda} D^y(\lambda)\, D^x.$$
From Proposition 5.2 one computes the connection:
$$\nabla_{D^x} D^x = \tfrac12 D^1(D^{1k}) D_k - \tfrac12 D^1(D^{1k}) D_k + \tfrac12 D_k(D^{11}) D^k
= \tfrac12 D^x(\lambda) D_x + \tfrac12 D^y(\lambda) D_y = \tfrac12 D^i(\lambda) D_i$$
and similarly
$$\nabla_{D^y} D^y = \tfrac12 D^x(\lambda) D_x + \tfrac12 D^y(\lambda) D_y = \nabla_{D^x} D^x,$$
$$\nabla_{D^x} D^y = \tfrac12 D^x(\lambda) D_y - \tfrac12 D^y(\lambda) D_x = \tfrac12 D_\lambda,$$
$$\nabla_{D^y} D^x = \tfrac12 D^y(\lambda) D_x - \tfrac12 D^x(\lambda) D_y = -\tfrac12 D_\lambda.$$
Moreover, the curvature can readily be computed
1
y
1
x
y
x
2
2
x
y
R(D , D )D = Dx (λ) + Dy (λ) − Dx D (λ) − Dy D (λ) D
2
2
x
1
1
x
y
y
2
2
x
y
R(D , D )D = − Dx (λ) + Dy (λ) − Dx D (λ) − Dy D (λ) D ,
2
2
as well as the scalar curvature
1
S=
Dx Dx (λ) + Dy Dy (λ) − 2Dx (λ)2 − 2Dy (λ)2 .
λ
Moreover, one finds that
∇f = Dx (f )Dx + Dy (f )Dy
div(αx Dx + αy Dy ) = Dx (αx ) + Dy (αy )
∆(f ) = Dx Dx (f ) + Dy Dy (f )
= Dx Dx (f ) + Dy Dy (f ) − Dx (λ)Dx (f ) − Dy (λ)Dy (f ).
6.2. Poisson algebras generated by three elements. Let A be a unital Poisson
algebra generated by x1 = x, x2 = y, x3 = z ∈ A. Writing {x, y} = a, {y, z} = b
and {z, x} = c, i.e.
$$P = \begin{pmatrix} 0 & a & -c \\ -a & 0 & b \\ c & -b & 0 \end{pmatrix},$$
one readily checks that for an arbitrary symmetric matrix g
PgPgP = −τ P
with
τ = a2 |g|33 + b2 |g|11 + c2 |g|22 + 2ab|g|31 − 2ac|g|32 − 2bc|g|21 ,
where |g|ij denotes the determinant of the matrix obtained from g by deleting the
i’th row and the j’th column. Thus, one may construct the Kähler–Poisson algebra
K = {A[τ −1 ], {x, y, z}, g}.
In particular, if g = diag(λ, λ, λ), then τ = λ2 (a2 + b2 + c2 ).
Let us now construct a particular class of algebras with a natural geometric
interpretation and a close connection to algebraic geometry. Let R[x, y, z] be the
polynomial ring in three variables over the real numbers, and write x1 = x, x2 = y
and x3 = z. For arbitrary C ∈ R[x, y, z], it is straight-forward to show that
{xi , xj } = εijk ∂k C,
where εijk denotes the totally antisymmetric symbol with ε123 = 1, defines a Poisson
structure on R[x, y, z] which is well-defined on the quotient AC = R[x, y, z]/(C)
since
{xi , C} = {xi , xj }∂j C = εijk (∂k C)(∂j C) = 0.
In the spirit of algebraic geometry, the algebra AC has a natural interpretation
as the polynomial functions on the level set C(x, y, z) = 0 in R3 . Choosing the
metric gij = δij 1 (corresponding to the Euclidean metric on R3 ) one obtains a
Kähler–Poisson algebra (AbC , {x, y, z}, g) where
$$\widehat A_C = A_C[\tau^{-1}] \qquad\text{and}\qquad \tau = (\partial_x C)^2 + (\partial_y C)^2 + (\partial_z C)^2,$$
with η = τ −1 . Note that the points in R3 , for which τ (x, y, z) = 0, coincide with
the singular points of C(x, y, z) = 0; i.e. points where ∂x C = ∂y C = ∂z C = 0.
As an illustration, let us choose $C = \tfrac12(ax^2 + by^2 + cz^2 - 1)$ for a, b, c ∈ R, giving
$$\{x, y\} = cz, \qquad \{y, z\} = ax \qquad\text{and}\qquad \{z, x\} = by,$$
and
$$\eta = \bigl(a^2x^2 + b^2y^2 + c^2z^2\bigr)^{-1},$$
together with
$$(D^{ij}) = \eta\begin{pmatrix} b^2y^2 + c^2z^2 & -abxy & -acxz \\ -abxy & a^2x^2 + c^2z^2 & -bcyz \\ -acxz & -bcyz & a^2x^2 + b^2y^2 \end{pmatrix}.$$
A straight-forward, but somewhat lengthy, calculation gives
$$R(D^x, D^y)\begin{pmatrix} D^x \\ D^y \\ D^z \end{pmatrix} = cz\,\widehat R\begin{pmatrix} D^x \\ D^y \\ D^z \end{pmatrix}, \qquad
R(D^y, D^z)\begin{pmatrix} D^x \\ D^y \\ D^z \end{pmatrix} = ax\,\widehat R\begin{pmatrix} D^x \\ D^y \\ D^z \end{pmatrix}, \qquad
R(D^z, D^x)\begin{pmatrix} D^x \\ D^y \\ D^z \end{pmatrix} = by\,\widehat R\begin{pmatrix} D^x \\ D^y \\ D^z \end{pmatrix},$$
where
$$\widehat R = abc\,\eta^3\begin{pmatrix} 0 & -cz & by \\ cz & 0 & -ax \\ -by & ax & 0 \end{pmatrix},$$
and the scalar curvature becomes
S = 2abcη 2 .
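The data of this example can also be checked symbolically. The SymPy sketch
below is an illustration and not part of the paper: it constructs the brackets
from C, verifies that C Poisson-commutes with the generators, and checks that
PgPgP = −τ P holds for the Euclidean metric, with τ = η^{-1} as claimed.

# Sketch: the ellipsoid example. The bracket {f,h} = eps_{ijk}(d_i f)(d_j h)(d_k C)
# reproduces {x,y} = cz, {y,z} = ax, {z,x} = by; C is a Casimir function; and
# with g the identity, P g P g P = -tau P with tau = 1/eta.
import sympy as sp

x, y, z, a, b, c = sp.symbols('x y z a b c')
X = [x, y, z]
C = sp.Rational(1, 2) * (a*x**2 + b*y**2 + c*z**2 - 1)

def bracket(f, h):
    return sum(sp.LeviCivita(i + 1, j + 1, k + 1)
               * sp.diff(f, X[i]) * sp.diff(h, X[j]) * sp.diff(C, X[k])
               for i in range(3) for j in range(3) for k in range(3))

P = sp.Matrix(3, 3, lambda i, j: bracket(X[i], X[j]))
tau = sum(sp.diff(C, v)**2 for v in X)

print(P)                                            # the bracket matrix above
print([sp.simplify(bracket(v, C)) for v in X])      # [0, 0, 0]
print((P * P * P + tau * P).expand())               # zero matrix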
7. Summary
In this note, we have introduced the concept of Kähler–Poisson algebras as a means
to study Poisson algebras from a metric point of view. As shown, the single relation
(4.1) has consequences that allow for an identification of geometric objects in the
algebra, which share crucial properties with their classical counterparts. The idea
behind the construction was to identify a distinguished set of elements in the algebra
that serve as “embedding coordinates”, and then construct the projection operator
D that projects from the tangent space of the ambient manifold onto that of the
embedded submanifold. It is somewhat surprising that (4.1) encodes the crucial
elements that are needed for the algebra to resemble an algebra of functions on an
almost Kähler manifold.
As outlined in Section 4.3, a large class of Poisson algebras admit a Kähler–
Poisson algebra as an associated localization, which shows a certain generality of
our treatment. Thus, even if one is not interested in metric structures on a Poisson
algebra, the tools we have developed might be of help. For instance, if a Poisson
algebra can be given the structure of a Kähler–Poisson algebra, one immediately
concludes that the module generated by the inner derivations is a finitely generated
projective module, a statement which is clearly independent of any metric structure. A comparison with differential geometry is close at hand, where the structure
of a Riemannian manifold can be used to prove results about the underlying manifold (or even the topological structure).
Let us end with a brief outlook. After having studied the basic properties of
Kähler–Poisson algebras in this paper, there are several natural questions that
can be studied. For instance, what is the interplay between the cohomology (of
Lie-Rinehart algebras) and the Levi-Civita connection? Can one perhaps use the
connection to compute cohomology? Is there a natural way to study the moduli
spaces of Poisson algebras, i.e. how many (non-isomorphic) Kähler–Poisson structures exist on a given Poisson algebra? We hope to return to these, and
many other interesting questions, in the near future.
Acknowledgments
We would like to thank M. Izquierdo for ideas and discussions. Furthermore, J. A.
is supported by the Swedish Research Council.
(Joakim Arnlind) Dept. of Math., Linköping University, 581 83 Linköping, Sweden
E-mail address: joakim.arnlind@liu.se
(Ahmed Al-Shujary) Dept. of Math., Linköping University, 581 83 Linköping, Sweden
E-mail address: ahmed.al-shujary@liu.se
Instruction Sequence Notations
with Probabilistic Instructions
arXiv:0906.3083v2 [] 1 Oct 2014
J.A. Bergstra and C.A. Middelburg
Informatics Institute, Faculty of Science, University of Amsterdam,
Science Park 107, 1098 XG Amsterdam, the Netherlands
J.A.Bergstra@uva.nl,C.A.Middelburg@uva.nl
Abstract. This paper concerns instruction sequences that contain probabilistic instructions, i.e. instructions that are themselves probabilistic by
nature. We propose several kinds of probabilistic instructions, provide an
informal operational meaning for each of them, and discuss related work.
On purpose, we refrain from providing an ad hoc formal meaning for the
proposed kinds of instructions. We also discuss the approach of projection semantics, which was introduced in earlier work on instruction
sequences, in the light of probabilistic instruction sequences.
Keywords: instruction sequence, probabilistic instruction, projection semantics.
1998 ACM Computing Classification: D.1.4, F.1.1, F.1.2.
1 Introduction
In this paper, we take the first step on a new subject in a line of research whose
working hypothesis is that the notion of an instruction sequence is relevant
to diverse subjects in computer science (see e.g. [12,13,14,15]). In this line of
research, an instruction sequence under execution is considered to produce a
behaviour to be controlled by some execution environment: each step performed
actuates the processing of an instruction by the execution environment and a
reply returned at completion of the processing determines how the behaviour
proceeds. The term service is used for a component of a system that provides an
execution environment for instruction sequences, and a model of systems that
provide execution environments for instruction sequences is called an execution
architecture. This paper is concerned with probabilistic instruction sequences.
We use the term probabilistic instruction sequence for an instruction sequence that contains probabilistic instructions, i.e. instructions that are themselves probabilistic by nature, rather than an instruction sequence of which the
instructions are intended to be processed in a probabilistic way. We will propose
several kinds of probabilistic instructions, provide an informal operational meaning for each of them, and discuss related work. We will refrain from a formal
semantic analysis of the proposed kinds of probabilistic instructions. Moreover,
we will not claim any form of completeness for the proposed kinds of probabilistic instructions. Other convincing kinds might be found in the future. We
will leave unanalysed the topic of probabilistic instruction sequence processing,
which includes all phenomena concerning services and execution architectures
for which probabilistic analysis is necessary.
Viewed from the perspective of machine-execution, execution of a probabilistic instruction sequence using an execution architecture without probabilistic
features can only be a metaphor. Execution of a deterministic instruction sequence using an execution architecture with probabilistic features, i.e. an execution architecture that allows for probabilistic services, is far more plausible. Thus,
it looks to be that probabilistic instruction sequences find their true meaning
by translation into deterministic instruction sequences for execution architectures with probabilistic features. Indeed projection semantics, the approach to
define the meaning of programs which was first presented in [9], need not be
compromised when probabilistic instructions are taken into account.
This paper is organized as follows. First, we set out the scope of the paper
(Section 2) and review the special notation and terminology used in the paper
(Section 3). Next, we propose several kinds of probabilistic instructions (Sections 4 and 5). Following this, we formulate a thesis on the behaviours produced
by probabilistic instruction sequences under execution (Section 6) and discuss
the approach of projection semantics in the light of probabilistic instruction sequences (Section 7). We also discuss related work (Section 8) and make some
concluding remarks (Section 10).
In the current version of this paper, we moreover mention some outcomes of
a sequel to the work reported upon in this paper (Section 9).
2 On the Scope of this Paper
We go into the scope of the paper and clarify its restrictions by giving the
original motives. However, the first version of this paper triggered off work on the
behaviour of probabilistic instruction sequences under execution and outcomes
of that work invalidated some of the arguments used to motivate the restrictions.
The relevant outcomes of the work concerned will be mentioned in Section 9.
We will propose several kinds of probabilistic instructions, chosen because of
their superficial similarity with kinds of deterministic instructions known from
PGA (ProGram Algebra) [9], PGLD (ProGramming Language D) with indirect
jumps [10], C (Code) [19], and other similar notations and not because any computational intuition about them is known or assumed. For each of these kinds,
we will provide an informal operational meaning. Moreover, we will exemplify
the possibility that the proposed unbounded probabilistic jump instructions are
simulated by means of bounded probabilistic test instructions and bounded deterministic jump instructions. We will also refer to related work that introduces
something similar to what we call a probabilistic instruction and connect the proposed kinds of probabilistic instructions with similar features found in related
work.
We will refrain from a formal semantic analysis of the proposed kinds of
probabilistic instructions. When we started with the work reported upon in this
paper, the reasons for doing so were as follows:
– In the non-probabilistic case, the subject reduces to the semantics of PGA.
Although it seems obvious at first sight, different models, reflecting different
levels of abstraction, can and have been distinguished (see e.g. [9]). Probabilities introduce a further ramification.
– What we consider sensible is to analyse this double ramification fully. What
we consider less useful is to provide one specific collection of design decisions
and working out its details as a proof of concept.
– We notice that for process algebra the ramification of semantic options after
the incorporation of probabilistic features is remarkable, and even frustrating
(see e.g. [24,27]). There is no reason to expect that the situation is much
simpler here.
– Once that a semantic strategy is mainly judged on its preparedness for a
setting with multi-threading, the subject becomes intrinsically complex –
like the preparedness for a setting with arbitrary interleaving complicates
the semantic modeling of deterministic processes in process algebra.
– We believe that a choice for a catalogue of kinds of probabilistic instructions
can be made beforehand. Even if that choice will turn out to be wrong,
because prolonged forthcoming semantic analysis may give rise to new, more
natural, kinds of probabilistic instructions, it can at this stage best be driven
by direct intuitions.
In this paper, we will leave unanalysed the topic of probabilistic instruction
sequence processing, i.e. probabilistic processing of instruction sequences, which
includes all phenomena concerning services and concerning execution architectures for which probabilistic analysis is necessary. At the same time, we admit
that probabilistic instruction sequence processing is a much more substantial
topic than probabilistic instruction sequences, because of its machine-oriented
scope. We take the line that a probabilistic instruction sequence finds its operational meaning by translation into a deterministic instruction sequence and
execution using an execution architecture with probabilistic features.
3 Preliminaries
In the remainder of this paper, we will use the notation and terminology regarding instructions and instruction sequences from PGA. The mathematical
structure that we will use for quantities is a signed cancellation meadow. That
is why we briefly review PGA and signed cancellation meadows in this section.
In PGA, it is assumed that a fixed but arbitrary set of basic instructions has
been given. The primitive instructions of PGA are the basic instructions and in
addition:
– for each basic instruction a, a positive test instruction +a;
– for each basic instruction a, a negative test instruction −a;
– for each natural number l, a forward jump instruction #l;
– a termination instruction !.
The intuition is that the execution of a basic instruction a produces either T
or F at its completion. In the case of a positive test instruction +a, a is executed
and execution proceeds with the next primitive instruction if T is produced.
Otherwise, the next primitive instruction is skipped and execution proceeds with
the primitive instruction following the skipped one. If there is no next instruction to be executed, execution becomes inactive. In the case of a negative test
instruction −a, the role of the value produced is reversed. In the case of a plain
basic instruction a, execution always proceeds as if T is produced. The effect
of a forward jump instruction #l is that execution proceeds with the l-th next
instruction. If l equals 0 or the l-th next instruction does not exist, execution
becomes inactive. The effect of the termination instruction ! is that execution
terminates.
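To make this operational reading concrete, the following C sketch interprets a finite sequence of primitive instructions along the lines just described; the instruction encoding, the reply function perform and the example program are illustrative assumptions and are not part of PGA itself.

#include <stdio.h>
#include <stdbool.h>

/* Kinds of primitive instructions (hypothetical encoding). */
typedef enum { PLAIN, POS_TEST, NEG_TEST, JUMP, TERMINATE } Kind;

typedef struct {
    Kind kind;
    char action;   /* basic instruction name, for PLAIN/POS_TEST/NEG_TEST */
    unsigned jump; /* jump length l, for JUMP */
} Instr;

/* Stand-in for the execution environment: every basic action replies T. */
static bool perform(char action) {
    printf("perform %c\n", action);
    return true;
}

/* Execute a finite instruction sequence of length n.
   Returns true on termination (!), false when execution becomes inactive. */
static bool run(const Instr *p, unsigned n) {
    unsigned pc = 0;
    while (pc < n) {
        Instr i = p[pc];
        switch (i.kind) {
        case PLAIN:    (void)perform(i.action); pc += 1; break;
        case POS_TEST: pc += perform(i.action) ? 1 : 2; break;
        case NEG_TEST: pc += perform(i.action) ? 2 : 1; break;
        case JUMP:
            if (i.jump == 0) return false;   /* #0: execution becomes inactive */
            pc += i.jump;
            break;
        case TERMINATE: return true;         /* ! */
        }
    }
    return false; /* ran past the end: inactive */
}

int main(void) {
    /* +a ; #2 ; b ; ! */
    Instr prog[] = { {POS_TEST, 'a', 0}, {JUMP, 0, 2}, {PLAIN, 'b', 0}, {TERMINATE, 0, 0} };
    return run(prog, 4) ? 0 : 1;
}

With the sketched reply function (every basic action replies T), running this program performs a and then terminates, skipping b, mirroring the informal description above.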
The constants of PGA are the primitive instructions and the operators of
PGA are:
– the binary concatenation operator ; ;
– the unary repetition operator ω .
Terms are built as usual. We use infix notation for the concatenation operator
and postfix notation for the repetition operator.
A closed PGA term is considered to denote a non-empty, finite or periodic
infinite sequence of primitive instructions.1 Closed PGA terms are considered
equal if they denote the same instruction sequence. The axioms for instruction
sequence equivalence are given in [9]. The unfolding equation X ω = X ; X ω is
derivable from those equations. Moreover, each closed PGA term is derivably
equal to one of the form P or P ; Qω , where P and Q are closed PGA terms in
which the repetition operator does not occur.
In [9], PGA is extended with a unit instruction operator u which turns
sequences of instructions into single instructions. The result is called PGAu .
In [35], the meaning of PGAu programs is described by a translation from PGAu
programs into PGA programs.
In the sequel, the following additional assumption is made: a fixed but arbitrary set of foci and a fixed but arbitrary set of methods have been given.
Moreover, we will use f.m, where f is a focus and m is a method, as a general
notation for basic instructions. In f.m, m is the instruction proper and f is the
name of the service that is designated to process m.
The signature of signed cancellation meadows consists of the following constants and operators:
– the constants 0 and 1;
– the binary addition operator + ;
1. A periodic infinite sequence is an infinite sequence with only finitely many distinct suffixes.
– the binary multiplication operator · ;
– the unary additive inverse operator − ;
– the unary multiplicative inverse operator ^{-1} ;
– the unary signum operator s.
Terms are build as usual. We use infix notation for the binary operators
+ and · , prefix notation for the unary operator −, and postfix notation for
the unary operator −1 . We use the usual precedence convention to reduce the
need for parentheses. We introduce subtraction and division as abbreviations:
p − q abbreviates p + (−q) and p/q abbreviates p · q^{−1}. We use the notation n
for numerals and the notation p^n for exponentiation with a natural number as
exponent. The term n is inductively defined as follows: 0 = 0 and n + 1 = n + 1.
The term p^n is inductively defined as follows: p^0 = 1 and p^{n+1} = p^n · p.
The constants and operators from the signature of signed cancellation meadows are adopted from rational arithmetic, which gives an appropriate intuition
about these constants and operators. The equational theories of signed cancellation meadows is given in [8]. In signed cancellation meadows, the functions min
and max have a simple definition (see also [8]).
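For instance, in the meadow of rational numbers one common way to express them (a sketch; the precise definitions used in [8] may differ in form) is via the signum operator:

|p| = s(p) · p ,   min(p, q) = (p + q − |p − q|)/2 ,   max(p, q) = (p + q + |p − q|)/2 .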
A signed cancellation meadow is a cancellation meadow expanded with a
signum operation. The prime example of cancellation meadows is the field of rational numbers with the multiplicative inverse operation made total by imposing
that the multiplicative inverse of zero is zero, see e.g. [20].
4 Probabilistic Basic and Test Instructions
In this section, we propose several kinds of probabilistic basic and test instructions. It is assumed that a fixed but arbitrary signed cancellation meadow M
has been given. Moreover, we write q̂, where q ∈ M, for max(0, min(1, q)).
We propose the following probabilistic basic instructions:
– %(), which produces T with probability 1/2 and F with probability 1/2;
– %(q), which produces T with probability q̂ and F with probability 1 − q̂,
for q ∈ M.
The probabilistic basic instructions have no side-effect on a state.
The basic instruction %() can be looked upon as a shorthand for %(1/2). We
distinguish between %() and %(1/2) for reason of putting the emphasis on the
fact that it is not necessary to bring in a notation for quantities ranging from 0
to 1 in order to design probabilistic instructions.
Once that probabilistic basic instructions of the form %(q) are chosen, an
unbounded ramification of options for the notation of quantities is opened up. We
will assume that closed terms over the signature of signed cancellation meadows
are used to denote quantities. Instructions such as %(√(1 + 1)) are implicit in the
form %(q), assuming that it is known how to view √ as a notational extension
of signed cancellation meadows (see e.g. [8]).
Like all basic instructions, each probabilistic basic instruction give rise to
two probabilistic test instructions:
– %() gives rise to +%() and −%();
– %(q) gives rise to +%(q) and −%(q).
Probabilistic primitive instructions of the form +%(q) and −%(q) can be considered probabilistic branch instructions where q is the probability that the branch
is not taken and taken, respectively, and likewise the probabilistic primitive instructions +%() and −%().
We find that the primitive instructions %() and %(q) can be replaced by #1
without loss of (intuitive) meaning. Of course, in a resource aware model, #1
may be much cheaper than %(q), especially if q is hard to compute. Suppose
that %(q) is realized at a lower level by means of %(), which is possible, and
suppose that q is a computable real number. The question arises whether the
expectation of the time to execute %(q) is finite.
To exemplify the possibility that %(q) is realized by means of %() in the case
where q is a rational number, we look at the following probabilistic instruction
sequences:
−%(2/3) ; #3 ; a ; ! ; b ; ! ,
(+%() ; #3 ; a ; ! ; +%() ; #3 ; b ; !)ω .
It is easy to see that these instruction sequences produce on execution the same
behaviour: with probability 2/3, first a is performed and then termination follows; and with probability 1/3, first b is performed and then termination follows.
In the case of computable real numbers other than rational numbers, use must
be made of a service that does duty for the tape of a Turing machine (such a
service is, for example, described in [18]).
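The equivalence of the two instruction sequences above is easy to check empirically. The C sketch below simulates the second, %()-based sequence (each iteration of the inner loop corresponds to one pass through the repeated block) and estimates the probability of performing a together with the average number of fair coin flips per run, which works out to 2 in expectation; the use of rand() as the fair coin is only an illustration.

#include <stdio.h>
#include <stdlib.h>

/* Fair coin standing in for %(): reply T (1) or F (0) with probability 1/2 each. */
static int flip(void) { return rand() & 1; }

int main(void) {
    const long trials = 1000000;
    long a_count = 0, flips = 0;
    srand(12345);
    for (long t = 0; t < trials; ++t) {
        for (;;) {                 /* (+%() ; #3 ; a ; ! ; +%() ; #3 ; b ; !)^omega */
            ++flips;
            if (!flip()) { ++a_count; break; } /* first test replies F: a is performed, then ! */
            ++flips;
            if (!flip()) { break; }            /* second test replies F: b is performed, then ! */
            /* both tests replied T: the repeated block starts again */
        }
    }
    printf("estimated P(a)         = %f (expected 2/3)\n", (double)a_count / trials);
    printf("average %%() flips/run  = %f (expected 2)\n", (double)flips / trials);
    return 0;
}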
Let q ∈ M, and let random(q) be a service with a method get whose reply is T with probability q̂ and F with probability 1 − q̂. Then a reasonable
view on the meaning of the probabilistic primitive instructions %(q), +%(q) and
−%(q) is that they are translated into the deterministic primitive instructions
random(q).get, +random(q).get and −random(q).get, respectively, and executed
using an execution architecture that provides the probabilistic service random(q).
Another option is possible here: instead of a different service random(q) for each
q ∈ [0, 1] and a single method get, we could have a single service random with
a different method get(q) for each q ∈ [0, 1]. In the latter case, %(q), +%(q)
and −%(q) would be translated into the deterministic primitive instructions
random.get(q), +random.get(q) and −random.get(q).
5 Probabilistic Jump Instructions
In this section, we propose several kinds of probabilistic jump instructions. It
is assumed that the signed cancellation meadow M has been expanded with an
operation N such that, for all q ∈ M, N(q) = 0 iff q = n for some n ∈ N.
We write l̄, where l ∈ M is such that N(l) = 0, for the unique n ∈ N such that
l = n.
We propose the following probabilistic jump instructions:
– #%U(k), having the same effect as #j with probability 1/k for j ∈ [1, k̄],
for k ∈ M with N(k) = 0;
– #%G(q)(k), having the same effect as #j with probability q̂ · (1 − q̂)j−1 for
j ∈ [1, k̄], for q ∈ M and k ∈ M with N(k) = 0;
– #%G(q)l, having the same effect as #l̄ · j with probability q̂ · (1 − q̂)j−1 for
j ∈ [1, ∞), for q ∈ M and l ∈ M with N(l) = 0.
The letter U in #%U(k) indicates a uniform probability distribution, and the
letter G in #%G(q)(k) and #%G(q)l indicates a geometric probability distribution. Instructions of the forms #%U(k) and #%G(q)(k) are bounded probabilistic jump instructions, whereas instructions of the form #%G(q)l are unbounded
probabilistic jump instructions.
Like in the case of the probabilistic basic instructions, we propose in addition
the following probabilistic jump instructions:
– #%G()(k) as the special case of #%G(q)(k) where q = 1/2;
– #%G()l as the special case of #%G(q)l where q = 1/2.
We believe that it must be possible to eliminate all probabilistic jump instructions. In particular, we believe that it must be possible to eliminate all
unbounded probabilistic jump instructions. This belief can be understood as the
judgement that it is reasonable to expect from a semantic model of probabilistic
instruction sequences that the following identity and similar ones hold:
+a ; #%G()2 ; (+b ; ! ; c)ω =
+a ; +%() ; #8 ; #10 ;
(+b ; #5 ; #10 ; +%() ; #8 ; #10 ;
! ; #5 ; #10 ; +%() ; #8 ; #10 ;
c ; #5 ; #10 ; +%() ; #8 ; #10)ω .
Taking this identity and similar ones as our point of departure, the question arises
what is the most simple model that justifies them. A more general question is
whether instruction sequences with unbounded probabilistic jump instructions
can be translated into ones without probabilistic jump instructions provided it
does not bother us that the instruction sequences may become much longer (e.g.
expectation of the length bounded, but worst case length unbounded).
6 The Probabilistic Process Algebra Thesis
In the absence of probabilistic instructions, threads as considered in BTA (Basic
Thread Algebra) [9] or its extension with thread-service interaction [17] can
be used to model the behaviours produced by instruction sequences under execution.2 Processes as considered in general process algebras such as ACP [6],
2. In [9], BTA is introduced under the name BPPA (Basic Polarized Process Algebra).
CCS [32] and CSP [26] can be used as well, but they give rise to a more complicated modeling of the behaviours of instruction sequences under execution (see
e.g. [13]).
In the presence of probabilistic instructions, we would need a probabilistic
thread algebra, i.e. a variant of BTA or its extension with thread-service interaction that covers probabilistic behaviours. When we started with the work
reported upon in this paper, it appeared that any probabilistic thread algebra is
inherently more complicated to such an extent that the advantage of not using a
general process algebra evaporates. Moreover, it appeared that any probabilistic
thread algebra requires justification by means of an appropriate probabilistic
process algebra. This led us to the following thesis:
Thesis. Modeling the behaviours produced by probabilistic instruction sequences
under execution is a matter of using directly processes as considered in some
probabilistic process algebra.
A probabilistic thread algebra has to cover the interaction between instruction sequence behaviours and services. Two mechanisms are involved in that.
They are called the use mechanism and the apply mechanism (see e.g. [17]).
The difference between them is a matter of perspective: the former is concerned
with the effect of services on behaviours of instruction sequences and therefore
produces behaviours, whereas the latter is concerned with the effect of instruction sequence behaviours on services and therefore produces services. When we
started with the work reported upon in this paper, it appeared that the use
mechanism would make the development of a probabilistic thread algebra very
intricate.
The first version of this paper triggered off work on the behaviour of probabilistic instruction sequences under execution by which the thesis stated above
is refuted. The ultimate point is that meanwhile an appropriate and relatively
simple probabilistic thread algebra has been devised (see [16]). Moreover, our
original expectations about probabilistic process algebras turned out to be too
high.
The first probabilistic process algebra is presented in [23] and the first probabilistic process algebra with an asynchronous parallel composition operator is
presented in [5]. A recent overview of the work on probabilistic process algebras
after that is given in [29]. This overview shows that the multitude of semantic
ideas applied and the multitude of variants of certain operators devised have
kept growing, and that convergence is far away. In other words, there is little well-established yet. In particular, for modeling the behaviours produced by
probabilistic instruction sequences, we need operators for probabilistic choice,
asynchronous parallel composition, and abstraction from internal actions. For
this case, the attempts by one research group during about ten years to develop a satisfactory ACP-like process algebra (see e.g. [1,2,3]) have finally led
to a promising process algebra. However, it is not yet clear whether the process
algebra concerned will become well-established.
All this means that a justification of the above-mentioned probabilistic thread
algebra by means of an appropriate probabilistic process algebra will be of a
provisional nature for the time being.
7 Discussion of Projectionism
Notice that once we move from deterministic instructions to probabilistic instructions, instruction sequence becomes an indispensable concept. Instruction
sequences cannot be replaced by threads or processes without taking potentially
premature design decisions. In preceding sections, however, we have outlined
how instruction sequences with the different kinds of probabilistic instructions
can be translated into instruction sequences without them. Therefore, it is
reasonable to claim that, like for deterministic instruction sequence notations,
all probabilistic instruction sequence notations can be provided with a probabilistic semantics by translation of the instruction sequences concerned into
appropriate single-pass instruction sequences. Thus, we have made it plausible
that projectionism is feasible for probabilistic instruction sequences.
Projectionism is the point of view that:
– any instruction sequence P , and more general even any program P , first and
for all represents a single-pass instruction sequence as considered in PGA;
– this single-pass instruction sequence, found by a translation called a projection, represents in a natural and preferred way what is supposed to take
place on execution of P ;
– PGA provides the preferred notation for single-pass instruction sequences.
In a rigid form, as in [9], projectionism provides a definition of what constitutes
a program.
The fact that projectionism is feasible for probabilistic instruction sequences,
does not imply that it is uncomplicated. To give an idea of the complications
that may arise, we will sketch below the challenges for projectionism that we have found.
First, we introduce some special notation. Let N be a program notation. Then
we write N2pga for the projection function that gives, for each program P in N,
the closed PGA term that denotes the single-pass instruction sequence that
produces on execution the same behaviour as P .
We have found the following challenges for projectionism:
– Explosion of size. If N2pga(P ) is much longer than P , then the requirement
that it represents in a natural way what is supposed to take place on execution of P is challenged. For example, if the primitive instructions of N include
instructions to set and test up to n Boolean registers, then the projection to
N2pga(P ) may give rise to a combinatorial explosion of size. In such cases,
the usual compromise is to permit single-pass instruction sequences to make
use of services (see e.g. [17]).
– Degradation of performance. If N2pga(P )’s natural execution is much slower
than P ’s execution, supposing a clear operational understanding of P , then
the requirement that it represents in a natural way what is supposed to
take place on execution of P is challenged. For example, if the primitive
instructions of N include indirect jump instructions, then the projection to
N2pga(P ) may give rise to a degradation of performance (see e.g. [10]).
– Incompatibility of services. If N2pga(P ) has to make use of services that are
not deterministic, then the requirement that it represents in a natural way
what is supposed to take place on execution of P is challenged. For example,
if the primitive instructions of N include instructions of the form +%(q) or
−%(q), then P cannot be projected to a single-pass instruction sequence
without the use of probabilistic services. In this case, either probabilistic
services must be permitted or probabilistic instruction sequences must not
be considered programs.
– Complexity of projection description. The description of N2pga may be so
complex that it defeats N2pga(P )’s purpose of being a natural explanation of
what is supposed to take place on execution of P . For example, the projection semantics given for recursion in [7] suffers from this kind of complexity
when compared with the conventional denotational semantics. In such cases,
projectionism may be maintained conceptually, but rejected pragmatically.
– Aesthetic degradation. In N2pga(P ), something elegant may have been replaced by nasty details. For example, if N provides guarded commands, then
N2pga(P ), which will be much more detailed, might be considered to exhibit
signs of aesthetic degradation. This challenge is probably the most serious
one, provided we accept that such elegant features belong to program notations. Of course, it may be decided to ignore aesthetic criteria altogether.
However, more often than not, they have both conceptual and pragmatic
importance.
One might be of the opinion that conceptual projectionism can accept explosion of size and/or degradation of performance. We do not share this opinion:
both challenges require a more drastic response than a mere shift from a pragmatic to a conceptual understanding of projectionism. This drastic response may
include viewing certain mechanisms as intrinsically indispensable for either execution performance or program compactness. For example, it is reasonable to
consider the basic instructions of the form %(q), where q is a computable real
number, indispensable if the expectations of the times to execute their realizations by means of %() are not all finite.
Nevertheless, projectionism looks to be reasonable for probabilistic programs:
they can be projected adequately to deterministic single-pass instruction sequences for an execution architecture with probabilistic services.
8 Related Work
In [38], a notation for probabilistic programs is introduced in which we can write,
for example, random(p·δ0 +q·δ1 ). In general, random(λ) produces a value according to the probability distribution λ. In this case, δi is the probability distribution
that gives probability 1 to i and probability 0 to other values. Thus, for p+q = 1,
p·δ0 +q·δ1 is the probability distribution that gives probability p to 0, probability
q to 1, and probability 0 to other values. Clearly, random(p·δ0 +q·δ1 ) corresponds
to %(p). Moreover, using this kind of notation, we could write #((1/k) · (δ_1 + · · · + δ_k̄))
for #%U(k) and #(q̂ · δ_1 + q̂ · (1 − q̂) · δ_2 + · · · + q̂ · (1 − q̂)^{k−1} · δ_k) for #%G(q)(k).
In much work on probabilistic programming, see e.g. [25,30,33], we find the
binary probabilistic choice operator p ⊕ (for p ∈ [0, 1]). This operator chooses
between its operands, taking its left operand with probability p. Clearly, P p ⊕ Q
can be taken as abbreviations for +%(p);u(P ;#2);u(Q). This kind of primitives
dates back to [28] at least.
Quite related, but from a different perspective, is the toss primitive introduced in [21]. The intuition is that toss(bm, p) assigns to the Boolean memory
cell bm the value T with probability p̂ and the value F with probability 1− p̂. This
means that toss(bm, p) has a side-effect on a state, which we understand as making use of a service. In other words, toss(bm, p) corresponds to a deterministic
instruction intended to be processed by a probabilistic service.
Common in probabilistic programming are assignments of values randomly
chosen from some interval of natural numbers to program variables (see e.g. [37]).
Clearly, such random assignments correspond also to deterministic instructions
intended to be processed by probabilistic services. Suppose that x=i is a primitive instruction for assigning value i to program variable x. Then we can write:
#%U(k) ; u(x=1 ; #k) ; u(x=2 ; #k−1) ; . . . ; u(x=k ; #1). This is a realistic representation of the assignment to x of a value randomly chosen from {1, . . . , k}.
However, it is clear that this way of representing random assignments leads to
an exponential blow up in the size of any concrete instruction sequence representation, provided the concrete representation of k is its decimal representation.
The refinement oriented theory of programs uses demonic choice, usually
written ⊓, as a primitive (see e.g. [30,31]). A demonic choice can be regarded as a
probabilistic choice with unknown probabilities. Demonic choice could be written
+⊓ in a PGA-like notation. However, a primitive instruction corresponding to
demonic choice is not reasonable: no mechanism for the execution of +⊓ is
conceivable. Demonic choice exists in the world of specifications, but not in the
world of instruction sequences. This is definitely different with +%(p), because
a mechanism for its execution is conceivable.
Features similar to probabilistic jump instructions are not common in probabilistic programming. To our knowledge, the mention of probabilistic goto statements of the form pr goto {l1 , l2 } in [4] is the only mention of a similar feature
in the computer science literature. The intuition is that pr goto {l1 , l2 }, where
l1 and l2 are labels, has the same effect as goto l1 with probability 1/2 and has
the same effect as goto l2 with probability 1/2. Clearly, this corresponds to a
probabilistic jump instruction of the form #%U(2), combined with ordinary jump instructions to reach the two targets.
It appears that quantum computing has something to offer that cannot be
obtained by conventional computing: it makes a stateless generator of random
bits available (see e.g. [22,34]). By that quantum computing indeed provides a
justification of +%(1/2) as a probabilistic instruction.
9 On the Sequel to this Paper
The first version of this paper triggered off work on the behaviour of probabilistic
instruction sequences under execution and outcomes of that work invalidated
some of the arguments used to motivate the restricted scope of this paper. In
this section, we mention the relevant outcomes of that work.
After the first version of this paper (i.e. [11]) appeared, it was found that:
– The different levels of abstraction that have been distinguished in the nonprobabilistic case can be distinguished in the probabilistic case as well and
only the models at the level of the behaviour of instruction sequences under
execution are not essentially the same.
– The semantic options at the behavioural level after the incorporation of
probabilistic features are limited because of (a) the orientation towards behaviours of a special kind and (b) the semantic constraints induced by the
informal explanations of the enumerated kinds of probabilistic instructions
and the desired elimination property of all but one kind.
– For the same reasons, preparedness for a setting with multi-threading does
not really complicate matters.
The current state of affairs is as follows:
– BTA and its extension with thread-service interaction, which are used to
describe the behaviour of instruction sequences under execution in the nonprobabilistic case, have been extended with probabilistic features in [16].
– In [16], we have added the probabilistic basic and test instructions proposed in Section 4 to PGLB (ProGramming Language B), an instruction
sequence notation rooted in PGA and close to existing assembly languages,
and have given a formal definition of the behaviours produced by the instruction sequences from the resulting instruction sequence notation in terms of
non-probabilistic instructions and probabilistic services.
– The bounded probabilistic jump instructions proposed in Section 5 can be
given a behavioural semantics in the same vein. The unbounded probabilistic
jump instructions fail to have a natural behavioural semantics in the setting
of PGA because infinite instruction sequences are restricted to eventually
periodic ones.
– The extension of BTA with multi-threading, has been generalized to probabilistic multi-threading in [16] as well.
10 Conclusions
We have made a notational proposal of probabilistic instructions with an informal semantics. By that we have contrasted probabilistic instructions in an execution architecture with deterministic services with deterministic instructions in
an execution architecture with partly probabilistic services. The history of the
proposed kinds of instructions can be traced.
We have refrained from an ad hoc formal semantic analysis of the proposed
kinds of instructions. There are many solid semantic options, so many and so
complex that another more distant analysis is necessary in advance to create a
clear framework for the semantic analysis in question.
The grounds of this work are our conceptions of what a theory of probabilistic
instruction sequences and a complementary theory of probabilistic instruction
sequence processing (i.e. execution architectures with probabilistic services) will
lead to:
– comprehensible explanations of relevant probabilistic algorithms, such as the
Miller-Rabin probabilistic primality test [36], with precise descriptions of the
kinds of instructions and services involved in them;
– a solid account of pseudo-random Boolean values and pseudo-random numbers;
– a thorough exposition of the different semantic options for probabilistic instruction sequences;
– explanations of relevant quantum algorithms, such as Shor’s integer factorization algorithm [39], by first giving a clarifying analysis in terms of probabilistic instruction sequences or execution architectures with probabilistic
services and only then showing how certain services in principle can be realized very efficiently with quantum computing.
Projectionism looks to be reasonable for probabilistic programs: they can be
projected adequately to deterministic single-pass instruction sequences for an
execution architecture with appropriate probabilistic services. At present, it is
not entirely clear whether this extends to quantum programs.
References
1. Andova, S., Baeten, J.C.M.: Abstraction in probabilistic process algebra. In: Margaria, T., Yi, W. (eds.) TACAS 2001. Lecture Notes in Computer Science, vol.
2031, pp. 204–219. Springer-Verlag (2001)
2. Andova, S., Baeten, J.C.M., Willemse, T.A.C.: A complete axiomatisation of
branching bisimulation for probabilistic systems with an application in protocol
verification. In: Baier, C., Hermans, H. (eds.) CONCUR 2006. Lecture Notes in
Computer Science, vol. 4137, pp. 327–342. Springer-Verlag (2006)
3. Andova, S., Georgievska, S.: On compositionality, efficiency, and applicability of
abstraction in probabilistic systems. In: Nielsen, M., et al. (eds.) SOFSEM 2009.
Lecture Notes in Computer Science, vol. 5404, pp. 67–78. Springer-Verlag (2009)
4. Arons, T., Pnueli, A., Zuck, L.: Parameterized verification by probabilistic abstraction. In: Gordon, A.D. (ed.) FOSSACS 2003. Lecture Notes in Computer Science,
vol. 2620, pp. 87–102. Springer-Verlag (2003)
5. Baeten, J.C.M., Bergstra, J.A., Smolka, S.A.: Axiomatizing probabilistic processes:
ACP with generative probabilities. Information and Computation 121(2), 234–255
(1995)
6. Baeten, J.C.M., Weijland, W.P.: Process Algebra, Cambridge Tracts in Theoretical
Computer Science, vol. 18. Cambridge University Press, Cambridge (1990)
7. Bergstra, J.A., Bethke, I.: Predictable and reliable program code: Virtual machine
based projection semantics. In: Bergstra, J.A., Burgess, M. (eds.) Handbook of
Network and Systems Administration, pp. 653–685. Elsevier, Amsterdam (2007)
8. Bergstra, J.A., Bethke, I., Ponse, A.: Cancellation meadows: A generic basis theorem and some applications. Computer Journal 56(1), 3–14 (2013)
9. Bergstra, J.A., Loots, M.E.: Program algebra for sequential code. Journal of Logic
and Algebraic Programming 51(2), 125–156 (2002)
10. Bergstra, J.A., Middelburg, C.A.: Instruction sequences with indirect jumps. Scientific Annals of Computer Science 17, 19–46 (2007)
11. Bergstra, J.A., Middelburg, C.A.: Instruction sequence notations with probabilistic
instructions. arXiv:0906.3083v1 [] (June 2009)
12. Bergstra, J.A., Middelburg, C.A.: Instruction sequence processing operators. Acta
Informatica 49(3), 139–172 (2012)
13. Bergstra, J.A., Middelburg, C.A.: On the behaviours produced by instruction sequences under execution. Fundamenta Informaticae 120(2), 111–144 (2012)
14. Bergstra, J.A., Middelburg, C.A.: On the expressiveness of single-pass instruction
sequences. Theory of Computing Systems 50(2), 313–328 (2012)
15. Bergstra, J.A., Middelburg, C.A.: Instruction sequence based non-uniform complexity classes. Scientific Annals of Computer Science 24(1), 47–89 (2014)
16. Bergstra, J.A., Middelburg, C.A.: A thread algebra with probabilistic features.
arXiv:1409.6873v1 [cs.LO] (September 2014)
17. Bergstra, J.A., Ponse, A.: Combining programs and state machines. Journal of
Logic and Algebraic Programming 51(2), 175–192 (2002)
18. Bergstra, J.A., Ponse, A.: Execution architectures for program algebra. Journal of
Applied Logic 5(1), 170–192 (2007)
19. Bergstra, J.A., Ponse, A.: An instruction sequence semigroup with involutive antiautomorphisms. Scientific Annals of Computer Science 19, 57–92 (2009)
20. Bergstra, J.A., Tucker, J.V.: The rational numbers as an abstract data type. Journal of the ACM 54(2), Article 7 (2007)
21. Chadha, R., Cruz-Filipe, L., Mateus, P., Sernadas, A.: Reasoning about probabilistic sequential programs. Theoretical Computer Science 379(1–2), 142–165 (2007)
22. Gay, S.J.: Quantum programming languages: Survey and bibliography. Mathematical Structures in Computer Science 16(4), 581–600 (2006)
23. Giacalone, A., Jou, C.C., Smolka, S.A.: Algebraic reasoning for probabilistic concurrent systems. In: Proceedings IFIP TC2 Working Conference on Programming
Concepts and Methods. pp. 443–458. North-Holland (1990)
24. van Glabbeek, R.J., Smolka, S.A., Steffen, B.: Reactive, generative and stratified models of probabilistic processes. Information and Computation 121(1), 59–80
(1995)
25. He Jifeng, Seidel, K., McIver, A.K.: Probabilistic models for the guarded command
language. Science of Computer Programming 28(2–3), 171–192 (1997)
26. Hoare, C.A.R.: Communicating Sequential Processes. Prentice-Hall, Englewood
Cliffs (1985)
27. Jonsson, B., Larsen, K.G., Yi, W.: Probabilistic extensions of process algebras. In:
Bergstra, J.A., Ponse, A., Smolka, S.A. (eds.) Handbook of Process Algebra, pp.
685–710. Elsevier, Amsterdam (2001)
28. Kozen, D.: A probabilistic PDL. Journal of Computer and System Sciences 30(2),
162–178 (1985)
29. López, N., Núñez, M.: An overview of probabilistic process algebras and their
equivalences. In: Baier, C., et al. (eds.) Validation of Stochastic Systems. Lecture
Notes in Computer Science, vol. 2925, pp. 89–123. Springer-Verlag (2004)
30. McIver, A.K., Morgan, C.C.: Demonic, angelic and unbounded probabilistic choices
in sequential programs. Acta Informatica 37(4–5), 329–354 (2001)
31. Meinicke, L., Solin, K.: Refinement algebra for probabilistic programs. Electronic
Notes in Theoretical Computer Science 201, 177–195 (2008)
32. Milner, R.: Communication and Concurrency. Prentice-Hall, Englewood Cliffs
(1989)
33. Morgan, C.C., McIver, A.K., Seidel, K.: Probabilistic predicate transformers. ACM
Transactions on Programming Languages and Systems 18(3), 325–353 (1996)
34. Perdrix, S., Jorrand, P.: Classically controlled quantum computation. Mathematical Structures in Computer Science 16(4), 601–620 (2006)
35. Ponse, A.: Program algebra with unit instruction operators. Journal of Logic and
Algebraic Programming 51(2), 157–174 (2002)
36. Rabin, M.O.: Probabilistic algorithms. In: Traub, J.F. (ed.) Algorithms and Complexity: New Directions and Recent Results, pp. 21–39. Academic Press, New York
(1976)
37. Schöning, U.: A probabilistic algorithm for k-SAT based on limited local search
and restart. Algorithmica 32(4), 615–623 (2002)
38. Sharir, M., Pnueli, A., Hart, S.: Verification of probabilistic programs. SIAM Journal of Computing 13(2), 292–314 (1984)
39. Shor, P.W.: Algorithms for quantum computation: Discrete logarithms and factoring. In: FOCS ’94. pp. 124–134. IEEE Computer Society Press (1994)
Formal verification of an interior point algorithm
instanciation
Guillaume Davy∗‡ , Eric Feron† , Pierre-Loic Garoche∗ , and Didier Henrion‡
arXiv:1801.03833v1 [cs.LO] 11 Jan 2018
∗
Onera - The French Aerospace Lab, Toulouse, FRANCE
‡
CNRS LAAS, Toulouse, FRANCE
†
Georgia Institute of Technology, Atlanta GA, USA
Abstract. With the increasing power of computers, real-time algorithms
tend to become more complex and therefore require better guarantees
of safety. Among algorithms sustaining autonomous embedded systems,
model predictive control (MPC) is now used to compute online trajectories, for example in the SpaceX rocket landing. The core components
of these algorithms, such as the convex optimization function, will then
have to be certified at some point. This paper focuses specifically on
that problem and presents a method to formally prove a primal linear
programming implementation.
We explain how to write and annotate the code with Hoare triples in
a way that eases their automatic proof. The proof process itself is performed with the WP-plugin of Frama-C and only relies on SMT solvers.
Combined with a framework producing all together both the embedded
code and its annotations, this work would permit to certify advanced
autonomous functions relying on online optimization.
1 Introduction
The increasing power of computers makes possible the use of complex numerical methods in real time within cyber-physical systems. These algorithms,
despite having been studied for a long time, raise new issues when used online.
Among these algorithms, we are concerned specifically with numerical optimization which is used in model predictive control (MPC) for example, by SpaceX
to perform computation of trajectory for rocket landing [1].
These iterative algorithms perform complex math operations in a loop until they reach an -close optimal value. This implies some uncertainty on the
number of iterations required but also on the reliability of the computed result.
We address both these issues in this paper. As a first step, we focus on linear programming, with the long-term objective of proving more general convex
optimization problems. We therefore chose to study an interior point algorithm
(IPM, Interior Point Method) instead of the famous simplex methods. As a matter of fact simplex is bound to linear constraints while IPMs can address more
general convex cones such as the Lorentz cone problems (quadratic programming
QP and second-order cone programming SOCP), or Semi-Definite Programming
(SDP).
To give the best possible guarantee, we rely on formal methods to prove the
algorithm soundness. More specifically we use Hoare Logic [2,3] to express what
we expect from the algorithm and rely on the Weakest Precondition [4] approach
to prove that the code satisfies them.
Fig. 1. Complete toolchain we are interested in; this article focuses on writing C code and annotations
Figure 1 sketches our fully automatic process which, when provided with a
convex problem with some unknown values, generates the code, the associated
annotations and proves it automatically. We are not going to present the whole process
in this paper but concentrate on how to write the embedded code, annotate it
and automate its proof.
In this first work, we focus on the algorithm itself assuming a real semantics
for float variables and leave the floating point problem for future work. However
the algorithm itself is expressed in C with all the associated hassle and complexity. We proved the absence of runtime error, the functionality of the code and
its termination.
This paper is structured as follow. In Section 2 we present some key notions
for both convex optimization and formal proof of our algorithm. In Section 3 we
present the code structure supporting the later proof process. In Section 4 we
introduce our code annotatations. Section 5 focuses on the proof process. Section
6 presents some experimental results.
2 Preliminaries
In order to support the following analyses, we introduce the notions and notations used throughout the paper. First, we discuss Linear Programming (LP)
problems and a primal IPM algorithm to solve it. Then, we introduce the reader
to Hoare logic based reasoning.
2.1 Optimization
Linear Programming is a class of optimization problems. A linear program is
defined by a matrix A of size m × n, and two vectors b, c of respective size m,
and n.
Definition 1 (Linear program) Let us consider A ∈ Rm×n , b ∈ Rm and
c ∈ Rn . We define P (A, b, c) as the linear program:
min_{x ∈ R^n, Ax ≤ b} ⟨c, x⟩   with   ⟨c, x⟩ = c^T x     (1)
Definition 2 (Linear program solution) Let us consider the problem P (A, b, c)
and assume that an optimal point x∗ exists and is unique. We have then the following definitions:
E_f = {x ∈ R^n | Ax ≤ b}          (feasible set of P)   (2)
f(x) = ⟨c, x⟩                     (cost function)       (3)
x* = arg min_{x ∈ E_f} f          (optimal point)       (4)
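As a small illustration of these definitions (an example of our own choosing): with n = 1, m = 2, A = (1, −1)^T, b = (1, 0)^T and c = 1, the constraints read x ≤ 1 and −x ≤ 0, so

E_f = [0, 1] ,   f(x) = x ,   x* = 0 .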
Primal interior point algorithm. We decided
to use interior point method (IPM) to enable the future extension of this work to more
advanced convex programs. We chose a primal algorithm for its simplicity compared to
primal/dual algorithms and we followed Nesterov’s book [5, chap. 4] for the theory. The
following definitions present the key ingredients of IPM algorithms: barrier function, central path, Newton steps and approximate centering condition.
Fig. 2. Barrier function
Barrier function. Computing the extrema, i.e. minimum or maximum, of a function
without additional constraints can be done by analyzing the zeros of its gradient.
However, this does not apply in the presence of constraints. One approach
amounts to introducing a penalty function F : E_f → R to represent the feasible
set, i.e. the acceptable region. This function must tend towards infinity when
x approaches the border of E_f, cf. Figure 2 for a logarithmic barrier function
encoding a set of linear constraints.
Definition 3 The adjusted cost function is a linear combination of the previous
objective function f and the barrier function F .
f˜(x, t) = t × f (x) + F (x) with t ∈ R
(5)
The variable t balances the impact of the barrier: when t = 0, f˜(x, t) is
independent from the objective while when t → +∞, f˜(x, t) is equivalent to
t × f (x).
Central path. We are interested in the values of x minimizing f˜ when t varies
from 0 to +∞. These values for x characterize a path, the central path:
Definition 4 (Central path and analytic center)
x* : R^+ → E_f ,   t ↦ arg min_{x ∈ E_f} f̃(x, t)     (6)
x∗ (0) is called the analytic center, it is independent from the cost function.
The central path has an interesting property when t increases:
Property 1
lim_{t→+∞} x*(t) = x*     (7)
The algorithm performs a sequence of iterations, updating a point X that follows the central
path and eventually reaches the optimal point. At the beginning of an iteration, there exists
a real t such that X = x*(t). Then t is increased by dt > 0 and x*(t + dt) is the new point X.
This translation dX is performed by a Newton step as sketched in Figure 3.
Fig. 3. One step along the central path
Newton step. Newton's method computes an approximation of a root of a function G : R^k → R^l.
It is a first-order method, i.e. it relies on the gradient of the function and, from a point in the
neighbourhood of a root, performs a sequence of iterations, called Newton steps. Figure 4
illustrates one such step.
Fig. 4. Newton step for k = l = 1
Definition 5 A Newton step transforms Y_n into Y_{n+1} as follows:
Y_{n+1} − Y_n = −(G′(Y_n))^{-1} G(Y_n)     (8)
We apply the Newton step to the gradient of f̃, computing its root which
coincides with the minimum of f̃. We obtain
dX = −(F″(X))^{-1} ((t + dt)c + F′(X))     (9)
Self-concordant barrier. The convergence of the Newton method is guaranted
only in the neighbourhood to the function root. This neighbourhood is called
the region of quadratic convergence; this region evolves on each iteration since t
varies. To guarantee that the iterate X remains in the region after each iteration,
we require the barrier function to be self-concordant:
Definition 6 (Self-concordant barrier) A closed convex function g is a ν-self-concordant barrier if
D^3 g(x)[u, u, u] ≤ 2 (u^T g″(x) u)^{3/2}     (10)
and
g′(x)^T g″(x) g′(x) < ν     (11)
From now on we assume that F is a self-concordant barrier. Thus F 00 is
non-degenerate([5, Th4.1.3]) and we can define:
Definition 7 (Local norm)
‖y‖*_x = √( y^T × F″(x)^{-1} × y )     (12)
This local-norm allows to define the Approximate Centering Condition(ACC),
the crucial property which guarantees that X remains in the region of quadratic
convergence:
Definition 8 (ACC) Let x ∈ Ef and t ∈ R+ , ACC(x, t, β) is a predicate
defined by
‖f̃′(x)‖*_x = ‖tc + F′(x)‖*_x ≤ β     (13)
In the following, we choose a specific value for β, as defined in (14).
β < (3 − √5)/2     (14)
The only step remaining is the computation of the largest dt such that X
remains in the region of quadratic convergence around x∗ (t + dt).
dt = γ / ‖c‖*_x     (15)
with γ a constant.
This choice maintains the ACC at each iteration([5, Th4.2.8]):
Theorem 1 (ACC preserved) If we have ACC(X, t, β) and γ ≤ √β/(1 + √β) − β,
then we also have ACC(X + dX, t + dt, β).
For this work, we use the classic self-concordant barrier for linear programs:
F(x) = Σ_{i=1}^{m} −log(b_i − A_i x) ,   with A_1, . . . , A_m the rows of A.
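As an illustration of how this barrier and its gradient can be evaluated in the kind of C code targeted here, consider the sketch below; the flat storage of A, the fixed sizes and the function names are illustrative choices of ours, not the generated code of the framework.

#include <math.h>
#include <stdio.h>

#define M 3  /* number of constraints */
#define N 2  /* number of variables   */

/* Barrier value F(x) = sum_i -log(b_i - A_i x), with A_i the i-th row of A.
   Returns INFINITY if x is outside the feasible set. */
static double barrier(const double A[M][N], const double b[M], const double x[N]) {
    double F = 0.0;
    for (int i = 0; i < M; ++i) {
        double slack = b[i];
        for (int j = 0; j < N; ++j) slack -= A[i][j] * x[j];
        if (slack <= 0.0) return INFINITY;
        F -= log(slack);
    }
    return F;
}

/* Gradient F'(x)_j = sum_i A_ij / (b_i - A_i x). */
static void barrier_grad(const double A[M][N], const double b[M],
                         const double x[N], double grad[N]) {
    for (int j = 0; j < N; ++j) grad[j] = 0.0;
    for (int i = 0; i < M; ++i) {
        double slack = b[i];
        for (int j = 0; j < N; ++j) slack -= A[i][j] * x[j];
        for (int j = 0; j < N; ++j) grad[j] += A[i][j] / slack;
    }
}

int main(void) {
    /* Feasible set: x1 <= 1, x2 <= 1, -x1 - x2 <= 0 (a triangle). */
    const double A[M][N] = { {1, 0}, {0, 1}, {-1, -1} };
    const double b[M]    = { 1, 1, 0 };
    const double x[N]    = { 0.5, 0.25 };
    double g[N];
    barrier_grad(A, b, x, g);
    printf("F(x) = %f, F'(x) = (%f, %f)\n", barrier(A, b, x), g[0], g[1]);
    return 0;
}

The Newton direction dX of (9) additionally requires the Hessian F″(x) = Σ_i A_i^T A_i / (b_i − A_i x)^2 and the solution of a linear system; that part is left out of this sketch.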
Importance of the analytic center. x∗F is required to initiate the algorithm. In
case of offline use the value could be precomputed and validated. However in
case of online use, its computation itself has to be proved. Fortunately this can
be done by a similar algorithm with comparable proofs.
2.2 Formal methods
For the same program, different semantics can be used to specify its behavior:
(i) a denotational semantics, expressing the program as a mathematical function,
(ii) an operational semantics, expressing it as a sequence of basic computations,
or (iii) an axiomatic semantics. In the latter case, the semantics can be defined
in an incomplete way, as a set of projective statements, i.e. observations. This
idea was formalized by Floyd [3], Hoare [6] as a way to specify the expected
behavior, the specification, of a program through pre- and post-condition, or
assume-guarantee contracts.
Definition 9 (Hoare Triple) Let C : M → M be a program with M the set of
its possible memories. Let P and Q, two predicates on M. We say that the Hoare
triple {P } C {Q} is valid when
∀m ∈ M, P (m) ⇒ Q(C(m))
(16)
– P is called a precondition or requires and Q the postcondition or ensures.
– If C is a function, {P } C {Q} is called a contract.
These Hoare triples can be used to annotate programs written in C. In the
following, we rely on the ANSI C Specification Language (ACSL)[7], the specification language of Frama-C, to annotate functions.
The Frama-C tool processes the annotation language, identifying each Hoare
Triple and converting them into logical formulas, using the Weakest Precondition
strategy.
Definition 10 (Weakest Precondition) The Weakest Precondition of a program C and a postcondition Q is a formula WP(C, Q) such that:
1. {WP(C, Q)} C {Q} is a valid Hoare triple
2. For all P, {P} C {Q} valid implies P ⇒ WP(C, Q)

Theorem 2 (Proving Hoare Triple) If WP(C, Q) = R and P ⇒ R, then {P} C {Q} is valid.
The WP property can be computed mechanically, propagating back the postcondition along the program instructions. Examples of such rules are given in Figure 5. The only exception is the while-loop rule, which requires to be provided with an invariant; this rule is presented later in the document in Figure 12.

Assignment:   WP(x = E, Q) = ∀y, y = E ⇒ Q[x ← y]
Sequence:     if WP(S2, Q) = O and WP(S1, O) = R, then WP(S1;S2, Q) = R
Conditional:  if WP(S1, Q) = P1 and WP(S2, Q) = P2, then WP(if (E) S1 else S2, Q) = (E ⇒ P1) ∧ (¬E ⇒ P2)

Fig. 5. Examples of WP rules
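For illustration, here is a toy contract (ours, not part of the verified development) in the style that WP handles with the rules of Figure 5; the function name and the property are purely illustrative.

/*@ requires \valid(r);
  @ ensures *r == (a < b ? b : a);
  @ ensures *r >= a && *r >= b;
  @ assigns *r;
  @*/
void max_into(int a, int b, int *r)
{
    if (a < b)
        *r = b;    /* Q[*r <- b] holds since a < b */
    else
        *r = a;    /* Q[*r <- a] holds since a >= b */
}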
Automation. The use of SMT solvers enables the automatic proof of some programs and their annotations. This is however only feasible for properties that can be discharged without resorting to a proof assistant. It requires writing both programs (cf. Section 3) and annotations (cf. Section 4) with some considerations for the proof process.
3 Writing Provable Code

In order to ease the proof process we write the algorithm code in a very specific manner. In the current section we present our modeling choices: we made all variables global, split the code into small meaningful functions for matrix operations, and transformed the while loop into a for loop to address the termination issue.
Variables. One of the difficulties when analyzing C code is memory-related issues. Two different pointers can reference the same part of the stack. A fine and complex modeling of the memory in the predicate encoding, such as separation logic [8], could address these issues. Another, more pragmatic, approach amounts to substituting global static variables for all local variables and function arguments. Two static arrays cannot overlap since their memory is allocated at compile time. Since we are targeting embedded systems, static variables also make it possible to compute and reduce the memory footprint. However there are two major drawbacks: the code is less readable and variables are accessible from any function. These two points usually lead the programmer to mistakes, but they can be accepted in the case of generated code. We tag all variables with the function they belong to by prefixing each variable with its function name. This brings traceability.
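A minimal illustration of this convention (the names and the constant are ours, not the project's): what would be locals of compute_dt become static globals prefixed with the function name, so no two pointers can alias and the footprint is fixed at compile time.

#define GAMMA 0.05                /* assumed constant of (15) */
static double compute_dt_norm_c;  /* local norm of c, filled elsewhere */
static double compute_dt_dt;      /* result: the step dt               */

void compute_dt(void)
{
    compute_dt_dt = GAMMA / compute_dt_norm_c;   /* dt = gamma / ||c||*_x, cf. (15) */
}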
Function. Proving large functions is usually hard with SMT-based reasoning since the generated goals are too complex to be discharged automatically. A more efficient approach is to associate small pieces of code with local contracts. These intermediate annotations act as cut-rules in the proof process. Figure 6 presents the function-call rule used in the WP algorithm.

Function call:  if {P} C {R} and R ⇒ Q, then WP(f(), Q) = P, with void f() { C }

Fig. 6. WP rule used for function calls

Let A = B[C] be a piece of code containing C. Replacing C by a call to f() { C } requires either to inline the call or to write a new contract {P} f() {Q}, yielding two smaller goals instead of a larger one. Specifically, in the proof of B[f()], C has been replaced by P and Q, which is simpler than a WP computation.
Therefore, instead of having one large function, our code is structured into several functions: one per basic operation. Each associated contract focuses on a specific element of the proof without interference with the others. Thereby the formulas sent to SMT solvers are smaller and the code is modular.
Matrix operations. This is extremely useful for matrix operations. In C, an M × N matrix operation is written as M × N scalar operations affecting an array representing the resulting matrix. With our method these operations are gathered in a function annotated with the logic representation of the matrix operation, cf. Figure 7. The contract associated with such a small function relates the high-level matrix operation to the low-level C computation, acting as a refinement contract.
The encoding hides the low-level operations from the rest of the code, leading to two kinds of goals:
– low-level operations (memory and basic matrix operations);
– high-level operations (mathematics on matrices).
The structure of the final code after the split into small functions is shown in Figure 8.

/*@ ensures MatVar(dX, 2, 1) == \old(mat_scal(MatVar(cholesky, 2, 1), -1.0));
  @ assigns *(dX+(0..2)); */
void set_dX()
{
    dX[0] = -cholesky[0];
    dX[1] = -cholesky[1];
}

Fig. 7. Example of matrix operation: dX = -cholesky encapsulated in a function, for dX and cholesky of size 2 × 1
– compute fills X with the analytic center and calls pathfollowing.
– pathfollowing contains the main loop, which updates dX and dt.
– compute_pre computes the Hessian and gradient of F, which are required for dt and dX.
– update_dX and update_dt call the associated subfunction and update the corresponding value.
– compute_dt performs (15); it requires a call to Cholesky to compute the local norm of c.
– compute_dX performs (9); Cholesky is used to invert the Hessian matrix.

Fig. 8. Call tree of the implementation (rose boxes are matrix computations)
While-loop. The interior point algorithm is iterative: it performs the same operation until reaching the stop condition. This stopping criterion depends on the desired precision ε:
    t_k ≥ t_stop = (1/ε) (1 + (1 + β)β/(1 − β))    (17)
Proving the termination amounts to finding a suitable bound guaranteeing that a converged value is obtained. We present here the convergence proof of [5] and use it to ensure termination. It relies on the use of the following geometric progression:
Definition 11
    Lower(k) = ( γ(1 − 2β) / ((1 − β) ‖c‖*_{x∗_F}) ) × (1 + γ/(1 + β))^{k−1}    (18)
This sequence bounds t from below at each iteration k of the algorithm ([5, Th4.2.9]):

Theorem 3 For all k ∈ N∗,
    t_k ≥ Lower(k)    (19)
Combined with (17), a maximal number of iterations, called klast, can be computed:

Theorem 4 (Required number of iterations)
    klast = 1 + [ ln(1 + (β+1)β/(1−β)) − ln( γ(1−2β) / ((1−β) ‖c‖*_{x∗_F}) ) − ln(ε) ] / ln(1 + γ/(β+1))    (20)
Since we have a termination proof based on the number of iterations, the while-loop can be soundly replaced by a for-loop with klast iterations. As shown in Figure 9, the number of iterations is greater than the one obtained with the original while-loop and its stopping criterion, but it gives an absolute guarantee of termination. The analytic center is required to compute klast through ‖c‖*_{x∗_F}; therefore a worst-case execution time can be computed if and only if ‖c‖*_{x∗_F} has a lower bound at compilation time.

Fig. 9. Evolution of t and Lower along the algorithm; notice that t always remains greater than Lower
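As a sketch of how (20) could be evaluated offline (the function name and the use of a ceiling are our reading of the theorem, not the project's code), the bound only needs β, γ, ε and a lower bound on ‖c‖*_{x∗_F}:

#include <math.h>

/* assumes 0 < beta < (3 - sqrt(5))/2, gamma > 0, eps > 0 and
   norm_c_at_center a positive lower bound on ||c||*_{x*_F} */
int compute_klast(double beta, double gamma, double eps, double norm_c_at_center)
{
    double num = log(1.0 + (beta + 1.0) * beta / (1.0 - beta))
               - log(gamma * (1.0 - 2.0 * beta) / ((1.0 - beta) * norm_c_at_center))
               - log(eps);
    double den = log(1.0 + gamma / (beta + 1.0));
    return 1 + (int)ceil(num / den);
}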
4 Annotate the code

The code is prepared to ease its proof, but the specification still remains to be formalized as function contracts describing the computation of an ε-optimal solution. This requires enriching ACSL with some new mathematical definitions. We introduce a set of axiomatic definitions to specify optimization-related properties. These definitions require, in turn, additional concepts related to matrices. Similar approaches were already proposed [9] but were too specific to ellipsoid problems. We present here both the main annotation and the function local contracts, which ease the global proof.
Matrix axiomatic. To write the mathematical properties, we need to be able to express the notion of matrix and operations over it. Therefore we defined a new ACSL axiomatic. An ACSL axiomatic permits the specifier to extend the ACSL language with new types and operators, acting as an algebraic specification.
axiomatic matrix
{
type LMat;
First, we defined the new type: LMat standing for Logic Matrix. This type is
abstract therefore it will be defined by its operators.
// Getters
logic integer getM(LMat A);
logic integer getN(LMat A);
logic real mat_get(LMat A, integer i, integer j);
// Constructors
logic LMat MatVar(double* ar, integer m, integer n) reads ar[0..(m*n)];
logic LMat MatCst_1_1(real x0);
logic LMat MatCst_2_3(real x0, real x1, real x2, real x3, real x4, real x5);
Getters allow extracting information from the type while constructors build new LMat objects. The first constructor is followed by a reads clause stating which part of the memory affects the corresponding LMat object. The constant constructors take the elements of the matrix directly as arguments; they can be replaced with an ACSL array for bigger matrix sizes.
logic LMat mat_add(LMat A, LMat B);
logic LMat mat_mult(LMat A, LMat B);
logic LMat transpose(LMat A);
logic LMat inv(LMat A);
...
Then the theory defines the operations on the LMat type. These are defined axiomatically, with numerous axioms to cover their various behaviors. We only give here an excerpt from that library.
axiom getM_add: \forall LMat A, B; getM(mat_add(A, B))==getM(A);
axiom mat_eq_def:
  \forall LMat A, B;
    (getM(A)==getM(B)) ==> (getN(A)==getN(B)) ==>
    (\forall integer i, j; 0<=i<getM(A) ==> 0<=j<getN(A) ==>
       mat_get(A,i,j)==mat_get(B,i,j)) ==>
    A == B;
...
}
Matrix operations. As explained in the previous section, the matrix computations are encapsulated into smaller functions. Their contracts state the equality between the resulting matrix and the operation computed. An extensionality axiom (mat_eq_def) is required to prove this kind of contract. Extensionality means that if two objects have the same external properties then they are equal.
This axiom belongs to the matrix axiomatic but is too general to be used directly; therefore lemmas specific to the matrix sizes are added for each matrix assignment. These lemmas can be proven from the previous axioms and therefore do not introduce more assumptions.
The proof remains difficult or hardly automatic for SMT solvers; therefore we append additional assertions, as sketched in Figure 10, at the end of each function, stating all the hypotheses of the extensionality lemma. Proving these postconditions is straightforward, and smaller goals now need to be proven.
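For instance, the size-specific instance of extensionality used for the 2×1 assignment of Figure 7 could look as follows; the lemma name and the exact statement are illustrative, not the project's.

/*@ lemma mat_eq_2_1:
  @   \forall LMat A, B;
  @     getM(A) == 2 ==> getN(A) == 1 ==>
  @     getM(B) == 2 ==> getN(B) == 1 ==>
  @     mat_get(A, 0, 0) == mat_get(B, 0, 0) ==>
  @     mat_get(A, 1, 0) == mat_get(B, 1, 0) ==>
  @     A == B;
  @*/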
assert getM(MatVar(dX,2,1)) == 2;
assert getN(MatVar(dX,2,1)) == 1;
assert getM(MatVar(cholesky,2,1)) == 2;
assert getN(MatVar(cholesky,2,1)) == 1;
assert mat_get(MatVar(dX,2,1),0,0) == mat_get(\old(mat_scal(MatVar(cholesky,2,1),-1.0)),0,0);
assert mat_get(MatVar(dX,2,1),1,0) == mat_get(\old(mat_scal(MatVar(cholesky,2,1),-1.0)),1,0);

Fig. 10. Assertions appended to the function from Figure 7
Assertions also act as cut-rules in ACSL since they introduce the property into the set of hypotheses considered (see Figure 11).

Assert:  if WP(C, P ⇒ Q) = R and WP(C, P) = S, then WP(C; assert P;, Q) = R ∧ S

Fig. 11. WP rule used for assert

This works for small examples; when scaling up, each C instruction is embedded inside a correctly annotated function.
Optimization axiomatic. Besides generic matrix operators we also need some operators specific to our algorithm.

axiomatic Optim
{
logic LMat hess(LMat x0, LMat x1, LMat x2);
logic LMat grad(LMat x0, LMat x1, LMat x2);

The Hessian and gradient are hard to define without real analysis, which is well beyond the scope of this article. Therefore we decided to directly axiomatize some theorems relying on their definition, like [5, Th4.1.14].
logic real sol(LMat x0, LMat x1, LMat x2);
The sol operator represents x∗ , the exact solution which can be defined by
Property 2 (Axiomatic characterization of Definition 2) s is a solution
of 1 if and only if
1. For all y ∈ Ef , cT y ≥ s
2. For all y ∈ R, ∀x ∈ Ef , cT x ≥ y implies s ≥ y
An ACSL equivalent definition is:
logic real sol(LMat A, LMat b, LMat c);
axiom sol_min: \forall LMat A, b, c;
\forall LMat y; mat_gt(mat_mult(A, y), b) ==>
dot(c, y) >= sol(A, b, c);
axiom sol_greater: \forall LMat A, b, c;
\forall Real y;
(\forall LMat x; mat_gt(mat_mult(A, x), b) ==> dot(c, x) >= y) ==>
sol(A, b, c) >= y;
Then we defined some operators representing definitions 7, 8 and 11.
logic real norm(LMat x0, LMat x1, LMat x2, LMat x3) =
\sqrt(mat_get(mat_mult(transpose(x2), mat_mult(inv(hess(x0, x1, x3)), x2)), (0), (0)));
logic boolean acc(LMat x0, LMat x1, LMat x2, real x3, LMat x4, real x5) =
((norm(x0, x1, mat_add(grad(x0, x1, x4), mat_scal(x2, x3)), x4))<=(x5));
...
}
Contract on pathfollowing. A sound algorithm must produce a point in the feasible set such that its cost is ε-close to sol. This is asserted by two global post-conditions:

ensures mat_gt(mat_mult(A, MatVar(X, N, 1)), b);
ensures dot(MatVar(X, 2, 1), c) - sol(A, b, c) < EPSILON

as well as two preconditions stating that X is feasible and close enough to the analytic center:

requires mat_gt(mat_mult(A, MatVar(X, N, 1)), b);
requires acc(A, b, c, 0, MatVar(X, N, 1), BETA);

Thanks to our two new theories, Matrix and Optim, writing and reading this contract is straightforward and can be checked by anyone familiar with linear programming.
Main Loop. A loop needs to be annotated with an invariant to have its weakest precondition computed (cf. Figure 12).

For loop:  if WP(E, I) = P, (¬F ∧ I) ⇒ Q and {F ∧ I} C;G {I}, then WP(for (E;F;G) inv I {C}, Q) = P

Fig. 12. WP rule for a loop
We need three invariants for our path-following algorithm. The first one guarantees the feasibility of X, while the second one states the conservation of the ACC (cf. Def. 8). The third invariant asserts that t increases enough at each iteration, more precisely that it is greater than a geometric progression (Definition 11).

/*@ loop-invariant mat_gt(mat_mult(A, MatVar(X, N, 1)), b);
  @ loop-invariant acc(A, b, c, t, MatVar(X, N, 1), BETA);
  @ loop-invariant t > lower(l); */
for (int l = 0; l < NBR; l++) { ... }
Proving the initialization is straightforward, thanks to the main preconditions. The preservation of the first invariant is stated by [5, Th4.1.5], which was translated into an ACSL lemma, the second by Theorem 1 and the third one by Theorem 3.
The last two loop invariants are combined to prove the second postcondition of pathfollowing, thanks to Theorem 5 and to NBR being equal to klast (20).

Theorem 5 ([5, Th4.2.7]) Let t ≥ 0 and X such that ACC(X, t, β); then
    c^T X − c^T X∗ < (1/t) × (1 + (β + 1)β/(1 − β))    (21)
Loop body. In the main loop there are three function calls: update_pre, computing some common values, update_t and update_x (Figure 8). Therefore Theorem 1 is broken into several properties and the corresponding post-conditions. For example, the contract of update_t is:

/*@ requires MatVar(hess, N, N)==hess(A, b, MatVar(X, N, 1));
  @ requires acc(A, b, c, t, MatVar(X, N, 1), BETA);
  @ ensures acc(A, b, c, t, MatVar(X, N, 1), BETA + GAMMA);
  @ ensures t > \old(t)*(1 + GAMMA/(1 + BETA));*/
void update_t();
The first postcondition is an intermediate result stating that
    ACC(X, t + dt, β + γ)    (22)
This result is used as a precondition for update_x. The second postcondition corresponds to the product of t by the common ratio of the geometric progression Lower (cf. Definition 11), which will be used to prove the third invariant of the loop. The first precondition is a postcondition of update_pre and the second one is the second loop invariant.
5 Automatic proof with SMT solvers
For each annotated piece of code, the Frama-C WP plugin computes the Weakest
precondition and generates all the first order formulas required to validate the
Hoare triples.
There are two main ways to prove goals: proving them with a proof assistant – which requires a human – or proving them with a fully automatic SMT solver. We decided to rely only on SMT solvers in order to be able to completely automate the process. Therefore it is better to have lots of small goals instead of several larger ones. We split the code for this reason, and we now split the proofs of lemmas into several intermediate lemmas. For example, in order to prove (22) we wrote update_t_ensures1, where P1 is ACC(X, t, β) and P2 is dt = γ/‖c‖*_x:
    ∀x, t, dt; P1 ⇒ P2 ⇒ ACC(X, t + dt, β + γ)    (update_t_ensures1)
which itself needs update_t_ensures1_l0:
    ∀x, t, dt; P1 ⇒ P2 ⇒ ‖F'(X) + c(t + dt)‖*_x ≤ β + γ    (update_t_ensures1_l0)
Equation update_t_ensures1_l0 needs 3 lemmas to be proven:
    ∀x, t, dt; P2 ⇒ ‖c × dt‖*_x = γ    (update_t_ensures1_l3)
    ∀x, t, dt; P1 ⇒ ‖F'(X) + c × t‖*_x ≤ β    (update_t_ensures1_l2)
    ∀x, t, dt; P1 ⇒ P2 ⇒ ‖F'(X) + c(t + dt)‖*_x ≤ ‖F'(X) + c × t‖*_x + ‖c × dt‖*_x    (update_t_ensures1_l1)
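One possible ACSL rendering of update_t_ensures1_l3 (the lemma name and the exact quantification are our guess, not the project's version) uses the norm and mat_scal operators introduced above:

/*@ lemma update_t_ensures1_l3_sketch:
  @   \forall LMat A, b, c, X; \forall real dt;
  @     dt == GAMMA / norm(A, b, c, X) ==>
  @     norm(A, b, mat_scal(c, dt), X) == GAMMA;
  @*/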
The proof tree for the first ensures of update_t can be found in Figure 13.

Fig. 13. Proof tree for (22) (proven goals in green, axioms in white)
6 Experimentations

Frama-C is a powerful tool but not always built for our specific needs, therefore we had to use some tricks to make it prove our goals.
Using multiple files. Frama-C automatically adds, for each goal, all the lemmas as hypotheses. This increases significantly the size of the goal. To avoid this issue, which prevented some proofs, we wrote each function or lemma in a separate file. In each file we add as axioms all the lemmas required to prove the goal. This allows us to prove each goal independently with a minimal context.
The impact of the separation into multiple functions (Section 3) and the separation into multiple files is shown in Table 1.
The annotated code can be retrieved from https://github.com/davyg/proved_primal
Size of A | Experience | nb function | nb file | nb proven goal | nb goal
2×5       | exp1       | 1           | 1       | 21             | 25
2×5       | exp2       | 12          | 1       | 48             | 48
2×5       | exp3       | 12          | 12      | 48             | 48
4×15      | exp1       | 1           | 1       | 43             | 46
4×15      | exp2       | 26          | 1       | 97             | 97
4×15      | exp3       | 26          | 26      | 97             | 97
8×63      | exp1       | 1           | 1       | 12             | 109
8×63      | exp2       | 78          | 1       | 257            | 264
8×63      | exp3       | 78          | 78      | 264            | 264

Table 1. Proof results for compute_dt with one function and one file (exp1), with multiple functions (exp2) or with multiple files (exp3), with a timeout of 30s for Alt-Ergo, on randomly generated problems of the given sizes.

7 Related work

Related works include first activities related to the validation of numerically intensive control algorithms. This article is an extension of Wang et al [10], which presented annotations for a convex optimization algorithm, namely IMP, but the process was both manual and theoretical: the code was annotated within Matlab and without machine-checked proofs. Another work from the same authors [9]
presented a method similar to ours but limited to simple control algorithms, namely linear controllers. The required theories in ACSL were both different and less general than the ones we are proposing here.
Concerning the soundness of convex optimization, Cimini and Bemporad [11] present a termination proof for a quadratic program but without any concern for the proof of the implementation itself. A similar criticism applies to Tøndel, Johansen and Bemporad [12], where another possible approach to online linear programming is proposed; moreover it is unclear how this could scale and how to extend it to other convex programs. Roux et al [13,14] also presented a way to certify convex optimization algorithms, namely SDP and its sums-of-squares (SOS) extension, but the certification is done a posteriori, which is incompatible with online optimization.
A last set of works, e.g. the work of Boldo et al [15], concerns the formal proof of complex numerical algorithms, relying only on theorem provers. Although code can be extracted from the proof, such code is usually not directly suitable for embedded systems: it is too slow and requires a different compilation step, which should also be proven to provide the same guarantees as our method.
8 Conclusion

In this article we presented a method to guarantee the safety of numerical algorithms in a critical embedded system. This allows embedding complex algorithms in critical real-time systems with formal guarantees on both their result and their termination. This method was applied to a primal algorithm solving linear programs.
The implementation is first designed to be easier to prove. Then it is annotated so that, in a third step, Frama-C and SMT solvers can prove the specification automatically. Combined with a code generator such as CVX [16], extended with annotation generation, it could lead to a tool taking an optimization problem and generating its code and proof automatically.
We worked with real variables to concentrate on runtime errors, termination and functionality, and left floating-point errors for future work.
This proof relies on several points: the tools used, the axiomatics we wrote, the main ACSL contract and the theorems used as axioms.
There is also some unchecked code which is independent from the core proof of the algorithm and remains for further work: the Cholesky decomposition and the Hessian and gradient computation. We also plan to extend the whole work to convex programming.
References
1. Blackmore, L.: Autonomous precision landing of space rockets. National Academy
of Engineering, Winter Bridge on Frontiers of Engineering 4(46) (December 2016)
2. Hoare, C.A.R.: An axiomatic basis for computer programming. Commun. ACM
12(10) (1969) 576–580
3. Floyd, R.W.: Assigning meanings to programs. Proceedings of Symposium on
Applied Mathematics 19 (1967) 19–32
4. Dijkstra, E.W.: Guarded commands, nondeterminacy and formal derivation of
programs. Commun. ACM 18(8) (1975) 453–457
5. Nesterov, Y., Nemirovski, A.: Interior-point Polynomial Algorithms in Convex
Programming. Volume 13 of Studies in Applied Mathematics. Society for Industrial
and Applied Mathematics (1994)
6. Hoare, C.A.R.: An axiomatic basis for computer programming. Commun. ACM
12 (October 1969) 576–580
7. Baudin, P., Filliâtre, J.C., Marché, C., Monate, B., Moy, Y., Prevosto, V.:
ACSL: ANSI/ISO C Specification Language. version 1.11. http://frama-c.com/
download/acsl.pdf
8. Reynolds, J.C.: Separation logic: a logic for shared mutable data structures. In:
Proceedings 17th Annual IEEE Symposium on Logic in Computer Science. (2002)
55–74
9. Herencia-Zapana, H., Jobredeaux, R., Owre, S., Garoche, P.L., Feron, E., Perez,
G., Ascariz, P.: Pvs linear algebra libraries for verification of control software
algorithms in c/acsl. In Goodloe, A., Person, S., eds.: NASA Formal Methods Forth International Symposium, NFM 2012, Norfolk, VA USA, April 3-5, 2012.
Proceedings. Volume 7226 of Lecture Notes in Computer Science., Springer (2012)
147–161
10. Wang, T., Jobredeaux, R., Pantel, M., Garoche, P.L., Feron, E., Henrion, D.: Credible autocoding of convex optimization algorithms. Optimization and Engineering
17(4) (Dec 2016) 781–812
11. Cimini, G., Bemporad, A.: Exact complexity certification of active-set methods for
quadratic programming. IEEE Transactions on Automatic Control PP(99) (2017)
1–1
12. Tøndel, P., Johansen, T.A., Bemporad, A.: An algorithm for multi-parametric
quadratic programming and explicit MPC solutions. Automatica 39(3) (2003)
489–497
13. Roux, P.: Formal proofs of rounding error bounds - with application to an automatic positive definiteness check. J. Autom. Reasoning 57(2) (2016) 135–156
14. Martin-Dorel, É., Roux, P.: A reflexive tactic for polynomial positivity using numerical solvers and floating-point computations. In Bertot, Y., Vafeiadis, V., eds.:
Proceedings of the 6th ACM SIGPLAN Conference on Certified Programs and
Proofs, CPP 2017, Paris, France, January 16-17, 2017, ACM (2017) 90–99
15. Boldo, S., Faissole, F., Chapoutot, A.: Round-off error analysis of explicit onestep numerical integration methods. In: 2017 IEEE 24th Symposium on Computer
Arithmetic (ARITH). (July 2017) 82–89
16. Grant, M., Boyd, S.: CVX: Matlab software for disciplined convex programming,
version 2.1. http://cvxr.com/cvx (March 2014)
| 3 |
Adjacency Criterion For Gradient Flow With Multiple Local Maxima
arXiv:1412.6731v3 [math.DS] 8 Sep 2017
Xudong Chen1
Abstract— In this paper, we investigate the geometry of a general class of gradient flows with multiple local maxima. We decompose the underlying space into disjoint regions of attraction and establish the adjacency criterion. The criterion states a necessary and sufficient condition for two regions of attraction of stable equilibria to be adjacent. We then apply this criterion to a specific type of gradient flow which has as many as n! local maxima. In particular, we characterize the set of equilibria, compute the index of each critical manifold and, moreover, find all pairs of adjacent neighbors. As an application of the adjacency criterion, we introduce a stochastic version of the double bracket flow and set up a Markov model to approximate the sample path behavior. The study of this specific prototype with its special structure provides insight into many other difficult problems involving simulated annealing.
I. INTRODUCTION

Ascent/descent equations often provide the most direct demonstration of the existence of a maximum/minimum and can provide an easily implemented algorithm to find it (see, for example, [1]–[3]). Of course, when the function being maximized/minimized has multiple local maxima/minima, steepest ascent/descent needs to be modified, and for the last several decades the modification of choice has been some type of simulated annealing procedure. However, because simulated annealing is slow and subject to variable results because of its stochastic nature, there remains considerable interest in finding methods for improving its speed and, in general, learning more about its performance. But often this requires knowledge of the sample path behavior.
In this paper, we start with the development of a basic theorem about the geometry of a general class of gradient flows. In particular, we decompose the underlying space into disjoint regions of attraction associated with the gradient flow, and then establish the adjacency criterion. This theorem states a necessary and sufficient condition for two regions of attraction of stable equilibria to be adjacent, i.e., for them to share a boundary of co-dimension one.
The adjacency criterion has a large potential impact on studying stochastic gradients. For example, to approximate the sample path behavior of a stochastic gradient, R. W. Brockett established a Markov model in [4] whose states consist of all stable equilibria. Each transition probability is evaluated by solving a related first-hitting time problem, and the computational complexity is large. The adjacency criterion then sheds light on the problem. Knowing the adjacent neighbors reduces the amount of computation, as the transition probability between non-adjacent equilibria is negligible when compared with the transition probability between adjacent equilibria, especially when the noise is moderate.
The adjacency criterion also relates to a number of geometric factors: for example, the depth of the potential well associated with a stable equilibrium, the volume of a region of attraction, the area of the boundary shared by a pair of adjacent neighbors, etc. All these factors are important for explaining simulated annealing.
As a demonstration of the adjacency criterion, we consider a prototype system with a gradient flow of the type Ḣ = [H, [H, π(H)]], where the map π projects a matrix onto its diagonal. This flow is a prototype for gradient systems with multiple stable equilibria: it has as many as n! stable equilibria, each of which is a diagonal matrix and corresponds one-to-one to an element of the permutation group S_n. In this paper, we will characterize all pairs of adjacent neighbors; the characterization is simple and clean. In particular, we will show that each pair of adjacent neighbors is related by a simple transposition, so the n! regions of attraction are arranged in such a way that each one has (n − 1) adjacent neighbors. A stochastic version of the gradient flow is studied at the end of this paper as a concrete example of the application of the adjacency criterion. In particular, we set up an optimal control problem as an approach to evaluating the transition probabilities.
After this introduction, we proceed in steps: in section 2,
we will introduce the adjacency criterion after a quick review
of some basic notions about Morse-Bott gradient systems.
In section 3, we will introduce the isospectral manifold,
the normal metric and the double bracket flow under a
special class of potentials functions. In section 4, we will
focus on a specific quadratic potential function and show
that the set of equilibria under the potential function is a
mix of isolated points and continuum manifolds. In section
5, we will show that the quadratic potential is a MorseBott function by explicitly working out the Hessian of the
potential. In particular, we will compute the index and the
co-index of each critical manifold. In section 6, we will apply
the adjacency criterion to characterize all pairs of adjacent
neighbors. In the last section, we will work on a stochastic
gradient as an application of the adjacency criterion. In
particular, we will show how the adjacency criterion will
simplify the evaluation of transition probability.
II. ADJACENCY CRITERION

In this section, we will introduce the adjacency criterion. Some mathematical background is needed here; in this paper we only introduce the terminology necessary for establishing the criterion.
A potential Ψ on a Riemannian manifold M is a Morse-Bott function if the set of equilibria under the gradient flow grad(Ψ) is a finite disjoint union of connected submanifolds {E_1, ..., E_n}, and the Hessian of Ψ is nondegenerate when restricted to the normal space N_p E_k at any point p ∈ E_k and for any E_k (see, for example, [5] for more details about Morse-Bott functions).
For convenience, we say each set E_k is a critical manifold. Assume Ψ is a Morse-Bott function; the index of each submanifold E_k is the number of negative eigenvalues of the Hessian restricted to N_p E_k for some p ∈ E_k. This is well-defined because the index of E_k is independent of the choice of p. Similarly, the co-index of E_k is the number of positive eigenvalues of the Hessian restricted to N_p E_k. An equation relating the index, co-index and the dimension of M is

    ind E_k + co-ind E_k + dim E_k = dim M    (1)

A stable critical manifold is then a critical manifold of co-index 0.
Let E_k be a critical manifold; the stable manifold of E_k is defined by

    W^s(E_k) = {p ∈ M | lim_{t→∞} ϕ_t(p) ∈ E_k}    (2)

where ϕ_t(p) is the solution of the differential equation ṗ = grad(Ψ) parametrized by time t. The unstable manifold of E_k is defined in a similar way as

    W^u(E_k) = {p ∈ M | lim_{t→−∞} ϕ_t(p) ∈ E_k}    (3)

The dimensions of W^s(E_k) and W^u(E_k) are ind E_k + dim E_k and co-ind E_k + dim E_k respectively. A decomposition of M, with respect to the stable/unstable manifolds of the E_k's, is given by

    M = ∪_{k=1}^{n} W^s(E_k)    (4)

or

    M = ∪_{k=1}^{n} W^u(E_k)    (5)

If only the stable critical manifolds {S_1, ..., S_l} are concerned, then

    M = ∪_{i=1}^{l} W^s(S_i)    (6)

We are interested in how these cells are arranged in the underlying space. With the terminologies above, we are now ready to state the adjacency criterion.

Fact 1 (Adjacency Criterion): Suppose Ψ is a Morse-Bott function on a smooth manifold M. Let {S_1, ..., S_l} be the collection of stable critical manifolds and let {K_1, ..., K_m} be the critical manifolds of co-index 1. If

    ∪_{i=1}^{m} (W^u(K_i) − K_i) ⊂ ∪_{j=1}^{l} W^s(S_j)    (7)

then the boundary of each W^s(S_j) is piece-wise smooth, and each piece is a stable manifold of some K_i, i.e.,

    ∂W^s(S_j) = ∪_{K_i ⊂ ∂W^s(S_j)} W^s(K_i)    (8)

So two stable manifolds S_j and S_k are adjacent if and only if there is a K_i such that K_i ⊂ ∂W^s(S_j) ∩ ∂W^s(S_k).

Fig. 1. In this figure, each S_i is a stable equilibrium, each K_ij is an equilibrium of co-index 1 and E is an equilibrium of co-index 2. The pair (S_1, S_2), for example, consists of two adjacent neighbors because their regions of attraction share the stable manifold of K_12 as a co-dimension one boundary. On the other hand, S_1 and S_3 are not adjacent because their regions of attraction intersect each other only at E.

Before heading to its application on the isospectral manifold, we point out that the assumption described by equation (7) includes a broad range of gradient systems. For example, in a Morse-Smale gradient system, all equilibria are isolated points and the unstable manifold of each K_i is a one-dimensional curve whose boundary consists of stable equilibrium points, as illustrated in Fig. 1. The class of Morse-Smale gradient systems is a residual class in all gradient systems (see, for example, [6], [7] for more details about Morse-Smale systems).

III. ISOSPECTRAL MANIFOLD AND A CLASS OF SYMMETRIC POTENTIALS
In this section, we will introduce the isospectral manifold with strongly disjoint eigenvalues, the normal metric and a special class of symmetric potential functions.
Let Sym(Λ) denote the isospectral manifold; in our case, it refers to the collection of all n-by-n real symmetric matrices with a fixed set of eigenvalues Λ = {λ_1, ..., λ_n}. We assume that the set Λ is strongly disjoint, i.e.,
• for any two nonempty disjoint subsets {λ_{a_1}, ..., λ_{a_p}} and {λ_{b_1}, ..., λ_{b_q}} of Λ, the inequality (1/p) Σ_{k=1}^{p} λ_{a_k} ≠ (1/q) Σ_{k=1}^{q} λ_{b_k} holds.
The assumption on Λ is a stronger version of the condition that all eigenvalues are distinct. Yet this condition is generic in the sense that if we randomly pick a vector (λ_1, ..., λ_n) in R^n, then Λ is strongly disjoint almost surely, because the set defined by the equations (1/p) Σ_{k=1}^{p} λ_{a_k} = (1/q) Σ_{k=1}^{q} λ_{b_k} is a finite union of hyperplanes in R^n. On the other hand, as we will see later, this condition on Λ is necessary and sufficient for a class of potential functions on Sym(Λ) to be Morse-Bott functions, which is a key assumption for the application of the adjacency criterion.
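For a small spectrum, strong disjointness can be checked by brute force over subset pairs; the following sketch (ours, not from the paper) does so for a hypothetical 4-element Λ, encoding subsets as bitmasks.

#include <stdio.h>
#include <math.h>

#define N 4

static double avg(const double *lam, unsigned mask)
{
    double s = 0.0; int c = 0;
    for (int i = 0; i < N; i++)
        if (mask & (1u << i)) { s += lam[i]; c++; }
    return s / c;
}

int main(void)
{
    double lam[N] = {0.3, 1.1, 2.6, 5.0};   /* candidate spectrum (assumed) */
    int ok = 1;
    /* compare the averages of every pair of nonempty disjoint subsets */
    for (unsigned a = 1; a < (1u << N); a++)
        for (unsigned b = 1; b < (1u << N); b++)
            if ((a & b) == 0 && fabs(avg(lam, a) - avg(lam, b)) < 1e-12)
                ok = 0;
    printf("strongly disjoint: %s\n", ok ? "yes" : "no");
    return 0;
}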
The so-called normal metric g on Sym(Λ) is defined as follows. The tangent space T_H Sym(Λ) consists of elements of the form [H, Ω] with Ω skew symmetric; since all eigenvalues are distinct, the adjoint map ad_H : Ω ↦ [H, Ω] is then an isomorphism. For any two tangent vectors [H, Ω_1], [H, Ω_2] in T_H Sym(Λ), the normal metric g at H is then defined by

    g([H, Ω_1], [H, Ω_2]) := −tr(Ω_1 Ω_2)    (9)

It is routine to check that g is positive definite. Equipped with the normal metric, the gradient flow of a smooth function Ψ ∈ C^∞(Sym(Λ)) is then the double bracket flow

    Ḣ = [H, [H, Ψ'(H)]]    (10)
where Ψ'(H) denotes the derivative of Ψ with respect to H.
For convenience, we assume that the eigenvalues are ordered as λ_1 < ··· < λ_n and we denote by d_1, ..., d_n the diagonal entries of a matrix H in Sym(Λ). There is a special class of potential functions Ψ on Sym(Λ); each is symmetric in the diagonal entries and can be generated by a scalar function φ ∈ C^∞[λ_1, λ_n] via the equation

    Ψ(d_1, ..., d_n) = Σ_{i=1}^{n} ∫_{λ_1}^{d_i} φ(x) dx    (11)
As the sum of the diagonal entries is constant for any matrix in Sym(Λ), there is actually a broad class of symmetric potentials that can be expressed in this way. In fact, we have shown in [8] that if Ψ(d_1, ..., d_n) can be expressed as a power series of symmetric polynomials, then there is a scalar function φ such that equation (11) holds. Moreover, we have also shown in [8] that for almost all scalar functions φ, the potential Ψ generated by equation (11) is a Morse-Bott function.
In this paper, we will work on a simple case where φ(x) = x. The corresponding potential is then in a diagonal form as

    Ψ = (1/2) Σ_{i=1}^{n} d_i²    (12)

In the rest of the paper, we will just call this specific quadratic potential the diagonal potential. In the next two sections, we will characterize the critical manifolds associated with the gradient flow and compute explicitly the Hessian at each equilibrium point.
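As a numerical aside (ours, not from the paper), the double bracket flow Ḣ = [H, [H, π(H)]] can be integrated with a crude forward Euler scheme; for a small symmetric initial matrix the trajectory approaches one of the diagonal equilibria, i.e. a local maximum of the diagonal potential. The 3×3 size, step size and initial data below are assumptions.

#include <stdio.h>

#define N 3

static void matmul(const double A[N][N], const double B[N][N], double C[N][N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            C[i][j] = 0.0;
            for (int k = 0; k < N; k++) C[i][j] += A[i][k] * B[k][j];
        }
}

/* commutator [A, B] = AB - BA */
static void bracket(const double A[N][N], const double B[N][N], double C[N][N])
{
    double AB[N][N], BA[N][N];
    matmul(A, B, AB);
    matmul(B, A, BA);
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) C[i][j] = AB[i][j] - BA[i][j];
}

int main(void)
{
    /* arbitrary symmetric initial condition; the flow preserves its spectrum */
    double H[N][N] = {{ 1.0, 0.7, -0.3},
                      { 0.7, 2.0,  0.5},
                      {-0.3, 0.5,  4.0}};
    double piH[N][N], inner[N][N], Hdot[N][N];
    double dt = 1e-3;

    for (int step = 0; step < 200000; step++) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) piH[i][j] = (i == j) ? H[i][j] : 0.0;  /* pi(H) */
        bracket(H, piH, inner);   /* [H, pi(H)]      */
        bracket(H, inner, Hdot);  /* [H, [H, pi(H)]] */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) H[i][j] += dt * Hdot[i][j];
    }
    for (int i = 0; i < N; i++)
        printf("%8.4f %8.4f %8.4f\n", H[i][0], H[i][1], H[i][2]);
    return 0;
}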
IV. THE SET OF EQUILIBRIA ASSOCIATED WITH THE DIAGONAL POTENTIAL
For convenience, we let π be a projection map sending
each matrix H to the diagonal matrix diag(d1 , · · · , dn ). Then
the gradient vector field f (H) with respect to the diagonal
potential is simply
f (H) = [H, [H, π(H)]]
(13)
A matrix H ∈ Sym(Λ) is an equilibrium if f (H) = 0
and it happens if and only if [H, π(H)] = 0 because
tr(π(H) f (H)) = −tr([H, π(H)]2 ). This leads us to
Lemma 2: If H is an equilibrium, then there exists a
permutation matrix P such that PT HP is block-diagonal, i.e,
PT HP = Diag(H1 , · · · , Hk )
(14)
Suppose di1 , · · · , dini are diagonal entries of Hi , then
di1 = · · · = dini
(15)
and this holds for each Hi .
Proof: The commutator [H, π(H)] vanishes if and only
if hi j (di − d j ) = 0 for each pair of (i, j).
Let Λ_i be the set of eigenvalues of H_i; then (Λ_1, ..., Λ_k) is a partition of Λ, i.e., Λ = ∪_{i=1}^{k} Λ_i and Λ_i ∩ Λ_j = ∅ if i ≠ j. For convenience, we let #(Λ_i) denote the cardinality of Λ_i, let s(Λ_i) denote the sum of its elements, and we define

    μ(Λ_i) := s(Λ_i)/#(Λ_i)    (16)

then

    d_{i1} = ··· = d_{in_i} = μ(Λ_i)    (17)
A symmetric matrix H is said to be irreducible if there is no permutation matrix P such that P^T H P is a nontrivial block-diagonal matrix.
Lemma 3: Each H_i in equation (14) is irreducible; moreover μ(Λ_i) ≠ μ(Λ_j) if i ≠ j.
Proof: Suppose H_i is not irreducible; we may assume that H_i = Diag(H_{i1}, H_{i2}). Let Λ_{i1} and Λ_{i2} be the sets of eigenvalues of H_{i1} and H_{i2} respectively; then μ(Λ_{i1}) = μ(Λ_{i2}) = μ(Λ_i), which contradicts the fact that Λ is strongly disjoint. By applying the same argument, we conclude that μ(Λ_i) and μ(Λ_j) are distinct if i ≠ j.
An equilibrium H gives rise to a partition of Λ. Conversely, a partition of Λ corresponds to a set of equilibria. Let A be the collection of all choices of partitions of Λ. Given a choice of partition α = (Λ_1, ..., Λ_k), we define a subset of Sym(Λ) as

    E_α := {Diag(H_1, ..., H_k) | π(H_i) = μ(Λ_i)}    (18)

Clearly E_α consists exclusively of equilibria. Moreover it is a smooth manifold, as a consequence of the next fact.
Fact 4: Let Λ′ = {λ′_1, ..., λ′_{n′}} be a subset of Λ and let C(Λ′) ⊂ R^{n′} be the convex hull of all vectors (λ′_{σ(1)}, ..., λ′_{σ(n′)}), where σ varies over all permutations of {1, ..., n′}. Then the image of the projection

    π : Sym(Λ′) → R^{#(Λ′)}    (19)

is the convex hull C(Λ′) (see, for example, [9]). Let ~e be the vector of all ones in R^{#(Λ′)} and define

    X(Λ′) := π^{−1}(μ(Λ′)~e)    (20)

then π is a submersion at each point H ∈ X(Λ′).
Fig. 2. The convex hull C(Λ) in the case Λ = {λ_1, λ_2, λ_3}. Let ~e := (1, 1, 1)^T; then the line R~e intersects C(Λ) perpendicularly at the point μ(Λ)~e.

A complete proof can be found in [8]. Assuming the fact above, an immediate consequence is that X(Λ′) is a smooth manifold of dimension (1/2)(#(Λ′) − 1)(#(Λ′) − 2). An identification is that E_α ≃ Π_{i=1}^{k} X(Λ_i), so

    dim E_α = (1/2) Σ_{i=1}^{k} (#(Λ_i) − 1)(#(Λ_i) − 2)    (21)

An upshot is that the set E_α is a set of discrete points if and only if each #(Λ_i) is either one or two; in the case where all Λ_i's are singletons, the set E_α consists only of a diagonal matrix.
At this moment, we have characterized the set of equilibria that are block-diagonal; the equilibria that are off block-diagonal can be generated by letting the group of permutation matrices act on the E_α's by conjugation, as we will describe below.
Lemma 5: If H is an equilibrium, then so is P^T H P for any permutation matrix P.
Proof: We check that π(P^T H P) = P^T π(H) P and hence conjugation by permutation matrices commutes with the commutator, i.e.,

    P^T [H, π(H)] P = [P^T H P, π(P^T H P)]

So

    f(P^T H P) = 0    (22)

exactly when f(H) = 0.
Lemma 5 can be regarded as a converse argument of
lemma 2.
Theorem 6: Suppose α = (Λ_1, ..., Λ_k) is a choice of partition and suppose #(Λ_i) = n_i. Let the group of permutation matrices act on E_α by conjugation; then the orbit of E_α contains as many as n!/Π_{i=1}^{k} n_i! disjoint smooth submanifolds in Sym(Λ), each consisting exclusively of equilibria. The set of equilibria is then the union of the orbits of E_α as α varies over A.
Proof: For any permutation matrix P, the set P^T E_α P consists exclusively of equilibria by Lemma 5. On the other hand, either P^T E_α P = E_α or P^T E_α P ∩ E_α = ∅, and P fixes E_α if and only if P is block-diagonal, i.e., P = Diag(P_1, ..., P_k) with each P_i an n_i-by-n_i permutation matrix. There are exactly Π_{i=1}^{k} n_i! block-diagonal permutation matrices, so by Burnside's counting formula, there are as many as n!/Π_{i=1}^{k} n_i! disjoint sets in the orbit of E_α, each a smooth submanifold in Sym(Λ).
In the rest of this paper, we will abuse the term critical manifold by referring by it to an E_α or any set in its orbit, as E_α may have multiple connected components (see, for example, a discussion on the number of components in [8]). Notice that in the orbit of E_α there exist multiple critical manifolds that are block-diagonal; for example, the orbit of a diagonal matrix is the set of all diagonal matrices. Actually, given a choice of partition α = (Λ_1, ..., Λ_k), there are as many as k! critical manifolds that are block-diagonal, leaving the others off block-diagonal. If we want to choose a canonical representative among the orbit, we permute Λ_1, ..., Λ_k if necessary so that μ(Λ_1) < ··· < μ(Λ_k); this is possible because the eigenvalues of Λ are strongly disjoint.
V. THE HESSIAN OF THE DIAGONAL POTENTIAL

In this section, we will show that the diagonal potential is a Morse-Bott function by explicitly working out the Hessian of the potential at each equilibrium point and computing its eigenvalues.
Let H denote the Hessian of a potential Ψ. On a Riemannian manifold, it is defined by

    H = ∇²Ψ    (23)

where ∇ is the Levi-Civita connection, and if we evaluate the Hessian at an equilibrium point, then

    H(X, Y) = X(g(grad(Ψ), Y))    (24)

In this specific case, we have
Fact 7: Suppose H is an equilibrium; then the Hessian H evaluated at H is given by

    H([H, Ω_i], [H, Ω_j]) = −tr([H, Ω_i][π(H), Ω_j]) − ⟨π([H, Ω_i]), π([H, Ω_j])⟩    (25)

where ⟨·, ·⟩ is the standard inner product in R^n.
We omit the proof here as this is a direct computation following equation (24). Given a choice of partition α = (Λ_1, ..., Λ_k), we assume #(Λ_i) = n_i; then the dimension of the normal space at a point H ∈ E_α is given by

    dim N_H E_α = (1/2) { n(n − 1) − Σ_{i=1}^{k} (n_i − 1)(n_i − 2) }    (26)
For convenience, let n_α := dim N_H E_α. We will now construct an orthogonal basis N of N_H E_α with respect to H, i.e.,

    H(N, N) ≠ 0, ∀N ∈ N    (27)
    H(N, N′) = 0, if N ≠ N′    (28)

Suggested by the partition α, we divide an n-by-n matrix N into k-by-k blocks as

    N = [ B_11 ··· B_1k ]
        [  ⋮    ⋱    ⋮  ]
        [ B_k1 ··· B_kk ]    (29)

with the pq-th block of dimension n_p-by-n_q. The basis N consists of two parts: the block-diagonal part N_d and the off block-diagonal part N_o.
1. Constructing N_o: Write H = Diag(H_1, ..., H_k) with each H_i ∈ Sym(Λ_i) and let ~v_{i1}, ..., ~v_{in_i} be the unit-length eigenvectors of H_i with respect to the eigenvalues λ_{i1}, ..., λ_{in_i}.
Given integers p and q, define a set of symmetric matrices N_pq in the way that for each matrix N_{pq,ij} ∈ N_pq, all blocks except B_pq and B_qp are zero, while

    B_pq (= B_qp^T) = (λ_{pi} − λ_{qj}) ~v_{pi} ~v_{qj}^T    (30)

for some i = 1, ..., n_p and some j = 1, ..., n_q, so there are exactly n_p n_q symmetric matrices in the set N_pq.
If N pq,i j ∈ N pq and N p0 q0 ,i0 j0 ∈ N p0 q0 , then
3. Orthogonality of No and Nd : First notice that by
combining No and Nd , there are exactly nα matrices, i.e,
k
nα =
∑
1≤i< j≤k
ni n j + ∑ (ni − 1)
(33)
i=1
because of the equality ∑ki=1 ni = n. So it all remains to show
that
Lemma 11: Matrices in No are orthogonal to those in Nd
with respect to the Hessian.
Proof: Divide any n-by-n skew symmetric matrix Ω
into k-by-k blocks as we did in equation (29). Define a skew
symmetric matrix Ω pq,i j by setting all blocks but B pq and
Bqp zeros while
1
B pq (= −BTqp ) = ~v pi~vTq j
2
then it is a straitforward computation that
N pq,i j = [H, Ω pq,i j ]
(34)
(35)
On the other side, if we define a skew symmetric matrix Ωs,t
by setting all blocks zeros but leaving Bss = Ω̃s,t , then
S
Ns,t = [H, Ωs,t ]
(36)
A computation shows that
H (N pq,i j , N p0 q0 ,i0 j0 )
= − 2(λ pi − λq j )(µ(Λ p ) − µ(Λq ))∆ pp0 ,qq0 ,ii0 , j j0
(31)
with ∆ pp0 ,qq0 ,ii0 , j j0 := δ pp0 δqq0 δii0 δ j j0 and δi j is the Kronecker
delta.
This is a straitforward computation following the formula
(25), more details can be found in [8]. The set No contains
as many as ∑1≤i< j≤k ni n j off-block-diagonal matrices.
2. Constructing Nd : still assume H = Diag(H1 , · · · , Hk )
with each Hi ∈ Sym(Λi ). Fix i, we let ~e ∈ Rni be a vector with
all entries ones and let ~e⊥ be the hyperplane perpendicular
to ~e. An identification is that the tangent space of the convex
hull C(Λ0 ) at any of its point is ~e⊥ .
Lemma 9: Let {~u1 , · · · ,~uni −1 } be an orthonormal basis
of ~e⊥ , then there exists a set of skew symmetric matrices
{Ω̃i,1 , · · · , Ω̃i,ni −1 } such that π([Hi , Ω̃i, j ]) = u j .
Proof: This follows fact 4 as the projection π is a
submersion.
Define a set of symmetric matrices Ni in the way that for
each matrix Ni, j ∈ Ni , all blocks except Bii are zeros while
Bii = [Hi , Ω̃i, j ]
(32)
for some j = 1, · · · , ni − 1, so there are exactly (ni − 1)
symmetric matrices in the set Ni .
Lemma 10: The set Nd := ki=1 Ni is contained in NH Eα .
If Ni, j ∈ Ni and Ni0 , j0 ∈ Ni0 , then H (Ni, j , Ni0 , j0 ) = δii0 δ j j0 .
S
This again follows the formula (25). The set Nd contains
as many as ∑ki=1 (ni − 1) block-diagonal matrices.
[π(H), Ωs,t ] = 0
(37)
π([H, Ω pq,i j ]) = 0
(38)
These two equalities annihilate the right hand side of formula
(25), so H (N pq,i j , Ns,t ) = 0.
Theorem 12: The Hessian H is nondegenerate when restricted at the normal space NH Eα for any H ∈ Eα and
any Eα . Moreover, H is invariant under conjugation of
permutation matrices, so the diagonal potential by equation
(12) is a Morse-Bott function.
Proof: The invariance is an consequence of lemma
5 and formula (25). The rest is an outcome by combining
lemma 8, lemma 10 and lemma 11.
We end this section with a discussion on the basis N .
For each symmetric matrix N ∈ NH Eα , there is a unique
matrix N 0 ∈ NH Eα such that H (N, ·) = tr(N 0 ·) because H
is nondegenerate. This induces a linear map LH on NH Eα by
sending N to N 0 . Each linear subspace spanned by N pq or Ni
is invariant under LH , moreover each matrix N pq,i j ∈ N pq
is an eigenmatrix of LH with respect to the eigenvalue
−(µ(Λ p ) − µ(Λq ))/(λ pi − λq j ). All eigenvalues from the
linear subspace spanned by Nd are positive by lemma 10. So
at this moment for each critical manifold, we have computed
its index and co-index.
VI. APPLICATION OF THE ADJACENCY CRITERION ON THE ISOSPECTRAL MANIFOLD WITH THE DIAGONAL POTENTIAL

In this section, we will work out all stable critical manifolds and characterize all pairs of adjacent neighbors.
Lemma 13: A critical manifold is stable if and only if it is a singleton consisting of a diagonal matrix.
Proof: Suppose M is a stable critical manifold. Without loss of generality, we assume M = E_α, because the Hessian is invariant under conjugation by permutation matrices: if E_α is stable, then so is any critical manifold in its orbit. Suppose α = (Λ_1, ..., Λ_k); then by Lemma 10, each Λ_i is a singleton, otherwise there exist at least Σ_{i=1}^{k} (#(Λ_i) − 1) positive eigenvalues of L_H. So M is necessarily a singleton consisting of a diagonal matrix. On the other hand, if H is a diagonal matrix, then it is a stable equilibrium because all eigenvalues of L_H are equal to −(λ_{pi} − λ_{qj})/(λ_{pi} − λ_{qj}) = −1, as suggested by our discussion at the end of the last section.
A diagonal matrix, as a stable equilibrium, is worth having its own notation s_σ; the subindex σ indicates the permutation on the set of indices {1, ..., n}, i.e., s_σ = diag(λ_{σ(1)}, ..., λ_{σ(n)}). We now characterize the critical manifolds of co-index 1. Recall that λ_1 < ··· < λ_n; we say that two eigenvalues λ_i < λ_j are close in order if j = i + 1.
Lemma 14: Suppose α = (Λ_1, ..., Λ_k) and E_α is a critical manifold of co-index 1. Then all but one Λ_p are singletons, and Λ_p consists of two eigenvalues that are close in order.
Proof: By Lemma 10 and Lemma 13, we first conclude that there is exactly one Λ_p among {Λ_1, ..., Λ_k} that is not a singleton, and #(Λ_p) = 2. The matrix set N_p is then a singleton containing exactly one block-diagonal matrix N_p. At this point, there is at least one positive eigenvalue of L_H with eigenmatrix N_p, because of Lemma 10 and the fact that span N_p is an invariant subspace. Suppose Λ_p = {λ_{p1}, λ_{p2}}. Then, to prevent extra positive eigenvalues of L_H, the two eigenvalues λ_{p1} and λ_{p2} are necessarily close in order: if not, suppose λ_q lies between λ_{p1} and λ_{p2}, and consider the two eigenvalues of L_H contributed by the two-dimensional linear subspace spanned by N_pq; these two eigenvalues are −(μ(Λ_p) − λ_q)/(λ_{p1} − λ_q) and −(μ(Λ_p) − λ_q)/(λ_{p2} − λ_q). By assumption one is negative while the other is positive.
On the other hand, suppose λ_{p1} < λ_{p2} and they are close in order. Since λ_{p1} < μ(Λ_p) < λ_{p2}, it is clear that −(μ(Λ_p) − λ_q)/(λ_{pi} − λ_q) < 0 for any i = 1, 2 and any λ_q ∈ Λ − {λ_{p1}, λ_{p2}}.
Each critical manifold of co-index 1 is a discrete set of two elements. A typical element looks like a diagonal matrix except for the 2-by-2 principal submatrix in the rows and columns carrying λ_i and λ_{i+1}, which equals

    (1/2) [ λ_i + λ_{i+1}      ±(λ_i − λ_{i+1}) ]
          [ ±(λ_i − λ_{i+1})    λ_i + λ_{i+1}   ]    (39)
To relate these with diagonal matrices, we let S_n be the group of permutations of the indices {1, ..., n}; by convention, a permutation is said to be a simple transposition if it is a 2-cycle (i, i + 1). We say two permutations σ_1 and σ_2 are related by a simple transposition σ̂ if σ_1 · σ̂ = σ_2. Suppose
sσ1 = diag(· · · , λi , · · · , λi+1 , · · · )
(40)
sσ2 = diag(· · · , λi+1 , · · · , λi , · · · )
(41)
we then denote by Kσ1 ,σ2 the two matrices in equation (39).
The collection of equilibria of co-index 1 is then the union
of Kσ1 ,σ2 where (σ1 , σ2 ) varies over all pairs of permutations
that are related by a simple transposition.
A closed set Z ⊂ Sym(Λ) is invariant if for any H ∈ Z, the
solution ϕt (H) remains in Z for any t ∈ R. An observation
is that
Lemma 15: Let α = (Λ1 , · · · , Λk ) be a choice of partition.
We define
Zα := {Diag(H1 , · · · , Hk )|Hi ∈ Sym(Λi )}
(42)
then Zα is an invariant subset in Sym(Λ).
Proof: If H ∈ Zα and we write H = Diag(H1 , · · · , Hk ),
then the dynamical system is decoupled in the sense that
Ḣi = [Hi , [Hi , π(Hi )]], ∀i = 1, · · · , k
(43)
and Ḣ = Diag(Ḣ1 , · · · , Ḣk ).
Remark: Since the gradient flow f(H) commutes with conjugation by permutation matrices, P^T Z_α P is also an invariant subset of Sym(Λ) for any permutation matrix P.
Theorem 16 (Adjacent neighbors of diagonal matrices): All stable critical manifolds are singletons, and they are diagonal matrices. Two diagonal matrices s_{σ1} and s_{σ2} are adjacent if and only if σ_1 and σ_2 are related by a simple transposition.
Proof: Fix the pair (σ_1, σ_2), and let k_+ and k_− be the two matrices in K_{σ1,σ2}. Let W^u(k_+) and W^u(k_−) be the unstable manifolds of k_+ and k_− respectively. We will show that the boundary of either unstable manifold is {s_{σ1}, s_{σ2}}, i.e.,

    ∂W^u(k_+) = ∂W^u(k_−) = s_{σ1} ∪ s_{σ2}    (44)

But this is true by Lemma 15, because if we let Λ′ := {λ_i, λ_{i+1}} and consider the double bracket flow on Sym(Λ′), then there are exactly four isolated equilibria: two diagonal matrices, which are the stable equilibria, and the other two,

    k′_± = (1/2) [ λ_i + λ_{i+1}      ±(λ_i − λ_{i+1}) ]
                 [ ±(λ_i − λ_{i+1})    λ_i + λ_{i+1}   ]    (45)

The isospectral manifold Sym(Λ′) is diffeomorphic to the circle S¹; if we parametrize it by θ ∈ R, then the induced dynamical system of θ on the covering space R is

    θ̇ = −(1/2)(λ_i − λ_{i+1})² sin(2θ)    (46)

It is clear that Zπ are stable equilibria and (Z + 1/2)π are the unstable ones; this is consistent with the earlier arguments. Moreover this implies that the boundary of either W^u(k′_+) or W^u(k′_−) is the union of diag(λ_i, λ_{i+1}) and diag(λ_{i+1}, λ_i). This then completes the proof.
Fig. 3. Each vertex is the projection of a diagonal matrix and each edge is the projection of an unstable manifold W^u(K_{σi,σj}). For convenience, we use C_{σi,σj} to denote W^u(K_{σi,σj}), with the emphasis that each W^u(K_{σi,σj}) consists of two disjoint curves with the same image under π.

There are as many as (1/2)(n − 1) · n! pairs of adjacent neighbors. To better understand the geometry behind the theorem, we consider the convex polytope C(Λ). Each vertex of the polytope, known as a 0-dim face, corresponds to a vector (λ_{σ(1)}, ..., λ_{σ(n)}). It is the image of a diagonal matrix under the projection map π. Each edge of the polytope, known as a 1-dim face, is then the image of an unstable manifold W^u(K_{σ,σ′}). Notice that each W^u(K_{σ,σ′}) has two disjoint curves, and both have the same image under π. It is clear then that each edge corresponds to a pair of adjacent neighbors.

VII. APPLICATION OF THE ADJACENCY THEOREM ON STOCHASTIC GRADIENT FLOW

There is a stochastic version of the double bracket flow, obtained by adding an isotropic noise to the equation:

    dH = [H, [H, π(H)]] dt + ε Σ [Ω_{ij}, H] dω_{ij} + (ε²/2) Σ [Ω_{ij}, [Ω_{ij}, H]] dt    (47)

The third term on the right hand side of the equation above appears as a consequence of the Itô rule, so that the solution still evolves on the isospectral manifold. In the literature on NMR, the last two terms relate to the Lindblad terms modeling the heat bath. Each Ω_{ij} is a skew symmetric matrix defined by Ω_{ij} = ~e_i ~e_j^T − ~e_j ~e_i^T, where {~e_1, ..., ~e_n} is the standard basis in R^n. The significance of the stochastic effects is modeled by the scalar ε.
It happens that there is an explicit formula for the steady state solution of the Fokker-Planck equation associated with our stochastic equation. It takes the form

    ρ(H) = c exp( 2Ψ(H)/ε² )    (48)

where c is a normalization factor of the density (see, for example, [10] for the derivation of the formula). This implies that the equiprobability surfaces of the stochastic flow coincide with the equipotential surfaces of Ψ. If ε is small enough, then the density function is highly peaked at the diagonal matrices, as they are the local maxima. So a typical trajectory will spend most of its time around stable equilibria; this suggests that we approximate the behavior of the sample path by setting up a Markov model whose states are the n! diagonal matrices, so that the trajectory of a sample path is simplified by a chain

    s_{σ1} →^{T_1} s_{σ2} →^{T_2} ··· →^{T_{k−1}} s_{σk}    (49)
The rest of this section is to develop a method to evaluate
the transition probabilities.
There are two main challenging problems when coming to the computation: the scale and the model. For each state, there are as many as (n! − 1) transition probabilities to evaluate, so in general the amount of computation is about n!(n! − 1). To evaluate each transition probability, we need to investigate a corresponding first hitting time model: we ask for the probability P(T | s_{σi} → s_{σj}) of the event that the passage time is less than T for a sample path to reach state s_{σj} from state s_{σi} without visiting any other state. We now show how the adjacency theorem enters to simplify these problems.
Reducing the computational complexity: Let A_{σi} be the set of adjacent neighbors of s_{σi}. Suppose s_{σj} ∉ A_{σi}; then we infer that, in the case ε ≪ 1,

    P(T | s_{σi} → s_{σj}) ≪ Σ_{σ ∈ A_{σi}} P(T | s_{σi} → s_σ)    (50)

for any T > 0. This is reasonable because, from a geometric perspective, if a sample path escapes from the region of attraction W^s(s_{σi}), there is a high probability that the path gets trapped by one of its neighbors. So our first step of approximation is to set P(T | s_{σi} → s_{σj}) = 0 for any σ_j ∉ A_{σi}. Then for each state, the amount of computation is reduced to (n − 1).
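As an illustrative sketch (ours, not the paper's), the (n − 1) adjacent neighbors of a diagonal equilibrium s_σ can be enumerated by exchanging the two diagonal entries carrying a pair of eigenvalues that are close in order, i.e. by the simple transpositions of Theorem 16; eigenvalues are represented by their rank 0..N-1 and the sample permutation is arbitrary.

#include <stdio.h>

#define N 4

int main(void)
{
    int sigma[N] = {2, 0, 3, 1};   /* sigma[k] = rank of the eigenvalue at position k */
    for (int v = 0; v + 1 < N; v++) {      /* exchange the eigenvalues of rank v and v+1 */
        int nb[N], p = -1, q = -1;
        for (int k = 0; k < N; k++) {
            nb[k] = sigma[k];
            if (sigma[k] == v) p = k;
            if (sigma[k] == v + 1) q = k;
        }
        nb[p] = v + 1;
        nb[q] = v;
        printf("neighbor %d:", v + 1);
        for (int k = 0; k < N; k++) printf(" %d", nb[k]);
        printf("\n");
    }
    return 0;
}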
Approximate the transition probability: For each state sσi ,
the sample space of transition probability is discrete in space
Aσi but continuous in time. In general there is no exact
formula for computing P(T |sσi → sσ j ). One approach to
approximate the density function is to relate the first hitting
problem to an optimal control problem.
In quantum mechanics, it is known that the probability for a system to stay in a quantum state of energy E is proportional to exp(−E). This idea of Boltzmann sheds light on our problem. We consider the control problem

    Ḣ = [H, [H, π(H)]] + ε[H, U]    (51)

with U skew symmetric, and the goal is to minimize the energy, i.e.,

    E(T | σ_i → σ_j) := − min_{U(t)} (1/2) ∫_0^T tr(U²(t)) dt    (52)

under the assumption that H(0) = s_{σi} and H(T) = s_{σj} are fixed. We then approximate P(T | σ_i → σ_j) by a quantity proportional to exp(−E(T | σ_i → σ_j)).
Fig. 4. Illustrating the idea behind equation (50): a typical sample path
only connects adjacent neighbors while a nontypical sample path does not.
By solving the Euler-Lagrange equation, we conclude
that each optimal trajectory, or energy minimizing
path (EMP), has to satisfy

Ḣ = [H, Ω]   (53)
Ω̇ = [H, [π(H), [H, π(H)]]] + [H, π([H, [H, π(H)]])]   (54)
It is hard to compute the EMP in general; however, in the
case where H(0) and H(T) are both diagonal matrices, each EMP
coincides with a geodesic. In particular, there are two EMPs
and together they form the unstable manifold of K_{σ1,σ2}. This
then reduces the situation to a scalar problem: suppose the
simple transposition relating σi and σj is the 2-cycle exchanging (λ1, λ2);
then the control model is given by

θ̇ = −(1/2)(λ1 − λ2)² sin(2θ) + 2εu   (55)

and the goal is to minimize ∫_0^T u² dt.
Before ending this section, we point out that locating
an EMP that connects two adjacent neighbors and computing
the minimal energy consumption is more than an ad-hoc
device for evaluating the transition probability. For example,
consider the situation where the control is intermittent and
impulse-like, and our goal is to steer the system from one
stable equilibrium to another. Then questions arise such as:
in what direction can one escape from a region of attraction
by means of an impulse? How can one save the most energy
and/or time in reaching the target? These questions relate to
the design of a path that concatenates pairs of adjacent
neighbors, and the analysis done in this section is then essential.
ACKNOWLEDGEMENTS
The author thanks Dr. Roger W. Brockett at Harvard
University for his comments on an earlier draft.
Tight Tradeoffs for Real-Time Approximation of Longest
Palindromes in Streams
Paweł Gawrychowski1 , Oleg Merkurev2 , Arseny M. Shur2 , and Przemysław Uznański3
1 Institute of Informatics, University of Warsaw, Poland
2 Institute of Mathematics and Computer Science, Ural Federal University, Ekaterinburg, Russia
3 Department of Computer Science, ETH Zürich, Switzerland

arXiv:1610.03125v1 [] 10 Oct 2016
January 19, 2018
Abstract

We consider computing a longest palindrome in the streaming model, where the symbols arrive one-by-one and we do not have random access to the input. While computing the answer exactly using
sublinear space is not possible in such a setting, one can still hope for a good approximation guarantee.
Our contribution is twofold. First, we provide lower bounds on the space requirements for randomized
approximation algorithms processing inputs of length n. We rule out Las Vegas algorithms, as they
cannot achieve sublinear space complexity. For Monte Carlo algorithms, we prove a lower bound of
Ω(M log min{|Σ|, M}) bits of memory; here M = n/E for approximating the answer with additive error
E, and M = log n / log(1+ε) for approximating the answer with multiplicative error (1 + ε). Second, we design
three real-time algorithms for this problem. Our Monte Carlo approximation algorithms for both additive
and multiplicative versions of the problem use O(M) words of memory. Thus the obtained lower bounds
are asymptotically tight up to a logarithmic factor. The third algorithm is deterministic and finds a
longest palindrome exactly if it is short. This algorithm can be run in parallel with a Monte Carlo
algorithm to obtain better results in practice. Overall, both the time and space complexity of finding a
longest palindrome in a stream are essentially settled.
1 Introduction
In the streaming model of computation, a very long input arrives sequentially in small portions and cannot
be stored in full due to space limitation. While well-studied in general, this is a rather recent trend in
algorithms on strings. The main goals are minimizing the space complexity, i.e., avoiding storing the already
seen prefix of the string explicitly, and designing real-time algorithms, i.e., processing each symbol in worst-case constant time. However, the algorithms are usually randomized and return the correct answer with
high probability. The prime example of a problem on strings considered in the streaming model is pattern
matching, where we want to detect an occurrence of a pattern in a given text. It is somewhat surprising
that one can actually solve it using polylogarithmic space in the streaming model, as proved by Porat and
Porat [15]. A simpler solution was later given by Ergün et al. [6], while Breslauer and Galil designed a
real-time algorithm [3]. Similar questions studied in such setting include multiple-pattern matching [4],
approximate pattern matching [5], and parametrized pattern matching [10].
We consider computing a longest palindrome in the streaming model, where a palindrome is a fragment
which reads the same in both directions. This is one of the basic questions concerning regularities in texts
and it has been extensively studied in the classical non-streaming setting, see [1, 8, 12, 14] and the references
therein. The notion of palindromes, but with a slightly different meaning, is very important in computational
1
biology, where one considers strings over {A, T, C, G} and a palindrome is a sequence equal to its reverse
complement (a reverse complement reverses the sequences and interchanges A with T and C with G); see [9]
and the references therein for a discussion of their algorithmic aspects. Our results generalize to biological
palindromes in a straightforward manner.
We denote by LPS(S) the problem of finding the maximum length of a palindrome in a string S (and a
starting position of a palindrome of such length in S). Solving LPS(S) in the streaming model was recently
considered by Berenbrink et al. [2], who developed tradeoffs between the bound on the error and the space
complexity for additive and multiplicative variants of the problem, that is, for approximating the length of
the longest palindrome with either additive or multiplicative error. Their algorithms were Monte Carlo, i.e.,
returned the correct answer with high probability. They also proved that any Las Vegas algorithm achieving
additive error E must necessarily use Ω((n/E) log |Σ|) bits of memory, which matches the space complexity of
their solution up to a logarithmic factor in the E ∈ [1, √n] range, but leaves a few questions. Firstly, does
the lower bound still hold for Monte Carlo algorithms? Secondly, what is the best possible space complexity
when E ∈ (√n, n] in the additive variant, and what about the multiplicative version? Finally, are there
real-time algorithms achieving these optimal space bounds? We answer all these questions.
Our main goal is to settle the space complexity of LPS. We start with the lower bounds in Sect. 2.
First, we show that Las Vegas algorithms cannot achieve sublinear space complexity at all. Second, we
prove a lower bound of Ω(M log min{|Σ|, M}) bits of memory for Monte Carlo algorithms; here M = n/E
for approximating the answer with additive error E, and M = log n / log(1+ε) for approximating the answer with
multiplicative error (1+ε). Then, in Sect. 3 we design real-time Monte Carlo algorithms matching these lower
bounds up to a logarithmic factor. More precisely, our algorithm for LPS with additive error E ∈ [1, n] uses
O(n/E) words of space, while our algorithms for LPS with multiplicative error ε ∈ (0, 1] (resp., ε ∈ (1, n])
use O(log(nε)/ε) (resp., O(log n / log(1+ε))) words of space1 . Finally we present, for any m, a deterministic O(m)-space real-time algorithm solving LPS exactly if the answer is less than m and detecting a palindrome of
length ≥ m otherwise. The last result implies that if the input stream is fully random, then with high
probability its longest palindrome can be found exactly by a real-time algorithm within logarithmic space.
Notation and Definitions. Let S denote a string of length n over an alphabet Σ = {1, . . . , N }, where
N is polynomial in n. We write S[i] for the ith symbol of S and S[i..j] for its substring (or factor )
S[i]S[i+1] · · · S[j]; thus, S[1..n] = S. A prefix (resp. suffix ) of S is a substring of the form S[1..j] (resp.,
S[j..n]). A string S is a palindrome if it equals its reversal S[n]S[n−1] · · · S[1]. By L(S) we denote the length
of a longest palindrome which is a factor of S. The symbol log stands for the binary logarithm.
We consider the streaming model of computation: the input string S[1..n] (called the stream) is read left
to right, one symbol at a time, and cannot be stored, because the available space is sublinear in n. The
space is counted as the number of O(log n)-bit machine words. An algorithm is real-time if the number of
operations between two reads is bounded by a constant. An approximation algorithm for a maximization
problem has additive error E (resp., multiplicative error ε) if it finds a solution with cost at least OPT − E
(resp., OPT/(1+ε)), where OPT is the cost of an optimal solution; here both E and ε can be functions of the size of
the input. In the LPS(S) problem, OPT = L(S).
A Las Vegas algorithm always returns a correct answer, but its working time and memory usage on
the inputs of length n are random variables. A Monte Carlo algorithm gives a correct answer with high
probability (greater than 1 − 1/n) and has deterministic working time and space.
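For readers who want a correctness baseline to test the streaming algorithms against, here is a naive (non-streaming) computation of L(S); it stores the entire string and runs in O(n²) time, so it is only a reference implementation, not one of the paper's algorithms.

```python
def longest_palindrome_length(s: str) -> int:
    """Naive O(n^2) computation of L(S): expand around every center.
    Reference baseline only -- it stores the whole string."""
    n = len(s)
    best = 0
    for center in range(2 * n - 1):          # integer and half-integer centers
        lo, hi = center // 2, (center + 1) // 2
        while lo >= 0 and hi < n and s[lo] == s[hi]:
            lo, hi = lo - 1, hi + 1
        best = max(best, hi - lo - 1)
    return best

assert longest_palindrome_length("abacabad") == 7   # "abacaba"
assert longest_palindrome_length("abc") == 1
```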
2 Lower Bounds
In this section we use Yao’s minimax principle [17] to prove lower bounds on the space complexity of the LPS
problem in the streaming model, where the length n and the alphabet Σ of the input stream are specified.
We denote this problem by LPSΣ [n].
1 Note that (a) log(1 + ε) is equivalent to ε whenever ε < 1; (b) the space used by the algorithms is O(n) for any values of
the errors; (c) the multiplicative lower bound applies to ε > n^{−0.99}, and thus does not contradict the algorithms' space usage.
Theorem 2.1 (Yao's minimax principle for randomized algorithms). Let X be the set of inputs for a problem
and A be the set of all deterministic algorithms solving it. For any x ∈ X and a ∈ A, denote by c(a, x) ≥ 0 the cost of
running a on x.
Let p be a probability distribution over A, and let A be an algorithm chosen at random according to p.
Let q be a probability distribution over X, and let X be an input chosen at random according to q. Then
max_{x∈X} E[c(A, x)] ≥ min_{a∈A} E[c(a, X)].
We use the above theorem for both Las Vegas and Monte Carlo algorithms. For Las Vegas algorithms, we
consider only correct algorithms, and c(x, a) is the memory usage. For Monte Carlo algorithms, we consider
all algorithms (not necessarily correct) with memory usage not exceeding a certain threshold, and c(x, a) is
the correctness indicator function, i.e., c(x, a) = 0 if the algorithm is correct and c(x, a) = 1 otherwise.
Our proofs will be based on appropriately chosen padding. The padding requires a constant number of
fresh characters. If Σ is twice as large as the number of required fresh characters, we can still use half of
it to construct a difficult input instance, which does not affect the asymptotics. Otherwise, we construct a
difficult input instance over Σ, then add enough new fresh characters to facilitate the padding, and finally
reduce the resulting larger alphabet to binary at the expense of increasing the size of the input by a constant
factor.
Lemma 2.2. For any alphabet Σ = {1, 2, . . . , σ} there exists a morphism h : Σ* → {0, 1}* such that, for
any c ∈ Σ, |h(c)| = 2σ + 6 and, for any string w, w contains a palindrome of length ℓ if and only if h(w)
contains a palindrome of length (2σ + 6) · ℓ.
Proof. We set:
h(c) = 1^c 0 1^{σ−c} 1001 1^{σ−c} 0 1^c.
Clearly |h(c)| = 2σ + 6 and, because every h(c) is a palindrome, if w contains a palindrome of length ℓ then
h(w) contains a palindrome of length (2σ + 6) · ℓ. Now assume that h(w) contains a palindrome of length
(2σ + 6) · ℓ, where ℓ ≥ 1. If ℓ = 1 then we obtain that w should contain a palindrome of length 1, which
always holds. Otherwise, the palindrome contains 00 inside and we consider two cases.
1. The palindrome is centered inside 00. Then it corresponds to an odd palindrome of length ℓ in w.
2. The palindrome maps some 00 to another 00. Then it corresponds to an even palindrome of length ℓ
in w.
In either case, the claim holds.
For the padding we will often use an infinite string ν = 0^1 1^1 0^2 1^2 0^3 1^3 . . ., or more precisely its prefixes of
length d, denoted ν(d). Here 0 and 1 should be understood as two characters not belonging to the original
alphabet. The longest palindrome in ν(d) has length O(√d).
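A small helper showing how the padding prefix ν(d) can be generated (with the characters '0' and '1' standing in for the two fresh symbols); this is an illustration only, not part of the algorithms.

```python
def nu(d: int) -> str:
    """Prefix of length d of nu = 0^1 1^1 0^2 1^2 0^3 1^3 ...
    '0' and '1' stand in for the two fresh characters used for padding."""
    out = []
    k = 1
    while len(out) < d:
        out.extend("0" * k)
        out.extend("1" * k)
        k += 1
    return "".join(out[:d])

# Example: the longest palindrome of nu(d) has length O(sqrt(d)).
print(nu(12))   # 010011000111
```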
Theorem 2.3 (Las Vegas approximation). Let A be a Las Vegas streaming algorithm solving LPS_Σ[n]
with additive error E ≤ 0.99n or multiplicative error (1 + ε) ≤ 100 using s(n) bits of memory. Then
E[s(n)] = Ω(n log |Σ|).
Proof. By Theorem 2.1, it is enough to construct a probability distribution P over Σ^n such that for any
deterministic algorithm D, its expected memory usage on a string chosen according to P is Ω(n log |Σ|) in
bits.
Consider solving LPS_Σ[n] with additive error E. We define P as the uniform distribution over ν(E/2) x $$ y ν(E/2)^R,
where x, y ∈ Σ^{n'}, n' = n/2 − E/2 − 1, and $ is a special character not in Σ. Let us look at the memory usage of
D after having read ν(E/2)x. We say that x is "good" when the memory usage is at most (n'/2) log |Σ| and "bad"
otherwise. Assume that (1/2)|Σ|^{n'} of all x's are good; then there are two strings x ≠ x' such that the state of D
after having read both ν(E/2)x and ν(E/2)x' is exactly the same. Hence the behavior of D on ν(E/2)x$$x^R ν(E/2)^R
and ν(E/2)x'$$x^R ν(E/2)^R is exactly the same. The former is a palindrome of length n = 2n' + E + 2, so D
must answer at least 2n' + 2, and consequently the latter also must contain a palindrome of length at least
2n' + 2. A palindrome inside ν(E/2)x'$$x^R ν(E/2)^R is either fully contained within ν(E/2), x', x^R or it is a middle
palindrome. But the longest palindrome inside ν(E/2) is of length O(√E) < 2n' + 2 (for n large enough) and
the longest palindrome inside x' or x^R is of length n' < 2n' + 2, so since we have excluded the other possibilities,
ν(E/2)x'$$x^R ν(E/2)^R contains a middle palindrome of length 2n' + 2. This implies that x = x', which is a
contradiction. Therefore, at least (1/2)|Σ|^{n'} of all x's are bad. But then the expected memory usage of D is at
least (n'/4) log |Σ|, which for E ≤ 0.99n is Ω(n log |Σ|) as claimed.
Now consider solving LPS_Σ[n] with multiplicative error (1 + ε). An algorithm with multiplicative error
(1 + ε) can also be considered as having additive error E = n · ε/(1+ε), so if the expected memory usage of such
an algorithm is o(n log |Σ|) and (1 + ε) ≤ 100 then we obtain an algorithm with additive error E ≤ 0.99n
and expected memory usage o(n log |Σ|), which we already know to be impossible.
Now we move to Monte Carlo algorithms. We first consider exact algorithms solving LPS_Σ[n]; lower
bounds on approximation algorithms will be then obtained by padding the input appropriately. We introduce
an auxiliary problem midLPS_Σ[n], which is to compute the length of the middle palindrome in a string of
even length n over an alphabet Σ.
Lemma 2.4. There exists a constant γ such that any randomized Monte Carlo streaming algorithm A solving
midLPS_Σ[n] or LPS_Σ[n] exactly with probability 1 − 1/n uses at least γ · n log min{|Σ|, n} bits of memory.
Proof. First we prove that if A is a Monte Carlo streaming algorithm solving midLPS_Σ[n] exactly using less
than ⌊(n/2) log |Σ|⌋ bits of memory, then its error probability is at least 1/(n|Σ|).
By Theorem 2.1, it is enough to construct a probability distribution P over Σ^n such that for any deterministic algorithm D using less than ⌊(n/2) log |Σ|⌋ bits of memory, the expected probability of error on a string
chosen according to P is at least 1/(n|Σ|).
Let n' = n/2. For any x ∈ Σ^{n'}, k ∈ {1, 2, . . . , n'} and c ∈ Σ we define
w(x, k, c) = x[1]x[2]x[3] . . . x[n'] x[n']x[n'−1]x[n'−2] . . . x[k+1] c 0^{k−1}.
Now P is the uniform distribution over all such w(x, k, c).
Choose an arbitrary maximal matching of strings from Σ^{n'} into pairs (x, x') such that D is in the same
state after reading either x or x'. At most one string per state of D is left unpaired, that is at most
2^{⌊(n/2) log |Σ|⌋−1} strings in total. Since there are |Σ|^{n'} = 2^{n' log |Σ|} ≥ 2 · 2^{⌊(n/2) log |Σ|⌋−1} possible strings of length n',
at least half of the strings are paired. Let s be the longest common suffix of x and x', so x = vcs and x' = v'c's,
where c ≠ c' are single characters. Then D returns the same answer on w(x, n' − |s|, c) and w(x', n' − |s|, c),
even though the length of the middle palindrome is exactly 2|s| in one of them, and at least 2|s| + 2 in the
other one. Therefore, D errs on at least one of these two inputs. Similarly, it errs on either w(x, n' − |s|, c')
or w(x', n' − |s|, c'). Thus the error probability is at least 1/(2n'|Σ|) = 1/(n|Σ|).
Now we can prove the lemma for midLPS_Σ[n] with a standard amplification trick. Say that we have
a Monte Carlo streaming algorithm which solves midLPS_Σ[n] exactly with error probability ε using s(n) bits
of memory. Then we can run its k instances simultaneously and return the most frequently reported answer.
The new algorithm needs O(k · s(n)) bits of memory and its error probability ε_k satisfies:
ε_k ≤ Σ_{2i<k} C(k, i) (1 − ε)^i ε^{k−i} ≤ 2^k · ε^{k/2} = (4ε)^{k/2}.
Let us choose κ = (1/6) · log(4/n)/log(1/(n|Σ|)) = (1/6) · (1 − o(1))/(1 + log |Σ|/log n) = Θ(log n/(log n + log |Σ|)) = (γ/log |Σ|) · log min{|Σ|, n}, for some
constant γ. Now we can prove the theorem. Assume that A uses less than κ · n log |Σ| = γ · n log min{|Σ|, n}
bits of memory. Then running ⌊1/(2κ)⌋ ≥ (3/4) · 1/(2κ) (which holds since κ < 1/6) instances of A in parallel requires less
than ⌊(n/2) log |Σ|⌋ bits of memory. But then the error probability of the new algorithm is bounded from above
by
(4/n)^{3/(16κ)} = (1/(n|Σ|))^{18/16} ≤ 1/(n|Σ|),
which we have already shown to be impossible.
The lower bound for midLPS_Σ[n] can be translated into a lower bound for solving LPS_Σ[n] exactly by
padding the input so that the longest palindrome is centered in the middle. Let x = x[1]x[2] . . . x[n] be the
input for midLPS_Σ[n]. We define
w(x) = x[1]x[2]x[3] . . . x[n/2] 1 0^n 1 x[n/2 + 1] . . . x[n].
Now if the length of the middle palindrome in x is k, then w(x) contains a palindrome of length at least
n + k + 2. In the other direction, any palindrome inside w(x) of length ≥ n must be centered somewhere in
the middle block consisting of only zeroes and both ones are mapped to each other, so it must be the middle
palindrome. Thus, the length of the longest palindrome inside w(x) is exactly n + k + 2, so we have reduced
solving midLPS_Σ[n] to solving LPS_Σ[2n + 2]. We already know that solving midLPS_Σ[n] with probability
1 − 1/n requires γ · n log min{|Σ|, n} bits of memory, so solving LPS_Σ[2n + 2] with probability 1 − 1/(2n+2) ≥ 1 − 1/n
requires γ · n log min{|Σ|, n} ≥ γ' · (2n + 2) log min{|Σ|, 2n + 2} bits of memory. Notice that the reduction needs
O(log n) additional bits of memory to count up to n, but for large n this is much smaller than the lower
bound if we choose γ' < γ/4.
To obtain a lower bound for Monte Carlo additive approximation, we observe that any algorithm solving
LPS_Σ[n] with additive error E can be used to solve LPS_Σ[(n − E/2)/(E/2 + 1)] exactly by inserting E/2 zeroes between every
two characters, in the very beginning, and in the very end. However, this reduction requires log(E/2) ≤ log n
additional bits of memory for counting up to E/2 and cannot be used when the desired lower bound on the
required number of bits Ω((n/E) log min(|Σ|, n/E)) is significantly smaller than log n. Therefore, we need a separate
technical lemma which implies that both additive and multiplicative approximation with error probability
1/n require Ω(log n) bits of space.
Lemma 2.5. Let A be any randomized Monte Carlo streaming algorithm solving LPS_Σ[n] with additive error
at most 0.99n or multiplicative error at most n^{0.49} and error probability 1/n. Then A uses Ω(log n) bits of
memory.
Proof. By Theorem 2.1, it is enough to construct a probability distribution P over Σ^n, such that for any
deterministic algorithm D using at most s(n) = o(log n) bits of memory, the expected probability of error
on a string chosen according to P is at least 1/2^{s(n)+2}.
Let n' = s(n) + 1. For any x, y ∈ Σ^{n'}, let w(x, y) = ν(n/2 − n')^R x y^R ν(n/2 − n'). Observe that if x = y then
w(x, y) contains a palindrome of length n, and otherwise the longest palindrome there has length at most
2n' + O(√n) = O(√n), thus any algorithm with additive error of at most 0.99n or with a multiplicative
error at most n^{0.49} must be able to distinguish between these two cases (for n large enough).
Let S ⊆ Σ^{n'} be an arbitrary family of strings of length n' such that |S| = 2 · 2^{s(n)}, and let P be the
uniform distribution on all strings of the form w(x, y), where x and y are chosen uniformly and independently
from S. By a counting argument, we can create at least |S|/4 pairs (x, x') of elements from S such that the
state of D is the same after having read ν(n/2 − n')^R x and ν(n/2 − n')^R x'. (If we create the pairs greedily, at
most one such x per state of memory can be left unpaired, so at least |S| − 2^{s(n)} = |S|/2 elements are paired.)
Thus, D cannot distinguish between w(x, x') and w(x, x), and between w(x', x') and w(x', x), so its error
probability must be at least (|S|/2) · (1/|S|²) = 1/(4 · 2^{s(n)}). Thus if s(n) = o(log n), the error rate is at least 1/n for n large
enough, a contradiction.
Combining the reduction with the technical lemma and taking into account that we are reducing to a
problem with string length of Θ(n/E), we obtain the following.
Theorem 2.6 (Monte Carlo additive approximation). Let A be any randomized Monte Carlo streaming algorithm solving LPS_Σ[n] with additive error E with probability 1 − 1/n. If E ≤ 0.99n then A uses
Ω((n/E) log min{|Σ|, n/E}) bits of memory.
Proof. Define σ = min{|Σ|, n/E}.
Because of Lemma 2.5 it is enough to prove that Ω((n/E) log σ) is a lower bound when

E ≤ (γ/2) · (n/log n) · log σ.   (1)

Assume that there is a Monte Carlo streaming algorithm A solving LPS_Σ[n] with additive error E using
o((n/E) log σ) bits of memory and probability 1 − 1/n. Let n' = (n − E/2)/(E/2 + 1) ≥ n/E (the last inequality, equivalent
to n ≥ E · E/(E−2), holds because E ≤ 0.99n and because we can assume that E ≥ 200). Given a string
x[1]x[2] . . . x[n'], we can simulate running A on 0^{E/2} x[1] 0^{E/2} x[2] 0^{E/2} x[3] . . . 0^{E/2} x[n'] 0^{E/2} to calculate R (using
log(E/2) ≤ log n additional bits of memory), and then return ⌊R/(E/2 + 1)⌋. We call this new Monte Carlo
streaming algorithm A'. Recall that A reports the length of the longest palindrome with additive error E.
Therefore, if the original string contains a palindrome of length r, the new string contains a palindrome
of length (E/2) · (r + 1) + r, so R ≥ r(E/2 + 1) and A' will return at least r. In the other direction, if A'
returns r, then the new string contains a palindrome of length r(E/2 + 1). If such a palindrome is centered
so that x[i] is matched with x[i + 1] for some i, then it clearly corresponds to a palindrome of length r
in the original string. But otherwise every x[i] within the palindrome is matched with 0, so in fact the
whole palindrome corresponds to a streak of consecutive zeroes in the new string and can be extended to
the left and to the right to start and end with 0^{E/2}, so again it corresponds to a palindrome of length r in
the original string. Therefore, A' solves LPS_Σ[n'] exactly with probability 1 − 1/(n'(E/2+1) + E/2) ≥ 1 − 1/n' and
uses o(((n'(E/2+1) + E/2)/(E/2)) log σ) + log n = o(n' log σ) + log n bits of memory. Observe that by Lemma 2.4 we get
a lower bound

γ · n' log min{|Σ|, n'} ≥ (γ/2) · n' log σ + (γ/2) · (n/E) · log σ ≥ (γ/2) · n' log σ + log n

(where the last inequality holds because of Eq. (1)). Then, for large n we obtain a contradiction as follows:

o(n' log σ) + log n < (γ/2) · n' log σ + log n.
Finally, we consider multiplicative approximation. The proof follows the same basic idea as that of Theorem 2.6, but is more technically involved. The main difference is that due to uneven padding, we are
reducing to midLPS_Σ[n'] instead of LPS_Σ[n'].
Theorem 2.7 (Monte Carlo multiplicative approximation). Let A be any randomized Monte Carlo streaming
algorithm solving LPS_Σ[n] with multiplicative error (1 + ε) with probability 1 − 1/n. If n^{−0.98} ≤ ε ≤ n^{0.49} then
A uses Ω((log n/log(1+ε)) · log min{|Σ|, log n/log(1+ε)}) bits of memory.
Proof. For ε ≥ n^{0.001} the claimed lower bound reduces to Ω(1) bits, which obviously holds. Thus we can
assume that ε < n^{0.001}. Define

σ = min{|Σ|, (1/50) · log n/log(1 + 2ε) − 2}.

First we argue that it is enough to prove that A uses Ω((log n/log(1+ε)) log σ) bits of memory. Since log(1 + 2ε) ≤
0.001 log n + o(log n), we have that:

(1/50) · log n/log(1 + 2ε) − 2 ≥ 18 − o(1)   (2)

and consequently:

(1/50) · log n/log(1 + 2ε) − 2 = Θ(log n/log(1 + 2ε)).   (3)

Finally, observe that:

log(1 + 2ε) = Θ(log(1 + ε))   (4)

because log 2(1 + ε) = Θ(log(1 + ε)) for ε ≥ 1, and log(1 + ε) = Θ(ε) for ε < 1. From (3) and (4) we conclude
that:

log σ = Θ(log min{|Σ|, log n/log(1 + ε)}).   (5)

Because of Lemma 2.5 and equations (4) and (5), it is enough to prove that Ω((log n/log(1+ε)) log σ) is a lower
bound when

log(1 + 2ε) ≤ γ · log σ/100,   (6)

as otherwise Ω((log n/log(1+ε)) log σ) = Ω((log n/log(1+2ε)) log σ) = Ω(log n).
Assume that there is a Monte Carlo streaming algorithm A solving LPS_Σ[n] with multiplicative error
(1+ε) with probability 1 − 1/n using o((log n/log(1+ε)) log σ) bits of memory. Let x = x[1]x[2] . . . x[n']x[n'+1] . . . x[2n']
be an input for midLPS_Σ[2n']. We choose n' so that n = (1 + 2ε)^{n'+1} · n^{0.99}. Then n' = log_{(1+2ε)}(n^{0.01}) − 1 =
(1/100) · log n/log(1+2ε) − 1. We choose i_0, i_1, i_2, i_3, . . . , i_{n'} so that i_0 + . . . + i_d = ⌈(1 + 2ε)^{d+1} · n^{0.99}⌉ for any 0 ≤ d ≤ n'.
(Observe that for ε = Ω(n^{−0.98}) we have i_0 > n^{0.99} and i_1, . . . , i_d > 2n^{0.01} − 1.) Finally we define:

w(x) = ν(i_{n'})^R x[1] ν(i_{n'−1})^R . . . x[n'] ν(i_0)^R ν(i_0) x[n'+1] ν(i_1) . . . ν(i_{n'−1}) x[2n'] ν(i_{n'}).

If x contains a middle palindrome of length exactly 2k, then w(x) contains a middle palindrome of length
2(1 + 2ε)^{k+1} · n^{0.99}. Also, based on the properties of ν, any non-middle centered palindrome in w(x) has
length at most O(√n), which is less than n^{0.99} for n large enough. Since ⌈2(1 + 2ε)^k · n^{0.99}⌉ · (1 + ε) <
(2(1 + 2ε)^k · n^{0.99} + 1) · (1 + ε) < 2(1 + 2ε)^{k+1} · n^{0.99}, the value of k can be extracted from the answer of A. Thus,
if A approximates the middle palindrome in w(x) with multiplicative error (1 + ε) with probability 1 − 1/n
using o((log n/log(1+ε)) log σ) bits of memory, we can construct a new algorithm A' solving midLPS_Σ[2n'] exactly
with probability 1 − 1/n > 1 − 1/(2n') using

o((log n/log(1 + ε)) · log σ) + log n   (7)
bits of memory. By Lemma 2.4 we get a lower bound

γ · 2n' log min{|Σ|, 2n'} = (γ/50) · (log n/log(1+2ε)) · log σ − 2γ log σ ≥ (γ/100) · (log n/log(1+2ε)) · log σ + log n − 2γ log σ   (8)

(where the last inequality holds because of (6)). On the other hand, for large n

(γ/100) · (log n/log(1+2ε)) · log σ − 2γ log σ + log n = ((1/100) · log n/log(1+2ε) − 2) γ log σ + log n = Θ((log n/log(1+ε)) log σ) + log n,

so (8) exceeds (7), a contradiction.
3 Real-Time Algorithms
In this section we design real-time Monte Carlo algorithms within the space bounds matching the lower
bounds from Sect. 2 up to a factor bounded by log n. The algorithms make use of the hash function known
as the Karp-Rabin fingerprint [11]. Let p be a fixed prime from the range [n^{3+α}, n^{4+α}] for some α > 0, and
r be a fixed integer randomly chosen from {1, . . . , p−1}. For a string S, its forward hash and reversed hash
are defined, respectively, as

φ^F(S) = (Σ_{i=1}^{n} S[i] · r^i) mod p   and   φ^R(S) = (Σ_{i=1}^{n} S[i] · r^{n−i+1}) mod p.
Clearly, the forward hash of a string coincides with the reversed hash of its reversal. Thus, if u is a palindrome,
then φ^F(u) = φ^R(u). The converse is also true modulo the (improbable) collisions of hashes, because for two
strings u ≠ v of length m, the probability that φ^F(u) = φ^F(v) is at most m/p. This property allows one to
detect palindromes with high probability by comparing hashes. (This approach is somewhat simpler than the
one of [2]; in particular, we do not need "fingerprint pairs" used there.) In particular, a real-time algorithm
makes O(n) comparisons and thus faces a collision with probability O(n^{−1−α}) by the choice of p. All further
considerations assume that no collisions happen. For an input stream S, we denote F^F(i, j) = φ^F(S[i..j])
and F^R(i, j) = φ^R(S[i..j]). The next observation is quite important.
Proposition 3.1 ([3]). The following equalities hold:
F^F(i, j) = r^{−(i−1)} (F^F(1, j) − F^F(1, i−1)) mod p,
F^R(i, j) = (F^R(1, j) − r^{j−i+1} F^R(1, i−1)) mod p.
Let I(i) denote the tuple (i, F^F(1, i−1), F^R(1, i−1), r^{−(i−1)} mod p, r^i mod p). The proposition below is
immediate from the definitions and Proposition 3.1.
Proposition 3.2. 1) Given I(i) and S[i], the tuple I(i+1) can be computed in O(1) time.
2) Given I(i) and I(j+1), the string S[i..j] can be checked for being a palindrome in O(1) time.
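The bookkeeping behind Propositions 3.1 and 3.2 can be sketched as follows. The snippet keeps the tuple I(i) and performs the O(1) palindrome test; for readability it uses a fixed Mersenne prime rather than a prime drawn from [n^{3+α}, n^{4+α}] as the analysis requires, and the function names are ours.

```python
import random

P = (1 << 61) - 1             # fixed prime; the paper takes p in [n^{3+a}, n^{4+a}]
R = random.randrange(1, P)    # random base r in {1, ..., p-1}
R_INV = pow(R, P - 2, P)      # r^{-1} mod p (valid since p is prime)

def start():
    """I(1) = (1, F^F(1,0), F^R(1,0), r^0, r^1)."""
    return (1, 0, 0, 1, R)

def extend(I, c):
    """From I(i) and S[i] = c compute I(i+1) in O(1) time (Prop. 3.2, part 1)."""
    i, ff, fr, rinv, rpow = I
    ff = (ff + c * rpow) % P          # F^F(1,i) = F^F(1,i-1) + S[i] * r^i
    fr = (fr * R + c * R) % P         # F^R(1,i) = r * F^R(1,i-1) + S[i] * r
    return (i + 1, ff, fr, rinv * R_INV % P, rpow * R % P)

def is_palindrome(I_i, I_j1):
    """Whether S[i..j] is a palindrome (w.h.p.), given I(i) and I(j+1) (Prop. 3.2, part 2)."""
    i, ff_i, fr_i, rinv_i, _ = I_i
    _, ff_j, fr_j, _, rpow_j1 = I_j1
    forward = rinv_i * (ff_j - ff_i) % P                  # F^F(i,j), Prop. 3.1
    r_len = rpow_j1 * rinv_i % P * R_INV % P              # r^{j-i+1}
    reverse = (fr_j - r_len * fr_i) % P                   # F^R(i,j), Prop. 3.1
    return forward == reverse

# Tiny demo on S = "abacaba" (characters mapped to integers).
S = [ord(c) for c in "abacaba"]
tuples = [start()]
for c in S:
    tuples.append(extend(tuples[-1], c))
print(is_palindrome(tuples[0], tuples[7]))   # S[1..7] = "abacaba" -> True
print(is_palindrome(tuples[1], tuples[4]))   # S[2..4] = "bac"     -> False
```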
3.1 Additive Error
Theorem 3.3. There is a real-time Monte Carlo algorithm solving the problem LPS(S) with the additive
error E = E(n) using O(n/E) space, where n = |S|.
First we present a simple (and slow) algorithm which solves the posed problem, i.e., finds in S a palindrome
of length ℓ(S) ≥ L(S) − E, where L(S) is the length of the longest palindrome in S. Later this algorithm will
be converted into a real-time one. We store the tuples I(j) for some values of j in a doubly-linked list SP in
the decreasing order of j's. The longest palindrome currently found is stored as a pair answer = (pos, len),
where pos is its initial position and len is its length. Let t_E = ⌊E/2⌋.
In Algorithm ABasic we add I(j) to the list SP for each j divisible by t_E. This allows us to check for
palindromicity, at the ith iteration, all factors of the form S[k·t_E..i]. We assume throughout the section that at
the beginning of the ith iteration the value I(i) is stored in a variable I.
Algorithm 1 : Algorithm ABasic, ith iteration
1: if i mod t_E = 0 then
2:    add I to the beginning of SP
3: read S[i]; compute I(i + 1) from I; I ← I(i + 1)
4: for all elements v of SP do
5:    if S[v.i..i] is a palindrome and answer.len < i−v.i+1 then
6:        answer ← (v.i, i−v.i+1)
Proposition 3.4. Algorithm ABasic finds in S a palindrome of length ℓ(S) ≥ L(S) − E using O(n/E) time
per iteration and O(n/E) space.
Proof. Both the time and space bounds arise from the size of the list SP, which is bounded by n/t_E =
O(n/E); the number of operations per iteration is proportional to this size due to Proposition 3.2. Now let
S[i..j] be a longest palindrome in S. Let k = ⌈i/t_E⌉ · t_E. Then i ≤ k < i + t_E. At the kth iteration, I(k) was
added to SP; then the palindrome S[k..j−(k−i)] was found at the iteration j − (k − i). Its length is
j − (k − i) − k + 1 = j − i − 2(k − i) + 1 > (j − i + 1) − 2t_E = L(S) − 2⌊E/2⌋ ≥ L(S) − E,
as required.
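A compact sketch of the checkpointing idea of Algorithm ABasic is given below. To keep the approximation logic isolated it uses exact substring comparisons instead of the fingerprints of Proposition 3.2, so it stores the whole stream and is not itself a sublinear-space algorithm.

```python
def additive_approx_lps(stream, E):
    """Sketch of Algorithm ABasic: keep checkpoints every t_E = floor(E/2)
    positions and, after reading each symbol, test the palindromes that start
    at the checkpoints.  The returned length is at least L(S) - E."""
    t_E = max(E // 2, 1)
    checkpoints = []                      # start positions i with i mod t_E == 0 (1-based)
    s = []
    answer = 0
    for i, c in enumerate(stream, start=1):
        if i % t_E == 0:
            checkpoints.append(i)
        s.append(c)
        for start in checkpoints:
            cand = s[start - 1:i]
            if cand == cand[::-1]:
                answer = max(answer, i - start + 1)
    return answer

print(additive_approx_lps("xxabacabayy", E=4))   # prints 5; L(S) = 7, guarantee >= 3
```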
The resource to speed up Algorithm ABasic stems from the following
Lemma 3.5. During one iteration, the length answer.len increases by at most 2 · tE .
Proof. Let S[j..i] be the longest palindrome found at the ith iteration. If i − j + 1 ≤ 2tE then the statement
is obviously true. Otherwise the palindrome S[j+tE ..i−tE ] of length i − j + 1 − 2tE was found before (at
the (i−tE )th iteration), and the statement holds again.
Lemma 3.5 implies that at each iteration SP contains only two elements that can increase answer.len.
Hence we get the following Algorithm A.
Algorithm 2 : Algorithm A, ith iteration
1: if i mod t_E = 0 then
2:    add I to the beginning of SP
3:    if i = t_E then
4:        sp ← first(SP)
5: read S[i]; compute I(i + 1) from I; I ← I(i + 1)
6: sp ← previous(sp)                               ▷ if exists
7: while i − sp.i + 1 ≤ answer.len and (sp ≠ last(SP)) do
8:    sp ← next(sp)
9: for all existing v in {sp, next(sp)} do
10:   if S[v.i..i] is a palindrome and answer.len < i−v.i+1 then
11:       answer ← (v.i, i−v.i+1)
Due to Lemma 3.5, the cycle at lines 9–11 of Algorithm A computes the same sequence of values of
answer as the cycle at lines 4–6 of Algorithm ABasic. Hence it finds a palindrome of required length by
Proposition 3.4. Clearly, the space used by the two algorithms differs by a constant. To prove that an
iteration of Algorithm A takes O(1) time, it suffices to note that the cycle in lines 7–8 performs at most two
iterations. Theorem 3.3 is proved.
3.2 Multiplicative Error for ε ≤ 1
Theorem 3.6. There is a real-time Monte Carlo algorithm solving the problem LPS(S) with multiplicative
error ε = ε(n) ∈ (0, 1] using O(log(nε)/ε) space, where n = |S|.
As in the previous section, we first present a simpler algorithm MBasic with non-linear working time and
then upgrade it to a real-time algorithm. The algorithm must find a palindrome of length ℓ(S) ≥ L(S)/(1+ε). The
next lemma is straightforward.
Lemma 3.7. If ε ∈ (0, 1], the condition ℓ(S) ≥ L(S)(1 − ε/2) implies ℓ(S) ≥ L(S)/(1+ε).
We set q_ε = ⌈log(2/ε)⌉. The main difference in the construction of algorithms with the multiplicative and
additive error is that here all tuples I(i) are added to the list SP, but then, after a certain number of steps,
are deleted from it. The number of iterations the tuple I(i) is stored in SP is determined by the time-to-live
function ttl(i) defined below. This function is responsible for both the correctness of the algorithm and the
space bound.
Let β(i) be the position of the rightmost 1 in the binary representation of i (the position 0 corresponds
to the least significant bit). We define
ttl(i) = 2^{q_ε+2+β(i)}.   (9)
The definition is illustrated by Fig. 1. Next we state a few properties of the list SP.
Lemma 3.8. For any integers a ≥ 1 and b ≥ 0, there exists a unique integer j ∈ [a, a + 2^b) such that
ttl(j) ≥ 2^{q_ε+2+b}.
Algorithm 3 : Algorithm MBasic, ith iteration
1: add I to the beginning of SP
2: for all v in SP do
3:    if v.i + ttl(v.i) = i then
4:        delete v from SP
5: read S[i]; compute I(i + 1) from I; I ← I(i + 1)
6: for all v in SP do
7:    if S[v.i..i] is a palindrome and answer.len < i−v.i+1 then
8:        answer ← (v.i, i−v.i+1)
Figure 1: The state of the list SP after the iteration i = 53 (q_ε = 1 is assumed). Black squares indicate the
numbers j for which I(j) is currently stored. For example, (9) implies ttl(28) = 2^{1+2+2} = 32, so I(28) will
stay in SP until the iteration 28 + 32 = 60.
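The time-to-live bookkeeping of equation (9) is easy to reproduce; the sketch below computes β(i) and ttl(i) and lists the positions still stored in SP after a given iteration, matching the example of Fig. 1 (q_ε = 1). The helper names are ours.

```python
def beta(i: int) -> int:
    """Position of the rightmost 1 in the binary representation of i."""
    return (i & -i).bit_length() - 1

def ttl(i: int, q_eps: int) -> int:
    """Time-to-live of I(i) in the list SP, equation (9)."""
    return 1 << (q_eps + 2 + beta(i))

def sp_after(i: int, q_eps: int):
    """Positions j <= i whose tuple I(j) is still stored after iteration i."""
    return [j for j in range(1, i + 1) if j + ttl(j, q_eps) > i]

print(ttl(28, 1))        # 2^(1+2+2) = 32, so I(28) stays until iteration 60
print(sp_after(53, 1))   # positions stored after iteration 53, as in Fig. 1
```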
Proof. By (9), ttl(j) ≥ 2^{q_ε+2+b} if and only if β(j) ≥ b, i.e., j is divisible by 2^b by the definition of β. Among
any 2^b consecutive integers, exactly one has this property.
Figure 1 shows the partition of the range (0, i] into intervals having lengths that are powers of 2 (except
for the leftmost interval). In general, this partition consists of the following intervals, right to left:

(i − 2^{q_ε+2}, i], (i − 2^{q_ε+3}, i − 2^{q_ε+2}], . . . , (i − 2^m, i − 2^{m−1}], (0, i − 2^m], where m = ⌈log(n/2^{q_ε+2})⌉ − 1.   (10)
Lemma 3.8 and (9) imply the following lemma on the distribution of the elements of SP .
Lemma 3.9. After each iteration, the first interval (resp., the last interval; each of the remaining intervals)
in (10) contains 2^{q_ε+2} (resp., at most 2^{q_ε+1}; exactly 2^{q_ε+1}) elements of the list SP.
The number of the intervals in (10) is O(log(nε)), so from Lemma 3.9 and the definition of q_ε we have
the following.
Lemma 3.10. After each iteration, the size of the list SP is O(log(nε)/ε).
Proposition 3.11. Algorithm MBasic finds a palindrome of length ℓ(S) ≥ L(S)/(1+ε) using O(log(nε)/ε) time per
iteration and O(log(nε)/ε) space.
Proof. Both the time per iteration and the space are dominated by the size of the list SP. Hence the required
complexity bounds follow from Lemma 3.10. For the proof of correctness, let S[i..j] be a palindrome of length
L(S). Further, let d = ⌊log L(S)⌋.
If d < q_ε + 2, the palindrome S[i..j] will be found exactly, because I(i) is in SP at the jth iteration:
i + ttl(i) ≥ i + 2^{q_ε+2} ≥ i + 2^{d+1} > i + L(S) > j.
Otherwise, by Lemma 3.8 there exists a unique k ∈ [i, i + 2^{d−q_ε−1}) such that ttl(k) ≥ 2^{d+1}. Hence at the
iteration j −(k −i) the palindrome S[i+(k−i)..j−(k−i)] will be found, because I(k) is in SP at this iteration:
k + ttl(k) ≥ i + ttl(k) ≥ i + 2^{d+1} > j ≥ j − (k − i).
The length of this palindrome satisfies the requirement of the proposition:
j − (k − i) − (i + (k − i)) + 1 = L(S) − 2(k − i) ≥ L(S) − 2^{d−q_ε} ≥ L(S) − L(S)/2^{q_ε} ≥ L(S)(1 − ε/2).
The reference to Lemma 3.7 finishes the proof.
Now we speed up Algorithm MBasic. It has two slow parts: deletions from the list SP and checks for
palindromes. Lemmas 3.12 and 3.13 show that, similar to Sect. 3.1, O(1) checks are enough at each iteration.
Lemma 3.12. Suppose that at some iteration the list SP contains consecutive elements I(d), I(c), I(b), I(a).
Then b − a ≤ d − b.
Proof. Let j be the number of the considered iteration. Note that a < b < c < d. Consider the interval
in (10) containing a. If a ∈ (j − 2^{q_ε+2}, j], then b − a = 1 and d − b = 2, so the required inequality holds.
Otherwise, let a ∈ (j − 2^{q_ε+2+x}, j − 2^{q_ε+2+x−1}]. Then by (9) β(a) ≥ x; moreover, any I(k) such that a < k ≤ j
and β(k) ≥ x is in SP. Hence, b − a ≤ 2^x. By Lemma 3.9 each interval, except for the leftmost one, contains
at least 2^{q_ε+1} ≥ 4 elements. Thus each of the numbers b, c, d belongs either to the same interval as a or
to the previous interval (j − 2^{q_ε+2+x−1}, j − 2^{q_ε+2+x−2}]. Again by (9) we have β(b), β(c), β(d) ≥ x − 1. So
c−b, d−c ≥ 2^{x−1}, whence the result.
We call an element I(a) of SP valuable at ith iteration if i − a + 1 > answer.len and S[a..i] can be a
palindrome. (That is, Algorithm MBasic does not store enough information to predict that the condition in
its line 7 is false for v = I(a).)
Lemma 3.13. At each iteration, SP contains at most three valuable elements. Moreover, if I(d'), I(d) are
consecutive elements of SP and i − d' < answer.len ≤ i − d, where i is the number of the current iteration,
then the valuable elements are consecutive in SP, starting with I(d).
Proof. Let d be as in the condition of the lemma. If I(d) is followed in SP by at most two elements, we
are done. If it is not the case, let the next three elements be I(c), I(b), and I(a), respectively. If S[a..i]
is a palindrome then S[a+(b−a)..i−(b−a)] is also a palindrome. At the iteration i−(b−a) the tuple I(b)
was in SP, so this palindrome was found. Hence, at the ith iteration the value answer.len is at least
the length of this palindrome, which is i − a + 1 − 2(b − a). By Lemma 3.12, b − a ≤ d − b, implying
answer.len ≥ i − a + 1 − (b − a) − (d − b) = i − d + 1. This inequality contradicts the definition of d;
hence, S[a..i] is not a palindrome. By the same argument, the elements following I(a) in SP do not produce
palindromes as well. Thus, only the elements I(d), I(c), I(b) are valuable.
Now we turn to deletions. The function ttl (x) has the following nice property.
Lemma 3.14. The function x → x + ttl (x) is injective.
Proof. Note that β(x + ttl (x)) = β(x) from the definition of ttl . Hence the equality x + ttl (x) = y + ttl (y)
implies β(x) = β(y), then ttl (x) = ttl (y) by (9), and finally x = y.
Lemma 3.14 implies that at most one element is deleted from SP at each iteration. To perform this
deletion in O(1) time, we need an additional data structure. By BS(x) we denote a linked list of maximal
segments of 1’s in the binary representation of x. For example, the binary representation of x = 12345 and
BS(x) are as follows:
13 12 11 10 9 8 7 6 5 4 3 2 1 0
1 1 0 1 0 0 0 0 1 1 1 0 0 1
BS(12345) = {[0, 0], [3, 5], [10, 10], [12, 13]}
Clearly, BS(x) uses O(log x) space.
Lemma 3.15. Both β(x) and BS(x + 1) can be obtained from BS(x) in O(1) time.
Proof. The first number in BS(x) is β(x). Let us construct BS(x + 1). Let [a, b] be the first segment in
BS(x). If a > 2, then BS(x + 1) = [0, 0] ∪ BS(x). If a = 1, then BS(x + 1) = [0, b] ∪ (BS(x)\[1, b]).
Now let a = 0. If BS(x) = {[0, b]} then BS(x + 1) = {[b+1, b+1]}. Otherwise let the second segment in
BS(x) be [c, d]. If c > b + 2, then BS(x + 1) = [b+1, b+1] ∪ (BS(x)\[0, b]). Finally, if c = b + 2, then
BS(x + 1) = [b+1, d] ∪ (BS(x)\{[0, b], [c, d]}).
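The constant-time update of Lemma 3.15 can be sketched directly from its case analysis. The names below are hypothetical, and a brute-force recomputation is used only to sanity-check the O(1) increment.

```python
def bs(x: int):
    """Build BS(x): maximal segments [a, b] of 1-bits in the binary form of x,
    ordered from the least significant bit (brute force, for checking only)."""
    segs, pos = [], 0
    while x:
        if x & 1:
            start = pos
            while x & 1:
                x >>= 1
                pos += 1
            segs.append([start, pos - 1])
        else:
            x >>= 1
            pos += 1
    return segs

def increment(segs):
    """BS(x) -> BS(x+1) in O(1) time, following the case analysis of Lemma 3.15."""
    if not segs:                       # x = 0
        return [[0, 0]]
    a, b = segs[0]
    if a >= 2:                         # x ends with at least two 0-bits
        return [[0, 0]] + segs
    if a == 1:                         # x ends with ...10
        return [[0, b]] + segs[1:]
    if len(segs) == 1:                 # a == 0 and a single run of 1s
        return [[b + 1, b + 1]]
    c, d = segs[1]
    if c > b + 2:
        return [[b + 1, b + 1]] + segs[1:]
    return [[b + 1, d]] + segs[2:]     # c == b + 2: the two runs merge

for x in range(200):                   # sanity check against recomputation
    assert increment(bs(x)) == bs(x + 1), x
print(bs(12345))                       # [[0, 0], [3, 5], [12, 13]]
```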
Thus, if we support one list BS which is equal to BS(i) at the end of the ith iteration, we have β(i). If
I(a) should be deleted from SP at this iteration, then β(a) = β(i) (see Lemma 3.14). The following lemma
is trivial.
Lemma 3.16. If a < b and ttl (a) = ttl (b), then I(a) is deleted from SP before I(b).
By Lemma 3.16, the information about the positions with the same ttl (in other words, with the same
β) are added to and deleted from SP in the same order. Hence it is possible to keep a queue QU (x) of the
pointers to all elements of SP corresponding to the positions j with β(j) = x. These queues constitute the
last ingredient of our real-time Algorithm M.
Algorithm 4 : Algorithm M, ith iteration
1: add I to the beginning of SP
2: if i = 1 then
3:    sp ← first(SP)
4: compute BS(i) from BS; BS ← BS(i); compute β(i) from BS
5: if QU(β(i)) is not empty then
6:    v ← element of SP pointed by first(QU(β(i)))
7:    if v.i + ttl(v.i) = i then                   ▷ v is due for deletion
8:        if v = sp then sp ← next(sp)
9:        delete v; delete first(QU(β(i)))
10: add pointer to first(SP) to QU(β(i))
11: read S[i]; compute I(i + 1) from I; I ← I(i + 1)
12: sp ← previous(sp)                              ▷ if exists
13: while i − sp.i + 1 ≤ answer.len and sp ≠ last(SP) do
14:    sp ← next(sp)
15: for all existing v in {sp, next(sp), next(next(sp))} do
16:    if S[v.i..i] is a palindrome and answer.len < i−v.i+1 then
17:        answer ← (v.i, i−v.i+1)
Proof of Theorem 3.6. After every iteration, Algorithm M has the same list SP (see Fig. 1) as Algorithm
MBasic, because these algorithms add and delete the same elements. Due to Lemma 3.13, Algorithm M
returns the same answer as Algorithm MBasic. Hence by Proposition 3.11 Algorithm M finds a palindrome
of required length. Further, Algorithm M supports the list BS of size O(log n) and the array QU containing
O(log n) queues of total size equal to the size of SP. Hence, it uses O(log(nε)/ε) space in total by Lemma 3.10.
The cycle in lines 13–14 performs at most three iterations. Indeed, let z be the value of sp after the previous
iteration. Then this cycle starts with sp = previous(z) (or with sp = z if z is the first element of SP) and
ends with sp = next(next(z)) at the latest. By Lemma 3.15, both BS(i) and β(i) can be computed in O(1)
time. Therefore, each iteration takes O(1) time.

Remark. Since for n^{−0.99} ≤ ε ≤ 1 the classes O(log n/log(1+ε)) and O(log(nε)/ε) coincide, Algorithm M uses space
within a log n factor from the lower bound of Theorem 2.7. Furthermore, for an arbitrarily slowly growing
function ϕ, Algorithm M uses o(n) space whenever ε = ϕ(n)/n.
3.3 Multiplicative Error for ε > 1
Theorem 3.17. There is a real-time Monte Carlo algorithm solving the problem LPS(S) with multiplicative
error ε = ε(n) ∈ (1, n] using O(log n/log(1+ε)) space, where n = |S|.
We transform Algorithm M into a real-time Algorithm M' which solves LPS(S) with the multiplicative
error ε > 1 using O(log n/log(1+ε)) space. The basic idea of the transformation is to replace all binary representations
with those in base proportional to 1 + ε, and thus shrink the size of the lists SP and BS.
First, we assume without loss of generality that ε ≥ 7, as otherwise we can fix ε = 1 and apply Algorithm M. Fix k ≤ (1/2)(1 + ε) as the largest such even integer (in particular, k ≥ 4). Let β'(i) be the position
of the rightmost non-zero digit in the k-ary representation of i. We define

ttl'(i) = (9/2) · k^{β'(i)} if β'(i) > 0, and ttl'(i) = 4 otherwise.   (11)

The list SP' is the analog of the list SP from Sect. 3.2. It contains, after the ith iteration, the tuples I(j)
for all positions j ≤ i such that j + ttl'(j) > i. Similar to (10), we partition the range (0, i] into intervals
and then count the elements of SP' in these intervals. The intervals are, right to left,

(i − 4, i], (i − (9/2)k, i − 4], (i − (9/2)k², i − (9/2)k], . . . , (i − (9/2)k^m, i − (9/2)k^{m−1}], (0, i − (9/2)k^m].   (12)
(12)
For convenience, we enumerate the intervals starting with 0.
Lemma 3.18. Each interval in (12) contains at most 5 elements of SP 0 . Each of the intervals 0, ..., m
contains at least 3 elements of SP 0 .
Proof. The 0th interval contains exactly 4 elements. For any j = 1, . . . , m+1, an element x of the jth
interval is in SP 0 if and only if its position
is divisible by k j ; see (11). The length of this interval is less
9 j
9
than 2 k , giving us upper bound of 2 = 5 elements. Similarly, if j 6= m+1, the jth interval has the length
9 j
9 j−1
and thus contains at least 92 k−1
elements of SP 0 . Since k ≥ 4, the claim follows.
2k − 2k
k
Next we modify Algorithm MBasic, replacing ttl by ttl0 and SP by SP 0 .
log n
Proposition 3.19. Modified Algorithm MBasic finds a palindrome of length `(S) ≥ L(S)
1+ε using O log(1+ε)
space.
Proof. Let S[i..j] be a palindrome of length L(S). Let d = logk L(S)
. Without loss of generality we assume
4
0
d ≥ 0, as otherwise L(S) < 4 ≤ ttl (i) and the palindrome S[i..j] will be detected exactly. Since L(S) ≥ 4k d ,
let a1 < a2 < a3 < a4 < a5 be consecutive positions which are multiples of k d (i.e., β 0 (a1 ), . . . , β 0 (a5 ) ≥ d)
such that a2 ≤ i+j
2 < a3 . Then in particular i < a1 , and there is a palindrome S[a1 ..(i + j − a1 )] such that
a3 ≤ (i + j − a1 ) < a5 . Since a1 + ttl 0 (a1 ) ≥ a5 , this particular palindrome will be detected by the modified
Algorithm MBasic; thus `(S) ≥ a3 − a1 = 2k d . However, we have L(S) < 4k d+1 , hence L(S)
< 2k ≤ (1 + ε).
log n `(S)
log n
0
.
Space complexity follows from bound on size of the list SP , which is at most 5 log k = O log(1+ε)
To follow the framework from the case of ε ≤ 1, we provide an analogous speedup to the checks for palindromes. We adopt the same notion of an element valuable at the ith iteration as in Sect. 3.2. First we need the
following property, which is a more general analog of Lemma 3.12; an analog of Lemma 3.13 is then proved
with its help.
Lemma 3.20. Suppose that at some iteration the list SP' contains consecutive elements I(d), I(c), and
d ≤ i − answer.len, where i is the number of the current iteration. Further, let I(a) be another element of
SP' at this iteration and a < c. If c, d belong to the same interval of (12), then I(a) is not valuable.
Proof. Let c, d belong to the jth interval. Thus they are divisible by k^j and d − c = k^j. Since a < c, a is
divisible by k^j as well. One of the numbers (d+a)/2, (c+a)/2 is divisible by k^j; take it as b. Let δ = b − a. If S[a..i]
is a palindrome, then S[b..i − δ] is also a palindrome. Since at the ith iteration the left border of the jth
interval was smaller than c, then at the (i − δ)th iteration this border was smaller than b; hence, I(b) was in
SP' at that iteration, and the palindrome S[b..i − δ] was found. Its length is
i − δ − b + 1 = i + 1 + a − 2b ≥ i + 1 + a − (d + a) ≥ i − d + 1 > answer.len,
which is impossible by the definition of answer.len. So S[a..i] is not a palindrome, and the claim follows.
Lemma 3.21. At each iteration, SP' contains at most three valuable elements. Moreover, if I(d'), I(d) are
consecutive elements of SP' and i − d' < answer.len ≤ i − d, where i is the number of the current iteration,
then the valuable elements are consecutive in SP', starting with I(d).
Proof. Let a < b < c < d be such that the elements I(d), I(c), I(b) are consecutive in SP' and I(a) belongs to
SP'. Then either b, c or c, d are in the same interval of (12), and thus a is not valuable by Lemma 3.20.
To complete the proof, we now turn to deletions, proving the following analog of Lemma 3.14.
Lemma 3.22. The function h(x) = x + ttl'(x) maps at most two different values of x to the same value.
Moreover, if h(x) = h(y) and β'(x) ≥ β'(y), then β'(x) = β'(h(x)) + 1 and β'(y) = 0.
Proof. Let h(x) = h(y). If β'(x) = β'(y) then ttl'(x) = ttl'(y) by (11), implying x = y. Hence all preimages
of h(x) have different values of β'. Assume β'(x) > β'(y). Then we have, for some integer j, x = j · k^{β'(x)}
and h(x) = (j + 4)k^{β'(x)} + (k/2) · k^{β'(x)−1} by (11). Since k is even, we get β'(h(x)) = β'(x) − 1. If β'(y) > 0,
we repeat the same argument and obtain β'(x) = β'(y), contradicting our assumption. Thus β'(y) = 0. The
claim now follows.
We also define a list BS'(x), which maintains an RLE encoding of the k-ary representation of x. The
list BS'(x) has length O(log n/log k), can be updated to BS'(x+1) in O(1) time, and provides the value β'(x)
in O(1) time also (cf. Lemma 3.15). Further, Lemma 3.16 holds for the function ttl', so we introduce the
queues QU'(x) in the same way as the queues QU(x) in Sect. 3.2. Having all the ingredients, we present
Algorithm M' which speeds up the modified Algorithm MBasic and thus proves Theorem 3.17. The only
significant difference between Algorithm M and Algorithm M' is in the deletion of tuples from the list
(compare lines 5–9 of Algorithm M against lines 5–15 of Algorithm M').
Algorithm 5 : Algorithm M', ith iteration
1: add I to the beginning of SP'
2: if i = 1 then
3:    sp ← first(SP')
4: compute BS'(i) from BS'; BS' ← BS'(i); compute β'(i) from BS'
5: if QU'(β'(i) + 1) is not empty then
6:    v ← element of SP' pointed by first(QU'(β'(i) + 1))
7:    if v.i + ttl'(v.i) = i then
8:        if v = sp then
9:            sp ← next(sp)
10:       delete v; delete first(QU'(β'(i) + 1))
11: v ← element of SP' pointed by first(QU'(0))
12: if v.i + ttl'(v.i) = i then
13:    if v = sp then
14:        sp ← next(sp)
15:    delete v; delete first(QU'(0))
16: add pointer to first(SP') to QU'(β'(i))
17: read S[i]; compute I(i + 1) from I; I ← I(i + 1)
18: sp ← previous(sp)                              ▷ if exists
19: while i − sp.i + 1 ≤ answer.len and sp ≠ last(SP') do
20:    sp ← next(sp)
21: for all existing v in {sp, next(sp), next(next(sp))} do
22:    if S[v.i..i] is a palindrome and answer.len < i−v.i+1 then
23:        answer ← (v.i, i−v.i+1)
3.4 The Case of Short Palindromes
A typical string contains only short palindromes, as Lemma 3.23 below shows (for more on palindromes in
random strings, see [16]). Knowing this, it is quite useful to have a deterministic real-time algorithm which
finds a longest palindrome exactly if it is "short", otherwise reporting that it is "long". The aim of this section
is to describe such an algorithm (Theorem 3.24).
Lemma 3.23. If an input stream S ∈ Σ* is picked uniformly at random among all strings of length n,
where n ≥ |Σ|, then for any positive constant c the probability that S contains a palindrome of length greater
than 2(c+1) log n / log |Σ| is O(n^{−c}).
Proof. A string S contains a palindrome of length greater than m if and only if S contains a palindrome of
length m+1 or m+2. The probability P of containing such a palindrome is less than the expected number
M of palindromes of length m+1 and m+2 in S. A factor of S of length l is a palindrome with probability
1/|Σ|^{⌊l/2⌋}; by linearity of expectation, we have
M = (n−m)/|Σ|^{⌊(m+1)/2⌋} + (n−m−1)/|Σ|^{⌊(m+2)/2⌋}.
Substituting m = 2(c+1) log n / log |Σ|, we get M = O(n^{−c}), as required.
Theorem 3.24. Let m be a positive integer. There exists a deterministic real-time algorithm working in
O(m) space, which
- solves LPS(S) exactly if L(S) < m;
- finds a palindrome of length m or m+1 as an approximated solution to LPS(S) if L(S) ≥ m.
Proof. To prove Theorem 3.24, we present an algorithm based on the Manacher algorithm [14]. We add two
features: work with a sliding window instead of the whole string to satisfy the space requirements and lazy
computation to achieve real time. (The fact that the original Manacher algorithm admits a real-time version
was shown by Galil [7]; we adjusted Galil’s approach to solve LPS.) The details follow.
We say that a palindromic factor S[i..j] has center (i+j)/2 and radius (j−i)/2. Thus, odd-length (even-length)
palindromes have integer (resp., half-integer) centers and radiuses. This looks a bit weird, but allows one to
avoid separate processing of these two types of palindromes. Manacher’s algorithm computes, in an online
fashion, an array of maximal radiuses of palindromes centered at every position of the input string S. A
variation, which outputs the length L of the longest palindrome in a string S, is presented as Algorithm
EBasic below. This variation is similar to the one of [13]. Here, n stands for the length of the input processed
so far, and c is the center of the longest suffix-palindrome of the processed string. The array of radiuses Rad has
length 2n−1 and its elements are indexed by all integers and half integers from the interval [1, n]. Initially,
Rad is filled with zeroes. The left endmarker is added to the string for convenience. After each iteration, the
following invariant holds: the element Rad[i] has got its true value if i < c and equals zero if i > c; the value
Rad[c] = n − c can increase at the next iteration. Note that the longest palindrome in S coincides with the
longest suffix-palindrome of S[1..i] for some i. At the moment when the input stream ends, the algorithm
has already found all such suffix-palindromes, so it can stop without filling the rest of the array Rad.
Note that n calls to AddLetter perform at most 3n iterations of the cycle in lines 10–15 (each call performs
the first iteration plus zero or more “additional” iterations; the value of c gets increased before each additional
iteration and never decreases). So, Algorithm EBasic works in O(n) time but not in real time; for example,
reading the last letter of the string an b requires n iterations of the cycle.
By conditions of the theorem, we are not interested in palindromes of length > m+1. Thus, processing
a suffix-palindrome of length m or m+1 we assume that the symbol comparison in line 12 fails. So the
procedure AddLetter needs no access to S[i] or Rad[i] whenever i < n − m. Hence we store only recent
values of S and Rad and use circular arrays CS and CRad of size O(m) for this purpose. For example,
the symbol S[n−i] is stored in CS[(n−i) mod (m+1)] during m+1 successful iterations of the outer cycle
(lines 3–6), and then is replaced by S[n−i+m+1]; the same scheme applies to the array Rad. In this way,
all elements of S and Rad, needed by Algorithm EBasic, are accessible in constant time. Further, we define
Algorithm 6 : Algorithm EBasic
1: procedure Manacher
2:    c ← 1; L ← 1; n ← 1; S[0] ← '#'
3:    while not (end of input) do
4:        read(S[n + 1]); AddLetter(S[n + 1])        ▷ longest suf-pal of S[1..n + 1] is found
5:        if 2 ∗ Rad[c] + 1 > L then
6:            L ← 2 ∗ Rad[c] + 1
7:    return L
8: procedure AddLetter(a)
9:    s ← c
10:   while c < n + 1 do
11:       Rad[c] ← min(Rad[2 ∗ s − c], n − c)
12:       if c + Rad[c] = n and S[c − Rad[c] − 1] = a then
13:           Rad[c] ← Rad[c] + 1
14:           break
15:       c ← c + 0.5                                 ▷ next candidate for the center
16:   n ← n + 1
a queue Q of size q for lazy computations; it contains symbols that are read from the input and await
processing.
Now we describe real-time Algorithm E. It reads input symbols to Q and stops when the end of the input
is reached. After reading a symbol, it runs procedure Manacher, requiring the latter to pause after three
“inner iterations” (of the cycle in lines 10–15). The procedure reads symbols from Q; if it tries to read from
the empty queue, it pauses. When the procedure is called next time, it resumes from the moment it was
stopped. Algorithm E returns the value of L after the last call to procedure Manacher.
To analyze Algorithm E, consider the value X = q + n − c after some iteration (clearly, this iteration has
number q+n) and look at the evolution of X over time. Let ∆f denote the variation of the quantity f at
one iteration. Note that ∆(q+n) = 1. Let us describe ∆X. First assume that the iteration contains three
inner iterations. Then ∆n = 0, 1, 2 or 3 and, respectively, ∆c = 1.5, 1, 0.5 or 0. Hence
∆X = 1 − ∆c = 1 + (∆n − 3)/2 = 1 + (1 − ∆q − 3)/2 = −(∆q)/2.
If the number of inner iterations is one or two, then q becomes zero (and was 0 or 1 before this iteration);
hence ∆n = 1 − ∆q ≥ 1. Then ∆c ≤ 0.5 and finally ∆X > 0. From these conditions on ∆X it follows that
(∗) if the current value of q is positive, then the current value of X is less than the value of X at the
moment when q was zero for the last time.
Let X 0 be the previous value of X mentioned in (∗). Since the difference n − c does not exceed the radius of
some palindrome, X 0 ≤ m/2. Since q ≤ X < X 0 , the queue Q uses O(m) space. Therefore the same space
bound applies to Algorithm E.
It remains to prove that Algorithm E returns the same number L as Algorithm EBasic with a sliding
window, in spite of the fact that Algorithm E stops earlier (the content of Q remains unprocessed by procedure
Manacher). Suppose that Algorithm E stops with q > 0 after n iterations. Then the longest palindrome
that could be found by processing the symbols in Q has the radius X = n + q − c. Now consider the iteration
mentioned in (∗) and let n0 and c0 be the values of n and c after it; so X 0 = n0 −c0 . Since q was zero after that
iteration, procedure Manacher read the symbol S[n0 ] during it; hence, it tried to extend a suffix-palindrome
of S[1..n0 −1] with the center c00 ≤ c0 . If this extension was successful, then a palindrome of radius at least
X 0 was found. If it was unsuccessful, then c0 ≥ c00 + 1/2 and hence S[1..n0 −1] has a suffix-palindrome of
length at least X 0 − 1/2. Thus, a palindrome of length X ≤ X 0 − 1/2 is not longer than a longest palindrome
seen before, and processing the queue cannot change the value of L. Thus, Algorithm E is correct.
The center and the radius of the longest palindrome in S can be updated each time the inequality in line
6 of procedure Manacher holds. Theorem 3.24 is proved.
Remark. Lemma 3.23 and Theorem 3.24 show a practical way to solve LPS. Algorithm E is fast and
lightweight (2m machine words for the array Rad, m symbols in the sliding window and at most m symbols
in the queue; compare to 17 machine words per tuple I(i) in the Monte Carlo algorithms). So it makes
sense to run Algorithm M and Algorithm E, both in O(log n) space, in parallel. Then either Algorithm
E will give an exact answer (which happens with high probability if the input stream is a “typical” string) or
both algorithms will produce approximations: one of fixed length and one with an approximation guarantee
(modulo the hash collision).
References
[1] A. Apostolico, D. Breslauer, and Z. Galil. Parallel detection of all palindromes in a string. Theoret.
Comput. Sci., 141:163–173, 1995.
[2] P. Berenbrink, F. Ergün, F. Mallmann-Trenn, and E. Sadeqi Azer. Palindrome recognition in the streaming model. In STACS 2014, volume 25 of LIPIcs, pages 149–161, Germany, 2014. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, Dagstuhl Publishing.
[3] D. Breslauer and Z. Galil. Real-time streaming string-matching. In Combinatorial Pattern Matching,
volume 6661 of LNCS, pages 162–172, Berlin, 2011. Springer.
[4] R. Clifford, A. Fontaine, E. Porat, B. Sach, and T. A. Starikovskaya. Dictionary matching in a stream.
In ESA 2015, volume 9294 of LNCS, pages 361–372. Springer, 2015.
[5] R. Clifford, A. Fontaine, E. Porat, B. Sach, and T. A. Starikovskaya. The k -mismatch problem revisited.
In SODA 2016, pages 2039–2052. SIAM, 2016.
[6] F. Ergün, H. Jowhari, and M. Saglam. Periodicity in streams. In RANDOM 2010, volume 6302 of
LNCS, pages 545–559. Springer, 2010.
[7] Z. Galil. Real-time algorithms for string-matching and palindrome recognition. In Proc. 8th annual
ACM symposium on Theory of computing (STOC’76), pages 161–173, New York, USA, 1976. ACM.
[8] Z. Galil and J. Seiferas. A linear-time on-line recognition algorithm for “Palstar”. J. ACM, 25:102–111,
1978.
[9] P. Gawrychowski, F. Manea, and D. Nowotka. Testing generalised freeness of words. In STACS 2014,
volume 25 of LIPIcs, pages 337–349. Dagstuhl Publishing, 2014.
[10] M. Jalsenius, B. Porat, and B. Sach. Parameterized matching in the streaming model. In STACS 2013,
volume 20 of LIPIcs, pages 400–411. Dagstuhl Publishing, 2013.
[11] R. Karp and M. Rabin. Efficient randomized pattern matching algorithms. IBM Journal of Research
and Development, 31:249–260, 1987.
[12] D. E. Knuth, J. Morris, and V. Pratt. Fast pattern matching in strings. SIAM J. Comput., 6:323–350,
1977.
[13] D. Kosolobov, M. Rubinchik, and A. M. Shur. Finding distinct subpalindromes online. In Proc. Prague
Stringology Conference. PSC 2013, pages 63–69. Czech Technical University in Prague, 2013.
[14] G. Manacher. A new linear-time on-line algorithm finding the smallest initial palindrome of a string.
J. ACM, 22(3):346–351, 1975.
[15] B. Porat and E. Porat. Exact and approximate pattern matching in the streaming model. In FOCS
2009, pages 315–323. IEEE Computer Society, 2009.
[16] M. Rubinchik and A. M. Shur. The number of distinct subpalindromes in random words. Fundamenta
Informaticae, 145:371–384, 2016.
[17] A. C.-C. Yao. Probabilistic computations: Toward a unified measure of complexity (extended abstract).
In FOCS 1977, pages 222–227. IEEE Computer Society, 1977.
On Hierarchical Communication Topologies in
the π-calculus
Emanuele D’Osualdo¹ and C.-H. Luke Ong²
arXiv:1601.01725v2 [] 19 Apr 2016
¹ TU Kaiserslautern, dosualdo@cs.uni-kl.de
² University of Oxford, lo@cs.ox.ac.uk
Abstract. This paper is concerned with the shape invariants satisfied
by the communication topology of π-terms, and the automatic inference
of these invariants. A π-term P is hierarchical if there is a finite forest
T such that the communication topology of every term reachable from
P satisfies a T-shaped invariant. We design a static analysis to prove a
term hierarchical by means of a novel type system that enjoys decidable
inference. The soundness proof of the type system employs a non-standard
view of π-calculus reactions. The coverability problem for hierarchical
terms is decidable. This is proved by showing that every hierarchical term
is depth-bounded, an undecidable property known in the literature. We
thus obtain an expressive static fragment of the π-calculus with decidable
safety verification problems.
1 Introduction
Concurrency is pervasive in computing. A standard approach is to organise concurrent software systems as a dynamic collection of processes that communicate
by message passing. Because processes may be destroyed or created, the number
of processes in the system changes in the course of the computation, and may
be unbounded. Moreover the messages that are exchanged may contain process
addresses. Consequently the communication topology of the system—the hypergraph [19, 18] connecting processes that can communicate directly—evolves over
time. In particular, the connectivity of a process (i.e. its neighbourhood in this
hypergraph) can change dynamically. The design and analysis of these systems
is difficult: the dynamic reconfigurability alone renders verification problems
undecidable. This paper is concerned with hierarchical systems, a new subclass of
concurrent message-passing systems that enjoys decidability of safety verification
problems, thanks to a shape constraint on the communication topology.
The π-calculus of Milner, Parrow and Walker [19] is a process calculus designed
to model systems with a dynamic communication topology. In the π-calculus,
processes can be spawned dynamically, and they communicate by exchanging
messages along synchronous channels. Furthermore channel names can themselves
be created dynamically, and passed as messages, a salient feature known as
mobility, as this enables processes to modify their neighbourhood at runtime.
It is well known that the π-calculus is a Turing-complete model of computation.
Verification problems on π-terms are therefore undecidable in general. There are
however useful fragments of the calculus that support automatic verification. The
most expressive such fragment known to date is the depth-bounded π-calculus of
Meyer [12]. Depth boundedness is a constraint on the shape of communication
topologies. A π-term is depth-bounded if there is a number k such that every
simple path³ in the communication topology of every reachable π-term has length
bounded by k. Meyer [14] proved that termination and coverability (a class of
safety properties) are decidable for depth-bounded terms.
Unfortunately depth boundedness itself is an undecidable property [14], which
is a serious impediment to the practical application of the depth-bounded fragment to verification. This paper offers a two-step approach to this problem.
First we identify a (still undecidable) subclass of depth-bounded systems, called
hierarchical, by a shape constraint on communication topologies (as opposed to
numeric, as in the case of depth-boundedness). Secondly, by exploiting this richer
structure, we define a type system, which in turn gives a static characterisation of
an expressive and practically relevant fragment of the depth-bounded π-calculus.
Example 1 (Client-server pattern). To illustrate our approach, consider a simple
system implementing a client-server pattern. A server S is a process listening on
a channel s which acts as its address. A client C knows the address of a server
and has a private channel c that represents its identity. When the client wants
to communicate with the server, it asynchronously sends c along the channel s.
Upon receipt of the message, the server acquires knowledge of (the address of)
the requesting client; and spawns a process A to answer the client’s request R
asynchronously; the answer consists of a new piece of data, represented by a new
name d, sent along the channel c. Then the server forgets the identity of the
client and reverts to listening for new requests. Since only the requesting client
knows c at this point, the server’s answer can only be received by the correct
client. Figure 1a shows the communication topology of a server and a client, in
the three phases of the protocol.
The overall system is composed of an unbounded number of servers and
clients, constructed according to the above protocol. The topology of a reachable
configuration is depicted in Fig. 1b. While in general the topology of a mobile
system can become arbitrarily complex, for such common patterns as client-server,
the programmer often has a clear idea of the desired shape of the communication
topology: there will be a number of servers, each with its cluster of clients; each
client may in turn be waiting to receive a number of private replies. This suggests
a hierarchical relationship between the names representing servers, clients and
data, although the communication topology itself does not form a tree.
T-compatibility and hierarchical terms. Recall that in the π-calculus there
is an important relation between terms, ≡, called structural congruence, which
equates terms that differ only in irrelevant presentation details, but not in
behaviour. For instance, the structural congruence laws for restriction tell us that
the order of restrictions is irrelevant—νx.νy.P ≡ νy.νx.P —and that the scope
³ a simple path is a path with no repeating edges.
[Figure 1 shows three panels: (a) the protocol, (b) a reachable configuration, (c) forest representation.]
Fig. 1. Evolution of the communication topology of a server interacting with a client.
R represents a client’s pending request and A a server’s pending answer.
of a restriction can be extended to processes that do not refer to the restricted
name—i.e., (νx.P ) k Q ≡ νx.(P k Q) when x does not occur free in Q—without
altering the meaning of the term. The former law is called exchange, the latter is
called scope extrusion.
Our first contribution is a formalisation in the π-calculus of the intuitive
notion of hierarchy illustrated in Example 1. We shall often speak of the forest
representation of a π-term P , forest(P ), which is a version of the abstract syntax
tree of P that captures the nesting relationship between the active restrictions
of the term. (A restriction of a π-term is active if it is not in the scope of a
prefix.) Thus the internal nodes of a forest representation are labelled with
(active) restriction names, and its leaf nodes are labelled with the sequential
subterms. Given a π-term P , we are interested in not just forest(P ), but also
forest(P 0 ) where P 0 ranges over the structural congruents of P , because these
are all behaviourally equivalent representations. See Fig. 4 for an example of
the respective forest representations of the structural congruents of a term. In
our setting a hierarchy T is a finite forest of what we call base types. Given
a finite forest T , we say that a term P is T-compatible if there is a term P 0 ,
which is structurally congruent to P , such that the parent relation of forest(P 0 )
is consistent with the partial order of T .
In Example 1 we would introduce base types srv, cl and data associated with
the restrictions νs, νc and νd respectively, and we would like the system to be
compatible to the hierarchy T = srv cl data, where is the is-parent-of
relation. That is, we must be able to represent a configuration with a forest that,
for instance, does not place a server name below a client name nor a client name
below another client name. Such a representation is shown in Fig. 1c.
In the Example, we want every reachable configuration of the system to be
compatible with the hierarchy. We say that a π-term P is hierarchical if there is
a hierarchy T such that every term reachable from P is T-compatible. Thus the
hierarchy T is a shape invariant of the communication topology under reduction.
Fig. 2. Standard view of π-calculus reactions
It is instructive to express depth boundedness as a constraint on forest
representation: a term P is depth-bounded if there is a constant k such that
every term reachable from P has a structurally congruent P 0 whereby forest(P 0 )
has height bounded by k. It is straightforward to see that hierarchical terms are
depth-bounded; the converse is however not true.
A type system for hierarchical terms. While membership of the hierarchical
fragment is undecidable, by exploiting the forest structure, we have devised
a novel type system that guarantees the invariance of T-compatibility under
reduction. Furthermore type inference is decidable, so that the type system can
be used to infer a hierarchy T with respect to which the input term is hierarchical.
To the best of our knowledge, our type system is the first that can infer a shape
invariant of the communication topology of a system.
The typing rules that ensure invariance of T-compatibility under reduction
arise from a new perspective of the π-calculus reaction, one that allows compatibility to a given hierarchy to be tracked more readily. Suppose we are presented
with a T-compatible term P = C[S, R] where C[-, -] is the reaction context, and
the two processes S = ahbi.S 0 and R = a(x).R0 are ready to communicate over
a channel a. After sending the message b, S continues as the process S 0 , while
upon receipt of b, R binds x to b and continues as R00 = R0 [ b/x ]. Schematically,
the traditional understanding of this transaction is: first extrude the scope of b
to include R, then let them react, as shown in Fig. 2.
Instead, we seek to implement the reaction without scope extrusion: after the
message is transmitted, the sender continues in-place as S′, while R′′ is split in
two parts R′mig k R′¬mig , one that uses the message (the migratable part) and one
that does not. As shown in Fig. 3, the migratable part of R′′, R′mig , is “installed”
under b so that it can make use of the acquired name, while the non-migratable
one, R′¬mig , can simply continue in-place.
Crucially, the reaction context, C[-, -], is left unchanged. This means that if the
starting term is T-compatible, the reaction context of the reactum is T-compatible
as well. We can then focus on imposing constraints on the use of names of R′
so that the migration does not result in R′mig escaping the scope of previously
bound names.
By using these ideas, our type system is able to statically accept π-calculus
encodings of such system as that discussed in Example 1. The type system can
be used, not just to check that a given T is respected by the behaviour of a
term, but also to infer a suitable T when it exists. Once typability of a term
Fig. 3. T-compatibility preserving reaction
is established, safety properties such as unreachability of error states, mutual
exclusion or bounds on mailboxes, can be verified algorithmically. For instance, in
Example 1, a coverability check can prove that each client can have at most one
reply pending in its mailbox. To prove such a property, one needs to construct an
argument that reasons about dynamically created names with a high degree of
precision. This is something that counter-abstraction and uniform-abstraction based methods have great difficulty attaining.
Our type system is (necessarily) incomplete in that there are depth-bounded,
or even hierarchical, systems that cannot be typed. The class of π-terms that
can be typed is non-trivial, and includes terms which generate an unbounded
number of names and exhibit mobility.
Outline. In Section 2 we review the π-calculus, depth-bounded terms, and
related technical preliminaries. In Section 3 we introduce T-compatibility and the
hierarchical terms. We present our type system in Section 4. Section 5 discusses
soundness of the type system. In Section 6 we give a type inference algorithm;
and in Section 7 we present results on expressivity and discuss applications. We
conclude with related and future work in Sections 8 and 9. All missing definitions
and proofs can be found in Appendix.
2 The π-calculus and the depth-bounded fragment
2.1 Syntax and semantics
We use a π-calculus with guarded replication to express recursion [16]. Fix a
universe N of names representing channels and messages. The syntax is defined
by the grammar:
P ∋ P, Q ::= 0 | νx.P | P1 k P2 | M | !M        (process)
M ::= M + M | π.P                               (choice)
π ::= a(x) | ahbi | τ                           (prefix)
Definition 1. Structural congruence, ≡, is the least relation that respects α-conversion of bound names, and is associative and commutative with respect to +
(choice) and k (parallel composition) with 0 as the neutral element, and satisfies
laws for restriction: νa.0 ≡ 0 and νa.νb.P ≡ νb.νa.P , and
!P ≡ P k !P                                    (Replication)
P k νa.Q ≡ νa.(P k Q)   (if a ∉ fn(P ))        (Scope Extrusion)
In P = π.Q, we call Q the continuation of P and will often omit Q altogether
when Q = 0. In a term νx.P we will occasionally refer to P as the scope of x. The
name x is bound in both νx.P , and in a(x).P . We will write fn(P ), bn(P ) and
bnν (P ) for the set of free, bound and restriction-bound names in P , respectively.
A sub-term is active if it is not under a prefix. A name is active when it is bound
by an active restriction. We write actν (P ) for the set of active names of P . Terms
of the form M and !M are called sequential. We write S for the set of sequential
terms, actS (P ) for the set of active sequential processes of P , and P i for the
parallel composition of i copies of P .
Intuitively, a sequential process acts like a thread running finite-control
sequential code. A term τ .(P k Q) is the equivalent of spawning a process
Q and continuing as P —although in this context the rôles of P and Q are
interchangeable. Interaction is by synchronous communication over channels.
An input prefix a(x) is a blocking receive on the channel a binding the variable
x to the message. An output prefix ahbi is a blocking send of the message b
along the channel a; here b is itself the name of a channel that can be used
subsequently for further communication: an essential feature for mobility. A
non-blocking send can be simulated by spawning a new process doing a blocking
send. Restrictions are used to make a channel name private. A replication !(π.P )
can be understood as having a server that can spawn a new copy of P whenever
a process tries to communicate with it. In other words it behaves like an infinite
parallel composition (π.P k π.P k · · · ).
For conciseness, we assume channels are unary (the extension to the polyadic
case is straightforward). In contrast to process calculi without mobility, replication
and systems of tail recursive equations are equivalent methods of defining recursive
processes in the π-calculus [17, Section 3.1].
We rely on the following mild assumption, that the choice of names is unambiguous, especially when selecting a representative for a congruence class:
Name Uniqueness Assumption. Each name in P is bound at most once and
fn(P ) ∩ bn(P ) = ∅.
Normal Form. The notion of hierarchy, which is central to this paper, and
the associated type system depend heavily on structural congruence. These are
criteria that, given a structure on names, require the existence of a specific
representative of the structural congruence class exhibiting certain properties.
However, we cannot assume the input term is presented as that representative;
even worse, when the structure on names is not fixed (for example, when inferring
types) we cannot fix a representative and be sure that it will witness the desired
properties. Thus, in both the semantics and the type system, we manipulate
a neutral type of representative called normal form, which is a variant of the
standard form [19]. In this way we are not distracted by the particular syntactic
representation we are presented with.
We say that a term P is in normal form (P ∈ Pnf ) if it is in standard form
and each of its inactive subterms is also in normal form. Formally, normal forms
are defined by the grammar
Pnf ∋ N ::= νx1 . · · · νxn .(A1 k · · · k Am )
A ::= π1 .N1 + · · · + πn .Nn | !(π1 .N1 + · · · + πn .Nn )
where the sequences x1 . . . xn and A1 . . . Am may be empty; when they are both
empty the normal form is the term 0. We further assume w.l.o.g. that normal
forms satisfy Name Uniqueness. Given a finite set of indexes I = {i1 , . . . , in } we
write ∏_{i∈I} Ai for (Ai1 k · · · k Ain ), which is 0 when I is empty; and Σ_{i∈I} πi .Ni
for (πi1 .Ni1 + · · · + πin .Nin ). This notation is justified by commutativity and
associativity of the parallel and choice operators. Thanks to the structural laws
of restriction, we also write νX.P where X = {x1 , . . . , xn }, or νx1 x2 · · · xn .P ,
for νx1 . · · · νxn .P ; or just P when X is empty. When X and Y are disjoint sets
of names, we use juxtaposition for union.
Every process P ∈ P is structurally congruent to a process in normal form. The
function nf : P → Pnf , defined in Appendix, extracts, from a term, a structurally
congruent normal form.
Given a process P with normal form νX. ∏_{i∈I} Ai , the communication topology⁴
of P , written GJP K, is defined as the labelled hypergraph with X as hyperedges
and I as nodes, each labelled with the corresponding Ai . A hyperedge x ∈ X is
connected with i just if x ∈ fn(Ai ).
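As a small illustration (our own encoding, not taken from the paper), if each sequential subterm Ai is represented just by its set of free names, the communication topology is simply the incidence relation between the restricted names of X (the hyperedges) and the indices i (the nodes):

def comm_topology(fn, X):
    # fn maps an index i to fn(A_i); X is the set of restricted names.
    # Each x in X becomes a hyperedge connecting the nodes i with x in fn(A_i).
    return {x: {i for i in fn if x in fn[i]} for x in sorted(X)}

# A made-up normal form νa b c.(A1 | A2 | A3 | A4) with
# fn(A1) = {a}, fn(A2) = {b}, fn(A3) = {c}, fn(A4) = {a, b}:
print(comm_topology({1: {'a'}, 2: {'b'}, 3: {'c'}, 4: {'a', 'b'}}, {'a', 'b', 'c'}))
# prints {'a': {1, 4}, 'b': {2, 4}, 'c': {3}}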
Semantics. We are interested in the reduction semantics of a π-term, which can
be described using the following rule.
Definition 2 (Semantics of π-calculus). The operational semantics of a term
P0 ∈ P is defined by the (pointed) transition system (P, →, P0 ) on π-terms, where
P0 is the initial term, and the transition relation, → ⊆ P 2 , is defined by P → Q
if either (i) to (iv) hold, or (v) and (vi) hold, where
(i) P ≡ νW.(S k R k C) ∈ Pnf ,
(ii) S = (ahbi.νYs .S 0 ) + Ms ,
(iii) R = (a(x).νYr .R0 ) + Mr ,
(iv) Q ≡ νW Ys Yr .(S 0 k R0 [ b/x ] k C),
(v) P ≡ νW.(τ .νY.P 0 k C) ∈ Pnf ,
(vi) Q ≡ νW Y.(P 0 k C).
We define the set of reachable terms from P as Reach(P ) := { Q | P →∗ Q },
writing →∗ to mean the reflexive, transitive closure of →. We refer to the
restrictions, νYs , νYr and νY , as the restrictions activated by the transition
P → Q.
Notice that the use of structural congruence in the definition of → takes unfolding
replication into account.
⁴ This definition arises from the “flow graphs” of [19]; see e.g. [14, p. 175] for a formal definition.
Example 2 (Client-server). We can model a variation of the client-server pattern
sketched in the introduction, with the term νs c.P where P = !S k !C k !M ,
S = s(x).νd.xhdi, C = c(m).(shmi k m(y).chmi) and M = τ .νm.chmi. The term
!S represents a server listening to a port s for a client’s requests. A request is a
channel x that the client sends to the server for exchanging the response. After
receiving x the server creates a new name d and sends it over x. The term !M
creates unboundedly many clients, each with its own private mailbox m. A client
on a mailbox m repeatedly sends requests to the server and concurrently waits
for the answer on the mailbox before recursing.
In the following examples, we use CCS-style nullary channels, which can be
understood as a shorthand: c.P := c(x).P and c̄.P := νx.chxi.P where x ∉ fn(P ).
Example 3 (Resettable counter). A counter with reset is a process reacting to
messages on three channels inc, dec and rst. An inc message increases the value
of the counter, a dec message decreases it or causes a deadlock if the counter is
zero, and a rst message resets the counter to zero. This behaviour is exhibited
by the process Ci = ! pi (t). inc i .(t k pi hti) + dec i .(t.pi hti) + rst i .(νt0i .pi ht0i i) .
Here, the number of processes t in parallel with pi hti represents the current
value
of the counter i. A system νp1 t1 .(C1 k p1 ht1 i) k νp2 t2 .(C2 k p2 ht2 i) can for
instance simulate a two-counter machine when put in parallel with a finite control
process sending signals along the channels inc i , dec i and rst i .
Example 4 (Unbounded ring). Let R = νm.νs0 .(M k mhs0 i k s0 ), S = !(s.n)
and M = ! m(n).s0 .νs.(S k mhsi k s) . The term R implements an unboundedly
growing ring. It initialises the ring with a single “master” node pointing at
itself (s0 ) as the next in the ring. The term M , implementing the master node’s
behaviour, waits on s0 and reacts to a signal by creating a new slave with address
s connected with the previous next slave n. A slave S simply propagates the
signals on its channel to the next in the ring.
2.2 Forest representation of terms
In the technical developement of our ideas, we will manipulate the structure of
terms in non-trivial ways. When reasoning about these manipulations, a term is
best viewed as a forest representing (the relevant part of) its abstract syntax tree.
Since we only aim to capture the active portion of the term, the active sequential
subterms are the leaves of its forest view. Parallel composition corresponds to
(unordered) branching, and names introduced by restriction are represented by
internal (non-leaf) nodes.
A forest is a simple, acyclic, directed graph, f = (Nf , f ), where the edge
relation n1 f n2 means “n1 is the parent of n2 ”. We write ≤f and <f for
the reflexive transitive and the transitive closure of f respectively. A path is a
sequence of nodes, n1 . . . nk , such that for each i < k, ni f ni+1 . Henceforth
we drop the subscript f from f , ≤f and <f (as there is no risk of confusion),
and assume that all forests are finite. Thus every node has a unique path to a
root (and that root is unique).
[Figure 4 shows five example forests, numbered 1 to 5.]
Fig. 4. Examples of forests in FJP K where P = νa b c.(A1 k A2 k A3 k A4 ), A1 = a(x),
A2 = b(x), A3 = c(x) and A4 = ahbi.
An L-labelled forest is a pair ϕ = (fϕ , `ϕ ) where fϕ is a forest and `ϕ : Nϕ → L
is a labelling function on nodes. Given a path n1 . . . nk of fϕ , its trace is the
induced sequence `ϕ (n1 ) . . . `ϕ (nk ). By abuse of language, a trace is an element
of L∗ which is the trace of some path in the forest.
We define L-labelled forests inductively from the empty forest (∅, ∅). We write
ϕ1 ] ϕ2 for the disjoint union of forests ϕ1 and ϕ2 , and l[ϕ] for the forest with a
single root, which is labelled with l ∈ L, and whose children are the respective
roots of the forest ϕ. Since the choice of the set of nodes is irrelevant, we will
always interpret equality between forests up to isomorphism (i.e. a bijection on
nodes respecting parent and labeling).
Definition 3 (Forest representation). We represent the structural congruence class of a term P ∈ P with the set of labelled forests FJP K := {forest(Q) |
Q ≡ P } with labels in actν (P ) ] actS (P ) where forest(Q) is defined as
forest(Q) :=
    (∅, ∅)                        if Q = 0
    Q[(∅, ∅)]                     if Q is sequential
    x[forest(Q0 )]                if Q = νx.Q0
    forest(Q1 ) ] forest(Q2 )     if Q = Q1 k Q2
Note that leaves (and only leaves) are labelled with sequential processes.
The restriction height, heightν (forest(P )), is the length of the longest path
formed of nodes labelled with names in forest(P ).
In Fig. 4 we show some of the possible forest representations of an example term.
2.3 Depth-bounded terms
Definition 4 (Depth-bounded term [12]). The nesting of restrictions of a
term is given by the function
nestν (M ) := nestν (!M ) := nestν (0) := 0
nestν (νx.P ) := 1 + nestν (P )
nestν (P k Q) := max(nestν (P ), nestν (Q)).
The depth of a term is defined as the minimal nesting of restrictions in its
congruence class, depth(P ) := min {nestν (Q) | P ≡ Q}. A term P ∈ P is depth-bounded if there exists k ∈ N such that for each Q ∈ Reach(P ), depth(Q) ≤ k.
We write Pdb for the set of terms with bounded depth.
Notice that nestν is not an invariant of structural congruence, whereas depth
and depth-boundedness are.
Example 5. Consider the congruent terms P and Q
P = νa.νb.νc.(a(x) k bhci k c(y)) ≡ (νa.a(x)) k νc.((νb.bhci) k c(y)) = Q
We have nestν (P ) = 3 and nestν (Q) = 2; but depth(P ) = depth(Q) = 2.
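As a quick sanity check (our own AST encoding, not part of the paper), the following Python sketch computes nestν from Definition 4 and reproduces the two values of Example 5:

# A term is ('nu', x, P), ('par', P, Q), or a plain string standing for 0, M or !M.
def nest_nu(P):
    if isinstance(P, str):                 # 0, M or !M contribute no restrictions
        return 0
    if P[0] == 'nu':                       # nestν(νx.P) = 1 + nestν(P)
        return 1 + nest_nu(P[2])
    if P[0] == 'par':                      # nestν(P k Q) = max(nestν(P), nestν(Q))
        return max(nest_nu(P[1]), nest_nu(P[2]))
    raise ValueError(P[0])

P = ('nu', 'a', ('nu', 'b', ('nu', 'c',
      ('par', 'a(x)', ('par', 'b<c>', 'c(y)')))))
Q = ('par', ('nu', 'a', 'a(x)'),
            ('nu', 'c', ('par', ('nu', 'b', 'b<c>'), 'c(y)')))
print(nest_nu(P), nest_nu(Q))              # 3 2, while depth(P) = depth(Q) = 2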
It is straightforward to see that the nesting of restrictions of a term coincides
with the height of its forest representation, i.e., for every P ∈ P, nestν (P ) =
heightν (forest(P )).
Example 6 (Depth-bounded term). The term in Example 2 is depth-bounded: all
the reachable terms are congruent to terms of the form
Qijk = νs c. P k N i k Req j k Ans k
for some i, j, k ∈ N where N = νm.chmi, Req = νm.(shmi k m(y).chmi) and
Ans = νm.(νd.mhdi k m(y).chmi). For any i, j, k, nestν (Qijk ) ≤ 4.
Example 7 (Depth-unbounded term). Consider the term in Example 4 and the
following run:
R →∗ νm s0 .(M k νs1 .(!(s1 .s0 ) k mhs1 i k s1 ))
→∗ νm s0 .(M k νs1 .(!(s1 .s0 ) k νs2 .(!(s2 .s1 ) k mhs2 i k s2 ))) →∗ . . .
The scopes of s0 , s1 , s2 and the rest of the instantiations of νs are inextricably
nested, thus R has unbounded depth: for each n ≥ 1, a term with depth n is
reachable.
Depth boundedness is a semantic notion. Because the definition is a universal
quantification over reachable terms, analysis of depth boundedness is difficult.
Indeed the membership problem is undecidable [14]. In the communication
topology interpretation, depth has a tight relationship with the maximum length
of the simple paths. A path v1 e1 v2 . . . vn en vn+1 in GJP K is simple if it does not
repeat hyper-edges, i.e., ei ≠ ej for all i ≠ j. A term is depth-bounded if and only
if there exists a bound on the length of the simple paths of the communication
topology of each reachable term [12]. This allows terms to grow unboundedly in
breadth, i.e., the degree of hyper-edges in the communication topology.
Q
A term P is Q
embeddable in a term Q, written P Q, if P ≡ νX. i∈I Ai ∈ Pnf
and Q ≡ νXY.( i∈I Ai k R) ∈ Pnf for some term R. In [12] the term embedding
ordering, , is shown to be both a simulation relation on π-terms, and an effective
well-quasi ordering on depth-bounded terms. This makes the transition system
(Reach(P )/≡ , →/≡ , P ) a well-structured transition system (WSTS) [7, 1] under
the term embedding ordering. Consequently a number of verification problems
are decidable for terms in Pdb .
Theorem 1 (Decidability of termination [12]). The termination problem
for depth-bounded terms, which asks, given a term P0 ∈ Pdb , if there is an infinite
sequence P0 → P1 → . . ., is decidable.
Theorem 2 (Decidability of coverability [12, 24]). The coverability problem for depth-bounded terms, which asks, given a term P ∈ Pdb and a query
Q ∈ P, if there exists P 0 ∈ Reach(P ) such that Q P 0 , is decidable.
3 T-compatibility and hierarchical terms
A hierarchy is specified by a finite forest (T , ). In order to formally relate active
restrictions in a term to nodes of the hierarchy T , we annotate restrictions with
types. For the moment we view types abstractly as elements of a set T, equipped
with a map base : T → T . An annotated restriction ν(x : τ ) where τ ∈ T will be
associated with the node base(τ ) in the hierarchy T . Elements of T are called
types, and those of T are called base types. In the simplest case and, especially for
Section 3, we may assume T = T and base(t) = t. In Section 4 we will consider a
set T of types generated from T , and a non-trivial base map.
Definition 5 (Annotated term). A T-annotated π-term (or simply annotated
π-term) P ∈ P T has the same syntax as ordinary π-terms except that restrictions
take the form ν(x : τ ) where τ ∈ T. In the abbreviated form νX, X is a set of
annotated names (x : τ ).
Structural congruence, ≡, of annotated terms, is defined by Definition 1, with
the proviso that the type annotations are invariant under α-conversion and
replication. For example, ! π.ν(x : τ ).P ≡ π.ν(x : τ ).P k ! π.ν(x : τ ).P and
ν(x : τ ).P ≡ ν(y : τ ).P [ y/x ]; observe that the annotated restrictions that occur
in a replication unfolding are necessarily inactive.
The forest representation of an annotated π-term is obtained from Definition 3
by replacing the case of Q = ν(x : τ ).Q0 by
forest(ν(x : τ ).Q0 ) := (x, t)[forest(Q0 )]
where base(τ ) = t. Thus the forests in FJP K have labels in (actν (P )×T )]actS (P ).
We write FT for the set of forests with labels in (N × T ) ] S. We write PnfT for
the set of T-annotated π-terms in normal form.
The definition of the transition relation of annotated terms, P → Q, is
obtained from Definition 2, where W, Ys , Yr and Y are now sets of annotated
names, by replacing clauses (iv) and (vi) by
(iv’) Q ≡ νW Ys0 Yr0 .(S 0 k R0 [ b/x ] k C)
(vi’) Q ≡ νW Y 0 .(P 0 k C)
respectively, such that Ys N = Ys0 N , Yr N = Yr0 N , and Y N = Y 0 N ,
where X N := {x ∈ N | ∃τ.(x : τ ) ∈ X}. I.e. the type annotation of the names
that are activated by the transition (i.e. those from Ys , Yr and Y ) are not required
to be preserved in Q. (By contrast, the annotation of every active restriction in
P is preserved by the transition.) While in this context inactive annotations can
be ignored by the transitions, they will be used by the type system in Section 4,
to establish invariance of T-compatibility.
Now we are ready to explain what it means for an annotated term P to be
T-compatible: there is a forest in FJP K such that every trace of it projects to a
chain in the partial order T .
Definition 6 (T-compatibility). Let P ∈ P T be an annotated π-term. A forest
ϕ ∈ FJP K is T-compatible if for every trace ((x1 , t1 ) . . . (xk , tk ) A) in ϕ it holds
that t1 < t2 < . . . < tk . The π-term P is T-compatible if FJP K contains a
T-compatible forest. A term is T-shaped if each of its subterms is T-compatible.
As a property of annotated terms, T-compatibility is by definition invariant
under structural congruence.
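Definition 6 is straightforward to check mechanically. The Python sketch below (our own encoding, not from the paper) represents a candidate forest as nested tuples (name, base type, children) with sequential subterms as string leaves, represents T as a child-to-parent map on base types, and verifies that the base types along every root-to-leaf path form a strictly increasing chain:

def strictly_below(T, s, t):
    # True iff s < t in the forest order of T, i.e. s is a strict ancestor of t.
    while t in T:
        t = T[t]
        if t == s:
            return True
    return False

def t_compatible(T, forest):
    def ok(node, last):                    # last: base type of the parent node, if any
        if isinstance(node, str):          # leaf: a sequential subterm
            return True
        name, t, children = node
        if last is not None and not strictly_below(T, last, t):
            return False                   # the chain t1 < t2 < ... < tk is broken
        return all(ok(child, t) for child in children)
    return all(ok(root, None) for root in forest)

# An illustrative hierarchy: base type s above c, c above m, m above d.
T = {'c': 's', 'm': 'c', 'd': 'm'}
good = [('s', 's', [('c1', 'c', [('d1', 'd', ['A'])]), ('c2', 'c', ['R'])])]
bad  = [('c1', 'c', [('s', 's', ['S'])])]  # a server-like name below a client-like name
print(t_compatible(T, good), t_compatible(T, bad))   # True False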
A term P 0 ∈ P T is a type annotation (or simply annotation) of P ∈ P if its
type-erasure, written pP q, coincides with P . (We omit the obvious definition
of type-erasure.) A consistent annotation of a transition of terms, P → Q, is a
choice function that, given an annotation P 0 of P , returns an annotation Q0 of Q
such that P 0 → Q0 . Note that it follows from the definition that the annotation of
every active restriction in P 0 is preserved in Q0 . The effect of the choice function
is therefore to pick a possibly new annotation for each restriction in Q0 that is
activated by the transition. Thus, given a semantics (P, →, P ) of a term P , and
an annotation P 0 of P , and a consistent annotation for every transition of the
semantics, there is a well-defined pointed transition system (P T , →0 , P 0 ) such
that every transition sequence of the former lifts to a transition sequence of the
latter. We call (P T , →0 , P 0 ) a consistent annotation of the semantics (P, →, P ).
Definition 7 (Hierarchical term). A term P ∈ P is hierarchical if there exist
a finite forest T = T and a consistent annotation (P T , →0 , P 0 ) of the semantics
(P, →, P ) of P , such that all terms reachable from P 0 are T-compatible.
Example 8. The term in Examples 2 and 6 is hierarchical: take the hierarchy
T = s c m d and annotate each name in Qijk as follows: s : s, c : c, m : m
and d : d. The annotation is consistent, and forest(Qijk ) is T-compatible for all
i,j and k.
Example 4 gives an example of a term that is not hierarchical. The forest
representation of the reachable terms shown in Example 7 does not have a
bounded height, which means that if T has n base types, there is a reachable
term with a representation of height bigger than n, which implies that there will
be a path repeating a base type.
Let us now study this fragment. First it is easy to see that invariance of
T-compatibility under reduction →, for some finite T , puts a bound |T | on the
height of the T-compatible reachable forests, and consequently a bound on depth.
Theorem 3. Every hierarchical term is depth-bounded. The converse is false.
Thanks to Theorem 2, an immediate corollary of Theorem 3 is that coverability
and termination are decidable for hierarchical terms.
Unfortunately, like the depth-bounded fragment, membership of the hierarchical fragment is undecidable. The proof is by adapting the argument for the
undecidability of depth boundedness [14].
Lemma 1. Every terminating π-term is hierarchical.
Proof. Since the transition system of a term, quotiented by structural congruence,
is finitely branching, by König’s lemma the computation tree of a terminating
term is finite, so it contains finitely many reachable processes and therefore
finitely many names. Take the set of all (disambiguated) active names of the
reachable terms and fix an arbitrary total order T on them. The consistent
annotation with (x : x) for each name will prove the term hierarchical.
Theorem 4. Determining whether an arbitrary π-term is hierarchical, is undecidable.
Proof. The π-calculus is Turing-complete, so termination is undecidable. Suppose
we had an algorithm to decide if a term is hierarchical. Then we could decide
termination of an arbitrary π-term by first checking if the term is hierarchical; if
the answer is yes, we can decide termination for it by Theorem 1, otherwise we
know that it is not terminating by Lemma 1.
Theorem 4—and the corresponding version for depth-bounded terms—is a
serious impediment to any practical application of hierarchical terms to verification:
when presented with a term to verify, one has to prove that it belongs to one of
the two fragments, manually, before one can apply the relevant algorithms.
While the two fragments have a lot in common, hierarchical systems have a
richer structure, which we will exploit to define a type system that can prove a
term hierarchical, in a feasible, sound but incomplete way. Thanks to the notion
of hierarchy, we are thus able to statically capture an expressive fragment of the
π-calculus that enjoys decidable coverability.
4 A type system for hierarchical topologies
The purpose of this section is to devise a static check to determine if a term is
hierarchical. To do so, we define a type system, parametrised over a forest T ,
which satisfies subject reduction. Furthermore we prove that if a term is typable
then T-shapedness is preserved by reduction of the term. Typability together
with T-shapedness of the initial term would then prove the term hierarchical.
As we have seen in the introduction, the typing rules make use of a new
perspective on π-calculus reactions. Take the term
P = νa.(νb.ahbi.S k νc.a(x).R) = C[ahbi.S, a(x).R]
where C[−1 , −2 ] = νa.(νb.[−1 ] k νc.[−2 ]) is the reaction context. Standardly the
synchronisation of the two sequential processes over a is preceded by an extrusion
of the scope of b to include νc.a(x).R, followed by the actual reaction:
νa.(νb.(ahbi.S) k νc.a(x).R) ≡ νa.νb.(ahbi.S k νc.a(x).R)
→ νa.νb.(S k νc.(R[ b/x ]))
This dynamic reshuffling of scopes is problematic for establishing invariance
of T-compatibility under reduction: notice how νc is brought into the scope of
νb, possibly disrupting T-compatibility. (For example, the preceding reduction
would break T-compatibility of the forest representations if the tree T is either
a c b or b a c.) We therefore adopt a different view. After the message is
transmitted, the sender continues in-place as S, while R is split into two parts
Rmig k R¬mig , one that uses the message (the migratable one) and one that does
not. The migratable portion Rmig is “installed” under νb so that it can make use
of the acquired name, while the non-migratable one can simply continue in-place:
νa.(νb.(ahbi.S) k νc.a(x).R) → νa.(νb.(S k Rmig [ b/x ]) k νc.R¬mig )
where the left-hand side is C[ahbi.S, a(x).R] and the right-hand side is C[S k Rmig [ b/x ], R¬mig ].
Crucially, the reaction context C is unchanged. This means that if the starting
term is T-compatible, the context of the reactum is T-compatible as well. Naturally,
this only makes sense if Rmig does not use c. Thus our typing rules impose
constraints on the use of names of R so that the migration does not result in
Rmig escaping the scope of bound names such as c.
The formal definition of “migratable” is subtle. Consider the term
νf.a(x).νc d e.(xhci k chdi k ahei.ehf i)
Upon synchronisation with νb.ahbi, surely xhci will need to be put under the scope
of νb after substituting b for x, hence the first component of the continuation,
xhci, is migratable. However this implies that the scope of νc will need to be
placed under νb, which in turn implies that chdi needs to be considered migratable
as well. On the other hand, νe.ahei.ehf i must be placed in the scope of f , which
may not be known by the sender, so it is not considered migratable. The following
definition makes these observations precise.
Definition 8 (Linked to, tied to, migratable). Given a normal form P =
νX. ∏_{i∈I} Ai we say that Ai is linked to Aj in P , written i ↔P j, if fn(Ai ) ∩ fn(Aj ) ∩ X ≠ ∅.
We define the tied-to relation as the transitive closure of ↔P . I.e. Ai is
tied to Aj , written i aP j, if ∃k1 , . . . , kn ∈ I. i ↔P k1 ↔P k2 . . . ↔P kn ↔P j,
for some n ≥ 0. Furthermore, we say that a name y is tied to Ai in P , written
y /P i, if ∃j ∈ I. y ∈ fn(Aj ) ∧ j aP i. Given an input-prefixed normal form
a(y).P where P = νX. ∏_{i∈I} Ai , we say that Ai is migratable in a(y).P , written
Miga(y).P (i), if y /P i.
These definitions have an intuitive meaning with respect to the communication
topology of a normal form P : two sequential subterms are linked if they are
connected by an hyperedge in the communication topology of P , and are tied to
each other if there exists a path between them.
The following lemma indicates how the tied-to relation fundamentally constrains the possible shape of the forest of a term.
Q
Lemma 2. Let P = νX. i∈I Ai ∈ Pnf , if i aP j then if a forest ϕ ∈ FJP K has
leaves labelled with Ai and Aj respectively, they belong to the same tree in ϕ (i.e.,
have a common ancestor in ϕ).
Example 9. Take the normal form P = νa b c.(A1 k A2 k A3 k A4 ) where
A1 = a(x), A2 = b(x), A3 = c(x) and A4 = ahbi. We have 1 ↔P 4, 2 ↔P 4,
therefore 1 aP 2 aP 4 and a /P 2. In Fig. 4 we show some of the forests in FJP K.
Forest 1 represents forest(P ). The fact that A1 , A2 and A4 are tied is reflected
by the fact that none of the forests place them in disjoint trees. Now suppose we
select only the forests in FJP K that respect the hierarchy a b: in all the forests
in this set, the nodes labelled with A1 , A2 and A4 have a as common ancestor
(as in forests 1, 2, 3 and 4). In particular, in these forests A2 is necessarily a
descendant of a even if a is not one of its free names.
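Definition 8 is also easy to compute: linked-to is a graph on the indices I (an edge whenever two sequential subterms share a restricted name), tied-to is connectivity in that graph, and Mig_{a(y).P}(i) asks whether y occurs free in some Aj of the same connected component as Ai. The rough Python sketch below (our own encoding; it treats tied-to as reflexive, which is immaterial here) reproduces the relations of Example 9, representing each Ai only by its set of free names:

def linked(fn, X, i, j):
    return bool(fn[i] & fn[j] & X)          # A_i and A_j share a restricted name

def tied_classes(fn, X):
    # Connected components of the linked-to graph, i.e. the tied-to relation.
    comp = {i: i for i in fn}
    def find(i):
        while comp[i] != i:
            i = comp[i]
        return i
    for i in fn:
        for j in fn:
            if i < j and linked(fn, X, i, j):
                comp[find(i)] = find(j)      # union the two components
    return {i: find(i) for i in fn}

def migratable(fn, X, y):
    # Indices i with y tied to A_i, i.e. Mig_{a(y).P}(i) for the input prefix a(y).P.
    comp = tied_classes(fn, X)
    with_y = {comp[j] for j in fn if y in fn[j]}
    return {i for i in fn if comp[i] in with_y}

# Example 9: A1 = a(x), A2 = b(x), A3 = c(x), A4 = a<b>, restricted names X = {a, b, c}.
fn = {1: {'a'}, 2: {'b'}, 3: {'c'}, 4: {'a', 'b'}}
X = {'a', 'b', 'c'}
comp = tied_classes(fn, X)
print(comp[1] == comp[2] == comp[4], comp[3] != comp[1])   # True True: 1, 2, 4 are tied, 3 is not
print(migratable(fn, X, 'a'))                              # {1, 2, 4}: a is tied to A1, A2 and A4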
In Section 3 we introduced annotations in a rather abstract way by means
of a generic domain of types T. In Definition 7 we ask for the existence of an
annotation for the semantics of a term. Specifically, one can decide an arbitrary
annotation for each active name. A type system however will examine the term
statically, which means that it needs to know what could be a possible annotation
for a variable, i.e., the name bound in an input action. This information is directly
related to the notion of data-flow, that is the set of names that are bound to a
variable during runtime. Since a static method cannot capture this information
precisely, we make use of sorts [17], also known as simple types, to approximate
it. The annotation of a restriction will carry not only which base type should
be associated with its instances, but also instructions on how to annotate the
messages received or sent through those instances. Concretely, we define
T ∋ τ ::= t | t[τ ]
where t ∈ T is a base type.
A name with type t cannot be used as a channel but can be used as a message;
a name with type t[τ ] can be used to transmit a name of type τ . We will write
base(τ ) for t when τ = t[τ 0 ] or τ = t. By abuse of notation we write, for a set of
types X, base(X) for the set of base types of the types in X.
As is standard, we keep track of the types of free names by means of a typing
environment. An environment Γ is a partial map from names to types, which
we will write as a set of type assignments, x : τ . Given a set of names X and
an environment Γ , we write Γ (X) for the set {Γ (x) | x ∈ X ∩ dom(Γ )}. Given
two environments Γ and Γ 0 with dom(Γ ) ∩ dom(Γ 0 ) = ∅, we write Γ Γ 0 for their
union. For a type environment Γ we define
minT (Γ ) := {(x : τ ) ∈ Γ | ∀(y : τ 0 ) ∈ Γ. base(τ 0 ) 6< base(τ )}.
A judgement Γ `T P means that P ∈ PnfT can be typed under assumptions
Γ , over the hierarchy T ; we say that P is typable if Γ `T P is provable for some
(Par)    from  ∀i ∈ I. Γ, X `T Ai  and  ∀i ∈ I. ∀(x : τx ) ∈ X. x /P i =⇒ base(Γ (fn(Ai ))) < base(τx ),
         conclude  Γ `T νX. ∏_{i∈I} Ai
(Choice) from  ∀i ∈ I. Γ `T πi .Pi ,  conclude  Γ `T Σ_{i∈I} πi .Pi
(Repl)   from  Γ `T A,  conclude  Γ `T !A
(Out)    from  a : ta [τb ] ∈ Γ ,  b : τb ∈ Γ  and  Γ `T Q,  conclude  Γ `T ahbi.Q
(Tau)    from  Γ `T P ,  conclude  Γ `T τ .P
(In)     from  a : ta [τx ] ∈ Γ ,  Γ, x : τx `T νX. ∏_{i∈I} Ai  and
         (base(τx ) < ta  ∨  ∀i ∈ I. Miga(x).P (i) =⇒ base(Γ (fn(Ai ) \ {a})) < ta ),
         conclude  Γ `T a(x).νX. ∏_{i∈I} Ai
Fig. 5. A type system for hierarchical terms. The term P stands for νX. ∏_{i∈I} Ai .
Γ and T . An arbitrary term P ∈ P T is said to be typable if its normal form is.
The typing rules are presented in Fig. 5.
The type system presents several non-standard features. First, it is defined on
normal forms as opposed to general π-terms. This choice is motivated by the fact
that different syntactic presentations of the same term may be misleading when
trying to analyse the relation between the structure of the term and T . The rules
need to guarantee that a reduction will not break T-compatibility, which is a
property of the congruence class of the term. As justified by Lemma 2, the scope
of names in a congruence class may vary, but the tied-to relation puts constraints
on the structure that must be obeyed by all members of the class. Therefore
the type system is designed around this basic concept, rather than the specific
scoping of any representative of the structural congruence class. Second, no type
information is associated with the typed term, only restricted names hold type
annotations. Third, while the rules are compositional, the constraints on base
types have a global flavour due to the fact that they involve the structure of T
which is a global parameter of typing proofs.
Let us illustrate intuitively how the constraints enforced by the rules guarantee
preservation of T-compatibility. Consider the term
P = νe a.(νb.ahbi.A0 k νd.a(x).Q)
with Q = νc.(A1 k A2 k A3 ), A0 = b(y), A1 = xhci, A2 = c(z).ahei and A3 = ahdi.
Let T be the forest with te ta tb tc and ta td , where tx is the base type of
the (omitted) annotation of the restriction νx, for x ∈ {a, b, c, d, e}. The reader
can check that forest(P ) is T-compatible.
In the traditional understanding of mobility, we would interpret the communication
of b over a as an application of scope extrusion to include νd.a(x).Q
in the scope of b, and then synchronisation over a with the application of the
substitution [ b/x ] to Q; note that the substitution is only valid because the
scope of b has been extended to include the receiver.
Our key observation is that we can instead interpret this communication as a
migration of the subcomponents of Q that do get their scopes changed by the
reduction, from the scope of the receiver to the scope of the sender. For this
operation to be sound, the subcomponents of Q migrating to the sender’s scope
cannot use the names that are in the scope of the receiver but not of the sender.
In our specific example, after the synchronisation between the prefixes ahbi and
a(x), b is substituted for x in A1 , resulting in the term A01 = bhci, and A0 , A01 , A2
and A3 become active. The scope of A0 can remain unchanged as it cannot know
more names than before as a result of the communication. By contrast, A1 now
knows b as a result of the substitution [ b/x ]: A1 needs to migrate under the
scope of b. Since A1 uses c as well, the scope of c needs to be moved under b;
however A2 uses c so it needs to migrate under b with the scope of c. A3 instead
uses neither b nor c, so it can avoid migration and its scope remains
unaltered.
This information can be formalised using the tied-to relation: on one hand, A1
and A2 need to be moved together because 1 aQ 2 and they need to be moved
because x /Q 1, 2. On the other hand, A3 is tied to neither A1 nor A2 in Q
and does not know x, thus it is not migratable. After reduction, our view of the
reactum is the term
νa.(νb.(A0 k νc.(A01 k A2 )) k νd.A3 )
the forest of which is T-compatible. Rule Par, applied to A1 and A2 , ensures
that c has a base type that can be nested under the one of b. Rule In does not
impose constraints on the base types of A3 because A3 is not migratable. It does
however check that the base type of e is an ancestor of the one of a, thus ensuring
that both receiver and sender are already in the scope of e. The base type of a
does not need to be further constrained since the fact that the synchronisation
happened on it implies that both the receiver and the sender were already under
its scope; this implies, by T-compatibility of P , that c can be nested under a.
We now describe the purpose of the rules of the type system in more detail.
Most of the rules just drive the derivation through the structure of the term. The
crucial constraints are checked by Par, In and Out.
The Out rule. The main purpose of rule Out is enforcing types to be consistent
with the dataflow of the process: the type of the argument of a channel a must
agree with the types of all the names that may be sent over a. This is a very
coarse sound over-approximation of the dataflow; if necessary it could be refined
using well-known techniques from the literature but a simple approach is sufficient
here to type interesting processes.
The Par rule. Rule Par is best understood imagining the normal form to be
typed, P , as the continuation of a prefix π.P . In this context a reduction exposes
each of the active sequential subterms of P which need to have a place in a
T-compatible forest for the reactum. The constraint in Par can be read as follows.
A “new” leaf Ai may refer to names already present in the forests of the reaction
context; these names are the ones mentioned in both fn(Ai ) and Γ . Then we
must be able to insert Ai so that we can find these names in its path. However,
Ai must belong to a tree containing all the names in X that are tied to it in
P . So by requiring every name tied to Ai to have a base type greater than any
name in the context that Ai may refer to, we make sure that we can insert the
continuation in the forest of the context without violating T-compatibility. Note
that Γ (fn(Ai )) contains only types that annotate names both in Γ and fn(Ai ),
that is, names which are not restricted by X and are referenced by Ai (and
therefore come from the context).
The In rule. Rule In serves two purposes: on the one hand it requires the type
of the messages that can be sent through a to be consistent with the use of the
variable x which will be bound to the messages; on the other hand, it constrains
the base types of a and x so that synchronisation can be performed without
breaking T-compatibility.
The second purpose is achieved by distinguishing two cases, represented by
the two disjuncts of the condition on base types of the rule. In the first case, the
base type of the message is an ancestor of the base type of a in T . This implies
that in any T-compatible forest representing a(x).P , the name b sent as message
over a is already in the scope of P . Under this circumstance, there is no real
mobility, P does not know new names by the effect of the substitution [ b/x ],
and the T-compatibility constraints to be satisfied are in essence unaltered.
The second case is more complicated as it involves genuine mobility. This case
also requires a slightly non-standard feature: not only do the premises predicate
on the direct subcomponents of an input prefixed term, but also on the direct
subcomponents of the continuation. This is needed to be able to separate the
continuation into two parts: the one requiring migration and the one that does
not. The situation during execution is depicted in Fig. 6. The non-migratable
sequential terms behave exactly as in the case of the first disjunct: their scope is
unaltered. The migratable ones instead are intended to be inserted as descendants
of the node representing the message b in the forest of the reaction context.
For this to be valid without rearrangement of the forest of the context, we
need all the names in the context that are referenced in the migratable terms, to
be also in the scope at b; we make sure this is the case by requiring the free names
of any migratable Ai that are from the context (i.e. in Γ ) to have base types
smaller than the base type of a. The set base(Γ (fn(Ai ) \ {a})) indeed represents
the base types of the names in the reaction context referenced in a migratable
continuation Ai . In fact a is a name that needs to be in the scope of both the
sender and the receiver at the same time, so it needs to be a common ancestor of
sender and receiver in any T-compatible forest. Any name in the reaction context
and in the continuation of the receiver, with a base type smaller than the one of
a, will be an ancestor of a—and hence of the sender, the receiver and the node
representing the message—in any T-compatible forest. Clearly, remembering a
Fig. 6. Explanation of constraints imposed by rule In. The dashed lines represent
references to names restricted in the reduction context.
is not harmful as it must be already in the scope of receiver and sender, so we
exclude it from the constraint.
Example 10. Take the normal form in Example 2. Let us fix T to be the forest
s c m d and annotate the normal form with the following types: s : τs = s[τm ],
c : τc = c[τm ], m : τm = m[d] and d : d. We want to prove ∅ `T νs c.P . We can
apply rule Par: in this case there are no conditions on types because, being
the environment empty, we have base(∅(fn(A))) = ∅ for every active sequential
term A of P . Let Γ = {(s : τs ), (c : τc )}. The rule requires Γ `T !S, Γ `T !C and
Γ `T !M , which can be proved by proving typability of S, C and M under Γ by
rule Repl.
To prove Γ `T S we apply rule In; we have s : s[τm ] ∈ Γ and we need to prove
that Γ, x : τm `T νd.xhdi. No constraints on base types are generated at this step
since the migratable sequential term νd.xhdi does not contain free variables typed
by Γ making Γ (fn(νd.xhdi) \ {a}) = Γ ({x}) empty. Next, Γ, x : τm `T νd.xhdi
can be proved by applying rule Par which amounts to checking Γ, x : τm , d : d `T
xhdi.0 (by a simple application of Out and the axiom Γ, x : τm , d : d `T 0) and
verifying the condition—true in T —base(τm ) < base(τd ): in fact d is tied to xhdi
and, for Γ 0 = Γ ∪ {x : τm }, base(Γ 0 (fn(xhdi))) = base(Γ 0 ({x, d})) = base({τm }).
The proof for Γ `T M is similar and requires c < m which is true in T .
Finally, we can prove Γ `T C using rule In; both the two continuations A1 =
shmi and A2 = m(y).chmi are migratable in C and since base(τm ) < base(τc )
is false we need the other disjunct of the condition to be true. This amounts
to checking that base(Γ (fn(A1 ) \ {c})) = base(Γ ({s, m})) = base({τs }) = s < c
(note m ∉ dom(Γ )) and base(Γ (fn(A2 ) \ {c})) = base(Γ (∅)) < c (that holds
trivially).
To complete the typing we need to show Γ, m : τm `T A1 and Γ, m : τm `T A2 .
The former can be proved by a simple application of Out which does not impose
further constraints on T . The latter is proved by applying In which requires
base(τc ) < m, which holds in T .
Note how, at every step, there is only one rule that applies to each subproof.
Example 11. The term of Example 4 is not typable under any T . To see why,
one can build the proof tree without assumptions on T by assuming that each
restriction νx has base type tx . When typing mhsi we deduce that ts = tn , which
is in contradiction with the constraint that tn < ts required by rule Par when
typing νs.(S k mhsi k s).
5 Soundness of the type system
We now establish the soundness of the type system. Theorem 5 will show how
typability is preserved by reduction. Theorem 6 establishes the main property
of the type system: if a term is typable then T-shapedness is invariant under
reduction. This allows us to conclude that if a term is T-shaped and typable,
then every term reachable from it will be T-shaped.
The subtitution lemma states that substituting names without altering the
types preserves typability.
Lemma 3 (Substitution). Let P ∈ PnfT and Γ be a typing environment such
that Γ (a) = Γ (b). Then it holds that if Γ `T P then Γ `T P [ b/a ].
Before we state the main theorem, we define the notion of P -safe type
environment, which is a simple restriction on the types that can be assigned to
names that are free at the top-level of a term.
Definition 9 (P -safe environment). A type environment Γ is said to be P -safe if for each x ∈ fn(P ) and (y : τ ) ∈ bnν (P ), base(Γ (x)) < base(τ ).
Theorem 5 (Subject Reduction). Let P and Q be two terms in PnfT and Γ
be a P -safe type environment. If Γ `T P and P → Q, then Γ `T Q.
The proof is by careful analysis of how the typing proof for P can be adapted
to derive a proof for Q. The only difficulty comes from the fact that some of the
subterms of P will appear in Q with a substitution applied. However, typability
of P ensures that we are only substituting names for names with the same type,
thus allowing us to apply Lemma 3.
To establish that T-shapedness is invariant under reduction for typable terms,
we will need to show that starting from a typable T-shaped term P , any step will
reduce it to a (typable) T-shaped term. The hypothesis of T-compatibility of P
can be used to extract a T-compatible forest ϕ from FJP K. While many forests in
FJP K can be witnesses of the T-compatibility of P , we want to characterise the
shape of a witness that must exist if P is T-compatible. The proof of invariance
relies on selecting a ϕ that does not impose unnecessary hierarchical dependencies
among names. Such forest is identified by ΦT (nf(P )): it is the shallowest among
all the T-compatible forests in FJP K.
Definition 10 (ΦT ). The function ΦT : PnfT → FT is defined inductively as
  ΦT (∏_{i∈I} Ai ) := ⊎_{i∈I} {Ai []}
  ΦT (P ) := ⊎ { (x, base(τ ))[ΦT (νYx .∏_{j∈Ix} Aj )] | (x : τ ) ∈ minT (X) } ⊎ ΦT (νZ.∏_{r∈R} Ar )
where X ≠ ∅, P = νX.∏_{i∈I} Ai , Ix = {i ∈ I | x /P i} and
  Yx = {(y : τ ) ∈ X | ∃i ∈ Ix . y ∈ fn(Ai )} \ minT (X)
  Z = X \ ⋃_{(x : τ )∈minT (X)} (Yx ∪ {x : τ })
  R = I \ ⋃_{(x : τ )∈minT (X)} Ix
Forest 4 of Fig. 4 is ΦT (P ) when every restriction νx has base type x (for
x ∈ {a, b, c}) and T is the forest with nodes a, b and c and a single edge a b.
Lemma 4. Let P ∈ PnfT . Then:
a) ΦT (P ) is a T-compatible forest;
b) ΦT (P ) ∈ FJP K if and only if P is T-compatible;
c) if P ≡ Q ∈ P T then ΦT (P ) ∈ FJQK if and only if Q is T-compatible.
Theorem 6 (Invariance of T-shapedness). Let P and Q be terms in PnfT
such that P → Q and Γ be a P -safe environment such that Γ `T P . Then, if P
is T-shaped then Q is T-shaped.
The key of the proof is a) the use of ΦT (P ) to extract a specific T-compatible
forest, b) the definition of a way to insert the subtrees of the continuations of
the reacting processes in the forest of reaction context, in a way that preserves
T-compatibility. Thanks to the constraints of the typing rules, we will always
be able to find a valid place in the reaction context where to attach the trees
representing the reactum.
6 Type inference
In this section we will show that it is possible to take any non-annotated normal
form P and derive a forest T and an annotated version of P that can be typed
under T .
Inference for simple types has already been proved decidable in [8, 23]. In our
case, since our types are not recursive, the algorithm concerned purely with the
constraints imposed by the type system of the form τx = t[τy ] is even simpler.
The main difficulty is inferring the structure of T .
Let us first be more specific on assigning simple types. The number of ways
a term P can be annotated with types are infinite, simply from the fact that
types allow an arbitrary nesting as in t, t[t], t[t[t]] and so on. We observe that,
however, there is no use annotating a restriction with a type with nesting deeper
than the size of the program: the type system cannot inspect more deeply nested
types. Thanks to this observation we can restrict ourselves to annotations with
bounded nesting in the type’s structure. This also gives a bound on the number
of base types that need to appear in the annotated term. Therefore, there are
only finitely many possible annotations and possible forests under which P can
be proved typably hierarchical. A naı̈ve inference algorithm can then enumerate
all of them and type check each.
Theorem 7 (Decidability of inference). Given a normal form P ∈ Pnf , it is
decidable if there exists a finite forest T , a T-annotated version P 0 ∈ P T of P
and a P 0 -safe environment Γ such that P 0 is T-shaped and Γ `T P 0 .
While enumerating all the relevant forests, annotations and environments is
impractical, more clever strategies for inference exist.
We start by annotating the term with type variables: each name x gets typed
with a type variable tx . Then we start the type derivation, collecting all the
constraints on types along the way. If we can find a T and type expressions to
associate to each type variable, so that these constraints are satisfied, the process
can be typed under T .
By inspecting rules Par and In we observe that all the “tied-to” and “migratable” predicates do not depend on T so for any given P , the type constraints
can be expressed simply by conjunctions and disjunctions of two kinds of basic
predicates:
1. data-flow constraints of the form tx = tx [ty ] where tx is a base type variable;
2. base type constraints of the form base(tx ) < base(ty ) which correspond to
constraints over the corresponding base type variables, e.g. tx < ty .
Note that the P -safety condition on Γ translates to constraints of the second
kind. The first kind of constraint can be solved using unification in linear time.
If no solution exists, the process cannot be typed. This is the case of processes
that cannot be simply typed. If unification is successful we get a set of equations
over base type variables. Any assignment of those variables to nodes in a suitable
forest that satisfies the constraints of the second kind would be a witness of
typability. An example of the type inference in action can be found in Appendix E.1.
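To make the unification phase concrete, here is a minimal Python sketch that solves data-flow constraints with a union-find structure. The representation (constraints as pairs of variable names) and all function names are our own illustrative assumptions, not the actual implementation of the type checker; the occurs check is omitted, since a genuine cycle would simply mean the term is not simply typable.

class UnionFind:
    # Standard union-find over type variables (illustrative helper).
    def __init__(self):
        self.parent = {}
    def find(self, v):
        self.parent.setdefault(v, v)
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]
            v = self.parent[v]
        return v
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def unify_dataflow(constraints):
    # constraints: pairs (x, y) standing for t_x = t_x[t_y], i.e. the type of x
    # carries payloads of type t_y.  Returns, for each class representative,
    # the representative of its payload.
    uf, payload = UnionFind(), {}
    def merge(a, b):                     # unify the classes of a and b
        ra, rb = uf.find(a), uf.find(b)
        if ra == rb:
            return
        pa, pb = payload.pop(ra, None), payload.pop(rb, None)
        uf.union(ra, rb)
        root = uf.find(ra)
        if pa is not None:
            payload[root] = pa
        if pb is not None:
            record(root, pb)
    def record(x, y):                    # record the constraint t_x = t_x[t_y]
        rx = uf.find(x)
        if rx in payload:
            merge(payload[rx], y)        # two payloads for one class: unify them
        else:
            payload[rx] = y
    for x, y in constraints:
        record(x, y)
    return {uf.find(k): uf.find(v) for k, v in payload.items()}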
First we note that if there exists a T which makes P typable and T-compatible,
then there exists a T 0 which does the same but is a linear chain of base types (i.e.
a single tree with no branching). To see how, simply take T 0 to be any topological
sort of T .
Now, suppose we are presented with a set C of constraints of the form t < t′
(no disjunctions). One approach for solving them could be based on reductions to
SAT or CLP(FD). We instead outline a direct algorithm. If the constraints are
acyclic, i.e. it is not possible to derive t < t by transitivity, then there exists a
finite forest satisfying the constraints, having as nodes the base type variables. To
construct such forest, we can first represent the constraints as a graph with the
base type variables as vertices and an edge between t and t0 just when t < t0 ∈ C.
Then we can check the graph for acyclicity. If the test fails, the constraints are
unsatisfiable. Otherwise, any topological sort of the graph will represent a forest
satisfying C.
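For illustration, a direct implementation of this step might look as follows; this is a sketch with names of our choosing, not the tool's actual code. It builds the constraint graph, runs Kahn's topological sort, and returns either an order of the base type variables (a linear chain usable as T ) or None when the constraints are cyclic.

from collections import defaultdict, deque

def solve_base_constraints(variables, constraints):
    # constraints: pairs (t, u) meaning t < u.
    succs, indeg = defaultdict(set), {v: 0 for v in variables}
    for t, u in constraints:
        indeg.setdefault(t, 0)
        indeg.setdefault(u, 0)
        if u not in succs[t]:
            succs[t].add(u)
            indeg[u] += 1
    queue = deque(v for v, d in indeg.items() if d == 0)
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in succs[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    # a complete order is a witness chain; otherwise the graph has a cycle
    return order if len(order) == len(indeg) else None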
We can modify this simple procedure to support constraints including disjunctions by using backtracking on the disjuncts. Every time we arrive at an acyclic
assignment, we can check for T-shapedness (which takes linear time) and in case
the check fails we can backtrack again.
To speed up the backtracking algorithm, one can merge the acyclicity test
with the T-compatibility check. Acyclicity can be checked by constructing a
topological sort of the constraints graph. Every time we produce the next node
in the sorting, we take a step in the construction of Φ(P ) using the fact that the
currently produced node is the minimal base type among the remaining ones. We
can then backtrack as soon as a choice contradicts T-compatibility.
The complexity of the type checking problem is easily seen to be linear in
the size of the program. This proves, in conjunction with the finiteness of the
candidate guesses for T and annotations, that the type inference problem is in
NP. We conjecture that inference is also NP-hard.
We implemented the above algorithm in a tool called ‘James Bound’ (jb),
available at http://github.com/bordaigorl/jamesbound.
7 Expressivity and verification
7.1 Expressivity
Typably hierarchical terms form a rather expressive fragment. Apart from including common patterns as the client-server one, they generalise powerful models of
computation with decidable properties.
Relations with variants of CCS are the easiest to establish: CCS can be seen
as a syntactic subset of π-calculus when including 0-arity channels, which are
very easily dealt with by straightforward specialisations of the typing rules for
actions. One very expressive, yet not Turing-powerful, variant is CCS! [9] which
can be seen as our π-calculus without mobility. Indeed, every CCS! process is
typably hierarchical [4, Section 11.4].
Reset nets can be simulated by using resettable counters as defined in Example 3. The full encoding can be found in Appendix F.1. The encoding preserves
coverability but not reachability.
CCS! was recently proven to have decidable reachability [9] so it is reasonable
to ask whether reachability is decidable for typably hierarchical terms.
We show this is not the case by introducing a weak encoding of Minsky
machines (Appendix F.2). The encoding is weak in the sense that not all of the
runs represent real runs of the encoded Minsky machine; however with reachability
one can distinguish between the reachable terms that are encodings of reachable
configurations and those which are not. We therefore reduce reachability of
Minsky machines to reachability of typably hierarchical terms.
Theorem 8. The reachability problem is undecidable for (typably) hierarchical
terms.
Theorem 8 can be used to clearly separate the (typably) hierarchical fragment
from other models of concurrent computation as Petri Nets, which have decidable
reachability and are thus less expressive.
7.2 Applications
Although reachability is not decidable, coverability is often quite enough to prove
non-trivial safety properties. To illustrate this point, let us consider Example 2
again. In our example, each client waits for a reply reaching its mailbox before
issuing another request; moreover the server replies to each request with a single
message. Together, these observations suggest that the mailboxes of each client will
contain at most one message at all times. To automatically verify this property we
could use a coverability algorithm for depth-bounded systems: since the example is
typable, it is depth-bounded and such algorithm is guaranteed to terminate with a
correct answer. To formulate the property as a coverability problem, we can ask for
coverability of the following query: νs m.(!S k m(y).chmi k νd.mhdi k νd0 .mhd0 i).
This is equivalent to asking whether a term is reachable that embeds a server
connected with a client with a mailbox containing two messages. The query is
not coverable and therefore we proved our property.5
5 To fully prove a bound on the mailbox capacity one may need to also ask another coverability question for the case where the two messages bear the same data-value d.
Other examples of coverability properties are variants of secrecy properties.
For instance, the coverability query νs m m0 .(!S k m(y).chmi k m0 (y).chm0 i k
νd.(mhdi k m0 hdi)) encodes the property “can two different clients receive the
same message?”, which cannot happen in our example.
It is worth noting that this level of accuracy for proving such properties
automatically is uncommon. Many approaches based on counter abstraction [22, 6]
or CFA-style abstractions [5] would collapse the identities of clients by not
distinguishing between different mailbox addresses. Instead a single counter is
typically used to record the number of processes in the same control state and of
messages. In our case, abstracting the mailbox addresses away has the effect of
making the bounds on the clients’ mailboxes unprovable in the abstract model.
A natural question at this point is: how can we go about verifying terms
which cannot be typed, as the ring example? Coverability algorithms can be
applied to untypable terms and they yield sound results when they terminate. But
termination is not guaranteed, as the term in question may be depth-unbounded.
However, even a failed typing attempt may reveal interesting information
about the structure of a term. For instance, in Example 11 one may easily see that
the cyclic dependencies in the constraints are caused by the names representing
the “next” process identities. In the general case heuristics can be employed
to automatically identify a minimal set of problematic restrictions. Once such
restrictions are found, a counter abstraction could be applied to those restrictions
only yielding a term that simulates the original one but introducing some spurious
behaviour. Type inference can be run again on the the abstracted term; on failure,
the process can be repeated, until a hierarchical abstraction is obtained. This
abstract model can then be model checked instead of the original term, yielding
sound but possibly imprecise results.
8 Related work
Depth boundedness in the π-calculus was first proposed in [12] where it is proved
that depth-bounded systems are well-structured transition systems. In [24] it
is further proved that (forward) coverability is decidable even when the depth
bound k is not known a priori. In [25] an approximate algorithm for computing
the cover set—an over-approximation of the set of reachable terms—of a system
of depth bounded by k is presented. All these analyses rely on the assumption
of depth boundedness and may even require a known bound on the depth to
terminate.
Several other interesting fragments of the π-calculus have been proposed in
the literature, such as name bounded [10], mixed bounded [15], and structurally
stationary [13]. Since they are typically defined by a non-trivial condition on the set of reachable
terms – a semantic property – membership becomes undecidable. Links with Petri
nets via encodings of proper subsets of depth-bounded systems have been explored
in [15]. Our type system can prove depth boundedness for processes that are
breadth and name unbounded, and which cannot be simulated by Petri nets. In [2],
Amadio and Meyssonnier consider fragments of the asynchronous π-calculus and
show that coverability is decidable for the fragment with no mobility and bounded
number of active sequential processes, via an encoding to Petri nets. Typably
hierarchical systems can be seen as an extension of the result for a synchronous
π-calculus with unbounded sequential processes and a restricted form of mobility.
Recently Hüchting et al. [11] proved several relative classification results
between fragments of π-calculus. Using Karp-Miller trees, they presented an
algorithm to decide if an arbitrary π-term is bounded in depth by a given k. The
construction is based on an (accelerated) exploration of the state space of the
π-term, with non primitive recursive complexity, which makes it impractical. By
contrast, our type system uses a very different technique leading to a quicker
algorithm, at the expense of precision. Our forest-structured types can also act
as specifications, offering more intensional information to the user than just a
bound k.
Our types are based on Milner’s sorts for the π-calculus [17, 8], later refined
into I/O types [20] and their variants [21]. Based on these types is a system for
termination of π-terms [3] that uses a notion of levels, enabling the definition
of a lexicographical ordering. Our type system can also be used to determine
termination of π-terms in an approximate but conservative way, by using it in
conjunction with Theorem 1. Because the respective orderings between types of the
two approaches are different in conception, we expect the terminating fragments
isolated by the respective systems to be incomparable.
9 Future directions
The type system we presented in Section 4 is conservative: the use of simple types,
for example, renders the analysis context-insensitive. Although we have kept the
system simple so as to focus on the novel aspects, a number of improvements are
possible. First, the extension to the polyadic case is straightforward. Second, the
type system can be made more precise by using subtyping and polymorphism to
refine the analysis of control and data flow. Third, the typing rule for replication
introduces a very heavy approximation: when typing a subterm, we have no
information about which other parts of the term (crucially, which restrictions)
may be replicated. By incorporating some information about which names can
be instantiated unboundedly in the types, the precision of the analysis can be
greatly improved. The formalisation and validation of these extensions is a topic
of ongoing research.
Another direction worth exploring is the application of this machinery to
heap manipulating programs and security protocols verification.
Acknowledgement. We would like to thank Damien Zufferey for helpful discussions
on the nature of depth boundedness, and Roland Meyer for insightful feedback
on a previous version of this paper.
Bibliography
[1] P. A. Abdulla, K. Cerans, B. Jonsson, and Y. Tsay. General decidability
theorems for infinite-state systems. In Symposium on Logic in Computer
Science, pages 313–321. IEEE Computer Society, 1996.
[2] R. M. Amadio and C. Meyssonnier. On decidability of the control reachability
problem in the asynchronous π-calculus. Nordic Journal of Computing, 9
(2):70–101, 2002.
[3] Y. Deng and D. Sangiorgi. Ensuring termination by typability. Information
and Computation, 204(7):1045–1082, 2006.
[4] E. D’Osualdo. Verification of Message Passing Concurrent Systems. PhD
thesis, University of Oxford, 2015. URL http://ora.ox.ac.uk/objects/
uuid:f669b95b-f760-4de9-a62a-374d41172879.
[5] E. D’Osualdo, J. Kochems, and C.-H. L. Ong. Automatic verification of
erlang-style concurrency. In F. Logozzo and M. Fähndrich, editors, Static
Analysis Symposium (SAS), volume 7935 of Lecture Notes in Computer
Science, pages 454–476. Springer, 2013.
[6] E. A. Emerson and R. J. Trefler. From asymmetry to full symmetry: New
techniques for symmetry reduction in model checking. In L. Pierre and
T. Kropf, editors, Correct Hardware Design and Verification Methods, volume
1703 of Lecture Notes in Computer Science, pages 142–156. Springer, 1999.
[7] A. Finkel and P. Schnoebelen. Well-structured transition systems everywhere!
Theoretical Computer Science, 256(1-2):63–92, 2001.
[8] S. J. Gay. A sort inference algorithm for the polyadic π-calculus. In M. S. V.
Deusen and B. Lang, editors, Principles of Programming Languages (POPL),
pages 429–438. ACM Press, 1993.
[9] C. He. The decidability of the reachability problem for CCS! . In Concurrency
Theory (CONCUR), volume 6901 of Lecture Notes in Computer Science,
pages 373–388. Springer, 2011.
[10] R. Hüchting, R. Majumdar, and R. Meyer. A theory of name boundedness.
In Concurrency Theory (CONCUR), 2013.
[11] R. Hüchting, R. Majumdar, and R. Meyer. Bounds on mobility. In Concurrency Theory (CONCUR), pages 357–371, 2014.
[12] R. Meyer. On boundedness in depth in the π-calculus. In IFIP International
Conference on Theoretical Computer Science, IFIP TCS, pages 477–489,
2008.
[13] R. Meyer. A theory of structural stationarity in the π-calculus. Acta
Informatica, 46(2):87–137, 2009.
[14] R. Meyer. Structural stationarity in the π-calculus. PhD thesis, University
of Oldenburg, 2009.
[15] R. Meyer and R. Gorrieri. On the relationship between π-calculus and
finite place/transition Petri nets. In Concurrency Theory (CONCUR), pages
463–480, 2009.
[16] R. Milner. Functions as processes. Mathematical Structures in Computer
Science, 2(02):119–141, 1992.
[17] R. Milner. The polyadic pi-calculus: a tutorial. Springer-Verlag, 1993.
[18] R. Milner. Communicating and Mobile Systems: the π-Calculus. Cambridge
University Press, 1999.
[19] R. Milner, J. Parrow, and D. Walker. A calculus of mobile processes, I, II.
Information and Computation, 100(1):1–77, 1992.
[20] B. C. Pierce and D. Sangiorgi. Typing and subtyping for mobile processes.
In Symposium on Logic in Computer Science, pages 376–385, 1993.
[21] B. C. Pierce and D. Sangiorgi. Behavioral equivalence in the polymorphic
pi-calculus. Journal of the ACM, 47(3):531–584, 2000.
[22] A. Pnueli, J. Xu, and L. D. Zuck. Liveness with (0, 1, ∞)-counter abstraction.
In E. Brinksma and K. G. Larsen, editors, Computer Aided Verification
(CAV), volume 2404 of Lecture Notes in Computer Science, pages 107–122.
Springer, 2002.
[23] V. T. Vasconcelos and K. Honda. Principal typing schemes in a polyadic
π-calculus. In E. Best, editor, Concurrency Theory (CONCUR), volume 715
of Lecture Notes in Computer Science, pages 524–538. Springer, 1993.
[24] T. Wies, D. Zufferey, and T. Henzinger. Forward analysis of depth-bounded
processes. In Foundations of Software Science and Computation Structures
(FoSSaCS), pages 94–108, 2010.
[25] D. Zufferey, T. Wies, and T. Henzinger. Ideal abstractions for well-structured
transition systems. In Verification, Model Checking, and Abstract Interpretation (VMCAI), pages 445–460, 2012.
Appendix
A Supplementary Material for Section 2
A.1 Definition and properties of nf
The function nf : P → Pnf , defined in Definition 11, extracts, from a term, a
normal form structurally equivalent to it.
Definition 11 (nf(P )). We define the function nf : P → Pnf as follows:
  nf(0) := 0
  nf(π.P ) := π. nf(P )
  nf(M + M ′ ) := nf(M ) + nf(M ′ )
  nf(νx.P ) := νx. nf(P )
  nf(!M ) := !(nf(M ))
  nf(P k Q) := nf(P )                if nf(Q) = 0 ≠ nf(P )
  nf(P k Q) := nf(Q)                 if nf(P ) = 0
  nf(P k Q) := νXP XQ .(NP k NQ )    if nf(P ) = νXP .NP , nf(Q) = νXQ .NQ
                                     and actν (NP ) = actν (NQ ) = ∅
Lemma 5. For each P ∈ P, P ≡ nf(P )
Proof. A straightforward induction on P .
Lemma 6. Let ϕ be a forest with labels in N ⊎ S. Then ϕ = forest(Q) with Q ≡ Qϕ where
  Qϕ := νXϕ .∏_{(n,A)∈I} A
  Xϕ := {`ϕ (n) ∈ N | n ∈ Nϕ }
  I := {(n, A) | `ϕ (n) = A ∈ S}
provided
i) ∀n ∈ Nϕ , if `ϕ (n) ∈ S then n has no children in ϕ, and
ii) ∀n, n0 ∈ Nϕ , if `ϕ (n) = `ϕ (n0 ) ∈ N then n = n0 , and
iii) ∀n ∈ Nϕ , if `ϕ (n) = A ∈ S then for each x ∈ Xϕ ∩ fn(A) there exists n0 <ϕ n
such that `ϕ (n0 ) = x.
Proof. We proceed by induction on the structure of ϕ. The base case is when
ϕ = (∅, ∅), for which we have Qϕ = 0 and ϕ = forest(0).
When ϕ = ϕ0 ] ϕ1 we have that if conditions 8.i, 8.ii and 8.iii hold for ϕ, they
must hold for ϕ0 and ϕ1 as well, hence we can apply the induction hypothesis to
them obtaining ϕi = forest(Qi ) with Qi ≡ Qϕi (i ∈ {0, 1}). We have ϕ = forest(Q0 k
Q1 ) by definition of forest, and we want to prove that Q0 k Q1 ≡ Qϕ . By condition
8.ii on ϕ, Xϕ0 and Xϕ1 must be disjoint; furthermore, by condition 8.iii on both
ϕ0 and ϕ1 we can conclude that fn(Qϕi ) ∩ Xϕ1−i = ∅. We can therefore apply
scope extrusion: Q0 k Q1 ≡ Qϕ0 k Qϕ1 ≡ νXϕ0 Xϕ1 .(Pϕ0 k Pϕ1 ) = Qϕ .
The last case is when ϕ = l[ϕ0 ]. Suppose conditions 8.i, 8.ii and 8.iii hold for ϕ.
We distinguish two cases. If l = A ∈ S, by 8.i we have ϕ0 = (∅, ∅), ϕ = forest(A)
and A = Qϕ . If l = x ∈ N then we observe that conditions 8.i, 8.ii and 8.iii hold
for ϕ0 under the assumption that they hold for ϕ. Therefore ϕ0 = forest(Q0 ) with
Q0 ≡ Qϕ0 , and, by definition of forest, ϕ = forest(νx.Q0 ). By condition 8.ii we
have x 6∈ Xϕ0 so νx.Q0 ≡ νx.Qϕ0 ≡ ν(X ∪ {x}).Pϕ0 = Qϕ .
B Supplementary Material for Section 3
B.1 Proof of Theorem 3
First, it is immediate to see that every hierarchical term is depth-bounded. Any
T-compatible forest cannot repeat a type in a path, which means that the number
of base types in T bounds the height of T-compatible forests. This automatically
gives a bound on the depth of any T-compatible term.
We show the converse is not true by presenting a depth-bounded process
which is not hierarchical. Take P = (!A k !B k !(C1 + C2 )) where
A = τ .ν(a : a).phai
C1 = p(x).!(q(y).D)
B = τ .ν(b : b).qhbi
C2 = q(x).!(p(y).D)
D = xhyi
then P is depth-bounded. However we can show there is no choice for consistent
annotations and T that can prove it hierarchical. Let h be the height of T .
From P we can reach, by reducing the τ actions of A and B, any of the terms
Qi,j = P k (νa.phai)i k (νb.qhbi)j (omitting annotations) for i, j ∈ N. The choice
for annotations can potentially assign a different type in T to each νa and νb.
Let n, m ∈ N be naturals strictly greater than 2h and consider the reachable
term Qn,nm ; from this term we can reach a term
  Q^{ab} = P k (νa.((νb.D[ a/x, b/y ])^m k !(q(y).D[ a/x ])))^n
by never selecting C2 as part of a redex. Each occurrence of a and b will have an
annotation: we assume a type t^i_a is assigned to each occurrence i ≤ n of νa in Q^{ab},
and a type t^{i,j}_b is assigned to each occurrence j of νb under ν(a : t^i_a) in Q^{ab}.
Each occurrence of νa in Q^{ab} has in its scope more than h occurrences of νb. We cannot
extrude more than h occurrences of νb because we would necessarily violate T-compatibility
by obtaining a path of length greater than h in the forest of the extruded term. Therefore,
w.l.o.g., we can assume that the types t^{i,1}_b , . . . , t^{i,h+1}_b are all descendants
of t^i_a , for each i ≤ n. Pictorially, the parent relation in T entails the relations in
Fig. 7 where the edges represent <T .
The type associations of the restrictions in Q^{ab} are already fixed in Qn,nm .
From Qn,nm we can however also reach any of the terms
  Q^i_b = P k · · · k ν(b : t^{i,1}_b).((νa.D[ a/x, b/y ])^n k !(p(y).D[ b/x ]))
[Fig. 7 diagram: nodes t^1_a , t^2_a , . . . , t^n_a , where each t^i_a has children t^{i,1}_b , t^{i,2}_b , . . . , t^{i,h+1}_b .]
Fig. 7. Structure of T in the counterexample.
for i ≤ m, by making C2 and ν(b : t^{i,1}_b).qhbi react and then repeatedly making
!q(y).D react with each ν(a : t^j_a).phai. Let us consider Q^1_b. As before, we cannot
extrude more than h occurrences of a or we would break T-compatibility. We must however
extrude (a : t^1_a) to get T-compatibility since t^1_a <T t^{1,1}_b. From these two facts
we can infer that there must be a type associated to one of the a, let it be t^2_a, such
that t^1_a <T t^{1,1}_b <T t^2_a. We can apply the same argument to Q^2_b, obtaining
t^1_a <T t^2_a <T t^{2,1}_b <T t^3_a. Since m > 2h we can repeat this h + 1 times and get
t^1_a <T t^2_a <T . . . <T t^{h+1}_a , which contradicts the assumption that the height
of T is h.
The reason why the counterexample presented in the proof above fails to
be hierarchical is that (unboundedly many) names are used in fundamentally
different ways in different branches of the execution.
C Supplementary Material for Section 4
C.1 Proof of Lemma 2
We show that the claim holds in the case where Ai is linked to Aj in P . From this,
a simple induction over the length of linked-to steps required to prove i aP j,
can prove the lemma.
Suppose i ↔P j. Let Y = fn(Ai ) ∩ fn(Aj ) ∩ {x | (x : τ ) ∈ X}; we have Y ≠ ∅.
Both Ai and Aj are in the scope of each of the restrictions bounding names
y ∈ Y in any of the processes Q in the congruence class of P , hence, by definition
of forest, the nodes labelled with Ai and Aj generated by forest(Q) will have
nodes labelled with (y, base(X(y))) as common ancestors.
C.2 Some auxiliary lemmas
Lemma 7. If forest(P ) is T-compatible then for any term Q which is an αrenaming of P , forest(Q) is T-compatible.
Proof. Straightforward from the fact that T-compatibility depends only on the
type annotations.
Lemma 8. Let P = νX.∏_{i∈I} Ai be a T-compatible normal form, Y ⊆ X and J ⊆ I. Then P 0 = νY.∏_{j∈J} Aj is T-compatible.
Proof. Take a T-compatible forest ϕ ∈ FJP K. By Lemma 7 we can assume without
loss of generality that ϕ = forest(Q) where proving Q ≡ P does not require
α-renaming. Clearly, removing the leaves that do not correspond to sequential
terms indexed by Y does not affect the T-compatibility of ϕ. Similarly, if a
restriction (x : τ ) ∈ X is not in Y , we can remove the node of ϕ labelled with
(x, base(τ )) by making its parent the new parent of its children. This operation
is unambiguous under Name Uniqueness and does not affect T-compatibility, by
transitivity of <. We then obtain a forest ϕ0 which is T-compatible and that, by
Lemma 6, is the forest of a term congruent to the desired normal form P 0 .
D Supplementary Material for Section 5
D.1 Some Elementary Properties of the Type System
Lemma 9. Let P ∈ PnfT and Γ , Γ 0 be type environments.
a) if Γ `T P then fn(P ) ⊆ dom(Γ );
b) if dom(Γ 0 ) ∩ bn(P ) = ∅ and fn(P ) ⊆ dom(Γ ),
then Γ `T P if and only if Γ Γ 0 `T P ;
c) if P ≡ P 0 ∈ PnfT then, Γ `T P if and only if Γ `T P 0 .
D.2 Proof of Lemma 4
Item a) is an easy induction on the cardinality of X.
Item b) requires more work. By item a) Φ(P ) is T-compatible, so Φ(P ) ∈ FJP K proves
that P is T-compatible.
To prove the ⇐-direction we assume that P = νX.∏_{i∈I} Ai is T-compatible and proceed
by induction on the cardinality of X to show that Φ(P ) ∈ FJP K. The base case is when
X = ∅: Φ(P ) = Φ(∏_{i∈I} Ai ) = ⊎_{i∈I} {Ai []} = forest(∏_{i∈I} Ai ) = forest(P ) ∈ FJP K.
For the induction step, we observe that X ≠ ∅ implies minT (X) ≠ ∅, so Z ⊂ X and, for each
(x : τ ) ∈ minT (X), Yx ⊂ X since x ∉ Yx . This, together with Lemma 8, allows us to apply
the induction hypothesis on the terms Px = νYx .∏_{j∈Ix} Aj and PR = νZ.∏_{r∈R} Ar ,
obtaining that there exist terms Qx ≡ Px and QR ≡ PR such that forest(Qx ) = Φ(Px ) and
forest(QR ) = Φ(PR ), where all the forests forest(Qx ) and forest(QR ) are T-compatible.
Let Q = ∏{ν(x : τ ).Qx | (x : τ ) ∈ minT (X)} k QR ; then forest(Q) = Φ(P ). To prove the
claim we only need to show that Q ≡ P . We have
Q ≡ ∏{ν(x : τ ).νYx .∏_{j∈Ix} Aj | (x : τ ) ∈ minT (X)} k PR and we want to apply extrusion
to get Q ≡ νYmin .∏_{i∈Imin} Ai k PR for Imin = ⋃{Ix | (x : τ ) ∈ minT (X)} and
Ymin = minT (X) ⊎ ⋃{Yx | (x : τ ) ∈ minT (X)}, which adds an obligation to prove that
i) Ix are all pairwise disjoint so that Imin is well-defined,
ii) Yx are all pairwise disjoint and all disjoint from minT (X) so that Ymin is well-defined,
iii) Yx ∩ fn(Aj ) = ∅ for every j ∈ Iz with z ≠ x so that we can apply the extrusion rule.
To prove condition i), assume by contradiction that there exists an i ∈ I
and names x, y ∈ minT (X) with x 6= y, such that both x and y are tied to
Ai in P . By transitivity of the tied-to relation, we have Ix = Iy . By Lemma 2
all the Aj with j ∈ Ix need to be in the same tree in any forest ϕ ∈ FJP K.
Since P is T-compatible there exist such a ϕ which is T-compatible and has
every Aj as label of leaves of the same tree. This tree will include a node nx
labelled with (x, base(X(x))) and a node ny labelled with (y, base(X(y))). By
T-compatibility of ϕ and the existence of a path between nx and ny we infer
base(X(x)) < base(X(y)) or base(X(y)) < base(X(x)) which contradicts the
assumption that x, y ∈ minT (X).
Condition ii) follows from condition i): suppose there exists a (z : τ ) ∈ X ∩
Yx ∩ Yy for x 6= y, then we would have that z ∈ fn(Ai ) ∩ fn(Aj ) for some i ∈ Ix
and j ∈ Iy , but then i aP j, meaning that i ∈ Iy and j ∈ Ix violating condition i).
The fact that Yx ∩ minT (X) = ∅ follows from the definition of Yx . The same
reasoning proves condition iii).
Now we have Q ≡ νYmin .∏_{i∈Imin} Ai k νZ.∏_{r∈R} Ar and we want to apply extrusion again
to get Q ≡ νYmin Z.∏{Ai | i ∈ (Imin ⊎ R)} which is sound under
the following conditions:
iv) Ymin ∩ Z = ∅,
v) Imin ∩ R = ∅,
vi) Z ∩ fn(Ai ) = ∅ for all i ∉ R
of which the first two hold trivially by construction, while the last follows from
condition viii) below, as a name in the intersection of Z and a fn(Ai ) would need
to be in X but not in Ymin . To be able to conclude that Q ≡ P it remains to
prove that
vii) I = Imin ] R and
viii) X = Ymin ] Z
which are also trivially valid by inspection of their definitions. This concludes
the proof for item b).
Finally, for every Q ∈ P T such that Q ≡ P , Φ(P ) ∈ FJQK if and only if
Φ(P ) ∈ FJP K by definition of FJ−K; since Φ(P ) is T-compatible we can infer
that Q is T-compatible if and only if Φ(P ) ∈ FJQK, which proves item c).
In light of Lemma 4, we can turn the computation of ΦT (P ) into an algorithm
to check T-compatibility of P : it is sufficient to compute ΦT (P ) and check at each
step that the sets Ix , R form a partition of I and the sets Yx , Z form a partition
of X. If the checks fail ΦT (P ) 6∈ FJP K and P is not T-compatible, otherwise the
obtained forest is a witness of T-compatibility.
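As a rough illustration of this algorithm (not the actual implementation), the following Python sketch mirrors the recursion of Definition 10 and fails as soon as the sets Ix or Yx overlap. It assumes two helpers that are not shown: tied(x, I, fn, X), returning {i ∈ I | x /P i} as a set, and lt(b, c), deciding b <T c in the candidate forest; X maps restricted names to their base types and fn[i] is the set of free names of the i-th active sequential term.

def t_compatible(X, I, fn, tied, lt):
    # At every step the sets Ix and R must partition I, and the sets Yx and Z
    # must partition X; otherwise Phi_T(P) is not a forest of P.
    if not X or not I:
        return True
    minT = {x for x in X if not any(lt(X[y], X[x]) for y in X if y != x)}
    seen_I, seen_Y = set(), set()
    Z, R = dict(X), set(I)
    for x in minT:
        Ix = tied(x, I, fn, X)
        Yx = {y: X[y] for i in Ix for y in fn[i] if y in X and y not in minT}
        if (seen_I & Ix) or (seen_Y & set(Yx)):
            return False              # Ix or Yx overlap: no T-compatible forest
        seen_I |= Ix
        seen_Y |= set(Yx)
        for y in list(Yx) + [x]:
            Z.pop(y, None)
        R -= Ix
        if not t_compatible(Yx, Ix, fn, tied, lt):
            return False
    return t_compatible(Z, R, fn, tied, lt)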
D.3 Further Properties of ΦT (P )
Lemma 10. Let P = νX.∏_{i∈I} Ai ∈ PnfT be a T-compatible normal form. Then
for every trace ((x1 , t1 ) . . . (xk , tk ) Aj ) in the forest Φ(P ), for every i ∈ {1, . . . , k},
we have xi /P j (i.e. xi is tied to Aj in P ).
Proof. Straightforward from the definition of Ix in Φ: when a node labelled by
(x, t) is introduced, its subtree is extracted from a recursive call on a term that
contains all and only the sequential terms that are tied to x.
Remark 1. Φ(P ) satisfies conditions 8.i, 8.ii and 8.iii of Lemma 6.
D.4 Proof of Lemma 3
We prove the lemma by induction on the structure of P . The base case is when
P ≡ 0, where the claim trivially holds.
For the induction step, let P ≡ νX.∏_{i∈I} Ai with Ai = ∑_{j∈J} πij .Pij , for some
finite sets of indexes I and J. Since the presence of replication does not affect
the typing proof, we can safely ignore that case as it follows the same argument.
Let us assume Γ `T P and prove that Γ `T P [ b/a ].
Let Γ 0 be Γ ∪ X. From Γ `T P we have
  Γ, X `T Ai      (1)
  x /P i =⇒ base(Γ (fn(Ai ))) < base(τx )      (2)
for each i ∈ I and x : τx ∈ X. To extract from this assumptions a proof for
Γ `T P [ b/a ], we need to prove that (1) and (2) hold after the substitution.
Since the substitution does not apply to names in X and the tied to relation
is only concerned with names in X, the only relevant effect of the substitution is
modifying the set fn(Ai ) to fn(Ai [ b/a ]) = fn(Ai ) \ {a} ∪ {b} when a ∈ fn(Ai );
But since Γ (a) = Γ (b) by hypothesis, we have base(Γ (fn(Ai [ b/a ]))) < base(τx ).
It remains to prove (1) holds after the substitution as well. This amounts to
prove for each j ∈ J that Γ 0 `T πij .Pij =⇒ Γ 0 `T πij .Pij [ b/a ]; we prove this
by cases.
Suppose πij = αhβi for two names α and β, then from Γ 0 `T πij .Pij we know
the following
  α : tα [τβ ] ∈ Γ 0      β : τβ ∈ Γ 0      (3)
  Γ 0 `T Pij      (4)
Condition (3) is preserved after the substitution because it involves only types so,
even if α or β are a, their types will be left untouched after they get substituted
with b from the hypothesis that Γ (a) = Γ (b). Condition (4) implies Γ 0 `T
Pij [ b/a ] by inductive hypothesis.
Suppose now that πij = α(x) and Pij ≡ νY.∏_{k∈K} A0k for some finite set of
indexes K; by hypothesis we have:
  α : tα [τx ] ∈ Γ 0      (5)
  Γ 0 , x : τx `T Pij      (6)
  base(τx ) < tα ∨ ∀k ∈ K. Mig_{πij .Pij}(k) =⇒ base(Γ 0 (fn(A0k ) \ {α})) < tα      (7)
Now x and Y are bound names so they are not altered by substitutions. The
substitution [ b/a ] can therefore only be affecting the truth of these conditions
On Hierarchical Communication Topologies in the π-calculus
35
when α = a or when a ∈ fn(A0k ) \ (Y ∪ {x}). Since we know a and b are assigned
the same type by Γ and Γ ⊆ Γ 0 , condition (5) still holds when substituting a for
b. Condition (6) holds by inductive hypotesis. The first disjunct of condition (7)
depends only on types, which are not changed by the substitution, so it holds
after applying it if and only if it holds before the application. To see that the
second disjunct also holds after the substitution we observe that the migratable
condition depends on x and fn(A0k ) ∩ Y which are preserved by the substitution;
moreover, if a ∈ fn(A0k ) \ {α} then Γ 0 (fn(A0k ) \ {α}) = Γ 0 (fn(A0k [ b/a ]) \ {α}).
This shows that the premises needed to derive Γ 0 , x : τx0 `T πij .Pij [ b/a ] are
implied by our hypothesis, which completes the proof.
D.5 Proof of Theorem 5
We will only prove the result for the case when P → Q is caused by a synchronising
send and receive action since the τ action case is similar and simpler. From
P → Q we know that P ≡ νW.(S k R k C) ∈ PnfT with S ≡ (ahbi.νYs .S 0 )+Ms and
R ≡ (a(x).νYr .R0 ) + Mr the synchronising sender and receiver respectively;
Q Q≡
k C). In what follows, let W 0 = W Ys Yr , C = h∈H Ch ,
νW YsQ
Yr .(S 0 k R0 [ b/x ] Q
S 0 = i∈I Si0 and R0 = j∈J Rj0 , all normal forms.
For annotated terms, the type system is syntax directed: there can be only
one proof derivation for each typable term. By Lemma 9.c, from the hypothesis
Γ `T P we can deduce Γ `T νW.(S k R k C). The proof derivation for this
typing judgment can only be of the following shape:
Γ W `T S
Γ W `T R ∀h ∈ H. Γ W `T Ch
Γ `T νW.(S k R k C)
Ψ
(8)
where Ψ represents the rest of the conditions of the Par rule.6 The fact that P
is typable implies that each of these premises must be provable. The derivation
proving Γ, W `T S must be of the form
a : ta [τb ] ∈ Γ W b : τb ∈ Γ W Γ W `T νYs .S 0
Γ W `T ahbi.νYs .S 0
Γ `T ahbi.νYs .S 0 + Ms
ΨMs
(9)
where Γ W `T νYs .S 0 is proved by an inference of the shape
∀i ∈ I. Γ W Ys `T Si0
∀i ∈ I. ΨSi0
Γ W `T νYs .S 0
(10)
Analogously, Γ W `T R must be proved by an inference with the following
shape
a : ta [τx ] ∈ Γ W Γ W, x : τx `T νYr .R0 ΨR0
Γ W `T a(x).νYr .R0
ΨMr
(11)
0
Γ W `T a(x).νYr .R + Mr
6
Note that Ψ is trivially true by P -safety of Γ .
36
E. D’Osualdo and C.-H. L. Ong
and to prove Γ W, x : τx `T νYr .R0
∀j ∈ J. Γ W, x : τx , Yr `T Rj0
Γ W, x : τx `T νYr .R0
∀j ∈ J. ΨRj0
(12)
We have to show that from this hypothesis we can infer that Γ `T Q or,
equivalently (by Lemma 9.c), that Γ `T Q0 where Q0 = νW Ys Yr .(S 0 k R0 [ b/x ] k
C). The derivation of this judgment can only end with an application of Par:
∀i ∈ I. Γ W 0 `T Si0
∀j ∈ J. Γ W 0 `T Rj0 [ b/x ] ∀h ∈ H. Γ W 0 `T Ch
Ψ0
Γ `T νW 0 .(S 0 k R0 [ b/x ] k C)
In what follows we show how we can infer these premises are provable as a
consequence of the provability of the premises of the proof of Γ `T νW.(S k R k
C).
From Lemma 9.b and Name Uniqueness, Γ W Ys `T Si0 from (10) implies
Γ W 0 `T Si0 for each i ∈ I.
Let Γr = Γ W, x : τx . We observe that by (9) and (11), τx = τb . From (11)
we know that Γr Yr `T Rj0 which, by Lemma 3, implies Γr Yr `T Rj0 [ b/x ]. By
Lemma 9.b we can infer Γr Yr Ys `T Rj0 [ b/x ] and by applying the same lemma
again using fn(Rj0 [ b/x ]) ⊆ dom(Γ W Yr Ys ) and Name Uniqueness we obtain
Γ W 0 `T Rj0 [ b/x ].
Again applying Lemma 9.b and Name Uniqueness, we have that Γ W `T Ch
implies Γ W 0 `T Ch for each h ∈ H.
To complete the proof we only need to prove that for each A ∈ {Si0 |
i ∈ I} ∪ {Rj0 | j ∈ J} ∪ {Ch | h ∈ H}, Ψ 0 = ∀(x : τx ) ∈ W 0 . x tied to A in Q0 =⇒
base(Γ (fn(A))) < base(τx ) holds. This is trivially true by the hypothesis that Γ
is P -safe.
D.6 Proof of Theorem 6
We will consider the input output synchronisation case as the τ action one is
similar and simpler. We will further assume that the sending action ahbi is such
that ν(a : τa ) and ν(b : τb ) are both active restrictions of P , i.e. (a : τa ) ∈ W ,
(b : τb ) ∈ W with P ≡ νW.(S k R k C). The case when any of these two names is
a free name of P can be easily handled with the aid of the assumption that Γ is
P -safe.
As in the proof of Theorem 5, the derivation of Γ `T P must follow the shape
of (8).
From T-shapedness of P we can conclude that both νYs .S 0 and νYr .R0 are
T-shaped. We note that substitutions do not affect T-compatibility since they
do not alter the set of bound names and their type annotations. Therefore,
we can infer that νYr .R0 [ b/a ] is T-shaped. By Lemma 4 we know that ϕ =
Φ(νW.(S k R k C)) ∈ FJP K, ϕr = Φ(νYr .R0 [ b/a ]) ∈ FJνYr .R0 [ b/x ]K and
ϕs = Φ(νYs .S 0 ) ∈ FJνYs .S 0 K. Let ϕr = ϕmig ] ϕ¬mig where only ϕmig contains
a leaf labelled with a term with b as a free name. These leaves will correspond
to the continuations Rj0 that migrate in a(x).νYr .R0 , after the application of the
substitution [ b/x ]. By assumption, inside P both S and R are in the scope of
the restriction bounding a and S must also be in the scope of the restriction
bounding b. Let ta = base(τa ) and tb = base(τb ), ϕ will contain two leaves nS and
nR labelled with S and R respectively, having a common ancestor na labelled
with (a, ta ); nS will have an ancestor nb labelled with (b, tb ). Let pa , pS and
pR be the paths in ϕ leading from a root to na , nS and nR respectively. By
T-compatibility of ϕ, we are left with only two possible cases: either 1) ta < tb
or 2) tb < ta .
Let us consider case 1) first. The tree in ϕ to which the nodes nS and nR
belong, would have the following shape:
[tree: na at the top, with nb below na and nS below nb , while nR sits elsewhere below na ]
Now, we want to transform ϕ, by manipulating this tree, into a forest ϕ0 that
is T-compatible by construction and such that there exists a term Q0 ≡ Q with
forest(Q0 ) = ϕ0 , so that we can conclude Q is T-shaped.
To do so, we introduce the following function, taking a labelled forest ϕ, a
path p in ϕ and a labelled forest ρ and returning a labelled forest:
  ins(ϕ, p, ρ) := (Nϕ ⊎ Nρ , <ϕ ⊎ <ρ ⊎ <ins , `ϕ ⊎ `ρ )
where n <ins n0 if n0 ∈ minρ (Nρ ) and, if `ρ (n0 ) = (y, ty ), then
  n ∈ maxϕ {m ∈ p | `ϕ (m) = (x, tx ), tx < ty }
or, if `ρ (n0 ) = A, then
  n ∈ maxϕ {m ∈ p | `ϕ (m) = (x, tx ), x ∈ fn(A)}.
Note that for each n0 , since p is a path, there can be at most one n such that
n ins n0 .
To obtain the desired ϕ0 , we first need to remove the leaves nS and nR from
ϕ, as they represent the sequential processes which reacted, obtaining a forest
ϕC . We argue that the ϕ0 we need is indeed
ϕ0 = ins(ϕ1 , pS , ϕmig )
ϕ1 = ins(ϕ2 , pR , ϕ¬mig )
ϕ2 = ins(ϕC , pS , ϕs )
It is easy to see that, by definition of ins, ϕ0 is T-compatible: ϕC , ϕs , ϕ¬mig and
ϕmig are T-compatible by hypothesis, ins adds parent-edges only when they do
not break T-compatibility.
0
To prove the claim we need to show that
forest of a term congruent
Qϕ is the
0
0
0
to νW Ys Yr .(S k R [ b/x ] k C). Let R = j∈J Rj0 , Jmig = {j ∈ J | x /νYr .R0 j},
J¬mig = J \ Jmig and Yr0 = {(x : τ ) ∈ Yr | x ∈ fn(Rj0 ), j ∈ J¬mig }. We know that
no Rj0 with j ∈ J¬mig can contain x as a free name so Rj0 [ b/x ] = Rj0 . Now
suppose we are able to prove that conditions 8.i, 8.ii and 8.iii of Lemma 6 hold
for ϕC , ϕ1 , ϕ2 and ϕ0 . Then we could use Lemma 6 to prove
a) ϕC = forest(QC ), QC ≡ QϕC = νW.C,
b) ϕ2 = forest(Q2 ), Q2 ≡ Qϕ2 = νW Ys .(S 0 k C),
c) ϕ1 = forest(Q1 ), Q1 ≡ Qϕ1 = νW Ys Yr0 .(S 0 k ∏_{j∈J¬mig} Rj0 k C),
d) ϕ0 = forest(Q0 ), Q0 ≡ Qϕ0 = νW Ys Yr .(S 0 k R0 [ b/x ] k C) ≡ Q
(it is straightforward to check that ϕC , ϕ2 , ϕ1 and ϕ0 have the right sets of nodes
and labels to give rise to the right terms). We then proceed to check for each of
the forests above that they satisfy conditions 8.i, 8.ii and 8.iii, thus proving the
theorem.
Condition 8.i requires that only leafs are labelled with sequential processes,
condition that is easily satisfied by all of the above forests since none of the
operations involved in their definition alters this property and the forests ϕ, ϕs
and ϕr satisfy it by construction.
Similarly, since νW.(S k R k C) is a normal form it satisfies Name Uniqueness,
8.ii is satisfied as we never use the same name more than once.
Condition 8.iii holds on ϕ and hence it holds on ϕC since the latter contains
all the nodes of ϕ labelled with names.
Now consider ϕs : in the proof of Theorem 5 we established that Γ `T P implies
that the premises ΨSi0 from (10) hold, that is base(Γ W (fn(Si0 ))) < base(τx ) holds
for all Si0 for i ∈ I and all (x : τx ) ∈ Ys such that x /νYs .S 0 i. Since fn(Si0 ) ∩ W ⊆
fn(S 0 ) we know that every name (w : τw ) ∈ W such that w ∈ fn(Si0 ) will appear
as a label (w, base(τw )) of a node nw in pS . Therefore, by definition of ins, we
have that for each n ∈ NϕC , nw <ϕ2 n; in other words, in ϕ2 , every leaf in Nϕs
labelled with Si0 is a descendent of a node labelled with (w, base(τw )) for each
(w : τw ) ∈ W with w ∈ fn(Si0 ). This verifies condition 8.iii on ϕ2 .
Similarly, by (12) the following premise must hold: base(Γ W (fn(Rj0 ))) <
base(τx ) for all Rj0 for j ∈ J and all (y : τy ) ∈ Yr such that y /νYr .R0 j. We can
then apply the same argument we applied to ϕ2 to show that condition 8.iii holds
on ϕ1 .
From (11) and the assumption ta < tb , we can conclude that the following
premise must hold: base(Γ W (fn(Rj0 ) \ {a})) < ta for each j ∈ J such that Rj0 is
migratable in a(x).νYr .R0 , i.e j ∈ Jmig . From this we can conclude that for every
name (w : τw ) ∈ W such that w ∈ fn(Rj0 [ b/x ]) with j ∈ Jmig there must be a
node in pa (and hence in pS ) labelled with (w, base(τw )). Now, some of the leaves
in ϕmig will be labelled with terms having b as a free name; we show that in fact
every node in ϕmig labelled with a (y, ty ) is indeed such that ty < tb . From the
proof of Theorem 5 and Lemma 3 we know that from the hypothesis we can infer
that Γ W `T νYr .R0 [ b/x ] and hence that for each j ∈ Jmig and each (y : τy ) ∈ Yr ,
if y is tied to Rj0 [ b/x ] in νYr .R0 [ b/x ] then base(Γ W (Rj0 [ b/x ])) < base(τy ). By
Lemma 10 we know that every root of ϕmig is labelled with a name (y, ty )
which is tied to each of the leaves in its tree. Therefore each such ty satisfies
base(Γ W (Rj0 [ b/x ])) < ty . By construction, there exists at least one j ∈ Jmig
such that x ∈ fn(Rj0 ) and consequently such that b ∈ fn(Rj0 [ b/x ]). From this
and b ∈ W we can conclude tb < ty for ty labelling a root in ϕmig . We can
then conclude that {nb } = maxϕ2 {m ∈ pS | `ϕ (m) = (z, tz ), tz < ty } for each
ty labelling a root of ϕmig , which means that each tree of ϕmig is placed as a
subtree of nb in ϕ0 . This verifies condition 8.iii for ϕ0 completing the proof.
Pictorially, the tree containing nS and nR in ϕ is now transformed in the
following tree in ϕ0 :
[diagram omitted: the paths pS (passing through nb ) and pR below na , with the trees of ϕs attached along pS , the trees of ϕ¬mig attached along pR , and the trees of ϕmig attached below nb ]
Case 2) — where tb < ta — is simpler as the migrating continuations can be
treated just as the non-migrating ones.
D.7 Role of ϕmig , ϕ¬mig and ins
To illustrate the role of ϕmig , ϕ¬mig and the ins operation in the above proof, we
show an example that would not be typable if we choose a simpler “migration”
transformation.
Consider the normal form P = νa b c.(!A k ahci) where A = a(x).νd.(ahdi k
bhxi). To make types consistent we need annotations satisfying a : ta [t], b : tb [t],
c : t and d : t. Any T satisfying the constraints tb < ta < t would allow us to prove
∅ `T P ; let then T be the chain b – a – t (so tb <T ta <T t), with ta = a, tb = b and t = t.
Let P 0 = νa b c d.(!A k ahdi k bhci) be the (only) successor of P . The following
picture shows Φ(P ) in the middle, on the left a forest in FJP 0 K extracted by just
putting the continuation of A under the message, on the right the forest obtained
by using ins on the non-migrating continuations of A:
[diagrams omitted: on the left a forest in FJP 0 K obtained by putting the continuation of A under the message; in the middle Φ(P ); on the right the forest obtained by using ins]
Clearly, the tree on the left is not T-compatible since c and d have the same base
type t. Instead, the tree on the right can be obtained because ins inserts the
non-migrating continuation as close to the root as possible.
E Supplementary Material for Section 6
E.1 A type inference example
Take the term ! τ .νs c.P of Example 2. We start by annotating each restriction
νa with a fresh type variable ν(a : ta ). Then we perform a type derivation as in
Example 10, obtaining the following data-flow constraints:
ts = ts [tz ]
tc = tc [tx ]
tx = tx [ty ]
tz = tx = tm = tz [td ]
from which we learn that:
- td is unconstrained; we use the base type variable td for base(td );
- ty = td ;
- tx = tz and base(tm ) = tx .
We can therefore completely specify the types just by associating ts , tc , tx and td
to nodes in a forest: all the types would be determined as a consequence of the
data-flow constraints, apart from td to which we can safely assign the type td .
During the type derivation we also collected the following base type constraints:
base(tz ) < base(td )
base(tc ) < base(tm )
base(tc ) < base(tx )
base(tx ) < base(tc ) ∨ base(ts ) < base(tc )
These can be simplified and normalised using the equations on types seen above
obtaining the set
Cνs c.P = {tx < tc ∨ ts < tc , tc < tx , tx < td }
Hence any choice of T ⊇ {tx , tc , ts , td } such that ts <T tc <T tx <T td would
make the typing succeed.
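For instance, feeding these constraints to a small solver such as the one sketched in Section 6 and trying the two disjuncts in turn yields exactly such a chain; the function name below is the one assumed in that sketch, and the variable names are illustrative.

hard = [("tc", "tx"), ("tx", "td")]          # tc < tx and tx < td
for choice in (("tx", "tc"), ("ts", "tc")):  # the two disjuncts of tx < tc ∨ ts < tc
    chain = solve_base_constraints({"ts", "tc", "tx", "td"}, hard + [choice])
    if chain is not None:
        print(chain)   # the first disjunct is cyclic; the second gives ts, tc, tx, td
        break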
F Supplementary Material for Section 7
F.1 Encoding of Reset nets
A reset net N with n places is a finite set of transitions of the form (u, R) where
u ∈ {−1, 0, +1}^n is the update vector and R ⊆ {1, . . . , n} is the reset set. A
marking m is a vector in N^n ; a transition (u, R) is said to be enabled at m
if m − u > 0. The semantics of a reset net N with initial marking m0 is the
transition system (N^n , [⟩, m0 ) where m [⟩ m′ if there exists a transition (u, R) in
N that is enabled in m and such that
  m′_i = m_i + u_i   if i ∉ R
  m′_i = 0           if i ∈ R
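As a concrete reading of this transition relation, here is a small Python sketch; the representation (markings as tuples, transitions as pairs of an update vector and a reset set) is chosen for illustration, and we assume the intended enabledness condition is simply that applying the update keeps every place non-negative.

def successors(m, transitions):
    # One-step successors of marking m under a set of reset-net transitions.
    out = []
    for u, R in transitions:
        updated = [mi + ui for mi, ui in zip(m, u)]
        if any(v < 0 for v in updated):   # transition not enabled (assumed condition)
            continue
        out.append(tuple(0 if i in R else v for i, v in enumerate(updated)))
    return out

# Example: one place holding 2 tokens, with a decrement and a reset transition.
print(successors((2,), [((-1,), set()), ((0,), {0})]))   # [(1,), (0,)]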
On Hierarchical Communication Topologies in the π-calculus
41
To simulate place i in a reset net we can construct a term that implements a
counter with increment and reset:
Ci = ! pi (t). inc i .(t k pi hti) + dec i .(t.pi hti) + rst i .(νt0i .pi ht0i i)
Here, the number of processes t in parallel with pi hti represent the current
value of the marking in place i. A transition (u, R) is encoded as a process
Tu,R = ! valid .Du .Iu .ZR .valid where Du = dec j1 . · · · .dec jk with {j1 , . . . , jk } =
{j | uj < 0}, Iu = inc i1 . · · · .inc il with {i1 , . . . , il } = {i | ui > 0}, and ZR =
rst r1 . · · · .rst rm with R = {r1 , . . . , rm }.
A marking m is encoded by a process
  PN,m = valid k ∏_{1≤i≤n} (pi hti i k Ci k ti^{m_i}) k ∏_{(u,R)∈N} Tu,R
Actions on the name valid act as a global lock: a transition may need many steps
to complete, but by acquiring and releasing valid it can ensure no other transition
will fire in between. If a transition tries to decrement a counter below zero, the
counter would deadlock causing valid to be never released again. Therefore, the
encoding preserves coverability: m is coverable in N from m0 if and only if PN,m
is coverable from PN,m0 . Reachability is not preserved because each reset would
generate some ‘garbage’ term νt.(t k . . . k t) and thus, even when m is reachable,
PN,m might not be reachable alone, but only in parallel with some garbage.
The reader can verify that any encoding P can be typed under the hierarchy
valid < inc 1 < dec 1 < rst 1 < t1 < p1 < · · · <
inc n < dec n < rst n < tn < pn < t01 < · · · < t0n
by annotating each restriction νt0i as ν(t0i : t0i ) and using the P -safe environment
{(x : x) | x ∈ fn(P )}.
F.2 A weak encoding of Minsky machines
A k-counters Minsky machine is a finite list of instructions I1 , . . . , In each of
which can be either an increase or a decrease command. An increase command
inc i j increases counter i and jumps to instruction Ij . A decrease command
dec i j1 j2 decreases counter i jumping to instruction Ij1 if the counter is greater
than zero, or jumps to Ij2 otherwise. We implement a counter i with the process
Ci of Example 3. An increase Im = inc i j is encoded by ! im .inc i .ij . A decrease
Im = dec i j1 j2 is encoded by ! im .(dec i .ij1 + rst i .ij2 ) . A configuration of a
Minsky machine is the vector of values of its registers r1 , . . . , rk and the current
instruction j; its encoding is the term
  ∏_{1≤i≤k} νti .(pi hti i k ti^{r_i}) k ij k ∏_{1≤m≤n} PIm
where PIm is the encoding of the instruction Im .
When a counter is zero, performing a decrease command on it in the encoding
presents a non-deterministic choice between sending a decrease or a reset signal
to the counter. In the branch where the decrease signal is sent, the counter
process will deadlock, ending up in a term that is clearly not an encoding of a
configuration of the Minsky machine. If instead a reset signal is sent, the counter
will refresh the name t with a new name, but the old one would be discarded as
there is no sequential term which knows it.
When a counter is not zero, the branch where the decrease signal is sent
will simply succeed, while the resetting one will generate some ‘garbage’ term
νt.(t k . . . k t) in parallel with the rest of the encoding of the Minsky machine’s
configuration.
A configuration of the machine is thus reachable if and only if its encoding
(without garbage) is reachable from the encoding of the machine. This proves
Theorem 8.
| 6 |
Convergence Rate of Riemannian Hamiltonian Monte Carlo and
Faster Polytope Volume Computation
arXiv:1710.06261v1 [] 17 Oct 2017
Yin Tat Lee∗, Santosh S. Vempala†
October 18, 2017
Abstract
We give the first rigorous proof of the convergence of Riemannian Hamiltonian Monte Carlo,
a general (and practical) method for sampling Gibbs distributions. Our analysis shows that
the rate of convergence is bounded in terms of natural smoothness parameters of an associated
Riemannian manifold. We then apply the method with the manifold defined by the log barrier
function to the problems of (1) uniformly sampling a polytope and (2) computing its volume, the
latter by extending Gaussian cooling to the manifold setting. In both cases, the total number
of steps needed is O∗(mn^{2/3}), improving the state of the art. A key ingredient of our analysis is
a proof of an analog of the KLS conjecture for Gibbs distributions over manifolds.
∗ University of Washington and Microsoft Research, yintat@uw.edu
† Georgia Tech, vempala@gatech.edu

Contents

1 Introduction
  1.1 Results
  1.2 Approach and contributions
  1.3 Practicality
  1.4 Notation
  1.5 Organization
2 Basics of Hamiltonian Monte Carlo
  2.1 Hamiltonian Monte Carlo on Riemannian manifolds
3 Convergence of Riemannian Hamiltonian Monte Carlo
  3.1 Basics of geometric Markov chains
  3.2 Overlap of one-step distributions
    3.2.1 Proof Outline
    3.2.2 Variation of Hamiltonian curve
    3.2.3 Local Uniqueness of Hamiltonian Curves
    3.2.4 Smoothness of one-step distributions
  3.3 Convergence bound
4 Improved analysis of the convergence
  4.1 Improved one-to-one correspondence for Hamiltonian curve
  4.2 Improved smoothness of one-step distributions
5 Gibbs sampling on manifolds
  5.1 Isoperimetry for Hessian manifolds
  5.2 Sampling with the log barrier
6 Polytope volume computation: Gaussian cooling on manifolds
  6.1 Algorithm: cooling schedule
  6.2 Correctness of the algorithm
    6.2.1 Initial and terminal conditions
    6.2.2 Variance of Yx
    6.2.3 Main lemma
  6.3 Volume computation with the log barrier
7 Logarithmic barrier
  7.1 Riemannian geometry on ML
  7.2 Hamiltonian walk on ML
  7.3 Randomness of the Hamiltonian flow (ℓ0 )
  7.4 Parameters R1 , R2 and R3
  7.5 Stability of L2 + L4 + L∞ norm (ℓ1 )
  7.6 Mixing Time
A Matrix ODE
B Concentration
C Calculus
D Basic definitions of Riemannian geometry
  D.1 Curvature
  D.2 Hessian manifolds
1 Introduction
Hamiltonian dynamics provide an elegant alternative to Newtonian mechanics. The Hamiltonian
H, which captures jointly the potential and kinetic energy of a particle, is a function of its position
and velocity. First-order differential equations describe the change in both.
$$\frac{dx}{dt} = \frac{\partial H(x, v)}{\partial v}, \qquad \frac{dv}{dt} = -\frac{\partial H(x, v)}{\partial x}.$$
As we review in Section 2, these equations preserve the Hamiltonian H.
Riemannian Hamiltonian Monte Carlo (or RHMC) [26, 25][6, 7] is a Markov Chain Monte Carlo
method for sampling from a desired distribution. The target distribution is encoded in the definition
of the Hamiltonian. Each step of the method consists of the following: At a current point x,
1. Pick a random velocity y according to a local distribution defined by x (in the simplest setting,
this is the standard Gaussian distribution for every x).
2. Move along the Hamiltonian curve defined by Hamiltonian dynamics at (x, y) for time (distance) δ.
For a suitable choice of H, the marginal distribution of the current point x approaches the desired
target distribution. Conceptually, the main advantage of RHMC is that it does not require a
Metropolis filter (as in the Metropolis-Hastings method) and its step sizes are therefore not severely
limited even in high dimension.
Over the past two decades, RHMC has become very popular in Statistics and Machine Learning,
being applied to Bayesian learning, to evaluate expectations and likelihood of large models by sampling from the appropriate Gibbs distribution, etc. It has been reported to significantly outperform
other known methods [2, 25] and much effort has been made to make each step efficient by the use
of numerical methods for solving ODEs.
In spite of all these developments and the remarkable empirical popularity of RHMC, analyzing
its rate of convergence and thus rigorously explaining its success has remained an open question.
1.1 Results
In this paper, we analyze the mixing rate of Hamiltonian Monte Carlo for a general function f as a
Gibbs sampler, i.e., to generate samples from the density proportional to $e^{-f(x)}$. The corresponding Hamiltonian is $H(x, v) = f(x) + \frac{1}{2}\log((2\pi)^n \det g(x)) + \frac{1}{2} v^T g(x)^{-1} v$ for some metric $g$. We show
that for x in a compact manifold, the conductance of the Markov chain is bounded in terms of
a few parameters of the metric g and the function f . The parameters and resulting bounds are
given in Corollary 28 and Theorem 30. Roughly speaking, the guarantee says that Hamiltonian
Monte Carlo mixes in polynomial time for smooth Hamiltonians. We note that these guarantees
use only the smoothness and Cheeger constant (expansion) of the function, without any convexity
type assumptions. Thus, they might provide insight in nonconvex settings where (R)HMC is often
applied.
We then focus on logconcave densities in Rn , i.e., f (x) is a convex function. This class of
functions appears naturally in many contexts and is known to be sampleable in polynomial-time
given access to a function value oracle. For logconcave densities, the current fastest sampling
algorithms use $n^4$ function calls, even for uniform sampling [20, 23], and $n^{2.5}$ oracle calls given a warm start after appropriate rounding (linear transformation) [14]. In the prototypical setting of uniform sampling from a polytope $Ax \ge b$ with $m$ inequalities, the general complexity is no better, with each function evaluation taking $O(mn)$ arithmetic operations, for an overall complexity of $n^4 \cdot mn = mn^5$ in the worst case and $n^{2.5} \cdot mn$ after rounding from a warm start. The work of Kannan and Narayanan [11] gives an algorithm of complexity $mn^2 \cdot mn^{\omega-1}$ from an arbitrary start and $mn \cdot mn^{\omega-1}$ from a warm start (here $\omega$ is the matrix multiplication exponent), which is better than the general case when the number of facets $m$ is not too large. This was recently improved to $mn^{0.75} \cdot mn^{\omega-1}$ from a warm start [13]; the subquadratic number of steps is significant since all known general oracle methods cannot go below a quadratic number of steps. The leading algorithms and their guarantees are summarized in Table 1.
Year       | Algorithm     | Steps     | Cost per step
1997 [10]  | Ball walk#    | n^3       | mn
2003 [21]  | Hit-and-run#  | n^3       | mn
2009 [11]  | Dikin walk    | mn        | mn^{ω−1}
2016 [13]  | Geodesic walk | mn^{3/4}  | mn^{ω−1}
2016 [15]  | Ball walk#    | n^{2.5}   | mn
This paper | RHMC          | mn^{2/3}  | mn^{ω−1}

Table 1: The complexity of uniform polytope sampling from a warm start, where each step of every algorithm uses $\tilde{O}(n)$ bits of randomness. The entries marked # are for general convex bodies presented by oracles, while the rest are for polytopes.
In this paper, using RHMC, we improve the complexity of sampling polytopes. In fact we do this for a general family of Gibbs distributions, of the form $e^{-\alpha\phi(x)}$ where $\phi(x)$ is a convex function over a polytope. When $\phi(x)$ is the standard logarithmic barrier function and $g(x)$ is its Hessian, we get a sampling method that mixes in only
$$n^{1/6}m^{1/2} + \frac{n^{2/5}m^{2/5}}{\alpha^{1/5} + m^{-1/5}} + \frac{n^{2/3}}{\alpha + m^{-1}}$$
steps from a warm start! When $\alpha = 1/m$, the resulting distribution is very close to uniform over the polytope.
Theorem 1. Let φ be the logarithmic barrier for a polytope M with m constraints and n variables.
Hamiltonian Monte Carlo applied to the function f = exp(−αφ(x)) and the metric given by ∇2 φ
with appropriate step size mixes in
$$\tilde{O}\!\left(\frac{n^{2/3}}{\alpha + m^{-1}} + \frac{m^{1/3}n^{1/3}}{\alpha^{1/3} + m^{-1/3}} + m^{1/2}n^{1/6}\right)$$
steps where each step is the solution of a Hamiltonian ODE.
In recent independent work, Mangoubi and Smith [24] analyze Euclidean HMC in the oracle setting, i.e., assuming an oracle for evaluating φ. Their analysis formally gives a dimension-independent
convergence rate based on certain regularity assumptions such as strong convexity and smoothness
of the Hamiltonian H. Unfortunately, these assumptions do not hold for the polytope sampling
problem.
An important application of sampling is integration. The complexity of integration for general logconcave functions is also $n^4$ oracle calls. For polytopes, the most natural question is computing its volume. For this problem, the current best complexity is $n^4 \cdot mn$, where the factor of $O(mn)$ is the complexity of checking membership in a polytope. Thus, even for explicitly specified polytopes, the complexity of estimating the volume from previous work is asymptotically the same as that
for a general convex body given by a membership oracle. Here we obtain a volume algorithm with complexity $mn^{2/3} \cdot mn^{\omega-1}$, improving substantially on previous algorithms. The volume algorithm is based on using Hamiltonian Monte Carlo for sampling from a sequence of Gibbs distributions over polytopes. We remark that in the case when $m = O(n)$,¹ the final complexity is $o(n^4)$ arithmetic operations, improving by more than a quadratic factor in the dimension over the previous best complexity of $\tilde{O}(n^6)$ operations for arbitrary polytopes. These results and prior developments are given in Table 2.
Year                       | Algorithm                  | Steps    | Cost per step
1989 [5]                   | DFK                        | n^{23}   | mn
1989–93 [17, 4, 1, 18, 19] | many improvements          | n^7      | mn
1997 [10]                  | DFK, Speedy walk, isotropy | n^5      | mn
2003 [22]                  | Annealing, hit-and-run     | n^4      | mn
2015 [3]                   | Gaussian Cooling*          | n^3      | mn
This paper                 | RHMC + Gaussian Cooling    | mn^{2/3} | mn^{ω−1}

Table 2: The complexity of volume estimation; each step uses $\tilde{O}(n)$ bits of randomness, all except the last for general convex bodies (the result marked * is for well-rounded convex bodies). The current paper applies to general polytopes, and is the first improvement utilizing their structure.
Theorem 2. For any polytope $P = \{x : Ax \ge b\}$ with $m$ constraints and $n$ variables, and any $\varepsilon > 0$, the Hamiltonian volume algorithm estimates the volume of $P$ to within a $1 \pm \varepsilon$ multiplicative factor using $\tilde{O}(mn^{2/3}\varepsilon^{-2})$ steps, where each step consists of solving a first-order ODE and takes time $\tilde{O}(mn^{\omega-1} L^{O(1)} \log^{O(1)}\frac{1}{\varepsilon})$, and $L$ is the bit complexity² of the polytope.
A key ingredient in the analysis of RHMC is a new isoperimetric inequality for Gibbs distributions over manifolds. This inequality can be seen as evidence of a manifold version of the
KLS hyperplane conjecture. For the family of Gibbs distributions induced by convex functions with
convex Hessians, the expansion is within a constant factor of that of a hyperplane cut. This result
might be of independent interest.
1.2 Approach and contributions
Traditional methods to sample from distributions in Rn are based on random walks that take straight
line steps (grid walk, ball walk, hit-and-run). While this leads to polynomial-time convergence for
logconcave distributions, the length of each step has to be small due to boundary effects, and a
Metropolis filter (rejection sampling) has to be applied to ensure the limiting
distribution is the
desired one. These walks cannot afford a step of length greater than $\delta = O(1/\sqrt{n})$ for a distribution in isotropic position, and take a quadratic number of steps even for the hypercube. The Dikin walk for polytopes [11], which explicitly takes into account the boundary of the polytope at each step, has a
varying step size, but still runs into similar issues and the bound on its convergence rate is O(mn)
for a polytope with m facets.
¹ We suspect that the LS barrier [12] might be used to get a faster algorithm even in the regime where $m$ is sub-exponential. However, our proof requires a delicate estimate of the fourth derivative of the barrier functions. Therefore, such a result either requires a new proof or an unpleasantly long version of the current proof.
² $L = \log(m + d_{\max} + \|b\|_\infty)$ where $d_{\max}$ is the largest absolute value of the determinant of a square sub-matrix of $A$.
In a recent paper [13], we introduced the geodesic walk. Rather than using straight lines in
Euclidean space, each step of the walk is along a geodesic (locally shortest path) of a Riemannian
metric. More precisely, each step first makes a deterministic move depending on the current point
(drift), then moves along a geodesic in a random initial direction and finally uses a Metropolis filter.
Each step can be computed by solving a first-order ODE. Due to the combination of drift and
geodesic, the local 1-step distributions are smoother than that of the Dikin walk and larger steps
can be taken while keeping a bounded rejection probability for the filter. For sampling polytopes, the manifold/metric defined by the standard log barrier gives a convergence rate of $mn^{3/4}$, going below the quadratic (or higher) bound of all previous sampling methods.
One major difficulty with geodesic walk is ensuring the stationary distribution is uniform. For
high dimensional problems, this necessitates taking a sufficiently small step size and then rejecting
some samples according to the desired transition probabilities according to Metropolis filter. Unfortunately, computing these transition probabilities can be very expensive. For the geodesic walk,
it entails solving an n × n size matrix ODE.
Hamiltonian Monte Carlo bears some similarity to the geodesic walk — each step is a random (non-linear) curve. But the Hamiltonian-preserving nature of the process obviates the most expensive ingredient, the Metropolis filter. Due to this, the step size can be made longer, and as a result we obtain a faster sampling algorithm for polytopes that mixes in $mn^{2/3}$ steps (the per-step complexity remains essentially the same, needing the solution of an ODE).
To get a faster algorithm for volume computation, we extend the analysis to a general family of Gibbs distributions, including $f(x) = e^{-\alpha\phi(x)}$ where $\phi(x)$ is the standard log-barrier and $\alpha > 0$. We show that the smoothness we need for the sampling corresponds to a variant of self-concordance defined in Definition 46. Furthermore, we establish an isoperimetric inequality for this class of functions. This can be viewed as an extension of the KLS hyperplane conjecture from Euclidean to Riemannian metrics (the analogous case in Euclidean space to what we prove here is the isoperimetry of the Gaussian density function multiplied by any logconcave function, a case for which the KLS conjecture holds). The mixing rate for this family of functions is sublinear for $\alpha = \Omega(1)$.
Finally, we study the Gaussian Cooling schedule of [3]. We show that in the manifold setting, the Gaussian distribution $e^{-\|x\|^2/2}$ can be replaced by $e^{-\alpha\phi(x)}$. Moreover, the speed of Gaussian Cooling depends on the “thin-shell” constant of the manifold and the classical self-concordance of $\phi$. Combining all of these ideas, we obtain a faster algorithm for polytope volume computation. The resulting complexity of polytope volume computation is the same as that of sampling uniformly from a warm start: $mn^{2/3}$ steps. To illustrate the improvement, for polytopes with $m = O(n)$ facets, the new bound is $n^{5/3}$ while the previous best bound was $n^4$.
1.3 Practicality
In experiments, the ball walk and hit-and-run seem to mix in $n^2$ steps, the geodesic walk seems
to mix in sublinear number of steps (due to the Metropolis filter bottleneck) and RHMC seems to
mix in only polylogarithmic number of steps. One advantage of RHMC compared to the geodesic
walk is that it does not require the expensive Metropolis filter that involves solving n × n matrix
ODEs. In the future, we plan to do an empirical comparison study of different sampling algorithms.
We are hopeful that using RHMC we might finally be able to sample from polytopes in millions of
dimensions after more than three decades of research on this topic!
1.4 Notation
Throughout the paper, we use lowercase letters for vectors and vector fields and uppercase letters for matrices and tensors. We use $e_k$ to denote coordinate vectors. We use $\frac{d}{dt}$ for the usual derivative, e.g. $\frac{df(c(t))}{dt}$ is the derivative of some function $f$ along a curve $c$ parametrized by $t$; we use $\frac{\partial}{\partial v}$ for the usual partial derivative. We use $D^k f(x)[v_1, v_2, \cdots, v_k]$ for the $k$-th directional derivative of $f$ at $x$ along $v_1, v_2, \cdots, v_k$. We use $\nabla$ for the usual gradient and the connection (manifold derivative, defined in Section D, which takes into account the local metric), $D_v$ for the directional derivative of a vector with respect to the vector (or vector field) $v$ (again, defined in Section D), and $D_t$ if the curve $v(t)$ is clear from the context. We use $g$ for the local metric. Given a point $x \in M$, $g$ is a matrix with entries $g_{ij}$. Its inverse has entries $g^{ij}$. Also, $n$ is the dimension and $m$ the number of inequalities. We use $d_{TV}$ for the total variation (or $L_1$) distance between two distributions.
1.5 Organization
In Section 2, we define the Riemannian Hamiltonian Monte Carlo and study its basic properties
such as time-reversibility. In Section 3, we give the first convergence rate analysis of RHMC.
However, the convergence rate is weak for the sampling applications (it is polynomial, but not better
than previous methods). In Section 4, we introduce more parameters and use them to get a tighter
analysis of RHMC. In Section 5, we study the isoperimetric constant of f (x) = e−αφ(x) under the
metric ∇2 φ(x). In Section 6, we study the generalized Gaussian Cooling schedule and its relation
to the thin-shell constant. Finally, in Section 7, we compute the parameters we need for the log
barrier function.
2 Basics of Hamiltonian Monte Carlo
In this section, we define the Hamiltonian Monte Carlo method for sampling from a general distribution e−H(x,y) . Hamiltonian Monte Carlo uses curves instead of straight lines and this makes
the walk time-reversible even if the target distribution is not uniform, with no need for a rejection
sampling step. In contrast, classical approaches such as the ball walk require an explicit rejection
step to converge to a desired stationary distribution.
Definition 3. Given a continuous, twice-differentiable function $H : M \times \mathbb{R}^n \subseteq \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ (called the Hamiltonian, which often corresponds to the total energy of a system), where $M$ is the $x$ domain of $H$, we say $(x(t), y(t))$ follows a Hamiltonian curve if it satisfies the Hamiltonian equations
$$\frac{dx}{dt} = \frac{\partial H(x,y)}{\partial y}, \qquad \frac{dy}{dt} = -\frac{\partial H(x,y)}{\partial x}. \qquad (2.1)$$
We define the map $T_\delta(x, y) \stackrel{\text{def}}{=} (x(\delta), y(\delta))$ where $(x(t), y(t))$ follows the Hamiltonian curve with the initial condition $(x(0), y(0)) = (x, y)$.
Hamiltonian Monte Carlo is the result of a sequence of randomly generated Hamiltonian curves.

Algorithm 1: Hamiltonian Monte Carlo
Input: some initial point $x^{(1)} \in M$.
for $k = 1, 2, \cdots, T$ do
    Sample $y^{(k+\frac{1}{2})}$ according to $e^{-H(x^{(k)}, y)}/\pi(x^{(k)})$ where $\pi(x) = \int_{\mathbb{R}^n} e^{-H(x,y)}\,dy$.
    With probability $\frac{1}{2}$, set $(x^{(k+1)}, y^{(k+1)}) = T_\delta(x^{(k)}, y^{(k+\frac{1}{2})})$.
    Otherwise, $(x^{(k+1)}, y^{(k+1)}) = T_{-\delta}(x^{(k)}, y^{(k+\frac{1}{2})})$.
end
Output: $(x^{(T+1)}, y^{(T+1)})$.
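As an illustration of Algorithm 1, here is a minimal Python sketch of one step for the special case $H(x,y) = f(x) + \frac{1}{2}\|y\|^2$, using a generic ODE solver for $T_\delta$; the target $f$, the tolerances and the step size are placeholder choices, not the implementation analyzed in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hmc_step(x, grad_f, delta, rng):
    """One step of Algorithm 1 for H(x, y) = f(x) + 0.5*||y||^2."""
    n = len(x)
    y = rng.standard_normal(n)                       # y ~ e^{-H(x,.)}/pi(x) is N(0, I) here
    sign = delta if rng.random() < 0.5 else -delta   # apply T_delta or T_{-delta}

    def hamiltonian_field(t, state):
        q, p = state[:n], state[n:]
        return np.concatenate([p, -grad_f(q)])       # dx/dt = dH/dy, dy/dt = -dH/dx

    sol = solve_ivp(hamiltonian_field, (0.0, sign), np.concatenate([x, y]),
                    rtol=1e-8, atol=1e-10)
    return sol.y[:n, -1]                             # new position x^{(k+1)}

# Example: sample from e^{-f} with f(x) = 0.5*||x||^2 (standard Gaussian target).
rng = np.random.default_rng(0)
x = np.zeros(3)
for _ in range(1000):
    x = hmc_step(x, grad_f=lambda q: q, delta=0.5, rng=rng)
print(x)
```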
Lemma 4 (Energy Conservation). For any Hamiltonian curve $(x(t), y(t))$, we have that
$$\frac{d}{dt} H(x(t), y(t)) = 0.$$
Proof. Note that
$$\frac{d}{dt} H(x(t), y(t)) = \frac{\partial H}{\partial x}\frac{dx}{dt} + \frac{\partial H}{\partial y}\frac{dy}{dt} = \frac{\partial H}{\partial x}\frac{\partial H}{\partial y} - \frac{\partial H}{\partial y}\frac{\partial H}{\partial x} = 0.$$
Lemma 5 (Measure Preservation). For any $t \ge 0$, we have that
$$\det(DT_t(x,y)) = 1$$
where $DT_t(x,y)$ is the Jacobian of the map $T_t$ at the point $(x,y)$.
Proof. Let $(x(t,s), y(t,s))$ be a family of Hamiltonian curves given by $T_t(x + s d_x, y + s d_y)$. We write
$$u(t) = \frac{\partial}{\partial s} x(t,s)\Big|_{s=0}, \qquad v(t) = \frac{\partial}{\partial s} y(t,s)\Big|_{s=0}.$$
By differentiating the Hamiltonian equations (2.1) w.r.t. $s$, we have that
$$\frac{du}{dt} = \frac{\partial^2 H(x,y)}{\partial y\,\partial x} u + \frac{\partial^2 H(x,y)}{\partial y\,\partial y} v, \qquad \frac{dv}{dt} = -\frac{\partial^2 H(x,y)}{\partial x\,\partial x} u - \frac{\partial^2 H(x,y)}{\partial x\,\partial y} v, \qquad (u(0), v(0)) = (d_x, d_y).$$
This can be captured by the following matrix ODE
$$\frac{d\Phi}{dt} = \begin{pmatrix} \frac{\partial^2 H(x(t),y(t))}{\partial y\,\partial x} & \frac{\partial^2 H(x(t),y(t))}{\partial y\,\partial y} \\ -\frac{\partial^2 H(x(t),y(t))}{\partial x\,\partial x} & -\frac{\partial^2 H(x(t),y(t))}{\partial x\,\partial y} \end{pmatrix}\Phi(t), \qquad \Phi(0) = I,$$
using the equation
$$DT_t(x,y)\begin{pmatrix} d_x \\ d_y \end{pmatrix} = \begin{pmatrix} u(t) \\ v(t) \end{pmatrix} = \Phi(t)\begin{pmatrix} d_x \\ d_y \end{pmatrix}.$$
Therefore, $DT_t(x,y) = \Phi(t)$. Next, we observe that
$$\frac{d}{dt}\log\det\Phi(t) = \mathrm{Tr}\left(\Phi(t)^{-1}\frac{d}{dt}\Phi(t)\right) = \mathrm{Tr}\begin{pmatrix} \frac{\partial^2 H}{\partial y\,\partial x} & \frac{\partial^2 H}{\partial y\,\partial y} \\ -\frac{\partial^2 H}{\partial x\,\partial x} & -\frac{\partial^2 H}{\partial x\,\partial y} \end{pmatrix} = 0.$$
Hence,
$$\det\Phi(t) = \det\Phi(0) = 1.$$
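Both conservation properties are easy to check numerically. The sketch below, with an arbitrary quadratic Hamiltonian chosen only for illustration, verifies that $H$ is constant along the flow and that the Jacobian of $T_t$, estimated by finite differences, has determinant 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.diag([1.0, 3.0])                       # toy Hamiltonian H = 0.5 x^T A x + 0.5 |y|^2
H = lambda x, y: 0.5 * x @ A @ x + 0.5 * y @ y

def T(t, x, y):
    field = lambda s, z: np.concatenate([z[2:], -A @ z[:2]])   # (dx/dt, dy/dt)
    return solve_ivp(field, (0, t), np.concatenate([x, y]),
                     rtol=1e-10, atol=1e-12).y[:, -1]

x0, y0 = np.array([1.0, -0.5]), np.array([0.3, 0.7])
z1 = T(2.0, x0, y0)
print("energy drift:", H(z1[:2], z1[2:]) - H(x0, y0))          # ~ 0 (Lemma 4)

eps, z0 = 1e-6, np.concatenate([x0, y0])
J = np.column_stack([(T(2.0, *np.split(z0 + eps * e, 2)) - z1) / eps
                     for e in np.eye(4)])
print("det DT_t:", np.linalg.det(J))                           # ~ 1 (Lemma 5)
```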
Using the previous two lemmas, we now show that Hamiltonian Monte Carlo indeed converges
to the desired distribution.
Lemma 6 (Time reversibility). Let $p_x(x')$ denote the probability density of one step of the Hamiltonian Monte Carlo starting at $x$. We have that
$$\pi(x) p_x(x') = \pi(x') p_{x'}(x)$$
for almost every $x$ and $x'$, where $\pi(x) = \int_{\mathbb{R}^n} e^{-H(x,y)}\,dy$.
Proof. Fix $x$ and $x'$. Let $F_\delta^x(y)$ be the $x$ component of $T_\delta(x,y)$. Let $V_+ = \{y : F_\delta^x(y) = x'\}$ and $V_- = \{y : F_{-\delta}^x(y) = x'\}$. Then,
$$\pi(x) p_x(x') = \frac{1}{2}\int_{y \in V_+} \frac{e^{-H(x,y)}}{\left|\det DF_\delta^x(y)\right|} + \frac{1}{2}\int_{y \in V_-} \frac{e^{-H(x,y)}}{\left|\det DF_{-\delta}^x(y)\right|}.$$
We note that this formula assumed that $DF_\delta^x$ is invertible. Sard's theorem shows that $F_\delta^x(N)$ has measure 0 where $N \stackrel{\text{def}}{=} \{y : DF_\delta^x(y) \text{ is not invertible}\}$. Therefore, the formula is correct except for a measure zero subset.
By reversing time for the Hamiltonian curve, we have that for the same $V_\pm$,
$$\pi(x') p_{x'}(x) = \frac{1}{2}\int_{y \in V_+} \frac{e^{-H(x',y')}}{\left|\det DF_{-\delta}^{x'}(y')\right|} + \frac{1}{2}\int_{y \in V_-} \frac{e^{-H(x',y')}}{\left|\det DF_{\delta}^{x'}(y')\right|} \qquad (2.2)$$
where $y'$ denotes the $y$ component of $T_\delta(x,y)$ and $T_{-\delta}(x,y)$ in the first and second sum respectively.
We compare the first terms in both equations. Let $DT_\delta(x,y) = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$. Since $T_\delta \circ T_{-\delta} = I$ and $T_\delta(x,y) = (x', y')$, the inverse function theorem shows that $DT_{-\delta}(x', y')$ is the inverse map of $DT_\delta(x,y)$. Hence, we have that
$$DT_{-\delta}(x', y') = \begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1} = \begin{pmatrix} \cdots & -A^{-1}B(D - CA^{-1}B)^{-1} \\ \cdots & \cdots \end{pmatrix}.$$
Therefore, we have that $DF_\delta^x(y) = B$ and $DF_{-\delta}^{x'}(y') = -A^{-1}B(D - CA^{-1}B)^{-1}$. Hence, we have that
$$\left|\det DF_{-\delta}^{x'}(y')\right| = \left|\det A^{-1}\right|\left|\det B\right|\left|\det(D - CA^{-1}B)\right|^{-1} = \frac{|\det B|}{\left|\det\begin{pmatrix} A & B \\ C & D \end{pmatrix}\right|}.$$
Using that $\det(DT_t(x,y)) = \det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = 1$ (Lemma 5), we have that
$$\left|\det DF_{-\delta}^{x'}(y')\right| = \left|\det(DF_\delta^x(y))\right|.$$
Hence, we have that
$$\frac{1}{2}\int_{y \in V_+}\frac{e^{-H(x,y)}}{\left|\det DF_\delta^x(y)\right|} = \frac{1}{2}\int_{y \in V_+}\frac{e^{-H(x,y)}}{\left|\det DF_{-\delta}^{x'}(y')\right|} = \frac{1}{2}\int_{y \in V_+}\frac{e^{-H(x',y')}}{\left|\det DF_{-\delta}^{x'}(y')\right|}$$
where we used that $e^{-H(x,y)} = e^{-H(x',y')}$ (Lemma 4) at the end.
For the second term in (2.2), by the same calculation, we have that
$$\frac{1}{2}\int_{y \in V_-}\frac{e^{-H(x,y)}}{\left|\det DF_{-\delta}^x(y)\right|} = \frac{1}{2}\int_{y \in V_-}\frac{e^{-H(x',y')}}{\left|\det DF_{\delta}^{x'}(y')\right|}.$$
Combining both terms we have the result.
The main challenge in analyzing Hamiltonian Monte Carlo is to bound its mixing time.
2.1 Hamiltonian Monte Carlo on Riemannian manifolds
Suppose we want to sample from the distribution $e^{-f(x)}$. We define the following energy function $H$:
$$H(x, v) \stackrel{\text{def}}{=} f(x) + \frac{1}{2}\log((2\pi)^n\det g(x)) + \frac{1}{2}v^T g(x)^{-1}v. \qquad (2.3)$$
One can view $x$ as the location and $v$ as the velocity. The following lemma shows that the first variable $x(t)$ in the Hamiltonian curve satisfies a second-order differential equation. When we view the domain $M$ as a manifold, this equation is simply $D_t\frac{dx}{dt} = \mu(x)$, namely, $x$ acts like a particle under the force field $\mu$. (For relevant background on manifolds, we refer the reader to Appendix D.)
Lemma 7. In Euclidean coordinates, the Hamiltonian equation for (2.3) can be rewritten as
$$D_t\frac{dx}{dt} = \mu(x), \qquad \frac{dx}{dt}(0) \sim N(0, g(x)^{-1})$$
where $\mu(x) = -g(x)^{-1}\nabla f(x) - \frac{1}{2}g(x)^{-1}\mathrm{Tr}\left[g(x)^{-1}Dg(x)\right]$ and $D_t$ is the Levi-Civita connection on the manifold $M$ with metric $g$.
Proof. From the definition of the Hamiltonian curve, we have that
$$\frac{dx}{dt} = g(x)^{-1}v, \qquad \frac{dv}{dt} = -\nabla f(x) - \frac{1}{2}\mathrm{Tr}\left[g(x)^{-1}Dg(x)\right] + \frac{1}{2}\frac{dx}{dt}^T Dg(x)\frac{dx}{dt}.$$
Putting the two equations together, we have that
$$\frac{d^2x}{dt^2} = -g(x)^{-1}Dg(x)\!\left[\frac{dx}{dt}\right]g(x)^{-1}v + g(x)^{-1}\frac{dv}{dt} = -g(x)^{-1}Dg(x)\!\left[\frac{dx}{dt}\right]\frac{dx}{dt} - g(x)^{-1}\nabla f(x) - \frac{1}{2}g(x)^{-1}\mathrm{Tr}\left[g(x)^{-1}Dg(x)\right] + \frac{1}{2}g(x)^{-1}\frac{dx}{dt}^T Dg(x)\frac{dx}{dt}.$$
Hence,
$$\frac{d^2x}{dt^2} + g(x)^{-1}Dg(x)\!\left[\frac{dx}{dt}\right]\frac{dx}{dt} - \frac{1}{2}g(x)^{-1}\frac{dx}{dt}^T Dg(x)\frac{dx}{dt} = -g(x)^{-1}\nabla f(x) - \frac{1}{2}g(x)^{-1}\mathrm{Tr}\left[g(x)^{-1}Dg(x)\right]. \qquad (2.4)$$
Using the formula of Christoffel symbols
$$D_t\frac{dx}{dt} = \frac{d^2x}{dt^2} + \sum_{ijk}\frac{dx_i}{dt}\frac{dx_j}{dt}\Gamma^k_{ij}e_k \qquad \text{where} \qquad \Gamma^k_{ij} = \frac{1}{2}\sum_l g^{kl}(\partial_j g_{li} + \partial_i g_{lj} - \partial_l g_{ij}),$$
we have that
$$D_t\frac{dx}{dt} = \frac{d^2x}{dt^2} + \frac{1}{2}g(x)^{-1}\sum_{ijl}\frac{dx_i}{dt}\frac{dx_j}{dt}(\partial_j g_{li} + \partial_i g_{lj} - \partial_l g_{ij})e_l = \frac{d^2x}{dt^2} + g(x)^{-1}Dg(x)\!\left[\frac{dx}{dt}\right]\frac{dx}{dt} - \frac{1}{2}g(x)^{-1}\frac{dx}{dt}^T Dg(x)\frac{dx}{dt}.$$
Putting this into (2.4) gives
$$D_t\frac{dx}{dt} = -g(x)^{-1}\nabla f - \frac{1}{2}g(x)^{-1}\mathrm{Tr}\left[g(x)^{-1}Dg(x)\right].$$
Motivated by this, we define the Hamiltonian map as the first component of the Hamiltonian
dynamics operator T defined earlier. For the reader familiar with Riemannian geometry, this is
similar to the exponential map (for background, see Appendix D).
Definition 8. Let $\mathrm{Ham}_{x,\delta}(v_x) = \gamma(\delta)$ where $\gamma(t)$ is the solution of the Hamiltonian equation $D_t\frac{d\gamma}{dt} = \mu$ with initial conditions $\gamma(0) = x$ and $\gamma'(0) = v_x$. We also denote $\mathrm{Ham}_{x,1}(v_x)$ by $\mathrm{Ham}_x(v_x)$.
We now give two examples of Hamiltonian Monte Carlo.
Example 9. When $g(x) = I$, the Hamiltonian curve acts like stochastic gradient descent for the function $f$ with each random perturbation drawn from a standard Gaussian:
$$D_t\frac{dx}{dt} = -\nabla f(x).$$
When $g(x) = \nabla^2 f(x)$, the Hamiltonian curve acts like a stochastic Newton curve for the function $f + \psi$:
$$D_t\frac{dx}{dt} = -\left(\nabla^2 f(x)\right)^{-1}\nabla(f(x) + \psi(x))$$
where the volumetric function $\psi(x) = \log\det\nabla^2 f(x)$.
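For the Euclidean case $g(x) = I$, the curve solves $\frac{d^2x}{dt^2} = -\nabla f(x)$, and a standard leapfrog discretization (a generic numerical scheme, not the collocation method used later in the paper) gives a quick way to approximate $\mathrm{Ham}_{x,\delta}(v)$:

```python
import numpy as np

def leapfrog(x, v, grad_f, delta, n_sub=20):
    """Approximate Ham_{x,delta}(v) when g = I by leapfrog integration."""
    h = delta / n_sub
    v = v - 0.5 * h * grad_f(x)
    for _ in range(n_sub - 1):
        x = x + h * v
        v = v - h * grad_f(x)
    x = x + h * v
    v = v - 0.5 * h * grad_f(x)
    return x, v

# Example: quadratic f(x) = 0.5*||x||^2; the exact flow is a rotation in phase space.
f_grad = lambda x: x
x, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(leapfrog(x, v, f_grad, delta=np.pi))   # approximately (-1, 0) and (0, -1)
```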
Next we derive a formula for the transition probability in Euclidean coordinates.
Lemma 10. For any $x \in M \subseteq \mathbb{R}^n$ and $\delta > 0$, the probability density of the 1-step distribution from $x$ is given by
$$p_x(y) = \sum_{v_x : \mathrm{Ham}_{x,\delta}(v_x)=y}\sqrt{\frac{\det(g(y))}{(2\pi)^n}}\exp\left(-\frac{1}{2}\|v_x\|_x^2\right)\left|\det(D\mathrm{Ham}_{x,\delta}(v_x))\right|^{-1} \qquad (2.5)$$
where $D\mathrm{Ham}_{x,\delta}(v_x)$ is the Jacobian of the Hamiltonian map $\mathrm{Ham}_{x,\delta}$.
Proof. We prove the formula by separately considering each $v_x \in T_xM$ s.t. $\mathrm{Ham}_{x,\delta}(v_x) = y$, then summing up. In the tangent space $T_xM$, the point $v_x$ follows a Gaussian step. Therefore, the probability density of $v_x$ in $T_xM$ is as follows:
$$p_x^{T_xM}(v_x) = \frac{1}{(2\pi)^{n/2}}\exp\left(-\frac{1}{2}\|v_x\|_x^2\right).$$
Let $y = \mathrm{Ham}_{x,\delta}(v_x)$ and $F : T_xM \to \mathbb{R}^n$ be defined by $F(v) = \mathrm{id}_{M\to\mathbb{R}^n}\circ\mathrm{Ham}_{x,\delta}(v)$. Here $\mathbb{R}^n$ is the same set as $M$ but endowed with the Euclidean metric. Hence, we have
$$DF(v_x) = D\mathrm{id}_{M\to\mathbb{R}^n}(y)\,D\mathrm{Ham}_{x,\delta}(v_x).$$
The result follows from $p_x(y) = |\det(DF(v_x))|^{-1}p_x^{T_xM}(v_x)$ and
$$\det DF(v_x) = \det\left(D\mathrm{id}_{M\to\mathbb{R}^n}(y)\right)\det\left(D\mathrm{Ham}_{x,\delta}(v_x)\right) = \det(g(y))^{-1/2}\det\left(D\mathrm{Ham}_{x,\delta}(v_x)\right).$$
3 Convergence of Riemannian Hamiltonian Monte Carlo
Hamiltonian Monte Carlo is a Markov chain on a manifold whose stationary distribution has density $q(x)$ proportional to $\exp(-f(x))$. We will bound the conductance of this Markov chain
has density q(x) proportional to exp(−f (x)). We will bound the conductance of this Markov chain
and thereby its mixing time to converge to the stationary distribution. Bounding conductance
involves showing (a) the induced metric on the state space satisfies a strong isoperimetric inequality
and (b) two points that are close in metric distance are also close in probabilistic distance, i.e., the
one-step distributions from two nearby points have large overlap. In this section and the next, we
present general conductance bounds using parameters determined by the associated manifold. In
Section 7, we bound these parameters for the manifold corresponding to the logarithmic barrier in
a polytope.
3.1 Basics of geometric Markov chains
For completeness, we will discuss some standard techniques in geometric random walks in this subsection. For a Markov chain with state space $M$, stationary distribution $q$ and next step distribution $p_u(\cdot)$ for any $u \in M$, the conductance of the Markov chain is
$$\phi \stackrel{\text{def}}{=} \inf_{S\subseteq M}\frac{\int_S p_u(M\setminus S)\,dq(u)}{\min\{q(S), q(M\setminus S)\}}.$$
The conductance of an ergodic Markov chain allows us to bound its mixing time, i.e., the rate of
convergence to its stationary distribution, e.g., via the following theorem of Lovász and Simonovits.
Theorem 11 ([19]). Let $q_t$ be the distribution of the current point after $t$ steps of a Markov chain with stationary distribution $q$ and conductance at least $\phi$, starting from initial distribution $q_0$. For any $\varepsilon > 0$,
$$d_{TV}(q_t, q) \le \varepsilon + \frac{1}{\varepsilon}\sqrt{\mathbb{E}_{x\sim q_0}\left[\frac{dq_0(x)}{dq(x)}\right]}\left(1 - \frac{\phi^2}{2}\right)^t.$$
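Theorem 11 converts a conductance lower bound into an explicit step count. The following small helper is a direct rearrangement of the bound, where $M$ stands for the warm-start parameter $\mathbb{E}_{x\sim q_0}[dq_0/dq]$; the function name and the numerical example are ours, for illustration only.

```python
import math

def steps_to_mix(phi, M, eps):
    """Smallest t with sqrt(M)/eps * (1 - phi**2/2)**t <= eps,
    i.e. the number of steps Theorem 11 needs to reach total variation 2*eps."""
    # (1 - phi^2/2)^t <= eps^2/sqrt(M)  =>  t >= log(sqrt(M)/eps^2) / -log(1 - phi^2/2)
    return math.ceil(math.log(math.sqrt(M) / eps**2) / -math.log1p(-phi**2 / 2))

print(steps_to_mix(phi=1e-3, M=100.0, eps=0.01))   # roughly (2/phi^2) * log(sqrt(M)/eps^2)
```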
Definition 12. The isoperimetry of a metric space $M$ with target distribution $q$ is
$$\psi \stackrel{\text{def}}{=} \inf_{\delta>0}\min_{S\subseteq M}\frac{\int_{d(S,x)\le\delta}q(x)\,dx - q(S)}{\delta\min\{q(S), q(M\setminus S)\}}$$
where $d$ is the shortest path distance in $M$.
The proof of the following theorem follows the standard outline for geometric random walks (see e.g., [29]).
Lemma 13. Given a metric space $M$ and a time-reversible Markov chain $p$ on $M$ with stationary distribution $q$. Fix any $r > 0$. Suppose that for any $x, y \in M$ with $d(x, y) < r$, we have that $d_{TV}(p_x, p_y) \le 0.9$. Then, the conductance of the Markov chain is $\Omega(r\psi)$.
Proof. Let $S$ be any measurable subset of $M$. Then our goal is to bound the conductance of the Markov chain:
$$\frac{\int_S p_x(M\setminus S)\,dq(x)}{\min\{q(S), q(M\setminus S)\}} = \Omega(r\psi).$$
Since the Markov chain is time-reversible (for any two subsets $A, B$, $\int_A p_x(B)\,dq(x) = \int_B p_x(A)\,dq(x)$), we can write the numerator of the left hand side above as
$$\frac{1}{2}\left(\int_S p_x(M\setminus S)\,dq(x) + \int_{M\setminus S}p_x(S)\,dq(x)\right).$$
Define
$$S_1 = \{x \in S : p_x(M\setminus S) < 0.05\}, \quad S_2 = \{x \in M\setminus S : p_x(S) < 0.05\}, \quad S_3 = M\setminus S_1\setminus S_2.$$
Without loss of generality, we can assume that $q(S_1) \ge \frac{1}{2}q(S)$ and $q(S_2) \ge \frac{1}{2}q(M\setminus S)$ (if not, $\int_S p_x(M\setminus S)\,dq(x) = \Omega(1)$ and hence the conductance is $\Omega(1)$).
Next, we note that for any two points $x \in S_1$ and $y \in S_2$, $d_{TV}(p_x, p_y) > 0.9$. Therefore, by the assumption, we have that $d(x, y) \ge r$. Therefore, by the definition of $\psi$, we have that
$$q(S_3) \ge \int_{d(S_1,x)\le r}q(x)\,dx - q(S_1) \ge r\psi\min\{q(S_1), q(M\setminus S_1)\} \ge r\psi\min\{q(S_1), q(S_2)\}.$$
Going back to the conductance,
$$\frac{1}{2}\left(\int_S p_x(M\setminus S)\,dq(x) + \int_{M\setminus S}p_x(S)\,dq(x)\right) \ge \frac{1}{2}\int_{S_3}0.05\,dq(x) = \Omega(r\psi)\min\{q(S_1), q(S_2)\} = \Omega(r\psi)\min\{q(S), q(M\setminus S)\}.$$
Therefore, the conductance of the Markov chain is $\Omega(r\psi)$.
Combining Theorem 11 and Lemma 13 gives the following result for bounding the mixing time of a general geometric random walk.
Lemma 14. Given a metric space $M$ and a time-reversible Markov chain $p$ on $M$ with stationary distribution $q$. Suppose that there exist $r > 0$ and $\psi > 0$ such that
1. For any $x, y \in M$ with $d(x, y) < r$, we have that $d_{TV}(p_x, p_y) \le 0.9$.
2. For any $S \subseteq M$, we have that
$$\int_{0 < d(S,x)\le r}q(x)\,dx \ge r\psi\min\{q(S), q(M\setminus S)\}.$$
Let $q_t$ be the distribution of the current point after $t$ steps of a Markov chain with stationary distribution $q$ starting from initial distribution $q_0$. For any $\varepsilon > 0$,
$$d_{TV}(q_t, q) \le \varepsilon + \frac{1}{\varepsilon}\sqrt{\mathbb{E}_{x\sim q_0}\left[\frac{dq_0(x)}{dq(x)}\right]}\left(1 - \Omega(r^2\psi^2)\right)^t.$$
3.2 Overlap of one-step distributions
The mixing of the walk depends on smoothness parameters of the manifold and the functions f, g
used to define the Hamiltonian. Since each step of our walk involves a Gaussian vector, many
smoothness parameters depend on choices of the random vector. Formally, let γ be the Hamiltonian curve used in a step of Hamiltonian Monte Carlo. In the analysis, we need a large fraction
of Hamiltonian curves from any point on the manifold to be well-behaved. A Hamiltonian curve
can be problematic when its velocity or length is too large and this happens with non-zero probability. Rather than using supremum bounds for our smoothness parameters, it suffices to use large
probability bounds, where the probability is over the random choice of Hamiltonian curve at any
point x ∈ Ω. To capture the notion that “most Hamiltonian curves are well-behaved”, we use an
auxiliary function ℓ(γ) ≥ 0 which assigns a real number to each Hamiltonian curve γ and measures
how “good” the curve is. The smoothness parameters assume that this function ℓ is bounded and
Lipschitz. One possible choice of such $\ell$ is $\ell(\gamma) = \|\gamma'(0)\|_{\gamma(0)}$, which measures the initial velocity, but this will give us a weaker bound. Instead, we use the following, which jointly bounds the change in position (first term) and change in velocity (second term).
Definition 15. An auxiliary function $\ell$ is a non-negative real-valued function on the set of Hamiltonian curves, i.e., maps $\gamma : [0,\delta]\to M$, with bounded parameters $\ell_0, \ell_1$ such that
1. For any variation $\gamma_s$ of a Hamiltonian curve (see Definition 18) with $\ell(\gamma_s) \le \ell_0$, we have
$$\left|\frac{d}{ds}\ell(\gamma_s)\right| \le \ell_1\left(\left\|\frac{d}{ds}\gamma_s(0)\right\|_{\gamma_s(0)} + \delta\left\|D_s\gamma_s'(0)\right\|_{\gamma_s(0)}\right).$$
2. For any $x \in M$, $\mathbb{P}_{\gamma\sim x}\left(\ell(\gamma)\le\frac{1}{2}\ell_0\right) \ge 1 - \frac{1}{100}\min\left\{1, \frac{\ell_0}{\ell_1\delta}\right\}$ where $\gamma\sim x$ indicates a random Hamiltonian curve starting at $x$, chosen by picking a random Gaussian initial velocity according to the local metric at $x$.
3.2.1 Proof Outline
To bound the conductance of HMC, we need to show that one-step distributions from nearby points have large overlap for a reasonably large step size $\delta$. To this end, recall that the probability density of going from $x$ to $y$ is given by the following formula
$$p_x(y) = \sum_{v_x : \mathrm{Ham}_{x,\delta}(v_x)=y}\sqrt{\frac{\det(g(y))}{(2\pi)^n}}\exp\left(-\frac{1}{2}\|v_x\|_x^2\right)\left|\det(D\mathrm{Ham}_{x,\delta}(v_x))\right|^{-1}.$$
In Section 3.2.2, we introduce the concept of variations of Hamiltonian curves and use it to bound $\left|\det(D\mathrm{Ham}_{x,\delta}(v_x))\right|^{-1}$. We can show that $p_x(y)$ is in fact close to
$$\tilde p_x(y) = \sum_{v_x : \mathrm{Ham}_{x,\delta}(v_x)=y}\frac{1}{\delta^n}\sqrt{\frac{\det(g(y))}{(2\pi)^n}}\exp\left(-\frac{1}{2}\|v_x\|_x^2\right). \qquad (3.1)$$
To compare $p_x(y)$ with $p_z(y)$, we need to relate $v_x$ and $v_z$ that map $x$ and $z$ to $y$ respectively. In Section 3.2.3, we show that if $x$ and $z$ are close enough, for every $v_x$ there is a unique $v_z$ such that $v_x$ is close to $v_z$ and that $\mathrm{Ham}_{z,\delta}(v_z) = \mathrm{Ham}_{x,\delta}(v_x)$. Combining these facts, we obtain our main theorem for this section, stated in Subsection 3.2.4.
In the analysis, we use three important operators from the tangent space to itself. The motivation for defining these operators comes directly from Lemma 19, which studies the variation in Hamiltonian curves as the solution of a Jacobi equation. In words, the operator $R(\cdot)$ below allows us to write the change in the Hamiltonian curve as an ODE.
Definition 16. Given a Hamiltonian curve $\gamma$, let $R(\gamma,t)$, $M(\gamma,t)$ and $\Phi(\gamma,t)$ be the operators from $TM$ to $TM$ defined by
$$R(t)u = R(u, \gamma'(t))\gamma'(t), \qquad M(t)u = D_u\mu(\gamma(t)), \qquad \Phi(t)u = M(t)u - R(t)u.$$
When $\gamma$ is explicit from the context, we simply write them as $R(t)$, $M(t)$ and $\Phi(t)$.
The key parameter $R_1$ we use in this section is a bound on the Frobenius norm of $\Phi$, formally defined as follows.
Definition 17. Given a manifold $M$ with metric $g$ and an auxiliary function $\ell$ with parameters $\ell_0, \ell_1$, we define the smoothness parameter $R_1$, depending only on $M$ and the step size $\delta$, such that
$$\|\Phi(\gamma,t)\|_{F,\gamma(t)} \le R_1$$
for any $\gamma$ such that $\ell(\gamma) \le \ell_0$ and any $0 \le t \le \delta$, where the Frobenius norm $\|A\|_{F,\gamma(t)}$ is defined by $\|A\|^2_{F,\gamma(t)} = \mathbb{E}_{\alpha,\beta\sim N(0,g(x)^{-1})}(\alpha^T A\beta)^2$.
The above definitions are related to but different from our previous paper analyzing the geodesic
walk [13].
3.2.2 Variation of Hamiltonian curve
To bound the determinant of the Jacobian of $\mathrm{Ham}_x$, we study variations of Hamiltonian curves.
Definition 18. We call $\gamma_s(t)$ a Hamiltonian variation if $\gamma_s(\cdot)$ satisfies the Hamiltonian equation for every $s$. We call $\frac{\partial\gamma_s}{\partial s}$ a Jacobi field.
The following lemma shows that a Jacobi field satisfies the following Jacobi equation.
Lemma 19. Given a path $c(s)$, let $\gamma_s(t) = \mathrm{Ham}_{c(s)}(t(v + sw))$ be a Hamiltonian variation. The Jacobi field $\psi(t) \stackrel{\text{def}}{=} \frac{\partial}{\partial s}\gamma_s(t)|_{s=0}$ satisfies the following Jacobi equation
$$D_t^2\psi(t) = \Phi(t)\psi(t). \qquad (3.2)$$
Let $\Gamma_t$ parallel transport from $T_{\gamma(t,0)}M$ to $T_{\gamma(0,0)}M$ and $\bar\psi(t) = \Gamma_t\psi(t)$. Then $\bar\psi(t)$ satisfies the following ODE on the tangent space $T_{\gamma(0,0)}M$:
$$\bar\psi''(t) = \Gamma_t\Phi(t)\Gamma_t^{-1}\bar\psi(t) \quad \forall t\ge 0, \qquad \bar\psi'(0) = w, \qquad \bar\psi(0) = D_s c(0). \qquad (3.3)$$
Proof. Taking the derivative $D_s$ on both sides of $D_t\frac{\partial\gamma}{\partial t} = \mu(\gamma)$, and using Fact 67, we get
$$D_s\mu(\gamma) = D_sD_t\frac{\partial\gamma}{\partial t} = D_tD_s\frac{\partial\gamma}{\partial t} + R\!\left(\frac{\partial\gamma}{\partial s},\frac{\partial\gamma}{\partial t}\right)\frac{\partial\gamma}{\partial t} = D_t^2\frac{\partial\gamma}{\partial s} + R\!\left(\frac{\partial\gamma}{\partial s},\frac{\partial\gamma}{\partial t}\right)\frac{\partial\gamma}{\partial t}.$$
In short, we have $D_t^2\psi(t) = \Phi(t)\psi(t)$. This shows (3.2).
Equation (3.3) follows from the fact that
$$D_t v(t) = \Gamma_t\frac{d}{dt}\left(\Gamma_t^{-1}v(t)\right)$$
for any vector field on $\gamma_0(t)$ (see Definition 11 in the appendix) applied to $v(t) = \psi'(t)$.
We now proceed to estimate the determinant of the Jacobian of $\mathrm{Ham}_x$. For this we will use the following elementary lemmas describing the solution of the second-order matrix ODE
$$\frac{d^2}{dt^2}\Psi(t) = \Phi(t)\Psi(t), \qquad \frac{d}{dt}\Psi(0) = B, \qquad \Psi(0) = A. \qquad (3.4)$$
Lemma 20. Consider the matrix ODE (3.4). Let $\lambda = \max_{0\le t\le\ell}\|\Phi(t)\|_2$. For any $t\ge 0$, we have that
$$\|\Psi(t)\|_2 \le \|A\|_2\cosh(\sqrt{\lambda}t) + \frac{\|B\|_2}{\sqrt{\lambda}}\sinh(\sqrt{\lambda}t).$$
Lemma 21. Consider the matrix ODE (3.4). Let $\lambda = \max_{0\le t\le\ell}\|\Phi(t)\|_F$. For any $0\le t\le\frac{1}{\sqrt{\lambda}}$, we have that
$$\|\Psi(t) - A - Bt\|_F \le \lambda\left(t^2\|A\|_2 + \frac{t^3}{5}\|B\|_2\right).$$
In particular, this shows that
$$\Psi(t) = A + Bt + \int_0^t(t-s)\Phi(s)(A + Bs + E(s))\,ds$$
with $\|E(s)\|_F \le \lambda\left(s^2\|A\|_2 + \frac{s^3}{5}\|B\|_2\right)$.
The proofs of these lemmas are in Appendix A. We continue with the main proof here.
Lemma 22. Let $\gamma(t) = \mathrm{Ham}_x(tv_x)$ be a Hamiltonian curve and let the step size $\delta$ satisfy $0 < \delta^2 \le \frac{1}{R_1}$ where $R_1 = \max_{0\le t\le\delta}\|\Phi(t)\|_{F,\gamma(t)}$. Then $D\mathrm{Ham}_{x,\delta}$ is invertible with $\|D\mathrm{Ham}_{x,\delta} - \delta I\|_{F,\gamma(\delta)} \le \frac{\delta}{5}$. Also, we have
$$\left|\log\det\left(\frac{1}{\delta}D\mathrm{Ham}_{x,\delta}(v_x)\right) - \int_0^\delta\frac{t(\delta-t)}{\delta}\mathrm{Tr}\Phi(t)\,dt\right| \le \frac{(\delta^2R_1)^2}{10}. \qquad (3.5)$$
Proof. We want to compute $D\mathrm{Ham}_{x,\delta}(v_x)[w]$ for some $w \in T_xM$. By definition, we have that
$$D\mathrm{Ham}_{x,\delta}(v_x)[w] = \frac{\partial}{\partial s}\gamma(t,s)\Big|_{t=\delta,s=0} \qquad (3.6)$$
where $\gamma(t,s) = \mathrm{Ham}_x(t(v_x + sw))$. Define $\bar\psi(t)$ as in Lemma 19 with $c(s) = x$. So $D_sc(0) = 0$, i.e., $\bar\psi(0) = 0$. Then, by the lemma,
$$\bar\psi''(t) = \Gamma_t\Phi(t)\Gamma_t^{-1}\bar\psi(t), \qquad \bar\psi'(0) = w, \qquad \bar\psi(0) = 0.$$
Now, we define $\Psi$ to be the solution of the matrix ODE
$$\Psi''(t) = \Gamma_t\Phi(t)\Gamma_t^{-1}\Psi(t) \quad \forall t\ge 0, \qquad \Psi'(0) = I, \qquad \Psi(0) = 0.$$
By the definition of $\Psi$, we see that $\bar\psi(t) = \Psi(t)w$. Therefore, we have that
$$\frac{\partial}{\partial s}\gamma(t,s)\Big|_{s=0} = \Gamma_t^{-1}\bar\psi(t) = \Gamma_t^{-1}\Psi(t)w.$$
Combining it with (3.6), we have that $D\mathrm{Ham}_{x,\delta}(v_x) = \Gamma_\delta^{-1}\Psi(\delta)$. Since $\Gamma_t$ is an orthonormal matrix, we have that
$$\log\det(D\mathrm{Ham}_{x,\delta}(v_x)) = \log\det\Psi(\delta). \qquad (3.7)$$
Note that $\|\Gamma_t\Phi(t)\Gamma_t^{-1}\|_{F,x} = \|\Phi(t)\|_{F,\gamma(t)} \le R_1$ for all $0\le t\le\delta$. Using this, Lemma 21 shows that
$$\left\|\frac{1}{\delta}\Psi(\delta) - I\right\|_{F,x} \le R_1\frac{\delta^2}{5}\|I\|_2 \le \frac{1}{5}. \qquad (3.8)$$
Hence, $\Psi(\delta)$ is invertible, and so is $D\mathrm{Ham}_{x,\delta}$.
By Lemma 64, we have that
$$\left|\log\det\left(\frac{1}{\delta}\Psi(\delta)\right) - \mathrm{Tr}\left(\frac{1}{\delta}\Psi(\delta) - I\right)\right| \le \left(\frac{1}{5}\delta^2R_1\right)^2. \qquad (3.9)$$
Now we need to estimate $\mathrm{Tr}(\Psi(\delta) - \delta I)$. Lemma 21 shows that
$$\Psi(\delta) = \delta I + \int_0^\delta t(\delta-t)\Phi(t)\,dt + \int_0^\delta(\delta-t)\Phi(t)E(t)\,dt$$
with $\|E(t)\|_{F,\gamma(t)} \le \frac{t^3R_1}{5}$ for all $0\le t\le\delta$. Since
$$\left|\mathrm{Tr}\int_0^\delta(\delta-t)\Phi(t)E(t)\,dt\right| \le \int_0^\delta(\delta-t)\|\Phi(t)\|_{F,x}\|E(t)\|_{F,x}\,dt \le \frac{\delta^5}{20}R_1^2,$$
we have that
$$\left|\mathrm{Tr}\left(\frac{1}{\delta}\Psi(\delta) - I - \int_0^\delta\frac{t(\delta-t)}{\delta}\Phi(t)\,dt\right)\right| \le \frac{\delta^4}{20}R_1^2. \qquad (3.10)$$
Combining (3.9) and (3.10), we have
$$\left|\log\det\left(\frac{1}{\delta}\Psi(\delta)\right) - \int_0^\delta\frac{t(\delta-t)}{\delta}\mathrm{Tr}\Phi(t)\,dt\right| \le \left(\frac{1}{5}\delta^2R_1\right)^2 + \frac{\delta^4}{20}R_1^2 \le \frac{(\delta^2R_1)^2}{10}.$$
Applying (3.7), we have the result.
3.2.3 Local Uniqueness of Hamiltonian Curves
Next, we study the local uniqueness of Hamiltonian curves. We know that for every pair x, y, there
can be multiple Hamiltonian curves connecting x and y. Due to this, the probability density px
at y in M is the sum over all possible Hamiltonian curves connecting x and y. The next lemma
establishes a 1-1 map between Hamiltonian curves connecting x to y as we vary x.
Lemma 23. Let $\gamma(t) = \mathrm{Ham}_x(tv_x)$ be a Hamiltonian curve and let the step size $\delta$ satisfy $0 < \delta^2 \le \frac{1}{R_1}$, where $R_1 = \max_{0\le t\le\delta}\|\Phi(t)\|_{F,\gamma(t)}$. Let the end points be $x = \gamma(0)$ and $y = \gamma(\delta)$. Then there is a unique smooth invertible function $v : U \subseteq M \to V \subseteq TM$ such that
$$y = \mathrm{Ham}_{z,\delta}(v(z))$$
for any $z \in U$, where $U$ is a neighborhood of $x$ and $V$ is a neighborhood of $v_x = v(x)$. Furthermore, we have that $\|\nabla_\eta v(x)\|_x \le \frac{5}{2\delta}\|\eta\|_x$ and
$$\left\|\frac{1}{\delta}\eta + \nabla_\eta v(x)\right\|_x \le \frac{3}{2}R_1\delta\|\eta\|_x.$$
Let $\gamma_s(t) = \mathrm{Ham}_{c(s)}(t\cdot v(c(s)))$ where $c(s)$ is any path with $c(0) = x$ and $c'(0) = \eta$. Then, for all $0\le t\le\delta$, we have that
$$\left\|\frac{\partial}{\partial s}\gamma_s(t)\Big|_{s=0}\right\|_{\gamma(t)} \le 5\|\eta\|_x \qquad \text{and} \qquad \left\|D_s\gamma_s'(t)\big|_{s=0}\right\|_{\gamma(t)} \le \frac{10}{\delta}\|\eta\|_x.$$
Proof. Consider the smooth function $f(z,w) = \mathrm{Ham}_{z,\delta}(w)$. From Lemma 22, the Jacobian of $f$ at $(x, v_x)$ in the $w$ variables, i.e., $D\mathrm{Ham}_{x,\delta}(v_x)$, is invertible. Hence, the implicit function theorem shows that there is an open neighborhood $U$ of $x$ and a unique function $v$ on $U$ such that $f(z, v(z)) = f(x, v_x)$, i.e. $\mathrm{Ham}_{z,\delta}(v(z)) = \mathrm{Ham}_{x,\delta}(v_x) = y$.
To bound $\nabla_\eta v(x)$, we let $\gamma_s(t) = \mathrm{Ham}_{c(s)}(t\cdot v(c(s)))$ and $c(s)$ be any path with $c(0) = x$ and $c'(0) = \eta$. Let $\Gamma_t$ be the parallel transport from $T_{\gamma(t)}M$ to $T_{\gamma(0)}M$. Define
$$\bar\psi(t) = \Gamma_t\frac{\partial}{\partial s}\gamma_s(t)\Big|_{s=0}.$$
Lemma 19 shows that $\bar\psi(t)$ satisfies the following ODE
$$\bar\psi''(t) = \Gamma_t\Phi(t)\Gamma_t^{-1}\bar\psi(t) \quad \forall t\ge 0, \qquad \bar\psi'(0) = \nabla_\eta v(x), \qquad \bar\psi(0) = \eta.$$
Moreover, we know that $\bar\psi(\delta) = 0$ because $\gamma_s(\delta) = \mathrm{Ham}_{c(s)}(\delta\cdot v(c(s))) = \mathrm{Ham}_{c(s),\delta}(v(c(s))) = y$ for small enough $s$.
To bound $\|\nabla_\eta v(x)\|_x$, we note that Lemma 21 shows that
$$\left\|\bar\psi(t) - \eta - t\cdot\nabla_\eta v(x)\right\|_x \le R_1t^2\left(\|\eta\|_x + \frac{t}{5}\|\nabla_\eta v(x)\|_x\right). \qquad (3.11)$$
Since $\bar\psi(\delta) = 0$ and $\delta^2 \le \frac{1}{R_1}$, we have that
$$\|\eta + \delta\cdot\nabla_\eta v(x)\|_x \le \|\eta\|_x + \frac{\delta}{5}\|\nabla_\eta v(x)\|_x$$
which implies that
$$\delta\|\nabla_\eta v(x)\|_x - \|\eta\|_x \le \|\eta\|_x + \frac{\delta}{5}\|\nabla_\eta v(x)\|_x.$$
Therefore, $\|\nabla_\eta v(x)\|_x \le \frac{5}{2\delta}\|\eta\|_x$. More precisely, from (3.11), we have that
$$\left\|\frac{1}{\delta}\eta + \nabla_\eta v(x)\right\|_x \le \frac{3}{2}R_1\delta\|\eta\|_x.$$
Putting this into (3.11), for $t\le\delta$ we get
$$\left\|\bar\psi(t)\right\|_x \le \|\eta\|_x + \frac{5}{2}\|\eta\|_x + R_1t^2\left(\|\eta\|_x + \frac{1}{2}\|\eta\|_x\right) \le 5\|\eta\|_x.$$
Now, applying the conclusion of Lemma 21 after taking a derivative, we have that
$$\bar\psi'(t) = \nabla_\eta v(x) + \int_0^t\Phi(s)(\eta + s\cdot\nabla_\eta v(x) + E(s))\,ds$$
where $\|E(s)\|_x \le R_1\left(s^2\|\eta\|_x + \frac{s^3}{5}\|\nabla_\eta v(x)\|_x\right) \le \frac{3}{2}\|\eta\|_x$. Hence, bounding each term and noting that $t\le\delta$, we have
$$\left\|\bar\psi'(t)\right\|_x \le \frac{5}{2\delta}\|\eta\|_x + \delta R_1\left(1 + \frac{5}{2} + \frac{3}{2}\right)\|\eta\|_x \le \frac{10}{\delta}\|\eta\|_x.$$
When we vary x, the Hamiltonian curve γ from x to y varies and we need to bound ℓ(γ) over
the variation.
Lemma 24. Given a Hamiltonian curve $\gamma(t) = \mathrm{Ham}_x(t\cdot v_x)$ with step size $\delta$ satisfying $\delta^2 \le \frac{1}{R_1}$, let $c(s)$ be any geodesic starting at $\gamma(0)$. Let $x = c(0) = \gamma(0)$ and $y = \gamma(\delta)$. Suppose that the auxiliary function $\ell$ satisfies $\left\|\frac{dc}{ds}\right\|_{c(0)} \le \frac{\ell_0}{7\ell_1}$ and $\ell(\gamma) \le \frac{1}{2}\ell_0$. Then, there is a unique vector field $v$ on $c$ such that
$$y = \mathrm{Ham}_{c(s),\delta}(v(s)).$$
Moreover, this vector field is uniquely determined by the geodesic $c(s)$ and any $v(s)$ on this vector field. Also, we have that $\ell(\mathrm{Ham}_{c(s)}(t\cdot v(s))) \le \ell_0$ for all $s\le 1$.
Proof. Let $s_{\max}$ be the supremum of $s$ such that $v(s)$ can be defined continuously such that $y = \mathrm{Ham}_{c(s),\delta}(v(s))$ and $\ell(\gamma_s) \le \ell_0$, where $\gamma_s(t) = \mathrm{Ham}_{c(s)}(t\cdot v(s))$. Lemma 23 shows that there is a neighborhood $N$ of $x$ and a vector field $u$ on $N$ such that for any $z \in N$, we have that
$$y = \mathrm{Ham}_{z,\delta}(u(z)).$$
Also, this lemma shows that $u$ is smooth and hence the parameter $\ell_1$ shows that $\ell(\gamma_s)$ is Lipschitz in $s$. Therefore, $\ell(\gamma_s) \le \ell_0$ in a small neighborhood of $0$. Hence $s_{\max} > 0$.
Now, we show $s_{\max} > 1$ by contradiction. By the definition of $s_{\max}$, we have that $\ell(\gamma_s) \le \ell_0$ for any $0 \le s < s_{\max}$. Hence, we can apply Lemma 23 to show that $\|D_sv(s)\|_{\gamma(s)} \le \frac{5}{2\delta}\left\|\frac{dc}{ds}\right\|_{\gamma(s)} = \frac{5}{2\delta}L$ where $L$ is the length of $c$ up to $s = 1$ (since the speed is constant on any geodesic and the curve is defined over $[0,1]$). Therefore, the function $v$ is Lipschitz and hence $v(s_{\max})$ is well-defined and $\ell(\gamma_{s_{\max}}) \le \ell_0$ by continuity. Hence, we can apply Lemma 23 at $s_{\max}$ and extend the domain of $v$ beyond $s_{\max}$.
To bound $\ell(\gamma_s)$ beyond $s_{\max}$, we note that $\|D_s\gamma_s'\|_{\gamma(s)} = \|D_sv(s)\|_{\gamma(s)} \le \frac{5}{2\delta}L$ and $\left\|\frac{d}{ds}\gamma_s(0)\right\|_{\gamma(0)} = \left\|\frac{d}{ds}c\right\|_{\gamma(s)} = L$. Hence, $\left|\frac{d}{ds}\ell(\gamma_s)\right| \le (L + \frac{5}{2}L)\ell_1$ by the definition of $\ell_1$. Therefore, if $L \le \frac{\ell_0}{7\ell_1}$, we have that $\ell(\gamma_s) \le \ell(\gamma) + \frac{1}{2}\ell_0 \le \ell_0$ for all $s \le 1.01$ wherever $v(s)$ is defined. Therefore, this contradicts the assumption that $s_{\max}$ is the supremum. Hence, $s_{\max} > 1$.
The uniqueness follows from Lemma 23.
3.2.4 Smoothness of one-step distributions
Lemma 25. For $\delta^2 \le \frac{1}{100\sqrt{n}R_1}$ and $\delta^3 \le \frac{\ell_0}{100\sqrt{n}R_1\ell_1}$, the one-step Hamiltonian walk distributions $p_x, p_z$ from $x, z$ satisfy
$$d_{TV}(p_x, p_z) = O\!\left(\frac{1}{\delta}d(x,z)\right) + \frac{1}{25}.$$
Proof. We first consider the case $d(x,z) < \frac{\ell_0}{7\ell_1}$. Let $c(s)$ be a unit speed geodesic connecting $x$ and $z$ of length $L < \frac{\ell_0}{7\ell_1}$.
Let $\tilde\ell \stackrel{\text{def}}{=} \min\left\{1, \frac{\ell_0}{\ell_1\delta}\right\}$. By the definition of $\ell_0$, with probability at least $1 - \frac{\tilde\ell}{100}$ over paths $\gamma$ starting at $x$, we have that $\ell(\gamma) \le \frac{1}{2}\ell_0$. Let $V_x$ be the set of $v_x$ such that $\ell(\mathrm{Ham}_x(t\cdot v_x)) \le \frac{1}{2}\ell_0$. Since the distance from $x$ to $z$ is less than $\frac{\ell_0}{7\ell_1}$ and $\ell(\gamma) \le \frac{1}{2}\ell_0$, for those $v_x$, Lemma 24 shows there is a family of Hamiltonian curves $\gamma_s$ that connect $c(s)$ to $y$, and $\ell(\gamma_s) \le \ell_0$ for each of them.
For any $v_x \in V_x$, we have that $\ell(\gamma_s) \le \ell_0$. When $\ell(\gamma_s) \le \ell_0$, by the definition of $R_1$, we indeed have that $\|\Phi(t)\| \le R_1$ and hence Lemma 22 shows that
$$\left|\log\det\left(\frac{1}{\delta}D\mathrm{Ham}_{x,\delta}(v_x)\right)\right| \le \left|\int_0^\delta\frac{t(\delta-t)}{\delta}\mathrm{Tr}\Phi(t)\,dt\right| + \frac{(\delta^2R_1)^2}{10} \le \frac{\delta^2}{6}\sqrt{n}R_1 + \frac{(\delta^2R_1)^2}{10} \le \frac{\tilde\ell}{600}$$
where we used our assumption on $\delta$. We use $p(v_x)$ to denote the probability density of choosing $v_x$ and
$$\tilde p(v_x) \stackrel{\text{def}}{=} \sqrt{\frac{\det(g(\mathrm{Ham}_x(\delta\cdot v_x)))}{(2\pi\delta^2)^n}}\exp\left(-\frac{1}{2}\|v_x\|_x^2\right).$$
Hence, we have that
$$C^{-1}\cdot p(v_x) \le \tilde p(v_x) \le C\cdot p(v_x) \qquad (3.12)$$
where $C = 1 + \frac{\tilde\ell}{600}$. As we noted, for every $v_x$, there is a corresponding $v_z$ such that $\mathrm{Ham}_z(\delta\cdot v_z) = y$ and $\ell(\mathrm{Ham}_z(t\cdot v_z)) \le \ell_0$. Therefore, we have that
$$(1 - C^2)p(v_x) + C(\tilde p(v_x) - \tilde p(v_z)) \le p(v_x) - p(v_z) \le (1 - C^{-2})p(v_x) + C^{-1}(\tilde p(v_x) - \tilde p(v_z)).$$
Since $\int_{V_x}p(v_x)\,dv_x \ge 1 - \frac{1}{100}\min\left\{1, \frac{\ell_0}{\ell_1\delta}\right\}$, we have that
$$d_{TV}(p_x, p_z) \le \frac{\tilde\ell}{100} + \int_{V_x}|p(v_x) - p(v_z)|\,dv_x \le \frac{\tilde\ell}{100}\left(1 + \int_{V_x}p(v_x)\,dv_x\right) + 2\int_{V_x}|\tilde p(v_x) - \tilde p(v_z)|\,dv_x \le \frac{\tilde\ell}{50} + 2\int_{V_x}\int_s\left|\frac{d}{ds}\tilde p(v_{c(s)})\right|\,ds\,dv_x. \qquad (3.13)$$
Note that
$$\frac{d}{ds}\tilde p(v_{c(s)}) = -\frac{1}{2}\left(\frac{d}{ds}\|v(s)\|^2_{c(s)}\right)\tilde p(v_{c(s)}).$$
Using (3.12), we have that $\tilde p(v_{c(s)}) \le 2\cdot p(v_{c(s)})$ and hence
$$\int_{V_x}\left|\frac{d}{ds}\tilde p(v_{c(s)})\right|\,dv_x \le \int_{V_x}\left|\frac{d}{ds}\|v(s)\|^2_{c(s)}\right|p(v_{c(s)})\,dv_x \le \mathbb{E}_{\ell(\gamma_s)\le\ell_0}\left|\frac{d}{ds}\|v(s)\|^2_{c(s)}\right|. \qquad (3.14)$$
Using that $\|\partial_sc(s)\| = 1$, Lemma 23 shows that
$$\left\|\frac{1}{\delta}\partial_sc(s) + D_sv(s)\right\|_x \le \frac{3}{2}R_1\delta.$$
Therefore, we have that
$$\frac{d}{ds}\|v(s)\|^2_{c(s)} = 2\langle v(s), D_sv(s)\rangle_{c(s)} \le \frac{2}{\delta}\langle v(s), \partial_sc(s)\rangle_{c(s)} + 3R_1\delta\|v(s)\|_{c(s)}.$$
Since $v(s)$ is a random Gaussian vector from the local metric, we have $\langle v(s), \partial_sc(s)\rangle_{c(s)} = O(1)$ and $\|v(s)\|_{c(s)} = O(\sqrt{n})$ with high probability. Putting it into (3.14), we have that
$$\mathbb{E}_{\ell(\gamma_s)\le\ell_0}\left|\frac{d}{ds}\|v(s)\|^2_{c(s)}\right| = O\!\left(\frac{1}{\delta} + \delta R_1\sqrt{n}\right), \qquad \text{hence} \qquad \int_{V_x}\left|\frac{d}{ds}\tilde p(v_{c(s)})\right|\,dv_x = O\!\left(\frac{1}{\delta} + \delta R_1\sqrt{n}\right).$$
Putting this into (3.13), we get
$$d_{TV}(p_x, p_z) = O\!\left(\left(\frac{1}{\delta} + \delta R_1\sqrt{n}\right)L\right) + \frac{\tilde\ell}{50}$$
for any $L < \frac{\ell_0}{7\ell_1}$. By taking a minimal length geodesic, and summing over segments of length $\frac{\ell_0}{8\ell_1}$, for any $x$ and $z$ we have
$$d_{TV}(p_x, p_z) = O\!\left(\left(\frac{1}{\delta} + \delta R_1\sqrt{n}\right)d(x,z)\right) + \frac{1}{25} = O\!\left(\frac{1}{\delta}d(x,z)\right) + \frac{1}{25}.$$
3.3 Convergence bound
Combining Lemma 25 and Lemma 14, we have the following result.
Theorem 26. Given a manifold $M$. Let $\ell_0, \ell_1, R_1$ be the parameters of the Hamiltonian Monte Carlo defined in Definitions 15 and 17. Let $q_t$ be the distribution of the current point after $t$ steps of Hamiltonian Monte Carlo with step size $\delta$ satisfying
$$\delta^2 \le \frac{1}{100\sqrt{n}R_1} \qquad \text{and} \qquad \delta^3 \le \frac{\ell_0}{100\sqrt{n}R_1\ell_1},$$
starting from initial distribution $q_0$. Let $q$ be the distribution proportional to $e^{-f}$. For any $\varepsilon > 0$, we have that
$$d_{TV}(q_t, q) \le \varepsilon + \frac{1}{\varepsilon}\sqrt{\mathbb{E}_{x\sim q_0}\left[\frac{dq_0(x)}{dq(x)}\right]}\left(1 - \frac{(\delta\psi)^2}{2}\right)^t$$
where $\psi$ is the conductance of the manifold defined in Definition 12.
For $\ell(\gamma) = \|\gamma'(0)\|_{\gamma(0)}$, we can bound $\ell_0$ and $\ell_1$ as follows:
Lemma 27. For the auxiliary function $\ell(\gamma) = \|\gamma'(0)\|_{\gamma(0)}$, we have that $\ell_0 = 10\sqrt{n}$ and $\ell_1 = O(\frac{1}{\delta})$. Furthermore, we have that
$$\|\gamma'(t)\|_{\gamma(t)} \le \|\gamma'(0)\|_{\gamma(0)} + R_0t$$
where $R_0 = \sup_{x\in M}\|\mu(x)\|_x$.
Proof. For $\ell_0$, we note that $\gamma'(0) \sim N(0, g(x)^{-1})$ and hence $\|\gamma'(0)\|_{\gamma(0)} \le 5\sqrt{n}$ with probability $1 - e^{-\Omega(n)}$.
For $\ell_1$, we note that
$$\left|\frac{d}{ds}\ell(\gamma_s)\right| = \left|\frac{d}{ds}\|\gamma_s'(0)\|_{\gamma_s(0)}\right| = \frac{\left|\frac{1}{2}\frac{d}{ds}\|\gamma_s'(0)\|^2_{\gamma_s(0)}\right|}{\|\gamma_s'(0)\|_{\gamma_s(0)}} = \frac{\left|\langle D_s\gamma_s'(0), \gamma_s'(0)\rangle_{\gamma_s(0)}\right|}{\|\gamma_s'(0)\|_{\gamma_s(0)}} \le \left\|D_s\gamma_s'(0)\right\|_{\gamma_s(0)}.$$
Hence, we have that $\ell_1 = \frac{1}{\delta}$.
Next, we note that
$$\frac{d}{dt}\|\gamma'(t)\|^2_{\gamma(t)} = 2\langle D_t\gamma'(t), \gamma'(t)\rangle_{\gamma(t)} \le 2\|\mu(\gamma(t))\|_{\gamma(t)}\|\gamma'(t)\|_{\gamma(t)}.$$
Therefore, we get the last result.
Combining Lemma 27 and Theorem 26, we have the following result. This result might be
more convenient to establish an upper bound on the rate of convergence, as it depends on only two
worst-case smoothness parameters. In the next section, we will see a more refined bound that uses
the randomness of Hamiltonian curves via additional parameters.
Corollary 28. Given a manifold $M$. Let $q_t$ be the distribution of the current point after $t$ steps of Hamiltonian Monte Carlo with step size $\delta$ and let $q$ be the distribution proportional to $e^{-f}$. Let $R_0$ and $R_1$ be parameters such that
1. $\|\mu(x)\|_x \le R_0$ for any $x \in M$ where $\mu$ is defined in Lemma 7.
2. $\mathbb{E}_{\alpha,\beta\sim N(0,g(x)^{-1})}\langle D_\alpha\mu(x), \beta\rangle^2_x \le R_1^2$ for any $x \in M$.
3. $\mathbb{E}_{\alpha,\beta\sim N(0,g(x)^{-1})}\langle R(\alpha, v)v, \beta\rangle^2_x \le R_1^2$ for any $x \in M$ and any $\|v\|_x \le \sqrt{n}$, where $R$ is the Riemann curvature tensor of $M$.
Suppose that $\delta \le \frac{\sqrt{n}}{R_0}$ and $\delta^2 \le \frac{1}{100\sqrt{n}R_1}$. Then for any $\varepsilon > 0$, we have that
$$d_{TV}(q_t, q) \le \varepsilon + \frac{1}{\varepsilon}\sqrt{\mathbb{E}_{x\sim q_0}\left[\frac{dq_0(x)}{dq(x)}\right]}\left(1 - \frac{(\delta\psi)^2}{2}\right)^t$$
where $\psi$ is the conductance of the manifold defined in Definition 12. In short, the mixing time of Hamiltonian Monte Carlo is
$$\tilde{O}\!\left(\left(\sqrt{n}R_1 + \frac{R_0^2}{n}\right)\psi^{-2}\right).$$
Proof. The statement is basically restating the definition of $R_1$ and $R_0$ used in Lemma 27 and Theorem 26. The only difference is that if $\delta \le \frac{\sqrt{n}}{R_0}$, then we know that
$$\|\gamma'(t)\|_{\gamma(t)} \le \|\gamma'(0)\|_{\gamma(0)} + R_0t = O(\sqrt{n})$$
for all $0 \le t \le \delta$. Therefore, we can relax the constraint $\ell(\gamma) \le \ell_0$ in the definition of $R_1$ to simply $\|v\|_x \le \sqrt{n}$. This allows us to use $R_1$ without mentioning the auxiliary function $\ell$.
4 Improved analysis of the convergence
Corollary 28 gives a polynomial mixing time for the log barrier function. There are two bottlenecks to improving the bound. First, the auxiliary function $\ell$ it used does not capture the fact that each curve $\gamma$ in Hamiltonian Monte Carlo follows a random initial direction. Second, Lemma 23 also does not take full advantage of the random initial direction. In this section, we focus on improving Lemma 23.
Our main theorem for convergence can be stated as follows in terms of $\psi$ and additional parameters $\ell_0, \ell_1, R_1, R_2, R_3$ (see Definitions 12, 15, 16, 17, 33 and 31). It uses the following key lemma.
Theorem 29. For $\delta^2 \le \frac{1}{R_1}$ and $\delta^5 \le \frac{\ell_0}{R_1^2\ell_1}$, the one-step Hamiltonian walk distributions $p_x, p_z$ from $x, z$ satisfy
$$d_{TV}(p_x, p_z) = O\!\left(\left(\delta^2R_2 + \frac{1}{\delta} + \delta R_3\right)d(x,z)\right) + \frac{1}{25}.$$
Remark. The constant term at the end is an artifact that comes from bounding the probability of some bad events of the Hamiltonian walk, and can be made arbitrarily small.
We now prove this theorem. In a later section, we specialize to sampling distributions over polytopes using the logarithmic barrier, by defining a suitable auxiliary function and bounding all the parameters. The key ingredient is Theorem 29 about the overlap of one-step distributions, which we prove in the next section.
Here is the consequence of Theorem 29 and Lemma 14.
Theorem 30. Given a manifold $M$. Let $\ell_0, \ell_1, R_1, R_2, R_3$ be the parameters of the Hamiltonian Monte Carlo defined in Definitions 15, 17, 33 and 31. Let $q_t$ be the distribution of the current point after $t$ steps of Hamiltonian Monte Carlo starting from initial distribution $q_0$. Let $q$ be the distribution proportional to $e^{-f}$. Suppose that the step size $\delta$ satisfies
$$\delta^2 \le \frac{1}{R_1}, \qquad \delta^5 \le \frac{\ell_0}{R_1^2\ell_1} \qquad \text{and} \qquad \delta^3R_2 + \delta^2R_3 \le 1.$$
For any $\varepsilon > 0$, we have
$$d_{TV}(q_t, q) \le \varepsilon + \frac{1}{\varepsilon}\sqrt{\mathbb{E}_{x\sim q_0}\left[\frac{dq_0(x)}{dq(x)}\right]}\left(1 - \frac{(\delta\psi)^2}{2}\right)^t$$
where $\psi$ is the conductance of the manifold.
4.1 Improved one-to-one correspondence for Hamiltonian curve
In the previous section, we only used $R_1$ to analyze how much a Hamiltonian curve changes as one end point varies. Here we derive a more refined analysis of Lemma 23 using an additional parameter $R_3$.
Definition 31. For a manifold $M$ and auxiliary function $\ell$, $R_3$ is a constant such that for any Hamiltonian curve $\gamma(t)$ of step size $\delta$ with $\ell(\gamma_0) \le \ell_0$, if $\zeta(t)$ is the parallel transport of the vector $\gamma'(0)$ along $\gamma(t)$, then we have
$$\sup_{0\le t\le\delta}\|\Phi(t)\zeta(t)\|_{\gamma(t)} \le R_3.$$
Lemma 32. Under the same assumptions as Lemma 23, we have that
$$\frac{\delta}{2}\left|\nabla_\eta\|v(x)\|^2_x\right| \le |\langle v_x, \eta\rangle_x| + 3\delta^2R_3\|\eta\|_x.$$
Proof. Let $\chi = \nabla_\eta v(x)$. Define $\bar\psi(t)$ as in the proof of Lemma 23. Using Lemma 21 we get that
$$0 = \bar\psi(\delta) = \eta + \delta\chi + \int_0^\delta(\delta-s)\Gamma_s\Phi(s)\Gamma_s^{-1}(\eta + s\chi + E(s))\,ds,$$
i.e.,
$$-\delta\chi = \eta + \int_0^\delta(\delta-s)\Gamma_s\Phi(s)\Gamma_s^{-1}(\eta + s\chi + E(s))\,ds$$
with
$$\|E(s)\|_x \le R_1\left(\delta^2\|\eta\|_x + \frac{\delta^3}{5}\|\chi\|_x\right) \le 2R_1\delta^2\|\eta\|_x$$
for all $0 \le s \le \delta$, where we used $\|\nabla_\eta v(x)\|_x \le \frac{5}{2\delta}\|\eta\|_x$ at the end (by Lemma 23).
Therefore,
$$\delta|\langle v(x), \chi\rangle_x| \le |\langle v(x), \eta\rangle_x| + \frac{\delta^2}{2}\sup_{0\le s\le\delta}\left|\langle v(x), \Gamma_s\Phi(s)\Gamma_s^{-1}(\eta + s\chi + E(s))\rangle_x\right|.$$
Noting that
$$\|\eta + s\chi + E(s)\|_x \le \|\eta\|_x + \delta\|\chi\|_x + \|E(s)\|_x \le \left(1 + \frac{5}{2} + 2R_1\delta^2\right)\|\eta\|_x \le 6\|\eta\|_x,$$
we have
$$\delta|\langle v(x), \chi\rangle_x| \le |\langle v(x), \eta\rangle_x| + 3\delta^2\sup_{0\le s\le\delta}\left\|\Phi(s)\Gamma_s^{-1}\gamma'(0)\right\|_{\gamma(s)}\|\eta\|_x \le |\langle v(x), \eta\rangle_x| + 3\delta^2R_3\|\eta\|_x,$$
since $\Gamma_s^{-1}\gamma'(0)$ is the parallel transport of $\gamma'(0)$ along $\gamma$ and $v(x) = v_x = \gamma'(0)$ (Definition 31). Finally, we note that
$$\frac{\delta}{2}\nabla_\eta\|v(x)\|^2_x = \delta\langle v(x), \nabla_\eta v(x)\rangle_x.$$
4.2 Improved smoothness of one-step distributions
The proof of Theorem 29 is pretty similar to Lemma 25. First, we show that $p_x(y)$ is in fact close to
$$\tilde p_x(y) = \sum_{v_x : \mathrm{Ham}_{x,\delta}(v_x)=y}\sqrt{\frac{\det(g(y))}{(2\pi\delta^2)^n}}\exp\left(-\int_0^\delta\frac{t(\delta-t)}{\delta}\mathrm{Tr}\Phi(t)\,dt - \frac{1}{2}\|v_x\|^2_x\right). \qquad (4.1)$$
Note that this is a more refined estimate than (3.1). We use Lemma 32 to bound the change of $\|v_x\|^2_x$. For the change of $\mathrm{Tr}\Phi(t)$, we defer the calculation until the end of this section.
Theorem 29. For $\delta^2 \le \frac{1}{R_1}$ and $\delta^5 \le \frac{\ell_0}{R_1^2\ell_1}$, the one-step Hamiltonian walk distributions $p_x, p_z$ from $x, z$ satisfy
$$d_{TV}(p_x, p_z) = O\!\left(\left(\delta^2R_2 + \frac{1}{\delta} + \delta R_3\right)d(x,z)\right) + \frac{1}{25}.$$
Proof. We first consider the case $d(x,z) < \frac{\ell_0}{7\ell_1}$. By a similar argument as Lemma 25, using that $\delta^2 \le \frac{1}{R_1}$ and $\delta^5 \le \frac{\ell_0}{R_1^2\ell_1}$, we have that
$$d_{TV}(p_x, p_z) \le \frac{\tilde\ell}{50} + 2\int_{V_x}\int_s\left|\frac{d}{ds}\tilde p(v_{c(s)})\right|\,ds\,dv_x \qquad (4.2)$$
where $\tilde\ell \stackrel{\text{def}}{=} \min\left\{1, \frac{\ell_0}{\ell_1\delta}\right\}$. By direct calculation, we have
$$\frac{d}{ds}\tilde p(v_{c(s)}) = \left(-\int_0^\delta\frac{t(\delta-t)}{\delta}\frac{d}{ds}\mathrm{Tr}\Phi(\gamma_s'(t))\,dt - \frac{1}{2}\frac{d}{ds}\|v(s)\|^2_{c(s)}\right)\tilde p(v_{c(s)}).$$
By a similar argument as (3.12), we have that $\tilde p(v_{c(s)}) \le 2\cdot p(v_{c(s)})$ and hence
$$\left|\frac{d}{ds}\tilde p(v_{c(s)})\right| \le 2\left(\left|\int_0^\delta\frac{t(\delta-t)}{\delta}\frac{d}{ds}\mathrm{Tr}\Phi(\gamma_s'(t))\,dt\right| + \frac{1}{2}\left|\frac{d}{ds}\|v(s)\|^2_{c(s)}\right|\right)p(v_{c(s)}).$$
Since $\ell(\gamma_s) \le \ell_0$, we can use Lemma 34 to get
$$\left|\int_0^\delta\frac{t(\delta-t)}{\delta}\frac{d}{ds}\mathrm{Tr}\Phi(\gamma_s'(t))\,dt\right| \le O(\delta^2R_2).$$
Hence,
$$\int_{V_x}\left|\frac{d}{ds}\tilde p(v_{c(s)})\right|\,dv_x \le O(\delta^2R_2)\int_{V_x}p(v_{c(s)})\,dv_x + \int_{V_x}\left|\frac{d}{ds}\|v(s)\|^2_{c(s)}\right|p(v_{c(s)})\,dv_x. \qquad (4.3)$$
For the first term, we note that $\int_{V_x}p(v_{c(s)})\,dv_x \le 1$.
For the second term, we have that
$$\int_{V_x}\left|\frac{d}{ds}\|v(s)\|^2_{c(s)}\right|p(v_{c(s)})\,dv_x \le \mathbb{E}_{\ell(\gamma_s)\le\ell_0}\left|\frac{d}{ds}\|v(s)\|^2_{c(s)}\right|. \qquad (4.4)$$
By Lemma 32, we have that
$$\frac{\delta}{2}\left|\frac{d}{ds}\|v(s)\|^2_{c(s)}\right| \le |\langle v(s), \partial_sc\rangle_x| + 3\delta^2R_3\|\partial_sc\|_x.$$
Since $\|\partial_sc\|_x = 1$ and $|\langle v(s), \partial_sc\rangle_x| = O(1)$ with high probability (since $v(s)$ is a Gaussian vector from the local metric), we have
$$\frac{\delta}{2}\left|\frac{d}{ds}\|v(s)\|^2_{c(s)}\right| \le O(1) + 3\delta^2R_3.$$
Putting it into (4.4) and (4.3), we have that
$$\int_{V_x}\left|\frac{d}{ds}\tilde p(v_{c(s)})\right|\,dv_x = O\!\left(\delta^2R_2 + \frac{1}{\delta} + \delta R_3\right).$$
Putting this into (4.2), we get
$$d_{TV}(p_x, p_z) = O\!\left(\left(\delta^2R_2 + \frac{1}{\delta} + \delta R_3\right)L\right) + \frac{\tilde\ell}{50}$$
for any $L < \frac{\ell_0}{7\ell_1}$. By taking a minimal length geodesic, and summing over segments of length $\frac{\ell_0}{8\ell_1}$, for any $x$ and $z$ we have
$$d_{TV}(p_x, p_z) = O\!\left(\left(\delta^2R_2 + \frac{1}{\delta} + \delta R_3\right)d(x,z)\right) + \frac{1}{25}.$$
Definition 33. Given a Hamiltonian curve $\gamma(t)$ with $\ell(\gamma_0) \le \ell_0$. Let $R_2$ be a constant depending on the manifold $M$ and the step size $\delta$ such that for any $0 \le t \le \delta$, any curve $c(s)$ starting from $\gamma(t)$ and any vector field $v(s)$ on $c(s)$ with $v(0) = \gamma'(t)$, we have that
$$\left|\frac{d}{ds}\mathrm{Tr}\Phi(v(s))\Big|_{s=0}\right| \le \left(\left\|\frac{dc}{ds}\Big|_{s=0}\right\|_{\gamma(t)} + \delta\left\|D_sv\big|_{s=0}\right\|_{\gamma(t)}\right)R_2.$$
Lemma 34. For $\delta^2 \le \frac{1}{R_1}$ and $\ell(\gamma_s) \le \ell_0$, we have
$$\left|\int_0^\delta\frac{t(\delta-t)}{\delta}\frac{d}{ds}\mathrm{Tr}\Phi(\gamma_s'(t))\,dt\right| \le O(\delta^2R_2)$$
where $\gamma_s$ is the family of Hamiltonian curves that connect $c(s)$ to $y$ defined in Lemma 24.
Proof. By Definition 33, we have that
$$\left|\frac{d}{ds}\mathrm{Tr}\Phi(\gamma_s'(t))\right| \le \left(\left\|\frac{\partial}{\partial s}\gamma_s(t)\Big|_{s=0}\right\|_{\gamma(t)} + \delta\left\|D_s\gamma_s'(t)\big|_{s=0}\right\|_{\gamma(t)}\right)\cdot R_2. \qquad (4.5)$$
By definition of $\gamma_s$, we have that $\frac{d}{ds}\gamma_s(0) = \frac{d}{ds}\mathrm{Ham}_{c(s)}(0) = \frac{d}{ds}c(s)$ is a unit vector and hence $\left\|\frac{d}{ds}\gamma_s(0)\right\|_{\gamma(0)} = 1$. Since $\mathrm{Ham}_{c(s),\delta}(v(s)) = y$, Lemma 23 shows that
$$\left\|\frac{\partial}{\partial s}\gamma_s(t)\Big|_{s=0}\right\|_{\gamma(t)} \le 5 \qquad \text{and} \qquad \left\|D_s\gamma_s'(t)\big|_{s=0}\right\|_{\gamma(t)} \le \frac{10}{\delta}.$$
Therefore, we have that
$$\left|\frac{d}{ds}\mathrm{Tr}\Phi(\gamma_s'(t))\right| \le 15R_2.$$
Integrating against the kernel $\frac{t(\delta-t)}{\delta}$, whose integral over $[0,\delta]$ is $\frac{\delta^2}{6}$, gives the claimed bound $O(\delta^2R_2)$.
5 Gibbs sampling on manifolds
5.1 Isoperimetry for Hessian manifolds
Here we derive a general isoperimetry bound, assuming that the manifold is defined by the Hessian of a convex function, and that the directional fourth derivative is non-negative, a property satisfied, e.g., by the standard logarithmic barrier.
Lemma 35. Let $\phi : [a,b] \to \mathbb{R}$ be a convex function such that $\phi''$ is also convex. For any $x \in [a,b]$, we have that
$$e^{-\phi(x)} \ge 0.372\sqrt{\phi''(x)}\min\left(\int_a^x e^{-\phi(t)}\,dt,\ \int_x^b e^{-\phi(t)}\,dt\right).$$
Let $f(x)$ be the logconcave density proportional to $e^{-\phi(x)}$. Then, we have that
$$\mathrm{Var}_{X\sim f(x)}X \le \frac{O(1)}{\min_{x\in[a,b]}\phi''(x)}.$$
Proof. Case 1) $|\phi'(x)| \ge a\sqrt{\phi''(x)}$ (we will pick $a$ at the end). Without loss of generality, we have that $\phi'(x) \ge a\sqrt{\phi''(x)}$. Since $\phi$ is convex, we have that
$$\phi(t) \ge \phi(x) + \phi'(x)(t-x).$$
Therefore, we have that
$$\int_x^b e^{-\phi(t)}\,dt \le \int_x^\infty e^{-\phi(x)-\phi'(x)(t-x)}\,dt = \frac{e^{-\phi(x)}}{\phi'(x)} \le \frac{e^{-\phi(x)}}{a\sqrt{\phi''(x)}}.$$
Case 2) $|\phi'(x)| \le a\sqrt{\phi''(x)}$. Without loss of generality, we have that $\phi'''(x) \ge 0$. Since $\phi''$ is convex, we have that
$$\phi''(t) \ge \phi''(x) + \phi'''(x)(t-x) \ge \phi''(x)$$
for $t \ge x$. Hence, we have that
$$\phi(t) = \phi(x) + \phi'(x)(t-x) + \int_x^t(t-s)\phi''(s)\,ds \ge \phi(x) + \phi'(x)(t-x) + \frac{1}{2}\phi''(x)(t-x)^2$$
for all $t \ge x$. Therefore, we have that
$$\int_x^b e^{-\phi(t)}\,dt \le \int_{-\infty}^\infty e^{-\phi(x)-\phi'(x)(t-x)-\frac{1}{2}\phi''(x)(t-x)^2}\,dt = e^{-\phi(x)}\int_{-\infty}^\infty e^{-\phi'(x)s-\frac{1}{2}\phi''(x)s^2}\,ds = \sqrt{\frac{2\pi}{\phi''(x)}}\,e^{-\phi(x)+\frac{1}{2}\frac{\phi'(x)^2}{\phi''(x)}} \le \sqrt{\frac{2\pi}{\phi''(x)}}\,e^{-\phi(x)+\frac{a^2}{2}}.$$
Hence, we have
$$\int_x^b e^{-\phi(t)}\,dt \le \sqrt{2\pi}\,e^{\frac{a^2}{2}}\frac{e^{-\phi(x)}}{\sqrt{\phi''(x)}}.$$
Combining both cases, the isoperimetric ratio is $\min\left(a, \frac{e^{-a^2/2}}{\sqrt{2\pi}}\right)$. Setting $a$ to be the solution of $ae^{a^2/2} = \frac{1}{\sqrt{2\pi}}$, this minimum is achieved at $\sqrt{W(1/2\pi)} > 0.372$ where $W$ is the Lambert function, i.e., $W(y)e^{W(y)} = y$. This proves the first result.
The variance of $f$ follows from the fact that $f$ is logconcave.
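The one-dimensional inequality is easy to sanity-check numerically; the sketch below evaluates both sides of Lemma 35 on a grid for the test function $\phi(t) = t^4 + t^2$, an arbitrary choice for which both $\phi$ and $\phi''$ are convex.

```python
import numpy as np
from scipy.integrate import quad

phi  = lambda t: t**4 + t**2           # convex with convex second derivative
phi2 = lambda t: 12 * t**2 + 2
a, b = -3.0, 3.0

worst = np.inf
for x in np.linspace(a, b, 201)[1:-1]:  # interior grid points
    left  = quad(lambda t: np.exp(-phi(t)), a, x)[0]
    right = quad(lambda t: np.exp(-phi(t)), x, b)[0]
    lhs   = np.exp(-phi(x))
    rhs   = 0.372 * np.sqrt(phi2(x)) * min(left, right)
    worst = min(worst, lhs / rhs)
print("min ratio lhs/rhs:", worst)      # should stay >= 1 if the lemma holds
```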
This generalizes to higher dimension with no dependence on the dimension using localization, which we review next. Define an exponential needle $E = (a, b, \gamma)$ as a segment $[a,b] \subseteq \mathbb{R}^n$ and $\gamma \in \mathbb{R}$ corresponding to the weight function $e^{\gamma t}$ applied to the segment $[a,b]$. The integral of an $n$-dimensional function $h : \mathbb{R}^n \to \mathbb{R}$ over this one-dimensional needle is
$$\int_E h = \int_0^{|b-a|}h(a + tu)e^{\gamma t}\,dt \qquad \text{where} \qquad u = \frac{b-a}{|b-a|}.$$
Theorem 36 (Theorem 2.7 in [9]). Let f₁, f₂, f₃, f₄ be four nonnegative continuous functions defined on R^n, and c₁, c₂ > 0. Then, the following are equivalent:
1. For every logconcave function F defined on R^n with compact support,
    ( ∫_{R^n} F(x) f₁(x) dx )^{c₁} ( ∫_{R^n} F(x) f₂(x) dx )^{c₂} ≤ ( ∫_{R^n} F(x) f₃(x) dx )^{c₁} ( ∫_{R^n} F(x) f₄(x) dx )^{c₂}.
2. For every exponential needle E,
    ( ∫_E f₁ )^{c₁} ( ∫_E f₂ )^{c₂} ≤ ( ∫_E f₃ )^{c₁} ( ∫_E f₄ )^{c₂}.
Lemma 37. Let φ : K ⊂ R^n → R be a convex function defined over a convex body K such that D⁴φ(x)[h, h, h, h] ≥ 0 for all x ∈ K and h ∈ R^n. Given any partition S₁, S₂, S₃ of K with d = min_{x∈S₁, y∈S₂} d(x, y), i.e., the minimum distance between S₁ and S₂ in the Riemannian metric induced by φ, for any α > 0 we have that
    ∫_{S₃} e^{−αφ(x)} dx / min( ∫_{S₁} e^{−αφ(x)} dx, ∫_{S₂} e^{−αφ(x)} dx ) = Ω( √α · d ).
Proof. By rescaling φ, we can assume α = 1. We write the desired inequality as follows, for a constant C, with χ_S being the indicator of the set S:
    C d ∫_{R^n} e^{−φ(x)} χ_{S₁}(x) dx · ∫_{R^n} e^{−φ(x)} χ_{S₂}(x) dx ≤ ∫_{R^n} e^{−φ(x)} dx · ∫_{R^n} e^{−φ(x)} χ_{S₃}(x) dx.
Using the localization lemma for exponential needles (Theorem 36), with f_i(x) being C d e^{−φ(x)} χ_{S₁}(x), e^{−φ(x)} χ_{S₂}(x), e^{−φ(x)} and e^{−φ(x)} χ_{S₃}(x) respectively, it suffices to prove the following one-dimensional inequality for functions φ defined on an interval and shifted by a linear term:
    C d ∫_0^1 e^{−φ((1−t)a+tb)} e^{−ct} χ_{S₁}((1−t)a+tb) dt · ∫_0^1 e^{−φ((1−t)a+tb)} e^{−ct} χ_{S₂}((1−t)a+tb) dt
        ≤ ∫_0^1 e^{−φ((1−t)a+tb)} e^{−ct} dt · ∫_0^1 e^{−φ((1−t)a+tb)} e^{−ct} χ_{S₃}((1−t)a+tb) dt.
Each T_i = {t : (1−t)a + tb ∈ S_i} is a union of intervals. By a standard argument (see [19]), it suffices to consider the case when each S_i is a single interval and add up over all intervals in S₃. Thus it suffices to prove the statement in one dimension for all convex φ with convex φ′′. In one dimension, we have
    d(x, y) = ∫_x^y √(φ′′(t)) dt.
Taking T₃ = [a′, b′] ⊂ [a, b], the inequality we need to prove is that for any convex φ with convex φ′′,
    ( ∫_{a′}^{b′} e^{−φ(t)} dt ) / ( ∫_{a′}^{b′} √(φ′′(t)) dt ) ≥ Ω(1) · ( ∫_a^{a′} e^{−φ(t)} dt · ∫_{b′}^b e^{−φ(t)} dt ) / ∫_a^b e^{−φ(t)} dt,
which is implied by noting that
    ( ∫_{a′}^{b′} e^{−φ(t)} dt ) / ( ∫_{a′}^{b′} √(φ′′(t)) dt ) ≥ min_{x∈[a′,b′]} e^{−φ(x)} / √(φ′′(x))
and applying Lemma 35.

5.2  Sampling with the log barrier

For any polytope M = {Ax > b}, the logarithmic barrier function φ(x) is defined as
    φ(x) = − ∑_{i=1}^m log(a_i^T x − b_i).
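A minimal sketch of evaluating this barrier together with its gradient and Hessian (the matrix A_x^T A_x used throughout the paper); the polytope data below are illustrative placeholders, not from the paper.

```python
import numpy as np

def log_barrier(A, b, x):
    s = A @ x - b                     # slacks a_i^T x - b_i
    assert np.all(s > 0), "x must be strictly inside the polytope"
    phi = -np.sum(np.log(s))
    grad = -A.T @ (1.0 / s)           # grad phi = -sum_i a_i / s_i
    Ax = A / s[:, None]               # the matrix A_x = S_x^{-1} A (Definition 47)
    hess = Ax.T @ Ax                  # Hessian of phi = A_x^T A_x
    return phi, grad, hess

# toy example: the box [0, 1]^2 written as {Ax > b}
A = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
b = np.array([0., 0., -1., -1.])
print(log_barrier(A, b, np.array([0.3, 0.6])))
```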
Theorem 1. Let φ be the logarithmic barrier for a polytope M with m constraints and n variables. Hamiltonian Monte Carlo applied to the function f = exp(−αφ(x)) and the metric given by ∇²φ with appropriate step size mixes in
    Õ( n^{2/3}/(α + m^{−1}) + m^{1/3} n^{1/3}/(α^{1/3} + m^{−1/3}) + m^{1/2} n^{1/6} )
steps, where each step is the solution of a Hamiltonian ODE.
Proof. Lemma 37 shows that the isoperimetric coefficient ψ is Ω(√α). Also, we know that the isoperimetric coefficient ψ is at worst Ω(m^{−1/2}) [16]. Lemma 61 shows that the condition of Theorem 30 is satisfied, thus implying the bound
    Õ( max(√α, m^{−1/2})^{−2} δ^{−2} )
        = Õ( max(√α, m^{−1/2})^{−2} max( n^{2/3}, α^{2/3} m^{1/3} n^{1/3}, α m^{1/2} n^{1/6} ) )
        = Õ( ( n^{2/3} + α^{2/3} m^{1/3} n^{1/3} + α m^{1/2} n^{1/6} ) / (α + m^{−1}) )
        = Õ( n^{2/3}/(α + m^{−1}) + m^{1/3} n^{1/3}/(α^{1/3} + m^{−1/3}) + m^{1/2} n^{1/6} ).
To implement the walk, we solve this ODE using the collocation method as described in [13]. The key lemma used in that paper is that the slack of the geodesic does not change by more than a constant multiplicative factor up to the step size δ = O(n^{−1/4}). Similarly, in Lemma 53 we proved that the Hamiltonian flow does not change by more than a constant multiplicative factor up to the step size δ = O(1/M₁^{1/4}) = O(n^{−1/4}/(1 + √α)). Since the step size we use is Θ(n^{−1/3}/(1 + √α)), we can apply the collocation method as described in [13] and obtain an algorithm to compute the Hamiltonian flow in Õ(m n^{ω−1} log^{O(1)}(1/η)) time with additive error η in the local norm. Due to the exponential convergence, this suffices for the sampling purpose with only a polylogarithmic overhead in the total running time.
6  Polytope volume computation: Gaussian cooling on manifolds

The volume algorithm is essentially the Gaussian cooling algorithm introduced in [3]. Here we apply it to a sequence of Gibbs distributions rather than a sequence of Gaussians. More precisely, for a convex body K and a convex barrier function φ : K → R, we define
    f(σ², x) = exp(−σ^{−2} φ(x))  if x ∈ K,   and   f(σ², x) = 0  otherwise,
and
    F(σ²) = ∫_{R^n} f(σ², x) dx,
where x* is the minimizer of φ (the center of K). Let μ_i be the probability distribution proportional to f(σ_i², x), where σ_i is the temperature of the Gibbs distribution, to be fixed. The algorithm estimates each ratio in the following telescoping product:
    e^{−σ_k^{−2} φ(x*)} vol(K) ≈ F(σ_k) = F(σ₀) ∏_{i=1}^k F(σ²_{i+1}) / F(σ_i²)
for some large enough σ_k.
Let x be a random sample point from μ_i and let Y_x = f(σ²_{i+1}, x)/f(σ_i², x). Then,
    E_{x∼μ_i}(Y_x) = F(σ²_{i+1}) / F(σ_i²).

6.1  Algorithm: cooling schedule

The cooling schedule, i.e., the choice of σ_i² and k_i, is specified in Algorithm 2.

6.2  Correctness of the algorithm
In this subsection, we prove the correctness of the algorithm. We separate the proof into two parts.
In the first part, we estimate how small σ0 we start with should be and how large σk we end with
should be. In the second part, we estimate the variance of the estimator Yx .
6.2.1
Initial and terminal conditions
First, we need the following lemmas about self-concordant functions and logconcave functions:
Lemma 38 ([27, Thm 2.1.1]). Let φ be a self-concordant function and x* be its minimizer. For any x such that φ(x) ≤ φ(x*) + r² with r ≤ 1/2, we have that
    φ(x) = φ(x*) + ( (1 ± Θ(r))/2 ) (x − x*)^T ∇²φ(x*) (x − x*).
Algorithm 2: Volume(M, ε)
  Let σ₀² = Θ( ε² n^{−3} log^{−3}(n/ε) ),
      σ²_{i+1} = σ_i² (1 + 1/√n)                    if ϑ ≤ n σ_i²,
                 σ_i² (1 + min( σ_i/√ϑ, 1/2 ))      otherwise,
  and
      k_i = Θ( (√n/ε²) log(n/ε) )                   if ϑ ≤ n σ_i²,
            Θ( (√ϑ/σ_i + 1) ε^{−2} log(n/ε) )       otherwise.
  Set i = 0. Compute x* = arg min φ(x).
  Assume that φ(x*) = 0 by shifting the barrier function.
  Sample k₀ points {X₁, …, X_{k₀}} from the Gaussian distribution centered at the minimizer x* of φ with covariance σ₀² (∇²φ(x*))^{−1}.
  while σ_i² ≤ Θ(1) (ϑ/ε) log(nϑ/ε) do
      Sample k_i points {X₁, …, X_k} using Hamiltonian Monte Carlo with target density f(σ_i², X) and the previous {X₁, …, X_k} as warm start.
      Compute the ratio
          W_{i+1} = (1/k_i) ∑_{j=1}^{k_i} f(σ²_{i+1}, X_j) / f(σ_i², X_j).
      Increment i.
  end
  Output: (2πσ₀²)^{n/2} det(∇²φ(x*))^{−1/2} W₁ ⋯ W_i as the volume estimate.
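The sketch below spells out the cooling loop of Algorithm 2 in code. It assumes a routine `sample_gibbs(sigma2, k, warm_start)` returning k approximate samples from the density proportional to exp(−φ(x)/σ²) on K (e.g. the HMC walk analyzed above), and a vectorized `phi`; these, together with `theta` and all constants, are illustrative placeholders rather than the paper's implementation.

```python
import numpy as np

def volume_cooling(phi, sample_gibbs, x_star, hess_at_xstar, n, theta, eps):
    sigma2_0 = eps**2 * n**-3 / max(np.log(n / eps), 1.0)**3   # Theta(eps^2 n^-3 log^-3(n/eps))
    sigma2 = sigma2_0
    W = 1.0
    X = np.random.multivariate_normal(
        x_star, sigma2 * np.linalg.inv(hess_at_xstar), size=100)   # warm start near x*
    while sigma2 <= (theta / eps) * np.log(n * theta / eps):
        if theta <= n * sigma2:
            sigma2_next = sigma2 * (1 + 1 / np.sqrt(n))
            k = int(np.ceil(np.sqrt(n) / eps**2 * np.log(n / eps)))
        else:
            sigma2_next = sigma2 * (1 + min(np.sqrt(sigma2 / theta), 0.5))
            k = int(np.ceil((np.sqrt(theta / sigma2) + 1) / eps**2 * np.log(n / eps)))
        X = sample_gibbs(sigma2, k, warm_start=X)
        # W_{i+1} = (1/k) sum_j f(sigma_{i+1}^2, X_j) / f(sigma_i^2, X_j)
        W *= np.mean(np.exp(phi(X) * (1 / sigma2 - 1 / sigma2_next)))
        sigma2 = sigma2_next
    det = np.linalg.det(hess_at_xstar)
    return (2 * np.pi * sigma2_0) ** (n / 2) * det ** -0.5 * W
```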
Lemma 39 ([27, Prop 2.3.2]). For any ϑ-self-concordant barrier function φ on a convex set K and any interior points x and y in K, we have that
    φ(x) ≤ φ(y) + ϑ ln( 1/(1 − π_y(x)) ),
where π_y(x) = inf{ t ≥ 0 : y + t^{−1}(x − y) ∈ K }.
Lemma 40 ([23, Lem 5.16]). For any logconcave distribution f on R^n and any β ≥ 2, we have
    P_{x∼f}( f(x) ≤ e^{−βn} max_y f(y) ) ≤ e^{−O(βn)}.
Lemma 41 (Large and small σ²). Let φ be a ϑ-self-concordant barrier function for K. If σ n^{3/2} log^{3/2}(1/σ) ≤ 1, then we have that
    | ln F(σ) − ln( e^{−σ^{−2} φ(x*)} (2πσ²)^{n/2} det(∇²φ(x*))^{−1/2} ) | ≤ O( σ n^{3/2} log^{3/2}(1/σ) ).
If σ² ≥ ϑ, then
    | ln F(σ) − ln( e^{−σ^{−2} φ(x*)} vol(K) ) | ≤ O( σ^{−2} ϑ ln(σ² n/ϑ) ).
Proof. We begin with the first inequality. Let S be the set of x such that φ(x) ≤ φ(x∗ ) + βσ 2 n for
some β ≥ 2 to be determined. Therefore,
f (σ 2 , x) ≥ e−βn f (σ 2 , x∗ )
for all x ∈ S. Since f (σ 2 , x) is a logconcave function, Lemma 40 shows that Px∼f (S) ≤ e−O(βn) .
Therefore,
Z
Z
Z
f (σ 2 , x) dx ≤ f (σ 2 , x) dx ≤ (1 + e−O(βn) ) f (σ 2 , x) dx.
S
S
In short, we have
−O(βn)
F (σ) = (1 ± e
)
Z
f (σ 2 , x) dx.
S
By the construction of S, Lemma 38 and the fact that φ is self-concordant, if βσ 2 n ≤ 14 , we have
that
√
1 ± Θ(σ βn)
φ(x) = φ(x∗ ) +
(x − x∗ )T ∇2 φ(x∗ )(x − x∗ )
2
for all x ∈ S. Hence,
Z
√
σ −2
−2
∗
∗ T 2
∗
∗
F (σ) = (1 ± e−O(βn) )e−σ φ(x ) e−(1±O(σ βn)) 2 (x−x ) ∇ φ(x )(x−x ) dx.
S
Now we note that
Z
Z
σ −2
σ −2
∗ T 2
∗
∗
∗ T 2
∗
∗
e−Θ( 2 (x−x ) ∇ φ(x )(x−x )) dx = e−O(βn) e− 2 (x−x ) ∇ φ(x )(x−x ) dx
Sc
because
S
σ −2
(x − x∗ )T ∇2 φ(x∗ )(x − x∗ ) = Ω(βn)
2
outside S. Therefore,
F (σ) = (1 ± e−O(βn) )e−σ
−O(βn)
= (1 ± e
−2 φ(x∗ )
Z
3
2
e−(1±O(σ
√
Rn
−σ−2 φ(x∗ )
± O(σ(βn) ))e
3
= (1 ± e−O(βn) ± O(σ(βn) 2 ))e−σ
−2 φ(x∗ )
−2
βn)) σ 2 (x−x∗ )T ∇2 φ(x∗ )(x−x∗ )
Z
e−
dx
σ −2
(x−x∗ )T ∇2 φ(x∗ )(x−x∗ )
2
dx
Rn
n
1
(2πσ 2 ) 2 det(∇2 φ(x∗ ))− 2
1
3
where we used that σ(βn) 2 = O(1) in first sentence and σ(βn) 2 = O(1) in the second sentence.
Setting β = Θ(log σ1 ), we get the first result.
For the second inequality, for any 0 ≤ t < 1 and any x ∈ x∗ + t(K − x∗ ), we have that πx∗ (x) ≤ t
(πx∗ is defined in Lemma 39). Therefore, Lemma 39 shows that
φ(x) ≤ φ(x∗ ) + ϑ ln
1
.
1−t
Note that Px∼µ (x ∈ x∗ + t(K − x∗ )) = tn where µ is the uniform distribution in K. Therefore, for
any 0 < β < 1, we have that
Px∼µ (φ(x) ≤ φ(x∗ ) + ϑ ln
1
}) ≥ (1 − β)n .
β
Hence,
vol(K) · e−σ
−2 φ(x∗ )
≥ F (σ 2 )
n
≥ vol(K) · (1 − β) exp −σ
−2
1
(φ(x ) + ϑ ln ) .
β
∗
Setting β = σ −2 n−1 ϑ, we get the second result.
6.2.2  Variance of Y_x

Our goal is to estimate E_{x∼μ_i}(Y_x) within a target relative error. The algorithm estimates the quantity E_{x∼μ_i}(Y_x) by taking random sample points x₁, …, x_k and computing the empirical estimate for E_{x∼μ_i}(Y_x) from the corresponding Y_{x₁}, …, Y_{x_k}. The variance of Y_{x_i} divided by its expectation squared will give a bound on how many independent samples x_i are needed to estimate E_{x∼μ_i}(Y_x) within the target accuracy. We have
    E_{x∼μ_i}(Y_x²) = ( ∫_K exp( −2φ(x)/σ²_{i+1} + φ(x)/σ_i² ) dx ) / ( ∫_K exp( −φ(x)/σ_i² ) dx )
                    = F( σ_i² σ²_{i+1} / (2σ_i² − σ²_{i+1}) ) / F(σ_i²)
and
    E_{x∼μ_i}(Y_x²) / E_{x∼μ_i}(Y_x)² = F(σ_i²) F( σ_i² σ²_{i+1} / (2σ_i² − σ²_{i+1}) ) / F(σ²_{i+1})².
If we let σ² = σ²_{i+1} and σ_i² = σ²/(1 + r), then we can further simplify this as
    E_{x∼μ_i}(Y_x²) / E_{x∼μ_i}(Y_x)² = F( σ²/(1+r) ) F( σ²/(1−r) ) / F(σ²)².      (6.1)
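A quick one-dimensional numerical check (a sketch, not from the paper) of identity (6.1); the convex φ, interval and parameters below are illustrative assumptions.

```python
import numpy as np

phi = lambda x: x**2 + 0.3 * x**4            # an illustrative convex phi on K = [-3, 3]
x = np.linspace(-3, 3, 400001)
dx = x[1] - x[0]
F = lambda s2: np.sum(np.exp(-phi(x) / s2)) * dx

sigma2, r = 0.7, 0.2
s_i, s_next = sigma2 / (1 + r), sigma2       # sigma_i^2 = sigma^2/(1+r), sigma_{i+1}^2 = sigma^2
w_i = np.exp(-phi(x) / s_i); w_i /= w_i.sum()        # mu_i on the grid
Y = np.exp(-phi(x) / s_next + phi(x) / s_i)          # Y_x = f(sigma_{i+1}^2, x)/f(sigma_i^2, x)
lhs = np.sum(w_i * Y**2) / np.sum(w_i * Y) ** 2
rhs = F(sigma2 / (1 + r)) * F(sigma2 / (1 - r)) / F(sigma2) ** 2
print(lhs, rhs)   # the two agree up to discretization error
```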
Lemma 42. For any 1 > r ≥ 0, we have that
    ln( F(σ²/(1+r)) F(σ²/(1−r)) / F(σ²)² ) = (1/σ⁴) ∫_0^r ∫_{1−t}^{1+t} Var_{x∼μ_s} φ(x) ds dt,
where μ_s is the probability distribution proportional to f(σ²/s, x).
Proof. Fix σ². Let g(t) = ln F(σ²/t). Then, we have that
    ln( F(σ²/(1+r)) F(σ²/(1−r)) / F(σ²)² ) = ∫_0^r (d/dt) ln( F(σ²/(1+t)) F(σ²/(1−t)) / F(σ²)² ) dt
                                           = ∫_0^r ( g′(1+t) − g′(1−t) ) dt
                                           = ∫_0^r ∫_{1−t}^{1+t} g″(s) ds dt.      (6.2)
For g″(s), we have that
    g″(s) = (d²/ds²) ln ∫_K exp( −(s/σ²) φ(x) ) dx
          = −(1/σ²) (d/ds) [ ∫_K φ(x) exp(−(s/σ²)φ(x)) dx / ∫_K exp(−(s/σ²)φ(x)) dx ]
          = (1/σ⁴) [ ∫_K φ²(x) exp(−(s/σ²)φ(x)) dx / ∫_K exp(−(s/σ²)φ(x)) dx
                     − ( ∫_K φ(x) exp(−(s/σ²)φ(x)) dx / ∫_K exp(−(s/σ²)φ(x)) dx )² ]
          = (1/σ⁴) ( E_{x∼μ_s} φ²(x) − (E_{x∼μ_s} φ(x))² ) = (1/σ⁴) Var_{x∼μ_s} φ(x).
Putting it into (6.2), we have the result.
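The key computation in the proof, g″(s) = Var_{x∼μ_s} φ(x)/σ⁴, can be checked numerically with a finite difference; the sketch below uses an illustrative one-dimensional φ and grid.

```python
import numpy as np

phi = lambda x: np.cosh(x)                    # an illustrative convex test function on [-2, 2]
x = np.linspace(-2, 2, 200001)
dx = x[1] - x[0]
sigma2, s = 0.5, 1.3

g = lambda s: np.log(np.sum(np.exp(-s * phi(x) / sigma2)) * dx)   # g(s) = ln F(sigma^2 / s)
h = 1e-3
g2_fd = (g(s + h) - 2 * g(s) + g(s - h)) / h**2                   # finite-difference g''(s)

w = np.exp(-s * phi(x) / sigma2); w /= w.sum()                    # mu_s on the grid
var_phi = np.sum(w * phi(x)**2) - np.sum(w * phi(x))**2
print(g2_fd, var_phi / sigma2**2)             # both approximate Var_{mu_s}(phi) / sigma^4
```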
Now, we bound Var_{x∼μ_s} φ(x). This can be viewed as a manifold version of the thin-shell or variance hypothesis estimate.
Lemma 43 (Thin shell estimates). Let φ be a ϑ-self-concordant barrier function for K. Then, we have that
    Var_{x∼μ_s} φ(x) = O( (σ²/s) ϑ ).
Proof. Let K_t = {x ∈ K : φ(x) ≤ t} and let m be the number such that μ_s(K_m) = 1/2. Let K_{m,r} = {x : d(x, y) ≤ r for some y ∈ K_m}. By repeatedly applying Lemma 37, we have that
    μ_s(K_{m,r}) = 1 − e^{−Ω( (√s/σ) r )}.
By our assumption on φ, for any x and y we have that |φ(x) − φ(y)| ≤ √ϑ · d(x, y). Therefore, for any x ∈ K_{m,r}, we have that φ(x) ≤ m + √ϑ r. Similarly, with probability at least 1 − e^{−Ω((√s/σ) r)} in μ_s, it follows that φ(x) ≥ m − √ϑ r. Hence, with 1 − e^{−Ω((√s/σ) r)} probability in μ_s, we have that |φ(x) − m| ≤ √ϑ r. The bound on the variance follows.
Now we are ready to prove the key lemma.
Lemma 44. Let φ be a ϑ-self-concordant barrier function for K. For any 1/2 > r ≥ 0, we have that
    E_{x∼μ_i}(Y_x²) / E_{x∼μ_i}(Y_x)² = O( r² min( ϑ/σ_i², n ) ).
Proof. Using Lemma 43 and Lemma 42, we have that
    ln( F(σ²/(1+r)) F(σ²/(1−r)) / F(σ²)² ) = O( r² ϑ / σ² ).      (6.3)
This bound is useful when σ² is large. For the case where σ² is small, we recall that for any logconcave function f, the function a ↦ aⁿ ∫ f(x)^a dx is logconcave (Lemma 3.2 in [8]). In particular, this shows that aⁿ F(1/a) is logconcave in a. Therefore, with a = (1+r)/σ², 1/σ², (1−r)/σ², we have
    (1/σ^{4n}) F(σ²)² ≥ ( (1+r)/σ² )ⁿ F( σ²/(1+r) ) · ( (1−r)/σ² )ⁿ F( σ²/(1−r) ).
Rearranging the terms, we have that
    F( σ²/(1+r) ) F( σ²/(1−r) ) / F(σ²)² ≤ ( 1/((1+r)(1−r)) )ⁿ.
Therefore, we have that
    ln( F(σ²/(1+r)) F(σ²/(1−r)) / F(σ²)² ) ≤ n ln( 1/((1+r)(1−r)) ) = O( n r² ).      (6.4)
Combining (6.1), (6.3) and (6.4), we have the result.

6.2.3  Main lemma
Lemma 45. Given any ϑ-self-concordant barrier φ on a convex set K and 0 < ε < 1/2, the algorithm Volume(M, ε) outputs the volume of K to within a 1 ± ε multiplicative factor.
Proof. By our choice of ε, Lemma 41 shows that e^{−σ₀^{−2}φ(x*)} (2πσ₀²)^{n/2} det(∇²φ(x*))^{−1/2} is a 1 ± ε/4 multiplicative approximation of F(σ₀) and that e^{−σ_k^{−2}φ(x*)} vol(K) is a 1 ± ε/4 multiplicative approximation of F(σ_k). Note that we shifted the function φ such that φ(x*) = 0. Therefore,
    vol(K) = (1 ± ε/2) (2πσ₀²)^{n/2} det(∇²φ(x*))^{−1/2} ∏_{i=1}^k F(σ²_{i+1}) / F(σ_i²).
In Lemma 44, we showed that the variance of the estimator Y = f(σ²_{i+1}, X)/f(σ_i², X) is upper bounded by O(1)(EY)². Note that the algorithm takes Õ(√n) iterations to double σ_i if √(ϑ/n) ≤ σ_i and Õ(√ϑ σ_i^{−1}) iterations otherwise. By a simple analysis of variance, to have relative error ε, it suffices to have Õ(k_i) samples in each phase.
6.3  Volume computation with the log barrier

In this section, we prove Theorem 2, restated below for convenience.
Theorem 2. For any polytope P = {x : Ax ≥ b} with m constraints and n variables, and any ε > 0, the Hamiltonian volume algorithm estimates the volume of P to within a 1 ± ε multiplicative factor using Õ(m n^{2/3} ε^{−2}) steps, where each step consists of solving a first-order ODE and takes time Õ(m n^{ω−1} L^{O(1)} log^{O(1)}(1/ε)), and L is the bit complexity³ of the polytope.
Proof. In the first part, when σ² ≤ m/n, the mixing time of HMC is Õ(m · n^{−1/3}). Since the number of sampling phases to double σ² is Õ(√n) and since we sample Õ(√n/ε²) points in each phase, the total number of steps of HMC is
    Õ(m n^{−1/3}) × Õ(√n) × Õ(√n/ε²) = Õ( m n^{2/3} / ε² ).
In the second part, when σ² ≥ m/n, the mixing time of HMC is
    Õ( n^{2/3}/(σ^{−2} + m^{−1}) + m^{1/3} n^{1/3}/(σ^{−2/3} + m^{−1/3}) + m^{1/2} n^{1/6} ).
Since the number of sampling phases to double σ² is Õ(1 + √m/σ) and since we sample Õ((√m/σ + 1) ε^{−2}) in each phase, the total number of steps of HMC is
    Õ( n^{2/3}/(σ^{−2} + m^{−1}) + m^{1/3} n^{1/3}/(σ^{−2/3} + m^{−1/3}) + m^{1/2} n^{1/6} ) × Õ( (√m/σ + 1) ε^{−2} ) × Õ( 1 + √m/σ ) = Õ( m n^{2/3} / ε² ).
Combining both parts, the total number of steps of HMC is
    Õ( m n^{2/3} / ε² ).

³ L = log(m + d_max + ‖b‖_∞) where d_max is the largest absolute value of the determinant of a square sub-matrix of A.
7  Logarithmic barrier

For any polytope M = {Ax > b}, the logarithmic barrier function φ(x) is defined as
    φ(x) = − ∑_{i=1}^m log(a_i^T x − b_i).
We denote the manifold induced by the logarithmic barrier on M by M_L. The goal of this section is to analyze Hamiltonian Monte Carlo on M_L. In Section 7.1, we give explicit formulas for various Riemannian geometry concepts on M_L. In Section 7.2, we describe the HMC specialized to M_L. In Sections 7.3 to 7.5, we bound the parameters required by Theorem 30, resulting in Theorem 1. The following parameters associated with barrier functions will be convenient.
Definition 46. For a convex function f, let M₁, M₂ and M₃ be the smallest numbers such that
1. M₁ ≥ max_{x∈M} (∇f(x))^T (A_x^T A_x)^{−1} ∇f(x) and M₁ ≥ n.
2. ∇²f ⪯ M₂ · A_x^T A_x.
3. Tr( (A_x^T A_x)^{−1} ∇³f(x)[v] ) ≤ M₃ ‖v‖_x for all v.
For the case where f = φ is the standard logarithmic barrier, these parameters are n, 1, √n respectively.

7.1  Riemannian geometry on M_L (G2)
We use the following definitions throughout this section.
Definition 47. For any matrix A ∈ R^{m×n} and vectors b ∈ R^m and x ∈ R^n, define
1. s_x = Ax − b,  S_x = Diag(s_x),  A_x = S_x^{−1} A.
2. s_{x,v} = A_x v,  S_{x,v} = Diag(A_x v).
3. P_x = A_x (A_x^T A_x)^{−1} A_x^T,  σ_x = diag(P_x),  Σ_x = Diag(σ_x),  (P_x^{(2)})_{ij} = (P_x)²_{ij}.
4. Gradient of φ:  φ_i = − ∑_ℓ e_ℓ^T A_x e_i.
5. Hessian of φ and its inverse:  g_{ij} = φ_{ij} = (A_x^T A_x)_{ij} = ∑_ℓ e_ℓ^T A_x e_i · e_ℓ^T A_x e_j,   g^{ij} = e_i^T (A_x^T A_x)^{−1} e_j.
6. Third derivatives of φ:  φ_{ijk} = −2 ∑_ℓ e_ℓ^T A_x e_i · e_ℓ^T A_x e_j · e_ℓ^T A_x e_k.
7. For brevity (overloading notation), we define s_{γ′} = s_{γ,γ′}, s_{γ″} = s_{γ,γ″}, S_{γ′} = S_{γ,γ′} and S_{γ″} = S_{γ,γ″} for a curve γ(t).
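The quantities of Definition 47 are cheap to compute directly; the sketch below does so for an illustrative polytope (not taken from the paper) and checks two standard facts: P_x is a projection and its leverage scores sum to n.

```python
import numpy as np

def barrier_quantities(A, b, x):
    s = A @ x - b
    Ax = A / s[:, None]                       # A_x = S_x^{-1} A
    G = Ax.T @ Ax                             # Hessian of phi (the metric)
    P = Ax @ np.linalg.solve(G, Ax.T)         # P_x, an orthogonal projection
    sigma = np.diag(P)                        # leverage scores sigma_x
    return s, Ax, G, P, sigma

A = np.array([[1., 0.], [0., 1.], [-1., -1.]])
b = np.array([0., 0., -1.])                   # the simplex {x > 0, y > 0, x + y < 1}
s, Ax, G, P, sigma = barrier_quantities(A, b, np.array([0.2, 0.3]))
print(np.allclose(P @ P, P), sigma.sum())     # P_x^2 = P_x, and sum of scores = n = 2
```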
In this section, we will frequently use the following identities derived from elementary calculus (using only the chain/product rules and the formula for the derivative of the inverse of a matrix). For reference, we include proofs in Appendix C.
Fact 48. For any matrix A and any curve γ(t), we have
    dA_γ/dt = −S_{γ′} A_γ,
    dP_γ/dt = −S_{γ′} P_γ − P_γ S_{γ′} + 2 P_γ S_{γ′} P_γ,
    dS_{γ′}/dt = Diag( −S_{γ′} A_γ γ′ + A_γ γ″ ) = −S²_{γ′} + S_{γ″}.
We also use these matrix inequalities: Tr(AB) = Tr(BA); Tr(PAP) ≤ Tr(A) for any psd matrix A; Tr(ABA^T) ≤ Tr(AZA^T) for any B ⪯ Z; and the Cauchy-Schwarz inequality, namely Tr(AB) ≤ Tr(AA^T)^{1/2} Tr(BB^T)^{1/2}. We note P_x² = P_x because P_x is a projection matrix.
Since the manifold M_L is naturally embedded in R^n, we can identify T_x M_L with Euclidean coordinates. We have that
    ⟨u, v⟩_x = u^T ∇²φ(x) v = u^T A_x^T A_x v.
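As a sanity check, the first identity of Fact 48 can be verified with a central finite difference along a straight-line curve γ(t) = x + tv inside the polytope; the data below are illustrative.

```python
import numpy as np

A = np.array([[1., 0.], [0., 1.], [-1., -1.]])
b = np.array([0., 0., -1.])
x = np.array([0.2, 0.3]); v = np.array([0.05, -0.02])

def A_of(y):
    return A / (A @ y - b)[:, None]           # A_gamma = S_gamma^{-1} A

h = 1e-6
dA_fd = (A_of(x + h * v) - A_of(x - h * v)) / (2 * h)
s_prime = A_of(x) @ v                         # s_{gamma'} = A_gamma gamma'
dA_formula = -s_prime[:, None] * A_of(x)      # -S_{gamma'} A_gamma
print(np.max(np.abs(dA_fd - dA_formula)))     # should be close to zero (~1e-9)
```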
We will use the following two lemmas proved in [13].
Lemma 49. Let w(t) be a vector field defined on a curve z(t) in M_L. Then,
    ∇_{z′} w = dw/dt − (A_z^T A_z)^{−1} A_z^T S_{z′} s_{z,w} = dw/dt − (A_z^T A_z)^{−1} A_z^T S_{z,w} s_{z′}.
In particular, the equation for parallel transport on a curve γ(t) is given by
    (d/dt) v(t) = (A_γ^T A_γ)^{−1} A_γ^T S_{γ′} A_γ v.      (7.1)
Lemma 50. Given u, v, w, x ∈ T_x M_L, the Riemann curvature tensor at x is given by
    R(u, v)w = (A_x^T A_x)^{−1} A_x^T ( S_{x,v} P_x S_{x,w} − Diag(P_x s_{x,v} s_{x,w}) ) A_x u,
and the Ricci curvature Ric(u) := Tr R(u, u) is given by
    Ric(u) = s_{x,u}^T P_x^{(2)} s_{x,u} − σ_x^T P_x s²_{x,u},
where R(u, u) is the operator defined above.
7.2  Hamiltonian walk on M_L

We often work in Euclidean coordinates. In this case, the Hamiltonian walk is given by the formula in the next lemma. To implement the walk, we solve this ODE using the collocation method as described in [13], after first reducing it to a first-order ODE. The resulting complexity is Õ(m n^{ω−1}) per step.
Lemma 51. The Hamiltonian curve at a point x in Euclidean coordinates is given by the following equations:
    γ″(t) = (A_γ^T A_γ)^{−1} A_γ^T s²_{γ′} + μ(γ(t))   for all t ≥ 0,
    γ′(0) = w,   γ(0) = x,
where μ(x) = (A_x^T A_x)^{−1} A_x^T σ_x − (A_x^T A_x)^{−1} ∇f(x) and w ∼ N(0, (A_γ^T A_γ)^{−1}).
Proof. Recall from Lemma 7 that the Hamiltonian walk is given by
Dt
dγ
=µ(γ(t)),
dt
dγ
(0) ∼N (0, g(x)−1 )
dt
where µ(x) = −g(x)−1 ∇f (x)− 12 g(x)−1 Tr g(x)−1 Dg(x) . By Lemma 49, applied with w(t) = γ ′ (t),
z(t) = γ(t), we have
−1 T 2
dγ
Dt
Aγ sγ ′ .
= γ ′′ (t) − ATγ Aγ
dt
For the formula of µ, we note that
∂
1
1X
(ATx Ax )−1 ij
Tr g(x)−1 Dg(x) k =
ATx Ax ji
2
2
∂xk
ij
X
X
(ATx Ax )−1 ij
eTℓ Ax ei eTℓ Ax ej eTℓ Ax ek
by Defn.47(6) = −
ij
=−
=
X
ℓ
ATx (ATx Ax )−1 Ax
ℓ
−ATx σx .
ℓℓ
eTℓ Ax ek
Therefore,
µ(x) = −(ATx Ax )−1 ∇f (x) + ATx Ax
−1
ATx σx .
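The ODE of Lemma 51 is easy to prototype. The sketch below integrates it with a naive fixed-step Euler-style integrator rather than the collocation method used in the paper; the polytope, `grad_f`, the step size and the number of substeps are illustrative assumptions, and the code is only meant to show the right-hand side of the ODE in concrete form.

```python
import numpy as np

def hmc_step(A, b, grad_f, x, delta, substeps=50, rng=np.random.default_rng(0)):
    def pieces(y):
        s = A @ y - b
        Ay = A / s[:, None]
        G = Ay.T @ Ay
        P = Ay @ np.linalg.solve(G, Ay.T)
        sigma = np.diag(P)
        mu = np.linalg.solve(G, Ay.T @ sigma - grad_f(y))      # mu(y) of Lemma 51
        return Ay, G, mu

    Ay, G, _ = pieces(x)
    L = np.linalg.cholesky(np.linalg.inv(G))
    v = L @ rng.standard_normal(x.shape[0])                    # w ~ N(0, G^{-1})

    h = delta / substeps
    y = x.copy()
    for _ in range(substeps):
        Ay, G, mu = pieces(y)
        accel = np.linalg.solve(G, Ay.T @ (Ay @ v) ** 2) + mu  # gamma''(t)
        v = v + h * accel
        y = y + h * v
    return y

# toy usage on the simplex with f = alpha * phi, so grad f = -alpha * A_x^T 1
A = np.array([[1., 0.], [0., 1.], [-1., -1.]]); b = np.array([0., 0., -1.])
alpha = 0.5
grad_f = lambda y: -alpha * (A / (A @ y - b)[:, None]).T @ np.ones(A.shape[0])
print(hmc_step(A, b, grad_f, np.array([0.3, 0.3]), delta=0.05))
```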
Many parameters for Hamiltonian walk depends on the operator Φ(t). Here, we give a formula
of Φ(t) in Euclidean coordinates.
Lemma 52. Given a curve γ(t), in Euclidean coordinates, we have that
Φ(t) = M (t) − R(t)
where
−1
ATγ Sγ ′ Pγ Sγ ′ Aγ − ATγ Diag(Pγ s2γ ′ )Aγ
−1 T
Ax Sx,µ − 3Σx + 2Px(2) Ax − ∇2 f (x) .
M (t) = ATγ Aγ
R(t) = ATγ Aγ
Proof. Lemma 50 with v = w = γ ′ , gives the formula for R(t).
For M (t), Lemma 49 with w = µ(x), z ′ = u shows that
Du µ(x) = ∇u µ(x) − (ATx Ax )−1 ATx Sx,µ Ax u.
For the first term ∇u µ(x), we note that
−1
−1 T
ATx Sx,u Ax ATx Ax
Ax σx
−1
ATx Sx,u σx
− 3 ATx Ax
−1 T
+ 2 ATx Ax
Ax diag(Px Sx,u Px )
−1
−1
∇f (x)
ATx Sx,u Ax ATx Ax
− 2 ATx Ax
∇u µ(x) =2 ATx Ax
− (ATx Ax )−1 ∇2 f (x)u.
Therefore, we have that
Du µ(x)
−1 T (2)
−1 T
Ax Px Ax u
Ax Σx Ax u + 2 ATx Ax
ATx Diag(Px σx )Ax u − 3 ATx Ax
−1
−1 T
∇f (x) Ax u − (ATx Ax )−1 ∇2 f (x)u − (ATx Ax )−1 ATx Sx,µ Ax u
Ax Diag Ax ATx Ax
− 2 ATx Ax
−1
−1 T
∇f (x) − Sx,µ − 3Σx + 2Px(2) Ax − ∇2 f (x) u
Ax 2Diag(Px σx ) − 2Diag Aγ ATx Ax
= ATx Ax
−1 T
= ATx Ax
Ax 2Sx,µ − Sx,µ − 3Σx + 2Px(2) Ax − ∇2 f (x) u.
−1 T
= ATx Ax
Ax Sx,µ − 3Σx + 2Px(2) Ax − ∇2 f (x) u.
=2 ATx Ax
−1
where we used the facts that
µ = ATx Ax
−1
ATx σx − (ATx Ax )−1 ∇f (x)
Sx,µ = Ax µ = Diag(Px σx − Ax (ATx Ax )−1 ∇f (x)).
Remark. Note that R(t) and M (t) is symmetric in h·, ·iγ , but not in h·, ·i2 . That is why the formula
does not look symmetric.
7.3  Randomness of the Hamiltonian flow (ℓ₀)
Many parameters of a Hessian manifold relate to how fast a Hamiltonian curve approaches the boundary of the polytope. Since the initial velocity of the Hamiltonian curve is drawn from a Gaussian distribution, one can imagine that ‖s_{γ′(0)}‖_∞ = O( (1/√m) ‖s_{γ′(0)}‖_2 ) (each coordinate of s_{γ′} measures the relative rate at which the curve is approaching the corresponding facet). So the walk initially approaches/leaves every facet of the polytope at roughly the same slow pace. If this holds for the entire walk, it would allow us to get very tight bounds on various parameters. Although we are not able to prove that ‖s_{γ′(t)}‖_∞ is stable throughout 0 ≤ t ≤ δ, we will show that ‖s_{γ′(t)}‖_4 is stable and thereby obtain a good bound on ‖s_{γ′(t)}‖_∞.
Throughout this section, we only use the randomness of the walk to prove that both sγ ′ (t) 4 and
sγ ′ (t) ∞ are small with high probability. Looking ahead, we will show that sγ ′ (t)
√
√
and sγ ′ (t) ∞ = O( log n + M1 δ) (Lemma 55), we define
sγ ′ (t)
def
ℓ(γ) = max
0≤t≤δ
n1/2
+
2
1/4
M1
+
sγ ′ (t)
1/4
M1
4
sγ ′ (t)
√∞
+√
+
log n + M1 δ
sγ ′ (0)
n1/2
2
+
sγ ′ (0)
n1/4
4
1/4
4
= O(M1 )
sγ ′ (0) ∞
+ √
log n
!
to capture this randomness involves in generating the geodesic walk. This allows us to perturb the
geodesic (Lemma 24) without worrying about the dependence on randomness.
We first prove the the walk is stable in the L4 norm and hence ℓ(γ) can be simply approximated
by sγ ′ (0) 4 and sγ ′ (0) ∞ .
Lemma 53. Let γ be a Hamiltonian flow in ML starting at x. Let v4 =
1
0≤t≤
1/4 , we have that
12(v4 +M1
)
sγ ′ (0)
4
. Then, for
1.
sγ ′ (t)
1/4
4
≤ 2v4 + M1 .
2. kγ ′′ (t)k2γ(t) ≤ 128v44 + 30M1 .
Proof. Let u(t) = sγ ′ (t)
4
. Then, we have (using Holder’s inequality in the first step),
d
du
≤
Aγ γ ′
dt
dt
≤ Aγ γ
′′
4
4
2
= Aγ γ ′′ − Aγ γ ′
+ u (t).
2
4
(7.2)
Under the Euclidean coordinates, by Lemma 51 the Hamiltonian flow is given by
γ ′′ (t) = ATγ Aγ
with µ(x) = ATx Ax
−1
γ ′′
−1
ATγ s2γ ′ + µ(γ(t))
ATx σx − (ATx Ax )−1 ∇f (x). Hence, we have that
2
γ
−1 T T −1 T 2
Aγ sγ ′
Aγ Aγ Aγ Aγ
Aγ ATγ Aγ
−1 T T −1 T
+ 3σγT Aγ ATγ Aγ
Aγ Aγ Aγ Aγ
Aγ σγ
−1
−1
+ 3(∇f (x))T ATγ Aγ
ATγ Aγ ATγ Aγ
∇f (x)
X
X
−1
≤3
(s4γ ′ )i + 3
(σγ2 )i + 3(∇f (x))T ATγ Aγ
∇f (x)
≤3 s2γ ′
T
i
4
i
≤3u (t) + 3(n + M1 ) ≤ 3u4 (t) + 6M1
Therefore, we have
Aγ γ ′′
4
≤ Aγ γ ′′
2
Plugging it into (7.2), we have that
1/4
(7.3)
p
≤ 2u2 (t) + 3 M1 .
p
du
≤ 3u2 (t) + 3 M1 .
dt
Note that when u ≤ 2v4 + M1 , we have that
p
du
1/4
≤ 12v42 + 9 M1 ≤ 12(v4 + M1 )2 .
dt
Since u(0) = v4 , for 0 ≤ t ≤
1
1/4 ,
12(v4 +M1 )
1/4
we have that u(t) ≤ 2v4 + M1
and this gives the first
inequality.
Using (7.3), we get the second inequality.
We can now prove that ℓ(γ) is small with high probability.
Lemma 54. Assume that δ ≤
1
1/4 ,
36M1
ℓ1 = Ω(n1/4 δ) and n is large enough, we have that
ℓ0
1
min 1,
Pγ∼x (ℓ(γ) ≥ 128) ≤
.
100
ℓ1 δ
Therefore, we have ℓ0 ≤ 256.
Proof. From the definition of the Hamiltonian curve (Lemma 7), we have that
Aγ γ ′ (0) = Bz
−1/2
and z ∼ N (0, I).
where B = Aγ ATγ Aγ
First, we estimate kAγ γ ′ (t)k4 . Lemma 65 shows that
Pz∼N (0,I) kBzk44 ≤ 3
Note that
P
eTi B
i
4
2
=
P
2
i (σγ )i
Pγ ′ (0)
def
X
eTi B
i
4
2
!1/4
4
2
s
+ kBk2→4 s ≤ 1 − exp(− ).
2
≤ n and kBk2→4 ≤ kBk2→2 = 1. Putting s =
′
Aγ γ (0)
4
4
n1/4
2 ,
we have that
√
n
).
≤ 11n ≤ 1 − exp(−
8
Therefore, we have that v4 = kAγ γ ′ (0)k4 ≤ 2n1/4 with probability at least 1 − exp(−
apply Lemma 53 to get that
1/4
1/4
sγ ′ (t) 4 ≤ 2v4 + M1 ≤ 5M1
√
n
8 ).
Now, we
1
1/4 .
12(v4 +M1 )
for all 0 ≤ t ≤
Next, we estimate kAγ γ ′ (t)k∞ . Since eTi Aγ γ ′ (0) = eTi Bx ∼ N (0, σi ), we have
2
√
t
T
′
.
Pγ ′ (0) ei Aγ γ (0) ≥ σi t ≤ 2 exp −
2
Hence, we have that
Pγ ′ (0)
′
Aγ γ (0)
∞
X
p
2 log n
≥ 2 log n ≤ 2
exp −
σi
i
P
P
n
log n
Since i exp − 2 log
is
concave
in
σ,
the
maximum
of
exp
−
on the feasible set {0 ≤
i
σi
σi
P
σ ≤ 1, σi = n} occurs on its vertices. Hence, we have that
p
2
Pγ ′ (0) Aγ γ ′ (0) ∞ ≥ 2 log n ≤ 2n exp (−2 log n) = .
n
√
√
Lemma 53 shows that kAγ γ ′′ k∞ ≤ kγ ′′ kγ(t) ≤ 46 n + 6 M1 . Hence, for any 0 ≤ t ≤ δ, we have
that
sγ ′ (t)
∞
Z
t
Aγ(t) γ ′′ (r) ∞ dr
≤ Aγ(t) γ (0) ∞ +
0
Z δ
sγ(t),i
′
≤ max
Aγ(r) γ ′′ (r)
Aγ(0) γ (0) ∞ +
i,0≤s≤t sγ(s),i
0
p
sγ(t),i p
√
≤ max
2 log n + (46 n + 6 M1 )δ
i,0≤s≤t sγ(s),i
p
sγ(t),i p
≤ max
2 log n + 52 M1 δ .
i,0≤s≤t sγ(s),i
′
∞
dr
(7.4)
sγ(t),i
sγ(s),i
Let z(t) = maxi,0≤s≤t
sγ(t),i = sγ(s),i exp
Z
. Note that
sγ ′ (α),i dα
t
r
since
(Aγ(t)−b)i = (Aγ(r)−b)i exp
Z
t
r
ai · γ ′ (α)
dα
(Aγ(α) − b)i
Hence, we have that
z ′ (t) ≤ z(t) sγ ′ (t)
∞
Solving this, since z(0) = 1, we get
z(t) ≤
Since t ≤ δ ≤
1
1/4 ,
36M1
1
.
√
√
1 − 2 log n + 52 M1 δ t
we have that z(t) ≤ 1.05. Putting this into (7.4), we have that
sγ ′ (t)
Finally, we estimate sγ ′ (t)
2
X
Pz∼N (0,I) kBzk22 ≤
P
i
eTi B
2
2
=
P
∞
p
p
≤ 3 log n + 55 M1 δ.
= kAγ γ ′ (t)k2 . Lemma 65 shows that
Note that
p
p
≤ z 2 (t) 2 log n + 52 M1 δ .
eTi B
i
2
2
!1/2
2
2
r
+ kBk2→2 r ≤ 1 − exp(− ).
2
1/2
≤ n and kBk2→2 ≤ 1. Putting s = n 3 , we have that
n
2
Pγ ′ (0) Aγ γ ′ (0) 2 ≤ 2n ≤ 1 − exp(− ).
18
i (σγ )i
Therefore, kAγ γ ′ (0)k22 ≤ 2n with high probability. Next, we note that
d
Aγ γ ′
dt
2
2
= Aγ γ ′ , Aγ γ ′′ − s2γ ′
E
D
−1 T 2
Aγ sγ ′ + Aγ µ(γ(t)) − Aγ s2γ ′
= Aγ γ ′ , Aγ ATγ Aγ
E
D
−1 T 2
Aγ sγ ′ − s2γ ′ + Aγ γ ′ , Aγ µ(γ(t))
= Aγ γ ′ , Aγ ATγ Aγ
= Aγ γ ′ , Aγ µ(γ(t)) .
Using that µ(x) = ATx Ax
d
Aγ γ ′
dt
2
2
=
−1
ATx σx − (ATx Ax )−1 ∇f (x), we have that
X
X
(γ ′ )i (∇f )i
(sγ ′ )i (σγ )i −
i
i
sX
sX
qX
q
2
2
≤
(sγ ′ )i
(σγ )i +
(sγ ′ )2i (∇f )T (ATγ Aγ )−1 (∇f )
i
≤ 2 Aγ γ ′
2
p
i
M1 .
√
≤ M1 . Since δ ≤
d
1
′
Therefore, we have that dt
kAγ γ ′ k2
1/4 , we have that kAγ γ (t)k2 ≤
36M1
1/4
1/4
√
√
M1
M1
1/4
kAγ γ ′ (0)k2 + 36
≤ 2n + 36
. Therefore, we have that kAγ γ ′ (t)k2 ≤ 2 n + M1 with
n
probability at least 1 − exp(− 18 ).
Combining our estimates on sγ ′ (t)
we have that
sγ ′ (t)
2
1/4
1/2
n + M1
max
P
0≤t≤δ
+
sγ ′ (t)
4
1/4
M1
2
, sγ ′ (t)
+√
√
n
2
n
≤ exp(− ) + exp(−
)+ .
18
8
n
sγ ′ (t)
log n +
4
and sγ ′ (t)
√∞
M1 δ
+
∞
sγ ′ (0)
n1/2
and using the assumption on δ,
2
+
sγ ′ (0)
n1/4
4
sγ ′ (0) ∞
+ √
log n
!
!
≥ 128
In Lemma 59, we indeed have that ℓ1 = Ω(n1/4 δ) and hence ℓℓ10δ = Ω(n−1/4 δ−2 ) = Ω(1). Therefore,
1
min 1, ℓℓ10δ when n is large enough.
the probability is less than 100
Here, we collect some simple consequences of small ℓ(γ) that we will use later.
Lemma 55. Given a Hamiltonian flow γ on ML with ℓ(γ) ≤ ℓ0 ≤ 256. For any 0 ≤ t ≤ δ,
1/4
1. kAγ γ ′ (t)k2 ≤ 256(n1/2 + M1 ), kAγ γ ′ (0)k2 ≤ 256n1/2 .
1/4
2. kAγ γ ′ (t)k4 ≤ 256M1 , kAγ γ ′ (0)k4 ≤ 256n1/4 .
√
√
√
3. kAγ γ ′ (t)k∞ ≤ 256 log n + M1 δ , kAγ γ ′ (0)k∞ ≤ 256 log n.
4. kγ ′′ (t)k2γ ≤ 1013 M1 .
Proof. The first three inequalities simply follow from the definition of ℓ(γ). Since kAγ γ ′ (0)k4 ≤
1/4
256M1 , Lemma 53 shows the last inequality.
7.4  Parameters R₁, R₂ and R₃

Lemma 56. For a Hamiltonian curve γ on M_L with ℓ(γ) ≤ ℓ₀, we have that
    sup_{0≤t≤ℓ} ‖Φ(t)‖_{F,γ(t)} ≤ R₁
with R₁ = O( √M₁ + M₂ √n ).
with R1 = O( M1 + M2 n).
Proof. Note that Φ(t) = M (t) − R(t) where
ATγ Sγ ′ Pγ Sγ ′ Aγ − ATγ Diag(Pγ s2γ ′ )Aγ ,
−1 T
M (t) = ATγ Aγ
Aγ (Sγ,µ − 3Σγ + 2Pγ(2) )Aγ − ∇2 f .
R(t) = ATγ Aγ
−1
We bound the Frobenius norm of Φ(t) separately.
For kR(t)kF,γ , we note that
kR(t)k2F,γ
≤2 (ATγ Aγ )−1/2 ATγ Sγ ′ Pγ Sγ ′ Aγ (ATγ Aγ )−1/2
=2TrPγ S Pγ S Pγ S Pγ S +
γ′
≤4 sγ ′
γ′
γ′
γ′
2
F
+ 2 (ATγ Aγ )−1/2 ATγ Diag(Pγ s2γ ′ )Aγ (ATγ Aγ )−1/2
2TrPγ Diag(Pγ s2γ ′ )Pγ Diag(Pγ s2γ ′ )
4
.
4
2
F
For kM (t)kF,γ , we note that
kM (t)k2F,γ
≤2 (ATγ Aγ )−1/2 ATγ (Sγ,µ − 3Σγ + 2Pγ(2) )Aγ (ATγ Aγ )−1/2
2
F
+ 2 (ATγ Aγ )−1/2 ∇2 f (ATγ Aγ )−1/2
2
F
≤6TrPγ Sγ,µ Pγ Sγ,µ + 54TrPγ Σγ Pγ Σγ + 24TrPγ Pγ(2) Pγ Pγ(2) + 2M22 n
2
≤6TrSγ,µ
+ 54TrΣ2γ + 24Tr(Pγ(2) )2 + 2M22 n.
(2)
where we used that diag(Pγ )
2
≤ kσγ k2 ≤ n and
TrPγ M Pγ M = TrPγ M Pγ M Pγ = TrPγ M M Pγ
= TrM Pγ Pγ M ≤ TrM 2 .
Note that
2
2
ksγ,µ k22 = kµ(γ)k2γ = Pγ σγ − Aγ (ATγ Aγ )−1 ∇f (γ)
and
Aγ ATγ Aγ
Therefore, we have that
−1
∇f (γ(t))
2
2
= ∇f (γ(t))T ATγ Aγ
≤ 2n + 2M1 ≤ 4M1
−1
∇f (γ(t)) = M1 .
kM (t)k2F,γ ≤ 102M1 + 2M22 n.
The claim follows from Lemma 55.
Lemma 57. Let γ be a Hamiltonian curve on ML with ℓ(γ) ≤ ℓ0 . Assume that δ2 ≤ √1n . For any
0 ≤ t ≤ δ, any curve c(r) starting from γ(t) and any vector field v(r) on c(r) with v(0) = γ ′ (t), we
have that
!
dc
d
+ δ k Dr v|r=0 kγ(t) .
TrΦ(v(r))|r=0 ≤ R2
dr
dr r=0 γ(t)
where
1/4
p
√
M
R2 = O( nM1 + nM1 δ2 + 1 +
δ
√
n log n √
+ nM2 + M3 ).
δ
Proof. We first bound TrR(t). By Lemma 50, we know that
(2)
T
Ric(v(r)) = sTc(r),v(r) Pc(r) sc(r),v(r) − σc(r)
Pc(r) s2c(r),v(r)
= Tr(Sc(r),v(r) Pc(r) Sc(r),v(r) Pc(r) ) − Tr(Diag(Pc(r) s2c(r),v(r) )Pc(r) ).
For simplicity, we suppress the parameter r and hence, we have
Ric(v) = Tr(Sc,v Pc Sc,v Pc ) − Tr(Diag(Pc s2c,v )Pc ).
d
d
c = c′ and dr
v = v ′ (in Euclidean coordinates). Since
We write dr
d
and dr
Sc,v = −Sc′ Sc,v + Sc,v′ , we have that
d
dr Pc
= −Sc′ Pc − Pc Sc′ + 2Pc Sc′ Pc
d
Ric(v)
dr
= −2Tr(Sc,v Sc′ Pc Sc,v Pc ) − 2Tr(Sc,v Pc Sc′ Sc,v Pc ) + 4Tr(Sc,v Pc Sc′ Pc Sc,v Pc )
−2Tr(Sc′ Sc,v Pc Sc,v Pc ) + 2Tr(Sc,v′ Pc Sc,v Pc )
+Tr(Diag(Pc s2c,v )Sc′ Pc ) + Tr(Diag(Pc s2c,v )Pc Sc′ ) − 2Tr(Diag(Pc s2c,v )Pc Sc′ Pc )
+Tr(Diag(Pc Sc′ s2c,v )Pc ) + Tr(Diag(Sc′ Pc s2c,v )Pc ) − 2Tr(Diag(Pc Sc′ Pc s2c,v )Pc )
+2Tr(Diag(Pc Sc,v Sc′ sc,v )Pc ) − 2Tr(Diag(Pc Sc,v sc,v′ )Pc )
= −6Tr(Sc,v Sc′ Pc Sc,v Pc ) + 4Tr(Sc,v Pc Sc′ Pc Sc,v Pc ) + 2Tr(Sc,v′ Pc Sc,v Pc )
+3Tr(Diag(Pc s2c,v )Sc′ Pc ) − 2Tr(Diag(Pc s2c,v )Pc Sc′ Pc )
+3Tr(Diag(Pc Sc′ s2c,v )Pc ) − 2Tr(Diag(Pc Sc′ Pc s2c,v )Pc )
−2Tr(Diag(Pc Sc,v sc,v′ )Pc ).
d
Let dr
Ric(v) = (1) + (2) where (1) is the sum of all terms not involving v ′ and (2) is the sum of
other terms.
For the first term (1), we have that
|(1)| ≤ 6 |Tr(Sc,v Sc′ Pc Sc,v Pc )| + 4 |Tr(Sc,v Pc Sc′ Pc Sc,v Pc )|
+3 Tr(Diag(Pc s2c,v )Sc′ Pc ) + 2 Tr(Diag(Pc s2c,v )Pc Sc′ Pc )
+3 Tr(Diag(Pc Sc′ s2c,v )Pc ) + 2 Tr(Diag(Pc Sc′ Pc s2c,v )Pc )
sX
sX
2
(sc,v )i
(sc,v )2i + 4 ksc′ k∞ |Tr(Pc Sc,v Pc Sc,v Pc )|
≤ 6 ksc′ k∞
i
i
sX
sX
sX
4
4
(sc,v )i kSc′ k2 + 2
(sc,v )i
(Pc Sc′ Pc )2ii
+3
i
i
i
sX
sX
sX
sX
(sc,v )4i
(Pc )2ii + 2 ksc′ k∞
(sc,v )4i
(Pc )2ii
+3 ksc′ k∞
i
≤
≤
i
i
10 ksc′ k∞ ksc,v k22 + 3 ksc,v k24 ksc′ k2
√
20 ksc′ k2 ksc,v k24 n.
+
√
7 ksc′ k∞ ksc,v k24 n
i
1/2
Since sc,v = sγ ′ at r = 0, we have that ksc,v k24 = O(M1 ) and hence
p
|(1)| = O
nM1 ksc′ k2 .
For the second term (2), we have that
|(2)| ≤ 2 Tr(Sc,v′ Pc Sc,v Pc ) + 2 Tr(Diag(Pc Sc,v sc,v′ )Pc )
sX
√
(sc,v′ sc,v )2i
≤ 2 sc,v′ 2 ksc,v k2 + 2 n
i
p
p
1/4
n log n + nM1 δ sc,v′
≤ O n1/2 + M1
sc,v′ 2 + O
p
p
1/4
= O M1 + n log n + nM1 δ sc,v′ 2
2
where we used ksc,v k∞ = sγ ′
1/4
M1 )
∞
=O
√
log n +
√
M1 δ and at r = 0, we have ksc,v k2 = Aγ(0) γ ′ (0)
sγ ′ 2 = O(n1/2 +
in the second-to-last line.
Note that at r = 0, by Lemma 49, we have
Dr v =
Therefore,
−1 T
dv
− ATc Ac
Ac Sc′ sc,v .
dr
sc,v′ = Ac v ′ = Ac (Dr v) − Ac ATc Ac
and hence
sc,v′
2
≤ kDr vk + Ac ATc Ac
≤ kDr vk + sγ ′
−1
−1
ATc Sc′ sc,v
ATc Sc′ sc,v
2
ksc′ k2
∞
Therefore,
p
p
p
p
1/4
|(2)| = O M1 + n log n + nM1 δ kDr vk +
log n + M1 δ ksc′ k2 .
Therefore, we have
d
Ric(v(r))|s=0
dr
p
p
p
1/4
=O
nM1 ksc′ k2 + O M1 + n log n + nM1 δ kDr vk
p
p
p
p
1/4
log n + M1 δ ksc′ k2
+ O M1 + n log n + nM1 δ
p
p
p
√
√
1/4
1/4
=O
nM1 + M1
log n + M1
M1 δ + n log n + nM1 δ2 ksc′ k2
!
√
1/4
M1
n log n p
+
+ nM1 δ kDr vk
+O
δ
δ
!
√
1/4
p
√
n log n
M1
2
=O
(ksc′ k2 + δ kDr vk) .
+
nM1 + nM1 δ +
δ
δ
√
√
3/4
where we used M1 δ = O( nM1 δ2 + nM1 ) at the last line.
Next, we bound TrM (t). Lemma 52 shows that
TrM (r) =Tr((ATc Ac )−1 ATc (Sc,µ − 3Σc + 2Pc(2) )Ac − (ATc Ac )−1 ∇2 f )
=Tr(Pc (Sc,µ − 3Σc + 2Pc(2) )) − Tr((ATc Ac )−1 ∇2 f )
=σcT Pc σc − σcT Ac (ATc Ac )−1 ∇f − 3Tr(Σ2c ) + 2Tr(Pc(3) ) − Tr((ATc Ac )−1 ∇2 f ).
where in the last step we used
Sc,µ = Ac µ = Diag(Pc σc − Ac (ATc Ac )−1 ∇f ).
2
=
Since
d
dr Pc
= −Sc′ Pc − Pc Sc′ + 2Pc Sc′ Pc and
d
dr Ac
= −Sc′ Ac , we have that
d
TrM (r)
dr
= − σcT Sc′ Pc σc − σcT Pc Sc′ σc + 2σcT Pc Sc′ Pc σc
− 4σcT Pc Sc′ σc + 4σcT Pc diag(Pc Sc′ Pc )
+ 3σcT Sc′ Ac (ATc Ac )−1 ∇f − 2diag(Pc Sc′ Pc )T Ac (ATc Ac )−1 ∇f
− 2σcT Ac (ATc Ac )−1 ATc Sc′ Ac (ATc Ac )−1 ∇f − σcT Ac (ATc Ac )−1 ∇2 f · c′
+ 12Tr(Sc′ Σ2c ) − 12Tr(Σc diag(Pc Sc′ Pc ))
− 6Tr(Pc(2) Sc′ Pc + Pc(2) Pc Sc′ ) + 12Tr(Pc(2) Pc Sc′ Pc )
− 2Tr((ATc Ac )−1 ATc Sc′ Ac (ATc Ac )−1 ∇2 f ) − Tr((ATc Ac )−1 ∇3 f [c′ ]).
Simplifying it, we have
d
TrM (r)
dr
= − 6σcT Sc′ Pc σc + 2σcT Pc Sc′ Pc σc + 4σcT Pc Pc(2) sc′
+ 3σcT Sc′ Ac (ATc Ac )−1 ∇f − 2sTc′ Pc(2) Ac (ATc Ac )−1 ∇f
− 2σcT Ac (ATc Ac )−1 ATc Sc′ Ac (ATc Ac )−1 ∇f − σcT Ac (ATc Ac )−1 ∇2 f · c′
+ 12Tr(Sc′ Σ2c ) − 12σcT Pc(2) sc′
− 6Tr(Pc(2) Sc′ Pc + Pc(2) Pc Sc′ ) + 12Tr(Pc(2) Pc Sc′ Pc )
− 2Tr((ATc Ac )−1 ATc Sc′ Ac (ATc Ac )−1 ∇2 f ) − Tr((ATc Ac )−1 ∇3 f [c′ ]).
d
TrM (r) = (3) + (4) where (3) is the sum of all terms not involving f and (4) is the sum of
Let dr
other terms with f .
For the first term (3), we have that
|(3)| ≤6 σcT Sc′ Pc σc + 2 σcT Pc Sc′ Pc σc + 4 σcT Pc Pc(2) sc′ + 12 Tr(Sc′ Σ2c ) + 12 σcT Pc(2) sc′
+ 6 Tr(Pc(2) Sc′ Pc + Pc(2) Pc Sc′ ) + 12 Tr(Pc(2) Pc Sc′ Pc )
q
√
√
√
√
≤6 σcT Sc2′ σc n + 2 ksc′ k∞ n + 4 n ksc′ k2 + 12 ksc′ k2 n + 12 kSc′ k2 n
+ 6 diag(Pc Pc(2) )
2
ksc′ k2 + 6 diag(Pc(2) Pc )
≤36n ksc′ k2 .
2
ksc′ k2 + 12 diag(Pc Pc(2) Pc )
2
ksc′ k2
For the second term (4), we have that
(4) ≤3 σcT Sc′ Ac (ATc Ac )−1 ∇f + 2 sTc′ Pc(2) Ac (ATc Ac )−1 ∇f
+ 2 σcT Ac (ATc Ac )−1 ATc Sc′ Ac (ATc Ac )−1 ∇f + σcT Ac (ATc Ac )−1 ∇2 f · c′
+ 2 Tr((ATc Ac )−1 ATc Sc′ Ac (ATc Ac )−1 ∇2 f ) + Tr((ATc Ac )−1 ∇3 f [c′ ])
q
q
q
q
(2)
(2)
T
T
T
T
−1
≤3 sc′ Σc sc′ ∇f (Ac Ac ) ∇f + sc′ Pc Pc Pc sc′ ∇f T (ATc Ac )−1 ∇f
q
q
q
q
T
T
T
−1
T
+ 2 σc Pc Sc′ Pc Sc′ Pc σc ∇f (Ac Ac ) ∇f + σc Pc σc c′ ∇2 f (ATc Ac )−1 ∇2 f · c′
+ 2 diag(Ac (ATc Ac )−1 ∇2 f (ATc Ac )−1 ATc ) 2 ksc′ k2 + Tr((ATc Ac )−1 ∇3 f [c′ ])
p
p
√
≤4 ksc′ k2 M1 + 2 ksc′ k∞ nM1 + 3 nM2 ksc′ k2 + M3 ksc′ k2
p
√
≤ 6 M1 + 3 nM2 + M3 ksc′ k2 .
Therefore,
p
√
d
TrM (s) ≤ O n + M1 + nM2 + M3 ksc′ k2 .
dr
Lemma 58. Let γ be a Hamiltonian curve on ML with ℓ(γ) ≤ ℓ0 . Assume that δ ≤
ζ(t) be the parallel transport of the vector γ ′ (0) on γ(t). Then,
1
1/4 .
36M1
sup kΦ(t)ζ(t)kγ(t) ≤ R3
0≤t≤δ
where
1
2
R3 = O M1
p
3
4
1
4
log n + M1 n δ + M2 n
1
2
.
Proof. By Lemma 52, we have that
−1 T
Aγ (Sγ,µ − 3Σγ + 2Pγ(2) − Sγ ′ Pγ Sγ ′ + Diag(Pγ s2γ ′ ))Aγ − ∇2 f
Φ(t) = ATγ Aγ
= (1) + (2)
where (2) is the last term − ATγ Aγ
For the first term, we have that
−1
∇2 f .
k(1)ζkγ = Pγ (Sγ,µ − 3Σγ + 2Pγ(2) − Sγ ′ Pγ Sγ ′ + Diag(Pγ s2γ ′ ))sγ,ζ
≤ (Sγ,µ − 3Σγ +
2Pγ(2)
− Sγ ′ Pγ Sγ ′ +
Diag(Pγ s2γ ′ ))sγ,ζ
Pγ(2) sγ,ζ
2
2
+ Sγ ′ Pγ Sγ ′ sγ,ζ 2 + Diag(Pγ s2γ ′ )sγ,ζ
≤ kSγ,µ sγ,ζ k2 + 3 kΣγ sγ,ζ k2 + 2
2
≤ ksγ,ζ k∞ ksγ,µ k2 + kσγ k2 + Pγ s2γ ′ 2 + 2 Pγ(2) sγ,ζ + sγ ′ ∞ Sγ ′ sγ,ζ 2
2
p
√
2
(2)
≤ ksγ,ζ k∞ 2 M1 + n + sγ ′ 4 + 2 Pγ sγ,ζ + sγ ′ ∞ sγ ′ 4 ksγ,ζ k4
2
2
Let
where we used that ksγ,µ k22 = kµ(x)k2x ≤ 4M1 , kσγ k2 ≤ n. Now, we note that
Pγ(2) sγ,ζ
2
2
=
X
i
2
X
(Pγ )2ij (sγ,ζ )j
j
≤ ksγ,ζ k2∞
= ksγ,ζ k2∞
X
i
X
X
(Pγ )2ij
j
σγ2
i
i
= n ksγ,ζ k2∞ .
√
√
1/4
′
log
n
+
M1 δ), we have that
=
O(M
)
and
s
=
O(
γ
1
4
∞
p
2
k(1)ζkγ ≤ ksγ,ζ k∞ 5 M1 + sγ ′ 4 + sγ ′ ∞ sγ ′ 4 ksγ,ζ k4
p
1/2
1/4
3/4
≤ O M1
ksγ,ζ k∞ + O( log nM1 + M1 δ) ksγ,ζ k4 .
Using also that sγ ′
For the second term, we have that
k(1)ζkγ = Pγ ∇2 f ζ
2
2
≤ ζ T ∇2 f · ζ ≤ M2 ksγ,ζ k2 .
Combining both terms, we have that
p
1/4
3/4
1/2
kΦ(t)ζkγ = O M1
ksγ,ζ k∞ + O( log nM1 + M1 δ) ksγ,ζ k4 + M2 ksγ,ζ k2 .
(7.5)
Now, we bound ksγ,ζ k2 , ksγ,ζ k4 and ksγ,ζ k∞ . (7.1) shows that
−1 T
d
ζ(t) = ATγ Aγ
Aγ Sγ ′ Aγ ζ.
dt
Let wp (t) = kAγ ζ(t)kp . Then, we have that
d
d
wp (t) ≤
Aγ ζ(t)
dt
dt
p
d
ζ(t)
dt
≤ Sγ ′ Aγ ζ(t)
p
+ Aγ
≤ Sγ ′ Aγ ζ(t)
p
+ Pγ Sγ ′ Aγ ζ
≤ sγ ′
d
w2 (t) ≤ 2 sγ ′
For p = 2, we have that dt
1
that t ≤ δ ≤
1/4 , we have
∞
w (t) + Sγ ′ Aγ ζ
∞ p
w2 (t) ≤ e512(
= O(n
where we used that ζ(0) = γ ′ (0) and t ≤
O(n1/2 ).
√
√
log n+ M1 δ)t
1/2
p
.
2
w2 (t). Using that sγ ′
36M1
p
∞
≤ 256
√
log n +
√
M1 δ and
w2 (0)
)
1
1/4
12(v4 +M1 )
at the end. Therefore, we have that ksγ,ζ k2 =
For p = 4, we note that
d
w4 (t) ≤ sγ ′ ∞ w4 (t) + sγ ′ 4 ksγ,ζ k4
dt
1/4
≤ 2 sγ ′ 4 w4 (t) = O(M1 w4 (t)).
1
1/4 , we have again
36M1
O(n1/4 ), we have that
that w4 (t) = O(w4 (0)). Since w4 (0) = kAγ ζ(0)k4 =
Since t ≤ δ ≤
kAγ γ ′ (0)k4 =
w4 (t) = O(n1/4 ).
For p = ∞, we note that
d
w∞ (t) ≤ sγ ′ ∞ w∞ (t) + sγ ′ 4 ksγ,ζ k4
dt
1/4
1/4
≤ O(M1 w∞ (t)) + O(M1 n1/4 ).
Again using t ≤ δ ≤
1
1/4 ,
36M1
√
1/4
we have that w∞ (t) ≤ O( log n + M1 n1/4 δ).
Combining our bounds on w2 , w4 , w∞ to (7.5), we get
1
3
3
1
p
p
1
1
1
1
2
4
kΦ(t)ζkγ = O M1 log n + M1 n 4 δ + O( log nM14 n 4 + M14 n 4 δ) + O M2 n 2
1
3
p
1
1
2
4
4
2
= O M1 log n + M1 n δ + M2 n
.
7.5  Stability of the L₂ + L₄ + L∞ norm (ℓ₁)

Lemma 59. Given a family of Hamiltonian curves γ_r(t) on M_L with ℓ(γ₀) ≤ ℓ₀, for δ² ≤ 1/(√M₁ + M₂√n) we have that
    (d/dr) ℓ(γ_r) ≤ O( M₁^{1/4} δ + 1/(δ √log n) ) ‖ (d/dr) γ_r(0) ‖_{γ_r(0)}.
Hence, ℓ₁ = O( M₁^{1/4} δ + 1/(δ √log n) ).
Proof. For brevity, all d/dr are evaluated at r = 0. Since
are evaluated at r = 0. Since
d
sγ ′
dr r
2
≤ sγr′
∞
Aγr
d
γr
dr
d
′
dr sγr
+ δ Dr γr′ (0)
γr (0)
γr (0)
!
(7.6)
.
d ′
= −Sγr , d γr sγr′ + Aγr dr
γr , we have
dr
+ Aγr
2
d ′
γ
dr r
.
2
For the last term, we note that
Dr γr′ =
Hence, we have
Aγr
d ′
γ
dr r
2
≤ Dr γr′
γr
−1 T
d ′
γr − ATγr Aγr
Aγr Sγr , d γr sγr′ .
dr
dr
+ Sγr , d γr sγ ′
dr
2
≤ Dr γr′
γr
+ sγr′
∞
sγr , d γr
dr
2
.
Therefore, we have that
d
sγ ′
dr r
2
d
+ Dr γ ′
γr
dr
γr
p
d
p
=O
log n + M1 δ
γr
dr
≤ 2 sγr′
∞
γ
+ Dr γr′ .
(7.7)
γr
Since γr is a family of Hamiltonian curves, Lemma 19 shows that
′′
ψ (t) = Γt Φ(t)Γ−1
t ψ(t)
where ψ(t) is the parallel transport of
d
ψ(t) = γr (0) + Dr γr′ (0)t +
dr
d
dr γr (t)
Z
0
t
from γr (t) to γr (0). By Lemma 49, we have that
(t − r)Γr Φ(r)Γ−1
r (
d
γr (0) + Dr γr′ (0)r + E(r))dr
dr
def
d
with kE(r)kF ≤ O(1)∆ and ∆ = dr
γr (0) γr (0) + δ kDr γr′ (0)kγr (0) where we used that kΦ(t)kF,γ =
√
√
1 √
O( M1 + M2 n) (Lemma 56) and that s2 ≤ δ2 ≤ √M +M
.
1
2 n
Therefore, we have that
Z t
d
γr (t)
= ψ(t) γr (0) ≤ ∆ + O(∆) (t − s) Γr Φ(s)Γ−1
dr
r
γr (0)
dr
0
γr (t)
≤ O(∆)
√
√
where we used again kΦ(s)kF,γ = O( M1 + M2 n) and s2 ≤ δ2 ≤
′
Similarly, we have that kDr γr′ (t)kγr (t) = ψ (t)
√1 ,
n
Putting these into (7.7) and using h ≤
d
sγ ′
dr r
We write
ℓ(γr ) = max
0≤t≤δ
sγr′ (t)
n1/2 +
2
1/4
M1
+
sγr′ (t)
1 √
.
M1 +M2 n
≤ O( ∆
δ ).
we have
∆
M1 δ ∆ +
δ
p
1
M1 δ +
=O
∆.
δ
=O
2
γr (0)
√
4
1/4
M1
p
log n +
sγr′ (t)
+√
log n +
p
√∞
M1 δ
+
sγr′ (0)
(7.8)
2
n1/2
+
According to same calculation as (7.7), we can improve the estimate on
d
sγ ′ (0)
dr r
2
≤
p
log n
d
γr (0)
dr
∆
≤ .
δ
sγr′ (0)
n1/4
d
′
dr sγr 2
4
sγ ′ (0) ∞
+ √r
log n
!
.
for t = 0 and get
+ Dr γr′ (0)
γr
(7.9)
Using (7.8) and (7.9), we have that
d
d
′
′
dr sγr (t) 2
dr sγr (0) 2
√
max
+
1/4
0≤t≤δ
log n
M1
!
√
M1 δ + 1δ
1
∆
+ √
1/4
δ log n
M1
d
ℓ(γr ) = O
dr
=O
=O
7.6
1/4
M1 δ
1
+ √
δ log n
!
∆.
Mixing Time
Lemma 60. If f(x) = α · φ(x) (logarithmic barrier), then we have that M₁ = n + α² m, M₂ = α and M₃ = 2α √n.
Proof. For M₁, we note that (∇f(x))^T (A_x^T A_x)^{−1} ∇f(x) = α² 1^T A_x (A_x^T A_x)^{−1} A_x^T 1 ≤ α² m. Hence, M₁ = n + α² m.
For M₂, it directly follows from the definition.
For M₃, we note that
    Tr( (A_x^T A_x)^{−1} ∇³f(x)[v] ) = −2α Tr( (A_x^T A_x)^{−1} A_x^T S_{x,v} A_x ) = −2α ∑_i σ_{x,i} (s_{x,v})_i.
Hence, we have
    | Tr( (A_x^T A_x)^{−1} ∇³f(x)[v] ) | ≤ 2α √( ∑_i σ²_{x,i} ) √( ∑_i (s_{x,v})²_i ) ≤ 2α √n ‖v‖_x.
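The bound on M₁ can be spot-checked numerically; the sketch below samples interior points of a random (illustrative) polytope and verifies (∇f)^T (A_x^T A_x)^{−1} ∇f ≤ α²m for f = αφ.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, alpha = 20, 4, 0.7
A = rng.standard_normal((m, n))
b = -np.ones(m)                          # x = 0 is strictly feasible for {Ax > b}

worst = 0.0
for _ in range(200):
    x = 0.05 * rng.standard_normal(n)    # points near the center
    s = A @ x - b
    if np.any(s <= 0):
        continue
    Ax = A / s[:, None]
    g = -alpha * Ax.T @ np.ones(m)       # grad of alpha * phi
    val = g @ np.linalg.solve(Ax.T @ Ax, g)
    worst = max(worst, val)
print(worst, "<=", alpha**2 * m)         # val = alpha^2 1^T P_x 1 <= alpha^2 m
```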
Using Theorem 29, we have the following.
Lemma 61. There is a universal constant c > 0 such that if the step size
    δ ≤ c · min( n^{−1/3}, α^{−1/3} m^{−1/6} n^{−1/6}, α^{−1/2} m^{−1/4} n^{−1/12} ),
then all the δ conditions for Theorem 30 are satisfied.
Proof. In the previous section, we proved that if δ ≤
1
1/4
36M1
and n is large enough,
1. ℓ0 = 256 (Lemma 54)
1/4
1
(Lemma 59)
2. ℓ1 = O M1 δ + δ√log
n
√
√
3. R1 = O( M1 + M2 n) (Lemma 56)
√
√
4. R2 = O( nM1 + nM1 δ2 +
1/4
M1
δ
+
√
n log n
δ
+
√
nM2 + M3 ) (Lemma 57)
3
1√
1
1
5. R3 = O(M12 log n + M14 n 4 δ + M2 n 2 ) (Lemma 58)
1
1
1
1
1
Substituting the value of M1 , M2 and M3 and using that δ . n− 3 , δ . α− 2 n− 3 and δ . α− 2 m− 4 ,
we have that
1 1
1
1. ℓ1 = O α 2 m 4 δ + δ√log
n
√
√
2. R1 = O( n + α m)
1 √
√
√
4
3. R2 = O(n + α nm + αm +δ n log n )
√
√
3
3
1
4. R3 = O( n log n + α m log n + nδ + α 2 m 4 n 4 δ)
Now, we verify all the δ conditions for Theorem 30.
1
1
1
Using δ . n− 3 and δ . α− 2 m− 4 , we have that δ2 .
Using δ . n
− 13
− 12
and δ . α
m
− 41
1
R1 .
, we have that
1
1
1
δ5 R12 ℓ1 . δ5 (n + α2 m) · (α 2 m 4 δ + √
) ≤ ℓ0 .
δ log n
For the last condition, we note that
p
√
√
1
δ3 R2 + δ2 R3 .nδ3 + α nmδ3 + αm 4 δ2 + n log nδ2
p
p
3
1
3
+ n log nδ2 + α m log nδ2 + nδ3 + α 2 m 4 n 4 δ3
p
p
√
√
3
1
1
3
. αm 4 δ2 + n log nδ2 + α m log nδ2 + nδ3 + α nmδ3 + α 2 m 4 n 4 δ3
p
p
√
3
3
1
. n log nδ2 + α m log nδ2 + nδ3 + α nmδ3 + α 2 m 4 n 4 δ3
where we used that
Therefore, if
√
√
1
αm 4 . (1 + α m) at the end
1
1
1
1
1
1
1
δ ≤ c · min n− 3 , α− 3 m− 6 n− 6 , α− 2 m− 4 n− 12
for small enough constant, then all the δ conditions for Theorem 30 are satisfied.
Acknowledgement. We thank Ben Cousins for helpful discussions. This work was supported in part
by NSF awards CCF-1563838, CCF-1717349 and CCF-1740551.
References
[1] D. Applegate and R. Kannan. Sampling and integration of near log-concave functions. In
STOC, pages 156–163, 1991.
[2] M. Betancourt. A Conceptual Introduction to Hamiltonian Monte Carlo. ArXiv e-prints,
January 2017.
[3] B. Cousins and S. Vempala. Bypassing KLS: Gaussian cooling and an O ∗ (n3 ) volume algorithm.
In STOC, pages 539–548, 2015.
[4] M. E. Dyer and A. M. Frieze. Computing the volume of a convex body: a case where randomness provably helps. In Proc. of AMS Symposium on Probabilistic Combinatorics and Its
Applications, pages 123–170, 1991.
[5] M. E. Dyer, A. M. Frieze, and R. Kannan. A random polynomial time algorithm for approximating the volume of convex bodies. In STOC, pages 375–381, 1989.
[6] M. Girolami, B. Calderhead, and S. A. Chin. Riemannian Manifold Hamiltonian Monte Carlo.
ArXiv e-prints, July 2009.
[7] Mark Girolami and Ben Calderhead. Riemann manifold langevin and hamiltonian monte carlo
methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2):123–
214, 2011.
[8] A. T. Kalai and S. Vempala. Simulated annealing for convex optimization. Math. Oper. Res.,
31(2):253–266, 2006.
[9] R. Kannan, L. Lovász, and M. Simonovits. Isoperimetric problems for convex bodies and a
localization lemma. Discrete & Computational Geometry, 13:541–559, 1995.
[10] R. Kannan, L. Lovász, and M. Simonovits. Random walks and an O ∗ (n5 ) volume algorithm
for convex bodies. Random Structures and Algorithms, 11:1–50, 1997.
[11] R. Kannan and H. Narayanan. Random walks on polytopes and an affine interior point method
for linear programming. In STOC, pages 561–570, 2009.
[12] Yin Tat Lee and Aaron Sidford. Path finding methods for linear programming: Solving linear
programs in õ (vrank) iterations and faster algorithms for maximum flow. In Foundations
of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pages 424–433. IEEE,
2014.
[13] Yin Tat Lee and Santosh S. Vempala. Geodesic walks in polytopes. CoRR, abs/1606.04696,
2016.
[14] Yin Tat Lee and Santosh Srinivas Vempala. Eldan’s stochastic localization and the KLS hyperplane conjecture: An improved lower bound for expansion. CoRR, abs/1612.01507, 2016.
[15] Yin Tat Lee and Santosh Srinivas Vempala. Eldan’s stochastic localization and the KLS hyperplane conjecture: An improved lower bound for expansion. In Proc. of IEEE FOCS, 2017.
[16] Yin Tat Lee and Santosh Srinivas Vempala. Geodesic walks in polytopes. In Proceedings of the
49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC,
Canada, June 19-23, 2017, pages 927–940, 2017.
[17] L. Lovász and M. Simonovits. Mixing rate of Markov chains, an isoperimetric inequality, and
computing the volume. In ROCS, pages 482–491, 1990.
[18] L. Lovász and M. Simonovits. On the randomized complexity of volume and diameter. In Proc.
33rd IEEE Annual Symp. on Found. of Comp. Sci., pages 482–491, 1992.
[19] L. Lovász and M. Simonovits. Random walks in a convex body and an improved volume
algorithm. In Random Structures and Alg., volume 4, pages 359–412, 1993.
[20] L. Lovász and S. Vempala. Fast algorithms for logconcave functions: sampling, rounding,
integration and optimization. In FOCS, pages 57–68, 2006.
[21] L. Lovász and S. Vempala. Hit-and-run from a corner. SIAM J. Computing, 35:985–1005, 2006.
[22] L. Lovász and S. Vempala. Simulated annealing in convex bodies and an O ∗ (n4 ) volume
algorithm. J. Comput. Syst. Sci., 72(2):392–417, 2006.
[23] L. Lovász and S. Vempala. The geometry of logconcave functions and sampling algorithms.
Random Struct. Algorithms, 30(3):307–358, 2007.
[24] Oren Mangoubi and Aaron Smith. Rapid mixing of hamiltonian monte carlo on strongly logconcave distributions. arXiv preprint arXiv:1708.07114, 2017.
[25] Radford M. Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte
Carlo, 54:113–162, 2010.
[26] R.M. Neal. Bayesian Learning for Neural Networks. Lecture Notes in Statistics. Springer New
York, 1996.
[27] Yurii Nesterov, Arkadii Nemirovskii, and Yinyu Ye. Interior-point polynomial algorithms in
convex programming, volume 13. SIAM, 1994.
[28] Burt Totaro. The curvature of a hessian metric.
15(04):369–391, 2004.
International Journal of Mathematics,
[29] S. Vempala. Geometric random walks: A survey. MSRI Combinatorial and Computational
Geometry, 52:573–612, 2005.
A  Matrix ODE

In this section, we prove Lemmas (20) and (21) for the solution of the ODE (3.4), restated below for convenience:
    (d²/dt²) Ψ(t) = Φ(t) Ψ(t),   (d/dt) Ψ(0) = B,   Ψ(0) = A.
Lemma 62. Consider the matrix ODE (3.4). Let λ = max_{0≤t≤ℓ} ‖Φ(t)‖₂. For any t ≥ 0, we have that
    ‖Ψ(t)‖₂ ≤ ‖A‖₂ cosh(√λ t) + ( ‖B‖₂ / √λ ) sinh(√λ t).
Proof. Note that
    Ψ(t) = Ψ(0) + t Ψ′(0) + ∫_0^t (t − s) Ψ″(s) ds = A + tB + ∫_0^t (t − s) Φ(s) Ψ(s) ds.      (A.1)
Let a(t) = ‖Ψ(t)‖₂. Then we have that
    a(t) ≤ ‖A‖₂ + t ‖B‖₂ + λ ∫_0^t (t − s) a(s) ds.
Let ā(t) be the solution of the integral equation
    ā(t) = ‖A‖₂ + t ‖B‖₂ + λ ∫_0^t (t − s) ā(s) ds.
By induction, we have that a(t) ≤ ā(t) for all t ≥ 0. By taking derivatives on both sides, we have that
    ā″(t) = λ ā(t),   ā(0) = ‖A‖₂,   ā′(0) = ‖B‖₂.
Solving these equations, we have
    ‖Ψ(t)‖₂ = a(t) ≤ ā(t) = ‖A‖₂ cosh(√λ t) + ( ‖B‖₂ / √λ ) sinh(√λ t)
for all t ≥ 0.
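The bound of Lemma 62 is easy to check numerically by integrating the ODE; the fixed symmetric Φ, the initial data and the horizon below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
Phi = rng.standard_normal((d, d)); Phi = (Phi + Phi.T) / 2
lam = np.linalg.norm(Phi, 2)                 # lambda = max_t ||Phi(t)||_2 (Phi constant here)
A = rng.standard_normal((d, d)); B = rng.standard_normal((d, d))

h, T = 1e-3, 1.0
Psi, dPsi = A.copy(), B.copy()
ok = True
for k in range(int(T / h)):
    t = k * h
    bound = (np.linalg.norm(A, 2) * np.cosh(np.sqrt(lam) * t)
             + np.linalg.norm(B, 2) * np.sinh(np.sqrt(lam) * t) / np.sqrt(lam))
    ok &= bool(np.linalg.norm(Psi, 2) <= bound + 1e-6)
    dPsi = dPsi + h * (Phi @ Psi)            # explicit Euler on Psi'' = Phi Psi
    Psi = Psi + h * dPsi
print("bound holds along the trajectory:", ok)
```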
Lemma 63. Consider the matrix ODE (3.4). Let λ = max_{0≤t≤ℓ} ‖Φ(t)‖_F. For any 0 ≤ t ≤ 1/√λ, we have that
    ‖Ψ(t) − A − Bt‖_F ≤ λ ( t² ‖A‖₂ + (t³/5) ‖B‖₂ ).
In particular, this shows that
    Ψ(t) = A + Bt + ∫_0^t (t − s) Φ(s) (A + Bs + E(s)) ds
with ‖E(s)‖_F ≤ λ ( s² ‖A‖₂ + (s³/5) ‖B‖₂ ).
Proof. Recall from (A.1) that
Ψ(t) = A + tB +
Z
t
0
(A.2)
(t − s)Φ(s)Ψ(s)ds.
Let E(t) = Ψ(t) − (A + tB). Using Lemma 20, we have that
kE(t)kF =
Z
≤λ
t
0
Z
Z
(t − s)Φ(s)Ψ(s)ds
F
t
(t − s) kΨ(s)k2 ds
0
√
√
kBk
(t − s) kAk2 cosh( λs) + √ 2 sinh( λs)ds
λ
0
√
√
√
kBk2
= λ kAk2 (cosh( λt) − 1) + √ (sinh( λt) − λt) .
λ
≤λ
t
√
√
√
Since 0 ≤ t ≤ √1λ , we have that cosh( λt) − 1 ≤ λt2 and sinh( λt) − λt ≤
the result.
The last equality follows again from (A.2)
λ3/2 t3
5 .
This gives
Next, we have an elementary lemma about the determinant.
Lemma 64. Suppose that E is a matrix (not necessarily symmetric) with ‖E‖₂ ≤ 1/4. Then we have
    | log det(I + E) − Tr E | ≤ ‖E‖²_F.
Proof. Let f(t) = log det(I + tE). Then, by Jacobi's formula, we have
    f′(t) = Tr( (I + tE)^{−1} E ),
    f″(t) = −Tr( (I + tE)^{−1} E (I + tE)^{−1} E ).
Since ‖E‖₂ ≤ 1/4, we have that ‖(I + tE)^{−1}‖₂ ≤ 4/3 and hence
    |f″(t)| = | Tr( (I + tE)^{−1} E (I + tE)^{−1} E ) | ≤ ‖(I + tE)^{−1}‖₂² Tr(E^T E) ≤ 2 Tr(E^T E) = 2 ‖E‖²_F.
The result follows from
    f(1) = f(0) + f′(0) + ∫_0^1 (1 − s) f″(s) ds = Tr(E) + ∫_0^1 (1 − s) f″(s) ds.
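A quick randomized check (a sketch) of Lemma 64; the matrix sizes and scaling are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
worst_gap = 0.0
for _ in range(1000):
    d = int(rng.integers(2, 8))
    E = rng.standard_normal((d, d))
    E *= 0.25 / max(np.linalg.norm(E, 2), 1e-12) * rng.uniform(0.1, 1.0)   # enforce ||E||_2 <= 1/4
    lhs = abs(np.linalg.slogdet(np.eye(d) + E)[1] - np.trace(E))
    worst_gap = max(worst_gap, lhs - np.linalg.norm(E, 'fro')**2)
print("max of |logdet - TrE| - ||E||_F^2 over trials:", worst_gap)   # should be <= 0
```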
B  Concentration

Lemma 65 ([13, Ver 3, Lemma 90]). For p ≥ 1, we have
    P_{x∼N(0,I)}( ‖Ax‖_p ≤ ( (2^{p/2} Γ((p+1)/2) / √π) ∑_i ‖a_i‖_2^p )^{1/p} + ‖A‖_{2→p} t ) ≥ 1 − exp(−t²/2).
In particular, we have
    P_{x∼N(0,I)}( ‖Ax‖_4 ≤ ( 3 ∑_i ‖a_i‖_2^4 )^{1/4} + ‖A‖_{2→4} t ) ≥ 1 − exp(−t²/2)
and
    P_{x∼N(0,I)}( ‖Ax‖_2 ≤ ( ∑_i ‖a_i‖_2^2 )^{1/2} + ‖A‖_{2→2} t ) ≥ 1 − exp(−t²/2).

C  Calculus
Proof of Fact 48. Recall Definition 47 and write
dSγ−1
dAγ
=
A
dt
dt
dSγ −1
Sγ A
= −Sγ−1
dt
d(Aγ − b)
−1
= −Sγ Diag
Aγ
dt
= −Diag Sγ−1 Aγ ′ Aγ
= −Diag(Aγ γ ′ )Aγ = −Sγ ′ Aγ .
For the second, using the first,
dAγ (ATγ Aγ )−1 ATγ
dPγ
=
dt
dt
d(ATγ Aγ )−1 T
dAγ T
dAγ
=
(Aγ Aγ )−1 ATγ + Aγ (ATγ Aγ )−1
+ Aγ
Aγ
dt
dt
dt
d(ATγ Aγ ) T
(Aγ Aγ )−1 ATγ
= −Sγ ′ Pγ − Pγ Sγ ′ − Aγ (ATγ Aγ )−1
dt
= −Sγ ′ Pγ − Pγ Sγ ′ + 2Aγ (ATγ Aγ )−1 ATγ Sγ ′ Aγ (ATγ Aγ )−1 ATγ
= −Sγ ′ Pγ − Pγ Sγ ′ + 2Pγ Sγ ′ Pγ .
And for the last,
dSγ ′
= Diag
dt
dAγ ′
γ + Aγ γ ′′
dt
= Diag(−Sγ ′ Aγ γ ′ + Aγ γ ′′ ) = −Sγ2′ + Sγ ′′ .
61
D
Basic definitions of Riemannian geometry
Here we recall basic notions of Riemannian geometry. One can think of a manifold M as a ndimensional “surface” in Rk for some k ≥ n.
1. Tangent space Tp M : For any point p, the tangent space Tp M of M at point p is a linear
subspace of Rk of dimension n. Intuitively, Tp M is the vector space of possible directions that
are tangential to the manifold at x. Equivalently, it can be thought as the first-order linear
d
c(t) is tangent
approximation of the manifold M at p. For any curve c on M , the direction dt
′
to M and hence lies in Tc(t) M . When it is clear from context, we define c (t) = dc
dt (t). For any
n
n
open subset M of R , we can identify Tp M with R because all directions can be realized by
derivatives of some curves in Rn .
2. Riemannian metric: For any v, u ∈ Tp M , the inner product (Riemannianq
metric) at p is given
by hv, uip and this allows us to define the norm of a vector kvkp = hv, vip . We call a
manifold a Riemannian manifold if it is equipped with a Riemannian metric. When it is clear
from context, we define hv, ui = hv, uip . In Rn , hv, uip is the usual ℓ2 inner product.
3. Differential (Pushforward) d: Given a function f from a manifold M to a manifold N , we
define df (x) as the linear map from Tx M to Tf (x) N such that
df (x)(c′ (0)) = (f ◦ c)′ (0)
for any curve c on M starting at x = c(0). When M and N are Euclidean spaces, df (x) is the
Jacobian of f at x. We can think of pushforward as a manifold Jacobian, i.e., the first-order
approximation of a map from a manifold to a manifold.
4. Hessian manifold: We call M a Hessian manifold (induced by φ) if M is an open subset of Rn
with the Riemannian metric at any point p ∈ M defined by
hv, uip = v T ∇2 φ(p)u
where v, u ∈ Tp M and φ is a smooth convex function on M .
5. Length: For any curve c : [0, 1] → M , we define its length by
L(c) =
Z
1
0
d
c(t)
dt
dt.
c(t)
6. Distance: For any x, y ∈ M , we define d(x, y) be the infimum of the lengths of all paths
connecting x and y. In Rn , d(x, y) = kx − yk2 .
7. Geodesic: We call a curve γ(t) : [a, b] → M a geodesic if it satisfies both of the following
conditions:
(a) The curve γ(t) is parameterized with constant speed. Namely,
for t ∈ [a, b].
d
dt γ(t) γ(t)
is constant
(b) The curve is the locally shortest length curve between γ(a) and γ(b). Namely, for any
family of curve c(t, s) with c(t, 0) = γ(t) and c(a, s) = γ(a) and c(b, s) = γ(b), we have
Rb d
d
c(t, s) c(t,s) dt = 0.
that ds
s=0 a dt
Note that, if γ(t) is a geodesic, then γ(αt) is a geodesic for any α. Intuitively, geodesics are
local shortest paths. In Rn , geodesics are straight lines.
8. Exponential map: The map expp : Tp M → M is defined as
expp (v) = γv (1)
where γv is the unique geodesic starting at p with initial velocity γv′ (0) equal to v. The
exponential map takes a straight line tv ∈ Tp M to a geodesic γtv (1) = γv (t) ∈ M . Note that
expp maps v and tv to points on the same geodesic. Intuitively, the exponential map can be
thought as point-vector addition in a manifold. In Rn , we have expp (v) = p + v.
9. Parallel transport: Given any geodesic c(t) and a vector v such that hv, c′ (0)ic(0) = 0, we
define the parallel transport Γ of v along c(t) by the following process: Take h to be infinitesimally small and v0 = v. For i = 1, 2, · · · , 1/h, we let vih be the vector orthogonal to c′ (ih)
that minimizes the distance on the manifold between expc(ih) (hvih ) and expc((i−1)h) (hv(i−1)h ).
Intuitively, the parallel transport finds the vectors on the curve such that their end points are
closest to the end points of v. For general vector v ∈ Tc′ (0) , we write v = αc′ (0) + w and we
define the parallel transport of v along c(t) is the sum of αc′ (t) and the parallel transport of
w along c(t). For a non-geodesic curve, see the definition in Fact 66.
10. Orthonormal frame: Given vector fields v1 , v2 , · · · , vn on a subset of M , we call {vi }ni=1 is an
orthonormal frame if hvi , vj ix = δij for all x. Given a curve c(t) and an orthonormal frame at
c(0), we can extend it on the whole curve by parallel transport and it remains orthonormal
on the whole curve.
11. Directional derivatives and the Levi-Civita connection: For a vector v ∈ Tp M and a vector
field u in a neighborhood of p, let γv be the unique geodesic starting at p with initial velocity
γv′ (0) = v. Define
u(h) − u(0)
∇v u = lim
h→0
h
where u(h) ∈ Tp M is the parallel transport of u(γ(h)) from γ(h) to γ(0). Intuitively, LeviCivita connection is the directional derivative of u along direction v, taking the metric into
d
u(x + tv). When u is defined on a curve
account. In particular, for Rn , we have ∇v u(x) = dt
d
d
n
c, we define Dt u = ∇c′ (t) u. In R , we have Dt u(γ(t)) = dt
u(γ(t)). We reserve dt
for the usual
derivative with Euclidean coordinates.
We list some basic facts about the definitions introduced above that are useful for computation and
intuition.
Fact 66. Given a manifold M , a curve c(t) ∈ M , a vector v and vector fields u, w on M , we have
the following:
1. (alternative definition of parallel transport) v(t) is the parallel transport of v along c(t) if and
only if ∇c′ (t) v(t) = 0.
2. (alternative definition of geodesic) c is a geodesic if and only if ∇c′ (t) c′ (t) = 0.
3. (linearity) ∇v (u + w) = ∇v u + ∇v w.
4. (product rule) For any scalar-valued function f, ∇v (f · u) =
63
∂f
∂v u
+ f · ∇v u.
5. (metric preserving)
d
dt
hu, wic(t) = hDt u, wic(t) + hu, Dt wic(t) .
∂c
6. (torsion free-ness) For any map c(t, s) from a subset of R2 to M , we have that Ds ∂c
∂t = Dt ∂s
where Ds = ∇ ∂c and Dt = ∇ ∂c .
∂s
∂t
7. (alternative definition of Levi-Civita connection) ∇v u is the unique linear mapping from the
product of vector and vector field to vector field that satisfies (3), (4), (5) and (6).
D.1
Curvature
Roughly speaking, curvature measures the amount by which a manifold deviates from Euclidean
space. Given vector u, v ∈ Tp M , in this section, we define uv be the point obtained from moving
from p along direction u with distance kukp (using geodesic), then moving along direction “v” with
distance kvkp where “v” is the parallel transport of v along the path u. In Rn , uv is exactly p + u + v
and hence uv = vu, namely, parallelograms close up. For a manifold, parallelograms almost close
up, namely, d(uv, vu) = o(kuk kvk). This property is called being torsion-free.
1. Riemann curvature tensor: Three-dimensional parallelepipeds might not close up, and the
curvature tensor measures how far they are from closing up. Given vector u, v, w ∈ Tp M , we
define uvw as the point obtained by moving from uv along direction “w” for distance kwkp
where “w” is the parallel transport of w along the path uv. In a manifold, parallelepipeds do
not close up and the Riemann curvature tensor how much uvw deviates from vuw. Formally,
for vector fields v, w, we define τv w be the parallel transport of w along the vector field v for
one unit of time. Given vector field v, w, u, we define the Riemann curvature tensor by
R(u, v)w =
d d −1 −1
τ τ τsu τtv w
ds dt su tv
.
(D.1)
t,s=0
Riemann curvature tensor is a tensor, namely, R(u, v)w at point p depends only on u(p), v(p)
and w(p).
2. Ricci curvature: Given a vector v ∈ Tp M , the Ricci curvature Ric(v) measures if the geodesics
starting around p in direction v converge together. Positive Ricci curvature indicates the
geodesics converge while negative curvature indicates they diverge. Let S(0) be a small shape
around p and S(t) be the set of point obtained by moving S(0) along geodesics in the direction
v for t units of time. Then,
volS(t) = volS(0)(1 −
Formally, we define
Ric(v) =
t2
Ric(v) + smaller terms).
2
X
ui
(D.2)
hR(v, ui )ui , vi
where ui is an orthonormal basis of Tp M . Equivalently, we have Ric(v) = Eu∼N (0,I) hR(v, u)u, vi.
kvk2 .
For Rn , Ric(v) = 0. For a sphere in n + 1 dimension with radius r, Ric(v) = n−1
r2
Fact 67 (Alternative definition of Riemann curvature tensor). Given any M-valued function c(t, s),
we have vector fields ∂c/∂t and ∂c/∂s on M. Then, for any vector field z,

R(∂c/∂t, ∂c/∂s) z = ∇∂c/∂t ∇∂c/∂s z − ∇∂c/∂s ∇∂c/∂t z.

Equivalently, we write R(∂t c, ∂s c)z = Dt Ds z − Ds Dt z.
Fact 68. Given vector fields v, u, w, z on M,

⟨R(v, u)w, z⟩ = ⟨R(w, z)v, u⟩ = − ⟨R(u, v)w, z⟩ = − ⟨R(v, u)z, w⟩ .
D.2
Hessian manifolds
Recall that a manifold is called Hessian if it is a subset of Rn and its metric is given by gij = ∂²φ/∂xi∂xj
for some smooth convex function φ. We let g^ij denote the entries of the inverse matrix of (gij); for example,
we have Σj g^ij gjk = δik. We use φij to denote ∂²φ/∂xi∂xj and φijk to denote ∂³φ/∂xi∂xj∂xk.
Since a Hessian manifold is a subset of Euclidean space, we identify tangent spaces Tp M by
Euclidean coordinates. The following lemma gives formulas for the Levi-Civita connection and
curvature under Euclidean coordinates.
Lemma 69 ([28]). Given a Hessian manifold M , vector fields v, u, w, z on M , we have the following:
1. (Levi-Civita connection) ∇v u = Σ_{ik} vi (∂uk/∂xi) ek + Σ_{ijk} vi uj Γ^k_{ij} ek, where ek are coordinate vectors
and the Christoffel symbol is Γ^k_{ij} = (1/2) Σ_l g^kl φijl.
2. (Riemann curvature tensor) ⟨R(u, v)w, z⟩ = Σ_{ijlk} Rklij ui vj wl zk, where
Rklij = (1/4) Σ_{pq} g^pq (φjkp φilq − φikp φjlq).
3. (Ricci curvature) Ric(v) = (1/4) Σ_{ijlkpq} g^pq g^jl (φjkp φilq − φikp φjlq) vi vk.
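To make Lemma 69 concrete, here is a minimal numerical sketch (not from the paper; the potential φ, the finite-difference scheme and all names are illustrative assumptions) that evaluates the metric, the Christoffel symbols and Ric(v) directly from the formulas above:

```python
# Hypothetical sketch: Hessian-manifold quantities of Lemma 69 for an
# illustrative potential phi(x) = sum(exp(x_i)) + 0.5*||x||^2, via finite differences.
import numpy as np

def phi(x):
    return np.sum(np.exp(x)) + 0.5 * np.dot(x, x)

def metric_and_third_derivs(x, h=1e-4):
    """Return g_ij = Hessian(phi) and the third derivatives phi_ijk at x."""
    n = len(x)
    def hessian(y):
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
                H[i, j] = (phi(y + ei + ej) - phi(y + ei - ej)
                           - phi(y - ei + ej) + phi(y - ei - ej)) / (4 * h * h)
        return H
    T = np.zeros((n, n, n))
    for k in range(n):
        ek = np.eye(n)[k] * h
        T[:, :, k] = (hessian(x + ek) - hessian(x - ek)) / (2 * h)
    return hessian(x), T

x = np.array([0.3, -0.1])
g, phi3 = metric_and_third_derivs(x)
g_inv = np.linalg.inv(g)

# Christoffel symbols: Gamma^k_ij = (1/2) * sum_l g^{kl} phi_ijl
Gamma = 0.5 * np.einsum('kl,ijl->kij', g_inv, phi3)

# Ricci curvature of a tangent vector v (item 3 of Lemma 69)
v = np.array([1.0, 0.5])
Ric_v = 0.25 * (np.einsum('pq,jl,jkp,ilq,i,k->', g_inv, g_inv, phi3, phi3, v, v)
                - np.einsum('pq,jl,ikp,jlq,i,k->', g_inv, g_inv, phi3, phi3, v, v))
print("g =", g, "\nRic(v) =", Ric_v)
```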
| 8 |
arXiv:1710.08585v1 [cs.LG] 24 Oct 2017
Max-Margin Invariant Features from Transformed
Unlabeled Data
Dipan K. Pal, Ashwin A. Kannan∗, Gautam Arakalgud∗, Marios Savvides
Department of Electrical and Computer Engineering
Carnegie Mellon University
Pittsburgh, PA 15213
{dipanp,aalapakk,garakalgud,marioss}@cmu.edu
Abstract
The study of representations invariant to common transformations of the data
is important to learning. Most techniques have focused on local approximate
invariance implemented within expensive optimization frameworks lacking explicit
theoretical guarantees. In this paper, we study kernels that are invariant to a unitary
group while having theoretical guarantees in addressing the important practical
issue of the unavailability of transformed versions of labelled data, a problem we
call the Unlabeled Transformation Problem, which is a special form of semi-supervised learning and one-shot learning. We present a theoretically motivated
alternate approach to the invariant kernel SVM, based on which we propose Max-Margin Invariant Features (MMIF) to solve this problem. As an illustration, we
design a framework for face recognition and demonstrate the efficacy of our
approach on a large scale semi-synthetic dataset with 153,000 images and a new
challenging protocol on Labelled Faces in the Wild (LFW) while out-performing
strong baselines.
1
Introduction
It is becoming increasingly important to learn well generalizing representations that are invariant
to many common nuisance transformations of the data. Indeed, being invariant to intra-class
transformations while being discriminative to between-class transformations can be said to be one of
the fundamental problems in pattern recognition. The nuisance transformations can give rise to many
‘degrees of freedom’ even in a constrained task such as face recognition (e.g. pose, age-variation,
illumination etc.). Explicitly factoring them out leads to improvements in recognition performance as
found in [11, 8, 6]. It has also been shown that features that are explicitly invariant to intra-class
transformations allow the sample complexity of the recognition problem to be reduced [2]. To this
end, the study of invariant representations and machinery built on the concept of explicit invariance is
important.
Invariance through Data Augmentation. Many approaches in the past have enforced invariance
by generating transformed labelled training samples in some form such as [14, 19, 21, 10, 17, 4].
Perhaps one of the most popular methods for incorporating invariances in SVMs is the virtual support
method (VSV) in [20], which used sequential runs of SVMs in order to find and augment the support
vectors with transformed versions of themselves.
Indecipherable transformations in data lead to a shortage of transformed labelled samples. The
above approaches however, assume that one has explicit knowledge about the transformation. This
is a strong assumption. Indeed, in most general machine learning applications, the transformation
∗
Authors contributed equally
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
present in the data is not clear and cannot be modelled easily, e.g. transformations between different
views of a general 3D object and between different sentences articulated by the same person. Methods
which work on generating invariance by explicitly transforming or augmenting labelled training data
cannot be applied to these scenarios. Further, in cases where we do know the transformations that
exist and we actually can model them, it is difficult to generate transformed versions of very large
labelled datasets. Hence there arises an important problem: how do we train models to be invariant to
transformations in test data, when we do not have access to transformed labelled training samples?
Availability of unlabeled transformed data.
Although it is difficult to obtain or generate
transformed labelled data (due to the reasons
mentioned above), unlabeled transformed data
is more readily available. For instance, if different views of specific objects of interest are not
available, one can simply collect views of general objects. Also, if different sentences spoken
by a specific group of people are not available,
one can simply collect those spoken by members of the general population. In both these
scenarios, no explicit knowledge or model of
the transformation is needed, thereby bypassing
the problem of indecipherable transformations.
This situation is common in vision e.g. only
unlabeled transformed images are observed, but
has so far mostly been addressed by the community by intense efforts in large scale data collection. Note that the transformed data that is
collected is not required to be labelled. We now
are in a position to state the central problem that
this paper addresses.
Figure 1: Max-Margin Invariant Features (MMIF) can
solve an important problem we call the Unlabeled Transformation Problem. In the figure, a traditional classifier
F (x) "learns" invariance to nuisance transformations
directly from the labeled dataset X . On the other hand,
our approach (MMIF) can incorporate additional invariance learned from any unlabeled data that undergoes the
nuisance transformation of interest.
The Unlabeled Transformation (UT) Problem:
Having access to transformed versions of the training unlabeled data but not of labelled data, how
do we learn a discriminative model of the labelled data, while being invariant to transformations
present in the unlabeled data ?
Overall approach. The approach presented in this paper however (see Fig. 1), can solve this problem
and learn invariance to transformations observed only through unlabeled samples and does not need
labelled training data augmentation. We explicitly and simultaneously address both problems of
generating invariance to intra-class transformation (through invariant kernels) and being discriminative
to inter or between class transformations (through max-margin classifiers). Given a new test sample,
the final extracted feature is invariant to the transformations observed in the unlabeled set, and thereby
generalizes using just a single example. This is an example of one-shot learning.
Prior Art: Invariant Kernels. Kernel methods in machine learning have long been studied to
considerable depth. Nonetheless, the study of invariant kernels and techniques to extract invariant
features has received much less attention. An invariant kernel allows the kernel product to remain
invariant under transformations of the inputs. Most instances of incorporating invariances focused
on local invariances through regularization and optimization such as [20, 21, 3, 23]. Some other
techniques were jittering kernels [19, 3] and tangent-distance kernels [5], both of which sacrificed
the positive semi-definite property of its kernels and were computationally expensive. Though these
methods have had some success, most of them still lack explicit theoretical guarantees towards
invariance. The proposed invariant kernel SVM formulation on the other hand, develops a valid PSD
kernel that is guaranteed to be invariant. [4] used group integration to arrive at invariant kernels
but did not address the Unlabeled Transformation problem which our proposed kernels do address.
Further, our proposed kernels allow for the formulation of the invariant SVM and application to large
scale problems. Recently, [16] presented some work with invariant kernels. However, unlike our
non-parametric formulation, they do not learn the group transformations from the data itself and
assume known parametric transformations (i.e. they assume that transformation is computable).
Key ideas. The key ideas in this paper are twofold.
1. The first is to model transformations using unitary groups (or sub-groups), leading to unitary-group invariant kernels. Unitary transforms allow the dot product to be preserved and
allow for interesting generalization properties leading to low sample complexity and also
allow learning transformation invariance from unlabeled examples (thereby solving the
Unlabeled Transformation Problem). Classes of learning problems, such as vision, often
have transformations belonging to a unitary-group, that one would like to be invariant
towards (such as translation and rotation). In practice however, [9] found that invariance to
much more general transformations not captured by this model can be achieved.
2. Secondly, we combine max-margin classifiers with invariant kernels leading to non-linear
max-margin unitary-group invariant classifiers. These theoretically motivated invariant
non-linear SVMs form the foundation upon which Max-Margin Invariant Features (MMIF)
are based. MMIF features can effectively solve the important Unlabeled Transformation
Problem. To the best of our knowledge, this is the first theoretically proven formulation of
this nature.
Contributions. In contrast to many previous studies on invariant kernels, we study non-linear
positive semi-definite unitary-group invariant kernels guaranteeing invariance that can address the UT
Problem. One of our central theoretical results applies group integration in the RKHS. It builds on
the observation that, under unitary restrictions on the kernel map, group action in the input space is
reciprocated in the RKHS. Using the proposed invariant kernel, we present a theoretically motivated
approach towards a non-linear invariant SVM that can solve the UT Problem with explicit invariance
guarantees. As our main theoretical contribution, we showcase a result on the generalization of
max-margin classifiers in group-invariant subspaces. We propose Max-Margin Invariant Features
(MMIF) to learn highly discriminative non-linear features that also solve the UT problem. On the
practical side, we propose an approach to face recognition to combine MMIFs with a pre-trained
deep learning feature extractor (in our case VGG-Face [13]). MMIF features can be used with deep
learning whenever there is a need to focus on a particular transformation in data (in our application
pose in face recognition) and can further improve performance.
2
Unitary-Group Invariant Kernels
Premise: Consider a dataset of normalized samples along with labels X = {xi }, Y = {yi } ∀i ∈
1...N with x ∈ Rd and y ∈ {+1, −1}. We now introduce into the dataset a number of unitary transformations g part of a locally compact unitary-group G. We note again that the set of transformations
under consideration need not be the entire unitary group. They could very well be a subgroup. Our
augmented normalized dataset becomes {gxi , yi } ∀g ∈ G ∀i. For clarity, we denote by gx the action
of group element g ∈ G on x, i.e. gx = g(x). We also define an orbit of x under G as the set
XG = {gx} ∀g ∈ G. Clearly, X ⊆ XG . An invariant function is defined as follows.
Definition 2.1 (G-Invariant Function). For any group G, we define a function f : X → Rn to be
G-invariant if f (x) = f (gx) ∀x ∈ X ∀g ∈ G.
One method of generating an invariant towards a group is through group integration. Group integration
has stemmed from classical invariant theory and can be shown to be a projection onto a G-invariant
subspace for vector spaces. In such a space x = gx ∀g ∈ G and thus the representation x is invariant
under the transformation of any element from the group G. This is ideal for recognition problems
where one would want to be discriminative to between-class transformations (for e.g. between distinct
subjects in face recognition) but be invariant to within-class transformations (for e.g. different images
of the same subject). The set of transformations we model as G are the within-class transformations
that we would like to be invariant towards. An invariant to any group G can be generated through the
following basic (previously) known property (Lemma 2.1) based on group integration.
Lemma 2.1 (Invariance Property). Given a vector ω ∈ Rd and any affine group G, for any fixed g′ ∈ G
and a normalized Haar measure dg, we have g′ ∫G gω dg = ∫G gω dg.
The Haar measure (dg) exists for every locally compact group and is unique up to a positive
multiplicative constant (hence normalized). A similar property holds for discrete groups. Lemma 2.1
results in the quantity ∫G gω dg enjoying global invariance (encompassing all elements) to the group G.
This property allows one to generate a G-invariant subspace in the inherent space Rd through group
integration. In practice, the integral corresponds to a summation over transformed samples. The
following two lemmas (novel results, and part of our contribution), Lemma 2.2 and 2.3, showcase
elementary properties of the operator Ψ = ∫G g dg for a unitary-group G.² These properties would
prove useful in the analysis of unitary-group invariant kernels and features.
Lemma 2.2. If Ψ = ∫G g dg for unitary G, then Ψ^T = Ψ.
Lemma 2.3 (Unitary Projection). If Ψ = ∫G g dg for any affine G, then ΨΨ = Ψ, i.e. it is a
projection operator. Further, if G is unitary, then ⟨ω, Ψω′⟩ = ⟨Ψω, ω′⟩ ∀ω, ω′ ∈ Rd.
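To make the operator Ψ concrete, the following is a hedged sketch (illustrative only, not code from the paper) using the finite unitary group of cyclic shifts on Rd; it checks the properties stated in Lemma 2.2 and Lemma 2.3 numerically:

```python
# Hypothetical sketch: group averaging over the cyclic-shift group on R^d.
# Psi = (1/|G|) * sum_g g, realized here with permutation matrices.
import numpy as np

d = 6
shifts = [np.roll(np.eye(d), k, axis=0) for k in range(d)]  # the unitary group G
Psi = sum(shifts) / len(shifts)                             # group average

# Lemma 2.2: Psi^T = Psi;  Lemma 2.3: Psi Psi = Psi (a projection)
assert np.allclose(Psi.T, Psi)
assert np.allclose(Psi @ Psi, Psi)

# <w, Psi w'> = <Psi w, w'> for all w, w'
rng = np.random.default_rng(0)
w, w2 = rng.normal(size=d), rng.normal(size=d)
assert np.isclose(w @ (Psi @ w2), (Psi @ w) @ w2)

# Psi x is invariant: Psi(g x) = Psi x for every shift g
x = rng.normal(size=d)
assert all(np.allclose(Psi @ (g @ x), Psi @ x) for g in shifts)
```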
Sample Complexity and Generalization. On applying the operator Ψ to the dataset X , all points
in the set {gx | g ∈ G} for any x ∈ X map to the same point Ψx in the G-invariant subspace
thereby reducing the number of distinct points by a factor of |G| (the cardinality of G, if G is finite).
Theoretically, this would drastically reduce sample complexity while preserving linear feasibility
(separability). It is trivial to observe that a perfect linear separator learned in XΨ = {Ψx | x ∈
X } would also be a perfect separator for XG , thus in theory achieving perfect generalization.
Generalization here refers to the ability to perform correct classification even in the presence of the
set of transformations G. We prove a similar result for Reproducing Kernel Hilbert Spaces (RKHS)
in Section 2.2. This property is theoretically powerful since cardinality of G can be large. A classifier
can avoid having to observe transformed versions {gx} of any x and yet generalize perfectly.
The case of Face Recognition. As an illustration, if the group G of transformations considered is
pose (it is hypothesized that small changes in pose can be modeled as unitary [11]), then Ψ = ∫G g dg
represents a pose-invariant subspace. In theory, all poses of a subject will converge to the same point
in that subspace, leading to near-perfect pose-invariant recognition.
We have not yet leveraged the power of the unitary structure of the groups which is also critical in
generalization to test cases as we would see later. We now present our central result showcasing that
unitary kernels allow the unitary group action to reciprocate in a Reproducing Kernel Hilbert Space.
This is critical to set the foundation for our core method called Max-Margin Invariant Features.
2.1
Group Actions Reciprocate in a Reproducing Kernel Hilbert Space
Group integration provides exact invariance as seen in the previous section. However, it requires
the group structure to be preserved, i.e. if the group structure is destroyed, group integration does
not provide an invariant function. In the context of kernels, it is imperative that the group relation
between the samples in XG be preserved in the kernel Hilbert space H corresponding to some kernel
k with a mapping φ. If the kernel k is unitary in the following sense, then this is possible.
Definition 2.2 (Unitary Kernel). A kernel k(x, y) = ⟨φ(x), φ(y)⟩ is a unitary kernel if, for a unitary
group G, the mapping φ(x) : X → H satisfies ⟨φ(gx), φ(gy)⟩ = ⟨φ(x), φ(y)⟩ ∀g ∈ G, ∀x, y ∈ X.
The unitary condition is fairly general; a common class of unitary kernels is the RBF kernel. We now
define a transformation within the RKHS itself as gH : φ(x) → φ(gx) ∀φ(x) ∈ H for any g ∈ G
where G is a unitary group. We then have the following result of significance.
Theorem 2.4 (Covariance in the RKHS). If k(x, y) = ⟨φ(x), φ(y)⟩ is a unitary kernel in the sense of
Definition 2.2, then gH is a unitary transformation, and the set GH = {gH | gH : φ(x) → φ(gx) ∀g ∈
G} is a unitary-group in H.
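As a small sanity check of Definition 2.2 (a hedged illustration, not from the paper), the RBF kernel is unitary with respect to any coordinate-permutation group, since it depends only on ‖x − y‖:

```python
# Hypothetical check that the RBF kernel satisfies Definition 2.2 for a
# permutation (hence unitary) group: k(g x, g y) == k(x, y).
import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

rng = np.random.default_rng(0)
x, y = rng.normal(size=8), rng.normal(size=8)
perm = rng.permutation(8)          # a group element g acting by coordinate permutation
print(np.isclose(rbf(x[perm], y[perm]), rbf(x, y)))   # True
```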
Theorem 2.4 shows that the unitary-group structure is preserved in the RKHS. This paves the way for
new theoretically motivated approaches to achieve invariance to transformations in the RKHS. There
have been a few studies on group invariant kernels [4, 11]. However, [4] does not examine whether
the unitary group structure is actually preserved in the RKHS, which is critical. Also, DIKF was
recently proposed as a method utilizing group structure under the unitary kernel [11]. Our result is a
generalization of the theorems they present. Theorem 2.4 shows that since the unitary group structure
is preserved in the RKHS, any method involving group integration would be invariant in the original
space. The preservation of the group structure allows more direct group invariance results to be
applied in the RKHS. It also directly allows one to formulate a non-linear SVM while guaranteeing
invariance theoretically leading to Max-Margin Invariant Features.
² All proofs are presented in the supplementary material.
2.2
Invariant Non-linear SVM: An Alternate Approach Through Group Integration
We now apply the group integration approach to the kernel SVM. The decision function of SVMs
can be written in the general form as fθ (x) = ω T φ(x) + b for some bias b ∈ R (we agglomerate all
parameters of f in θ) where φ is the kernel feature map, i.e. φ : X → H. Reviewing the SVM, a
maximum margin separator is found by minimizing loss functions such as the hinge loss along with a
regularizer. In order to invoke invariance, we can now utilize group integration in the kernel space
H using Theorem 2.4. All points in the set {gx ∈ XG} get mapped to φ(gx) = gH φ(x) for a given
g ∈ G in the input space X. Group integration then results in a G-invariant subspace within H through
ΨH = ∫GH gH dgH using Lemma 2.1. Introducing Lagrange multipliers α = (α1, α2, ..., αN) ∈ RN,
the dual formulation (utilizing Lemma 2.2 and Lemma 2.3) then becomes

min_α  − Σi αi + (1/2) Σ_{i,j} yi yj αi αj ⟨ΨH φ(xi), ΨH φ(xj)⟩        (1)

under the constraints Σi αi yi = 0 and 0 ≤ αi ≤ 1/N ∀i. The SVM separator is then given by
ω*H = ΨH ω* = Σi yi αi ΨH φ(xi), thereby existing in the GH-invariant (or equivalently G-invariant)
subspace ΨH within H (since g → gH is a bijection). Effectively, the SVM observes samples from
XΨH = {x | φ(x) = ΨH φ(u), ∀u ∈ XG} and therefore ω*H enjoys exact global invariance to G.
Further, ΨH ω* is a maximum-margin separator of {φ(XG)} (i.e. the set of all transformed samples).
This can be shown by the following result.
Theorem 2.5 (Generalization). For a unitary group G and unitary kernel k(x, y) = ⟨φ(x), φ(y)⟩,
if ω*H = ΨH ω* = (∫GH gH dgH) ω* is a perfect separator for {ΨH φ(X)} = {ΨH φ(x) | ∀x ∈ X},
then ΨH ω* is also a perfect separator for {φ(XG)} = {φ(x) | x ∈ XG} with the same margin.
Further, a max-margin separator of {ΨH φ(X)} is also a max-margin separator of {φ(XG)}.
The invariant non-linear SVM in objective 1, observes samples in the form of ΨH φ(x) and obtains a
max-margin separator ΨH ω ∗ . This allows for the generalization properties of max-margin classifiers
to be combined with those of group invariant classifiers. While being invariant to nuisance transformations, max-margin classifiers can lead to highly discriminative features (more robust than DIKF
[11] as we find in our experiments) that are invariant to within-class transformations.
Theorem 2.5 shows that the margins of φ(XG ) and {ΨH φ(XG )} are deeply related and implies that
ΨH φ(x) is a max-margin separator for both datasets. Theoretically, the invariant non-linear SVM is
able to generalize to XG on just observing X and utilizing prior information in the form of G for all
unitary kernels k. This is true in practice for linear kernels. For non-linear kernels in practice, the
invariant SVM still needs to observe and integrate over transformed training inputs.
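The following is a hedged sketch of such an invariant-kernel SVM (not the authors' implementation; the cyclic-shift group, the RBF kernel and all names are illustrative assumptions), trained with a precomputed Gram matrix kΨ(x, y) = mean_g k(x, gy):

```python
# Hypothetical sketch of an invariant-kernel SVM: the kernel is averaged over a
# finite unitary group (cyclic shifts), then fed to an SVM as a precomputed Gram matrix.
import numpy as np
from sklearn.svm import SVC

def orbit(x):
    """All cyclic shifts of x (a stand-in for the unitary group G)."""
    return np.stack([np.roll(x, k) for k in range(len(x))])

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def k_psi(x, y, gamma=0.5):
    """Group-averaged (invariant) kernel: mean_g k(x, g y)."""
    return np.mean([rbf(x, gy, gamma) for gy in orbit(y)])

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 8))
y = rng.integers(0, 2, size=40)

gram = np.array([[k_psi(a, b) for b in X] for a in X])
clf = SVC(kernel="precomputed").fit(gram, y)

# At test time the same averaged kernel is computed against the training set;
# by construction the score is unchanged if the test point is shifted.
x_test = rng.normal(size=8)
row = lambda x: np.array([[k_psi(x, b) for b in X]])
print(clf.decision_function(row(x_test)),
      clf.decision_function(row(np.roll(x_test, 3))))  # identical up to float error
```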
Leveraging unitary group properties. During test time to achieve invariance, the SVM would
require to observe and integrate over all possible transformations of the test sample. This is a huge
computational and design bottleneck. We would ideally want to achieve invariance and generalize
by observing just a single test sample, in effect perform one shot learning. This would not only
be computationally much cheaper but make the classifier powerful owing to generalization to full
transformed orbits of test samples by observing just that single sample. This is where unitarity of g
helps and we leverage it in the form of the following lemma.
Lemma 2.6 (Invariant Projection). If Ψ = ∫G g dg for any unitary group G, then for any fixed g′ ∈ G
(including the identity element) we have ⟨Ψx′, Ψω′⟩ = ⟨g′x′, Ψω′⟩ ∀x′, ω′ ∈ Rd.
Assuming Ψω′ is the learned SVM classifier, Lemma 2.6 shows that for any test x′, the invariant dot
product ⟨Ψx′, Ψω′⟩, which involves observing all transformations of x′, is equivalent to the quantity
⟨g′x′, Ψω′⟩, which involves observing only one transformation of x′. Hence one can model the entire
orbit of x′ under G by a single sample g′x′, where g′ ∈ G can be any particular transformation
including the identity. This drastically reduces sample complexity and vastly increases the generalization
capabilities of the classifier, since one only needs to observe one test sample to achieve invariance.
Lemma 2.6 also helps us in saving computation, allowing us to apply the computationally expensive
Ψ (group integration) operation only once, on the classifier and not the test sample. Thus, the kernel in
the Invariant SVM formulation can be replaced by the form kΨ(x, y) = ⟨φ(x), ΨH φ(y)⟩.
For kernels in general, the GH-invariant subspace cannot be explicitly computed since it lies in the
RKHS. It is only implicitly projected upon through ΨH φ(xi) = ∫G φ(gxi) dgH. It is important to
[Figure 2 panels: (a) Invariant kernel feature extraction; (b) SVM feature extraction leading to MMIF features.]
Figure 2: MMIF Feature Extraction. (a) l(x) denotes the invariant kernel feature of any x which is invariant
to the transformation G. Invariance is generated by group integration (or pooling). The invariant kernel feature
learns invariance form the unlabeled transformed template set TG . Also, the faces depicted are actual samples
from the large-scale mugshots data (∼ 153, 000 images). (b) Once the invariant features have been extracted
for the labelled non-transformed dataset X , then the SVMs learned act as feature extractors. Each binary class
SVM (different color) was trained on the invariant kernel feature of a random subset of l(X ) with random class
assignments. The final MMIF feature for x is the concatenation of all SVM inner-products with l(x).
note that during testing however, the SVM formulation will be invariant to transformations of the test
sample regardless of a linear or non-linear kernel.
Positive Semi-Definiteness. The G-invariant kernel map is now of the form kΨ(x, y) =
⟨φ(x), ∫G φ(gy) dgH⟩. This preserves the positive semi-definite property of the kernel k while
guaranteeing global invariance to unitary transformations, unlike jittering kernels [19, 3] and
tangent-distance kernels [5]. If we wish to include invariance to scaling however (in the sense of
scaling an image), then we would lose positive semi-definiteness (it is also not a unitary transform).
Nonetheless, [22] show that conditionally positive definite kernels still exist for transformations
including scaling, although we focus on unitary transformations in this paper.
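A quick hedged numerical check of this claim (same illustrative assumptions as above — cyclic shifts and an RBF kernel): the Gram matrix of the group-averaged kernel should have no significantly negative eigenvalues.

```python
# Hypothetical PSD check for the group-averaged kernel k_Psi(x, y) = mean_g k(x, g y).
import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def k_psi(x, y):
    return np.mean([rbf(x, np.roll(y, k)) for k in range(len(y))])

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))
G = np.array([[k_psi(a, b) for b in X] for a in X])
print(np.linalg.eigvalsh(G).min())   # >= 0 up to numerical round-off
```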
3
Max-Margin Invariant Features
The previous section utilized a group integration approach to arrive at a theoretically invariant non-linear
SVM. It however does not address the Unlabeled Transformation problem, i.e. the kernel kΨ(x, y) =
⟨ΨH φ(x), ΨH φ(y)⟩ = ⟨∫G φ(gx) dgH, ∫G φ(gy) dgH⟩ still requires observing transformed versions
of the labelled input sample, namely {gx | gx ∈ XG} (or at least one of the labelled samples if we
utilize Lemma 2.6). We now present our core approach, called Max-Margin Invariant Features (MMIF),
that does not require the observation of any transformed labelled training sample whatsoever.
Assume that we have access to an unlabeled set of M templates T = {ti}i={1,...,M}. We assume that
we can observe all transformations under a unitary-group G, i.e. we have access to TG = {g ti | ∀g ∈
G}i={1,...,M}. Also, assume we have access to a set X = {xj}j={1,...,D} of labelled data with N
classes which are not transformed. We can extract an M-dimensional invariant kernel feature for each
xj ∈ X as follows. Let the invariant kernel feature be l(x) ∈ RM to explicitly show the dependence
on x. Then the i-th dimension of l for any particular x is computed as

l(x)i = ⟨φ(x), ΨH φ(ti)⟩ = ⟨φ(x), ∫G gH φ(ti) dgH⟩ = ⟨φ(x), ∫G φ(g ti) dgH⟩        (2)

The first equality utilizes Lemma 2.6 and the third equality uses Theorem 2.4. This is equivalent
to observing all transformations of x since ⟨φ(x), ΨH φ(ti)⟩ = ⟨ΨH φ(x), φ(ti)⟩ using Lemma 2.3.
Thereby we have constructed a feature l(x) which is invariant to G without ever needing to observe
transformed versions of the labelled vector x. We now briefly describe the training of the MMIF feature
extractor. The matching metric we use for this study is the normalized cosine distance.
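As a concrete illustration of Eq. (2), here is a hedged sketch (not the authors' code; the cyclic-shift group, the RBF kernel and all names are assumptions) that builds the M-dimensional invariant kernel feature l(x) from an unlabeled transformed template set:

```python
# Hypothetical sketch of the invariant kernel feature l(x) of Eq. (2):
# l(x)_i = <phi(x), Psi_H phi(t_i)> = mean_g k(x, g t_i),
# with cyclic shifts standing in for the unitary group G.
import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def invariant_feature(x, templates_orbits, gamma=0.5):
    """templates_orbits[i] holds all observed transformations g t_i of template t_i."""
    return np.array([np.mean([rbf(x, gt, gamma) for gt in orbit])
                     for orbit in templates_orbits])

rng = np.random.default_rng(0)
d, M = 8, 5
templates = rng.normal(size=(M, d))                                        # unlabeled templates t_i
orbits = [np.stack([np.roll(t, k) for k in range(d)]) for t in templates]  # the set T_G

x = rng.normal(size=d)
l_x = invariant_feature(x, orbits)
l_gx = invariant_feature(np.roll(x, 3), orbits)   # a transformed test point
print(np.allclose(l_x, l_gx))                     # True: l is G-invariant
```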
Training MMIF SVMs. To learn a K-dimensional MMIF feature (potentially independent of
N), we learn K independent binary-class linear SVMs. Each SVM trains on the labelled dataset
l(X) = {l(xj) | j = 1, ..., D}, with each sample being labelled +1 for some subset of the N classes
(potentially just one class) and the rest being labelled −1. This leads us to a classifier of the form
ωk = Σj yj αj l(xj). Here, yj is the label of xj for the k-th SVM. It is important to note that the
unlabeled data was only used to extract l(xj). Having multiple classes randomly labelled as positive
allows the SVM to extract some feature that is common between them. This increases generalization
by forcing the extracted feature to be more general (shared between multiple classes) rather than
being highly tuned to a single class. Any K-dimensional MMIF feature can be trained through this
technique, leading to a higher-dimensional feature vector useful in cases where one has limited labelled
samples and classes (N is small). During feature extraction, the K inner products (scores) of the
test sample x′ with the K distinct binary-class SVMs provide the K-dimensional MMIF feature
vector. This feature vector is highly discriminative due to the max-margin nature of SVMs while
being invariant to G due to the invariant kernels.
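A hedged sketch of this training loop (illustrative only; the random class splits, LinearSVC and all names are assumptions, and the matrix L of invariant features is assumed to be precomputed as in the previous sketch):

```python
# Hypothetical MMIF training sketch: K binary linear SVMs on the invariant
# features l(X), each with a random subset of classes labelled +1; the MMIF
# feature of a test point is the vector of K SVM decision scores.
import numpy as np
from sklearn.svm import LinearSVC

def train_mmif_svms(L, labels, K=50, pos_classes_per_svm=3, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    svms = []
    for _ in range(K):
        pos = rng.choice(classes, size=pos_classes_per_svm, replace=False)
        y = np.where(np.isin(labels, pos), 1, -1)
        svms.append(LinearSVC(C=1.0).fit(L, y))
    return svms

def mmif(l_x, svms):
    """Concatenate the K SVM scores for a single invariant feature vector l(x)."""
    return np.array([svm.decision_function(l_x[None, :])[0] for svm in svms])

# L: (D, M) matrix of invariant features of the labelled set; labels: (D,) class ids
rng = np.random.default_rng(1)
L, labels = rng.normal(size=(200, 5)), rng.integers(0, 20, size=200)
svms = train_mmif_svms(L, labels, K=10)
feature = mmif(rng.normal(size=5), svms)   # K-dimensional MMIF feature
print(feature.shape)
```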
MMIF. Given TG and X, the MMIF feature is defined as MMIF(x′) ∈ RK for any test x′, with
each dimension k being computed as ⟨l(x′), ωk⟩ for ωk = Σj yj αj l(xj), ∀xj ∈ X. Further,
l(x′) ∈ RM ∀x′, with each dimension i being l(x′)i = ⟨φ(x′), ΨH φ(ti)⟩. The process is illustrated in
Fig. 2.
Inheriting transformation invariance from transformed unlabeled data: A special case of semi-supervised learning. MMIF features can learn to be invariant to transformations (G) by observing
them only through TG. They can then transfer the invariance knowledge to new unseen samples from
X, thereby becoming invariant to XG despite never having observed any samples from XG. This
is a special case of semi-supervised learning where we leverage the specific transformations
present in the unlabeled data. This is a very useful property of MMIFs, allowing one to learn
transformation invariance from one source and sample points from another source while having
powerful discrimination and generalization properties. The property can be formally stated as the
following theorem.
Theorem 3.1. (MMIF is invariant to learnt transformations) MMIF(x′) = MMIF(gx′) ∀x′ ∀g ∈ G,
where G is observed only through TG = {g ti | ∀g ∈ G}i={1,...,M}.
Thus we find that MMIF can solve the Unlabeled Transformation Problem. MMIFs have an invariant
and a discriminative component. The invariant component of MMIF allows it to generalize to
new transformations of the test sample whereas the discriminative component allows for robust
classification due to max-margin classifiers. These two properties allow MMIFs to be very useful as
we find in our experiments on face recognition.
Max and Mean Pooling in MMIF. Group integration in practice directly results in mean pooling.
Recent work however, showed that group integration can be treated as a subset of I-theory where one
tries to measure moments (or a subset of them) of the distribution ⟨x, gω⟩, g ∈ G, since the distribution itself
is also an invariant [1]. Group integration can be seen as measuring the mean or the first moment of
the distribution. One can also characterize using the infinite moment or the max of the distribution.
We find in our experiments that max pooling outperforms mean pooling in general. All results in this
paper however, still hold under the I-theory framework.
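A minimal hedged sketch of the two pooling choices (illustrative assumptions as before):

```python
# Hypothetical sketch: mean pooling (group integration) vs. max pooling (the
# "infinite moment") over each template orbit, in the spirit of I-theory [1].
import numpy as np

def pooled_feature(x, templates_orbits, kernel, pool="mean"):
    agg = np.mean if pool == "mean" else np.max
    return np.array([agg([kernel(x, gt) for gt in orbit])
                     for orbit in templates_orbits])

# Example with a linear kernel and cyclic-shift orbits; both poolings are
# invariant to shifting x, since that only permutes the values within an orbit.
rng = np.random.default_rng(0)
orbits = [np.stack([np.roll(t, k) for k in range(6)]) for t in rng.normal(size=(4, 6))]
x = rng.normal(size=6)
for p in ("mean", "max"):
    a = pooled_feature(x, orbits, np.dot, pool=p)
    b = pooled_feature(np.roll(x, 2), orbits, np.dot, pool=p)
    print(p, np.allclose(a, b))   # True, True
```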
MMIF on external feature extractors (deep networks). MMIF does not make any assumptions
regarding its input and hence one can apply it to features extracted from any feature extractor in
general. The goal of any feature extractor is to (ideally) be invariant to within-class transformation
while maximizing between-class discrimination. However, most feature extractors are not trained
to explicitly factor out specific transformations. If we have access to even a small dataset with
the transformation we would like to be invariant to, we can transfer the invariance using MMIFs
(e.g. it is unlikely to observe all poses of a person in datasets, but pose is an important nuisance
transformation).
Modelling general non-unitary transformations. General non-linear transformations such as
out-of-plane rotation or pose variation are challenging to model. Nonetheless, a small variation
in these transformations can be approximated by some unitary G assuming piecewise linearity
through transformation-dependent sub-manifold unfolding [12]. Further, it was found that in practice,
integrating over general transformations produced approximate invariance [9].
[Figure 3 plots: ROC curves; legend values are VR at 0.1% FAR — (a) ∞-DIKF 0.74, 1-DIKF 0.61, NDP-∞ 0.41, NDP-1 0.32, MMIF (Ours) 0.78, VGG Features 0.55, MMIF-VGG (Ours) 0.61; (b) MMIF VGG (Ours) 0.71, VGG 0.56.]
Figure 3: (a) Pose-invariant face recognition results on the semi-synthetic large-scale mugshot database (testing
on 114,750 images). Operating on pixels: MMIF (Pixels) outperforms invariance based methods DIKF [11]
and invariant NDP [9]. Operating on deep features: MMIF trained on VGG-Face features [13] (MMIF-VGG)
produces a significant improvement in performance. The numbers in the brackets represent VR at 0.1% FAR.
(b) Face recognition results on LFW with raw VGG-Face features and MMIF trained on VGG-Face features.
The values in the bracket show VR at 0.1% FAR.
4
Experiments on Face Recognition
As an illustration, we apply MMIFs using two modalities overall: 1) on raw pixels and 2) on deep features
from the pre-trained VGG-Face network [13]. We provide more implementation details and results
discussion in the supplementary.
A. MMIF on a large-scale semi-synthetic mugshot database (Raw-pixels and deep features).
We utilize a large-scale semi-synthetic face dataset to generate the sets TG and X for MMIF. In this
dataset, only two major transformations exist, that of pose variation and subject variation. All other
transformations such as illumination, translation, rotation etc are strictly and synthetically controlled.
This provides a very good benchmark for face recognition, where we want to be invariant to pose
variation and be discriminative for subject variation. The experiment follows the exact protocol
and data as described in [11].³ We test 750 subject identities with 153 pose-varied real-textured
gray-scale images each (a total of 114,750 images) against each other, resulting in about 13 billion
pair-wise comparisons (compared to 6,000 for the standard LFW protocol). Results are reported as
ROC curves along with VR at 0.1% FAR. Fig. 3(a) shows the ROC curves for this experiment. We
find that MMIF features out-performs all baselines including VGG-Face features (pre-trained), DIKF
and NDP approaches thereby demonstrating superior discriminability while being able to effectively
capture pose-invariance from the transformed template set TG . MMIF is able to solve the Unlabeled
Transformation problem by extracting transformation information from unlabeled TG .
B. MMIF on LFW (deep features): Unseen subject protocol. In order to be able to effectively
train under the scenario of general transformations and to challenge our algorithms, we define a new
much harder protocol on LFW. We choose the top 500 subjects with a total of 6,300 images for
training MMIF on VGG-Face features and test on the remaining subjects with 7,000 images. We
perform all-versus-all matching, totalling up to 49 million matches (4 orders of magnitude more than the official
protocol). The evaluation metric is defined to be the standard ROC curve with verification rate
reported at 0.1% false accept rate. We split the 500 subjects into two sets of 250 and use as TG and
X . We do not use any alignment for this experiment, and the faces were cropped according to [18].
Fig. 3(b) shows the results of this experiment. We see that MMIF on VGG features significantly
outperforms raw VGG on this protocol, boosting the VR at 0.1% FAR from 0.56 to 0.71. This
demonstrates that MMIF is able to generate invariance for highly non-linear transformations that are
not well-defined rendering it useful in real-world scenarios where transformations are unknown but
observable.
³ We provide more details in the supplementary. Also note that we do not need to utilize identity information; all
that is required is the fact that a set of pose-varied images belongs to the same subject. Such data can be obtained
through temporal sampling.
5
Main Experiments: Detailed notes supplementing the main paper.
A. MMIF on a large-scale semi-synthetic mugshot database (Raw-pixels and deep features).
MMIF template set TG and X . We utilize a large-scale semi-synthetic face dataset to generate
the sets TG and X for MMIF. The face textures are sampled from real-faces although the poses are
rendered using 3D model fit to each face independently, hence the dataset is semi-synthetic. This
semi-synthetic dataset helps us to evaluate our algorithm in a clean setting, where there exists only
one challenging nuisance transformation (pose variation). Therefore G models pose variation in faces.
We utilize the same pose variation dataset generation procedure as described in [11] in order for a fair
comparison. The poses were rendered varying from −40◦ to 40◦ (yaw) and −20◦ to 20◦ (pitch) in
steps of 5◦ using 3D-GEM [15]. The total number of images we generate is 153 × 1000 = 153, 000
images. We align all faces by the two eye-center locations in a 168 × 128 crop.
Protocol. Our first experiment is a direct comparison with approaches similar in spirit to ours,
namely ℓ∞-DIKF and ℓ1-DIKF [11] and NDP-ℓ∞ and NDP-ℓ1 [9, 1]. We train on 250 subjects
(38,250 images) and test each method on the remaining 750 subjects (114,750 images), matching all
pose-varied images of a subject to each other. DIKF follows the same protocol as in [11]. For MMIF,
we utilize the first 125 × 153 images (125 subjects with 153 poses each) as TG and the next 125 × 153
images as X . A total of 500 SVMs were trained on subsets of X (10 randomly chosen subjects per
SVM with all images of 3 of those 10 subjects, again randomly chosen, being +1 and the rest being
−1). Note that although X in this case contains pose variation, we do not integrate over them to
generate invariance. All explicit invariance properties are generated through integration over TG . For
testing, we compare all 153 images of the remaining unseen 750 subjects against each other (114,750
images). The algorithms are therefore tested on about 13 billion pair wise comparisons. Results are
reported as ROC curves along with VR at 0.1% FAR. For this experiment, we report results working
on 1) raw pixels directly and 2) 4096 dimensional features from the pre-trained VGG-Face network
[13]. As a baseline, we also report results on using the VGG-Face features directly.
Results. Fig.3(a) shows the ROC curves for this experiment. We find that MMIF features out-perform
both DIKF and NDP approaches thereby demonstrating superior discriminability while being able to
effectively capture pose-invariance from the transformed template set TG . We find that VGG-Face
features suffer a handicap due to the images being grayscale. Nonetheless, MMIF is able to transfer
pose-invariance from TG onto the VGG features. This significantly boosts performance owing to the
fact that the main nuisance transformation is pose. MMIF being explicitly pose invariant along with
solving the Unlabeled Transformation Problem is able to help VGG features while preserving the
discriminability of the VGG features. In fact, the max-margin SVMs further add discriminability.
This illustrates in a clean setting (dataset only contains synthetically generated pose variation as
nuisance transformation), that MMIF is able to work well in conjunction with deep learning features,
thereby rendering itself immediately usable in more realistic settings. Our next set of experiments
focus on this exact aspect.
B. MMIF on LFW (deep features).
Unseen subject protocol. LFW [7] has received a lot of attention in the recent years, and algorithms
have approached near human accuracy on the original testing protocol. In order to be able to
effectively train under the scenario of general transformations and to challenge our algorithms, we
define a new much harder protocol on LFW. Instead of evaluating on about 6000 pair wise matches,
we pair wise match on all images of subjects not seen in training. We have no way of modelling these
subjects whatsoever, making this a difficult task. We utilize 500 subjects and all their images for
training and test on the remaining 5249 subjects and all of their images. To use maximum amount of
data for training, we pick the top 500 subjects with the most number of images available (about 6,300
images). The test data thus contains about 7000 images. The number of test pairwise matches is
about 49 million, four orders of magnitude larger than the 6000 matches that the original LFW testing
protocol defined. The evaluation metric is defined to be the standard ROC curve with verification rate
reported at 0.1% false accept rate.
MMIF template set TG and X . We split the 500 subjects data into two parts of 250 subjects each.
We use the 250 subjects with the most number of images as transformed template set TG and use the
rest of the 250 subjects as X . Note that in this experiment, the transformations considered are very
generic and highly non-linear making it a difficult experiment. We do not use any alignment for this
experiment, and the faces were cropped according to [18].
Protocol. For MMIF, we process the kernel features from the transformed template set TG exactly
as in the previous experiment A. Similarly, we learn a total of 500 SVMs on subsets of X following
the same protocol as the previous experiment.
Results. Fig.3(b) shows the results of this experiment. We see that MMIF on VGG features
significantly outperforms raw VGG on this protocol, boosting the VR at 0.1% FAR from 0.56 to
0.71. This suggests that MMIF can be used in conjunction with pre-trained deep features. In
this experiment, MMIF capitalizes on the non-linear transformations that exist in LFW, whereas
in the previous experiment on the semi-synthetic dataset (Experiment A), the transformation was
well-defined to be pose variation. This demonstrates that MMIF is able to generate invariance for
highly non-linear transformations that are not well-defined rendering it useful in real-world scenarios
where transformations are unknown but observable.
6
6.1
Additional Experiments
Large-scale Semi Synthetic Mugshot Data
Motivation: In the main paper, the transformations were observed only through unlabeled TG while
X is only meant to provide labeled untransformed data. However, during our experiments in the main
paper, even though we do not explicitly pool over the transformations in X, we utilize all transformations
for training the SVMs. In order to be closer to our theoretical setting, we now run MMIF on raw
pixels and VGG-Face features [13] while constraining the number of images the SVMs train on to 30
random images for each subject.
MMIF Template set TG and X : We utilize a
large scale semi-synthetic face dataset to generate the template set TG for MMIF. The face
textures are sampled from real faces and the
poses are rendered using a 3D model fit to each
face independently, making the dataset semi-synthetic. This semi-synthetic dataset helps us
evaluate our algorithm in a clean setting, where
there exists only one challenging nuisance transformation (pose variation). Therefore G models
pose variation in faces. We utilize the same
pose variation dataset generation procedure as
described in [11] in order for a fair comparison.
The poses were rendered varying from −40◦ to
40◦ (yaw) and −20◦ to 20◦ (pitch) in steps of
5◦ using 3D-GEM [15]. The total number of
images we generate is 153 x 1000 = 153,000
images. We align all faces by the two eye-center
locations in a 168 × 128 crop. Unlike our experiment presented in the main paper on this dataset,
the template set X is constrained to include only
30 randomly selected poses that TG contained. This is done to better simulate a real-world
setting where through X we would only observe
faces at a few random poses.
[Figure 4 plot: ROC curves; legend values are VR at 0.1% FAR — ∞-DIKF 0.74, 1-DIKF 0.61, NDP-∞ 0.41, NDP-1 0.32, VGG Features 0.55, MMIF-raw (Ours) 0.78, MMIF-VGG (Ours) 0.61, MMIF-cons-raw (Ours) 0.43, MMIF-cons-VGG (Ours) 0.65.]
Figure 4: Pose-invariant face recognition results on
the semi-synthetic large-scale mugshot database (testing on 114,750 images). Operating on deep features:
MMIF-cons-VGG trained on VGG-Face features [13]
produces a significant improvement in performance over
pure VGG features even though it utilizes a constrained
X set. Interestingly, MMIF-cons-VGG almost matches
performance of MMIF-VGG while using less data. The
numbers in the brackets represent VR at 0.1% FAR.
MMIF-cons was trained on the entire TG but only 30
random transformations per subject in the X .
Protocol: This experiment is a direct comparison with approaches similar in spirit to ours,
namely ℓ∞-DIKF and ℓ1-DIKF [11] and NDP-ℓ∞ and NDP-ℓ1 [9, 1]. We refer to this MMIF setting
as MMIF-cons (constrained). We train on 250 subjects (38,250 images) and test each
method on the remaining 750 subjects (114,750 images), matching all pose-varied images of a subject
to each other. DIKF follows the same protocol as in [11].
For MMIF, we utilize the first 125 x 153 images (125 subjects with 153 poses each) as the template
set TG . Thus, TG remains exactly the same as the protocol in the main paper. The template set X is
generated by choosing 30 random poses (for every subject) of the next 125 subjects. A total of 500
SVMs are trained on X with a random subset of 5 subjects being labeled +1 and the rest labeled -1.
It’s important to note that since X does not contain transformations that are observed in its entirety,
all explicit invariance properties are generated through integration over TG .
For testing, we follow the same protocol as in the main paper. We compare all 153 images of the
remaining unseen 750 subjects against each other (114,750 images). The algorithms are therefore
tested on about 13 billion pair wise comparisons. Results are reported as ROC curves along with
the VR at 0.1% FAR. For this experiment, we report results working on 1) raw pixels directly and
2) 4096 dimensional features from the pre-trained VGG-Face network [13]. As a baseline, we also
report results on using the VGG-Face features directly.
Results: Fig. 4 shows the ROC curves for this experiment. We find that even though we train SVMs
for MMIF-cons-VGG on a constrained version of X, it outperforms raw VGG features. Although
we do observe that MMIF-cons-raw outperforms NDP methods, thereby demonstrating superior
discriminability, it fails to match the original MMIF-raw method performance. Interestingly however,
MMIF-cons-VGG matches MMIF-VGG features in performance despite being trained on much less
data (30 instead of 153 images per subject). Thus, we find that MMIF, when trained on a good feature
extractor, can provide added benefits of discrimination despite having fewer labeled samples to train
on.
6.2
IARPA IJB-A Janus
In this experiment, we explore how the number of SVMs influences the recognition performance on a
large scale real-world dataset, namely the IARPA Janus Benchmark A (IJB-A) dataset.
Data: We work on the verification protocol (1:1 matching) of the original dataset IJB-A Janus.
This subset consists of 5547 image templates that map to 492 distinct subjects with each template
containing (possibly) multiple images. The images are cropped with respect to bounding boxes that
are specified by the dataset for all labeled images. The cropped images are then re-sized to 244 x 244
pixels in accordance with the requirements of the VGG face model. Explicit pose invariance (MMIF)
is then applied to these general face descriptors.
MMIF Template set TG and X: In order to effectively train under the scenario of general
transformations, we define a new protocol on the Janus dataset similar to the LFW protocol
defined in the main paper. This protocol is suited for MMIF since we explicitly generate invariance
to transformations that exist in Janus data. We utilize the first 100 subjects and all the templates
that map to these subjects (23723 images) for training MMIF and test on the remaining 392
subjects (27363 images). To make use of the maximum amount of data for training, we pick the top
100 subjects with the most number of images; the rest are all utilized for testing. Our training
dataset is further split into templates TG and X similar to our LFW protocol in the main paper. We
use the first 50 subjects (of the top 100 subjects) as TG and the rest as X in order to maximize the
transformations that we generate invariance towards. To showcase the ability of MMIF to be used
in conjunction with deep learning techniques, similar to our LFW experiment in the main paper, we
train and test on VGG-Face features [13] on the Janus data.
[Figure 5 plot: ROC curves; legend values are VR at 0.1% FAR — VGG MMIF-100 SVMs 0.47, VGG MMIF-250 SVMs 0.51, VGG MMIF-500 SVMs 0.51.]
Figure 5: Results of MMIF trained on VGG-Face features on the IARPA IJB-A Janus dataset for 100, 250
and 500 SVMs. The number in the bracket denotes VR at 0.1% FAR.
Protocol: As in our LFW experiment, we split the training data into two templates - TG and X .
Similarly to all MMIF protocols in this paper, we train a total of 100, 250 and 500 SVMs on subsets
of X following the same protocol. We perform pairwise comparisons for the entirety of the test
data (∼ 750 million image comparisons) which far exceeds the number of comparisons defined in
the original testing protocol (∼ 110, 000 template comparisons) thereby making this protocol much
larger and harder. Recall that throughout this supplementary and the main paper we always test on
completely unseen subjects. The evaluation metric is defined to be the standard ROC curve using
cosine distance.
Results: Fig. 5 shows the ROC curves for this experiment with the new, much larger and harder protocol.
We find that even with just 100 SVMs or 100 max-margin feature extractors, the performance is close
to that of 500 feature extractors. This suggests that though the SVMs provide enough discrimination,
the invariant kernel provides the bulk of the recognition performance by explicitly being invariant to the
transformations in TG. Hence, our proposed invariant kernel is effective at learning invariance
towards transformations present in an unlabeled dataset. We provide these curves as baselines for
future work focusing on the problem of learning unlabeled transformations from a given dataset.
7
7.1
Proofs of theoretical results
Proof of Lemma 2.1
Proof. We have

g′ ∫G gω dg = ∫G g′gω dg = ∫G g″ω dg″ = ∫G gω dg,

since the normalized Haar measure is invariant, i.e. dg = dg′. Intuitively, g′ simply rearranges the
group integral owing to elementary group properties.
7.2
Proof of Lemma 2.2
Proof. We have

Ψ^T = (∫G g dg)^T = ∫G g^T dg = ∫G g^{-1} dg^{-1} = Ψ,

using the fact that g ∈ G ⇒ g^{-1} ∈ G and dg = dg^{-1}.
7.3
Proof of Lemma 2.3
Proof. We have

ΨΨ = ∫G ∫G gh dg dh        (3)
   = ∫G ∫G g′ dg′ dh        (4)
   = ∫G dh ∫G g′ dg′        (5)
   = Ψ,        (6)

since the Haar measure is normalized (∫G dg = 1) and invariant. Also, for any ω, ω′ ∈ Rd, we have
⟨ω, Ψω′⟩ = ∫G ⟨ω, gω′⟩ dg = ∫G ⟨g^{-1}ω, ω′⟩ dg^{-1} = ⟨Ψω, ω′⟩.
7.4
Proof of Theorem 2.4
Proof. We have ⟨φ(gx), φ(gy)⟩ = ⟨φ(x), φ(y)⟩ = ⟨gH φ(x), gH φ(y)⟩, since the kernel k is unitary.
Here we define gH φ(x) as the action of gH on φ(x). Thus, the mapping gH preserves the dot-product
in H while reciprocating the action of g. This is one of the requirements of a unitary operator;
however, gH also needs to be linear. We note that linearity of gH can be derived from the linearity of the
inner product and its preservation under gH in H. Specifically, for an arbitrary vector p and a scalar
α, we have

‖αgH p − gH(αp)‖² = ⟨αgH p − gH(αp), αgH p − gH(αp)⟩        (7)–(8)
= ‖αgH p‖² + ‖gH(αp)‖² − 2⟨αgH p, gH(αp)⟩        (9)
= |α|²‖p‖² + ‖αp‖² − 2α²⟨p, p⟩ = 0.        (10)

Similarly, for vectors p, q, we have ‖gH(p + q) − (gH p + gH q)‖² = 0.
We now prove that the set GH is a group. We start with proving the closure property. We have, for any
fixed gH, g′H ∈ GH,

gH g′H φ(x) = gH φ(g′x) = φ(gg′x) = φ(g″x) = g″H φ(x).

Since g″ ∈ G, therefore g″H ∈ GH by definition. Also, gH g′H = g″H and thus closure is established.
Associativity, identity and inverse properties can be proved similarly. The set GH = {gH | gH :
φ(x) → φ(gx) ∀g ∈ G} is therefore a unitary-group in H.
7.5
Proof of Theorem 2.5
Proof. Since ΨH ω* is a perfect separator for {ΨH φ(X)}, ∃ρ′ > 0 s.t.
min_i yi (ΨH φ(xi))^T (ΨH ω*) ≥ ρ′ ∀{xi, yi} ∈ X.
Using Theorem 2.4 and Lemma 2.6, we have, for any fixed g′H ∈ GH,

(ΨH φ(xi))^T (ΨH ω*) = (g′H φ(xi))^T (ΨH ω*).

Hence,

min_i yi (g′H φ(xi))^T (ΨH ω*) = min_i yi (ΨH φ(xi))^T (ΨH ω*) ≥ ρ′   for every g′H ∈ GH (equivalently, every g′ ∈ G).        (11)–(12)

Thus, ΨH ω* is a perfect separator for {φ(XG)} with a margin of at least ρ′. It also implies that a
max-margin separator of {ΨH φ(X)} is also a max-margin separator of {φ(XG)}.
7.6
Proof of Lemma 2.6
Proof. We have ⟨Ψx′, Ψω′⟩ = ∫G ⟨gx′, Ψω′⟩ dg = ∫G ⟨g′x′, Ψω′⟩ dg = ⟨g′x′, Ψω′⟩ ∫G dg = ⟨g′x′, Ψω′⟩.
In the second equality, we fix any group element g′ ∈ G, since the inner product is invariant using
the argument ⟨ω, Ψω′⟩ = ⟨g′ω, Ψω′⟩. This is true using Lemma 2.1 and the fact that G is unitary.
Further, the final equality utilizes the fact that the Haar measure dg is normalized.
7.7
Proof of Theorem 3.1
Proof. Given TG and X, the MMIF feature is defined as MMIF(x′) ∈ RK for any test x′, with each
dimension k being computed as ⟨l(x′), ωk⟩ for ωk = Σj yj αj l(xj), ∀xj ∈ X. Further,
l(x′) ∈ RM ∀x′, with each dimension i being l(x′)i = ⟨φ(x′), ΨH φ(ti)⟩. Here, ΨH = ∫GH gH dgH,
where gH in the RKHS corresponds to the group action of g ∈ G acting in the space of X.
We therefore have, for the i-th dimension of l(x′),

l(x′)i = ⟨φ(x′), ΨH φ(ti)⟩        (13)
= ⟨φ(x′), ∫GH gH φ(ti) dgH⟩        (14)
= ⟨φ(x′), ∫GH g′H^{-1} gH φ(ti) dgH⟩        (15)
= ⟨φ(x′), g′H^{-1} ∫GH gH φ(ti) dgH⟩        (16)
= ⟨g′H φ(x′), ∫GH gH φ(ti) dgH⟩        (17)
= ⟨φ(g′x′), ΨH φ(ti)⟩        (18)
= l(g′x′)i   ∀g′ ∈ G.        (19)

Here, in line (15) we utilize the closure property of a group (since gH forms a group according to
Theorem 2.4). Line (17) utilizes the fact that gH is unitary, and finally line (18) uses Theorem 2.4. Hence
we find that every element of l(x′) is invariant to G observed only through TG, and thus trivially,
MMIF(x′) = MMIF(g′x′) for any g′ ∈ G observed only through TG.
| 7 |
arXiv:1703.03963v1 [] 11 Mar 2017
On Solving Travelling Salesman Problem with Vertex
Requisitions∗
Anton V. EREMEEV
Omsk Branch of Sobolev Institute of Mathematics SB RAS,
Omsk State University n.a. F.M. Dostoevsky
eremeev@ofim.oscsbras.ru
Yulia V. KOVALENKO
Sobolev Institute of Mathematics SB RAS,
julia.kovalenko.ya@yandex.ru
Received: / Accepted:
Abstract: We consider the Travelling Salesman Problem with Vertex Requisitions, where
for each position of the tour at most two possible vertices are given. It is known that
the problem is strongly NP-hard. The proposed algorithm for this problem has a lower time
complexity than the previously known one. In particular, almost all feasible instances
of the problem are solvable in O(n) time using the new algorithm, where n is the number of
vertices. The developed approach also helps in fast enumeration of a neighborhood in the
local search and yields an integer programming model with O(n) binary variables for the
problem.
Keywords: Combinatorial optimization, System of vertex requisitions, Local search, Integer
programming.
MSC: 90C59, 90C10.
1 INTRODUCTION
The Travelling Salesman Problem (TSP) is one of the well-known NP-hard combinatorial optimization problems [1]: given a complete arc-weighted digraph with n vertices, find
a shortest travelling salesman tour (Hamiltonian circuit) in it.
The TSP with Vertex Requisitions (TSPVR) was formulated by
A.I. Serdyukov in [2]: find a shortest travelling salesman tour, passing at i-th position
∗ This research of the authors is supported by the Russian Science Foundation Grant (project no. 15-1110009).
a vertex from a given subset X i , i = 1, . . . , n. A special case where |X i | = n, i = 1, . . . , n, is
equivalent to the TSP.
This problem can be interpreted in terms of scheduling theory. Consider a single machine
that may perform a set of operations X = {x1 , . . . , xn }. Each of the identical jobs requires
processing all n operations in such a sequence that the i-th operation belongs to a given
subset X i ⊆ X for all i = 1, . . . , n. A setup time is needed to switch the machine from
one operation of the sequence to another. Moreover, after execution of the last operation
of the sequence the machine requires a changeover to the first operation of the sequence to
start processing of the next job. The problem is to find a feasible sequence of operations,
minimizing the cycle time.
TSP with Vertex Requisitions where |X i | ≤ k, i = 1, . . . , n, was called k-TSP with
Vertex Requisitions (k-TSPVR) in [2]. The complexity of k-TSPVR was studied in [2]
for different values of k on graphs with small vertex degrees. In [3] A.I. Serdyukov proved
the NP-hardness of 2-TSPVR in the case of complete graph and showed that almost all
feasible instances of the problem are solvable in O(n^2) time. In this paper, we propose an
algorithm for 2-TSPVR with time complexity O(n) for almost all feasible problem instances.
The developed approach also has some applications to local search and integer programming
formulation of 2-TSPVR.
The paper has the following structure. In Section 2, a formal definition of 2-TSPVR is
given. In Section 3, an algorithm for this problem is presented. In Section 4, a modification
of the algorithm is proposed with an improved time complexity and it is shown that almost
all feasible instances of the problem are solvable in time O(n). In Section 5, the developed
approach is used to formulate and enumerate efficiently a neighborhood for local search. In
Section 6, this approach allows us to formulate an integer programming model for 2-TSPVR
using O(n) binary variables. The last section contains the concluding remarks.
2 PROBLEM FORMULATION AND ITS HARDNESS
2-TSP with Vertex Requisitions is formulated as follows.
Let G = (X, U)
be a complete arc-weighted digraph, where X = {x1 , . . . , xn } is the set of vertices,
U = {(x, y) : x, y ∈ X, x 6= y} is the set of arcs with non-negative arc weights ρ(x, y),
(x, y) ∈ U . Besides that, a system of vertex subsets (requisitions) X i ⊆ X, i = 1, . . . , n,
is given, such that 1 ≤ |X i | ≤ 2 for all i = 1, . . . , n.
Let F denote the set of bijections from X_n := {1, . . . , n} to X such that f(i) ∈ X^i, i = 1, . . . , n, for all f ∈ F. The problem consists in finding a mapping f* ∈ F such that ρ(f*) = min_{f∈F} ρ(f), where ρ(f) = Σ_{i=1}^{n−1} ρ(f(i), f(i+1)) + ρ(f(n), f(1)) for all f ∈ F. Later on the
symbol I is used for the instances of this problem.
Any feasible solution uses only the arcs that start in a subset X i and end in X i+1 for
some i ∈ {1, . . . , n} (we assume n + 1 := 1). Other arcs are irrelevant to the problem and we
assume that they are not given in a problem input I.
2-TSPVR is strongly NP-hard [3]. The proof of this fact in [3] is based on a reduction of
the Clique problem to a family of instances of 2-TSPVR with integer input data, bounded by a
polynomial in problem length. Therefore, in view of sufficient condition for non-existence of
Fully Polynomial-Time Approximation Scheme (FPTAS) for strongly NP-hard problems [4],
the result from [3] implies that 2-TSPVR does not admit an FPTAS, provided that P6=NP.
The k-TSPVR with k ≥ 3 cannot be approximated with any constant or polynomial factor
of the optimum in polynomial time, unless P=NP, as follows from [5].
3 SOLUTION METHOD
Following the approach of A.I. Serdyukov [3], let us consider a bipartite graph Ḡ = (Xn , X, Ū)
where the two subsets of vertices of bipartition Xn , X have equal sizes and the set of edges
is Ū = {{i, x} : i ∈ Xn , x ∈ X i }. Now there is a one-to-one correspondence between the set
of perfect matchings W in the graph Ḡ and the set F of feasible solutions to a problem
instance I: Given a perfect matching W ∈ W of the form {{1, x1 }, {2, x2 }, . . . , {n, xn }}, this
mapping produces the tour (x1 , x2 , . . . , xn ).
An edge {i, x} ∈ Ū is called special if {i, x} belongs to all perfect matchings in the graph Ḡ.
Let us also call the vertices of the graph Ḡ special, if they are incident with special edges.
Supposing that Ḡ is given by the lists of adjacent vertices, the special edges and edges
that do not belong to any perfect matching in the graph Ḡ may be efficiently computed by
the Algorithm 1 described below. After that all edges, except for the special edges and those
adjacent to them, are split into cycles. Note that the method of finding all special edges and
cycles in the graph Ḡ was not discussed in [3].
Algorithm 1. Finding special edges in the graph Ḡ
Step 1 (Initialization). Assign Ḡ′ := Ḡ.
Step 2. Repeat Steps 2.1-2.2 while it is possible:
Step 2.1 (Solvability test). If the graph Ḡ′ contains a vertex of degree 0 then problem I
is infeasible, terminate.
Step 2.2 (Finding a special edge). If the graph Ḡ′ contains a vertex z of degree 1, then
store the corresponding edge {z, y} as a special edge and remove its endpoints y and z from Ḡ′ .
Each edge of the graph Ḡ is visited and deleted at most once (which takes O(1) time).
The number of edges |Ū | ≤ 2n. So the time complexity of Algorithm 1 is O(n).
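Algorithm 1 translates almost directly into a few lines of Python. The sketch below is our illustration (the data structures and names are ours, not from the paper): the bipartite graph is stored as two dictionaries of adjacency sets, and vertices of degree one are peeled off a work stack; it assumes every vertex of X appears in some X^i, since otherwise the instance is trivially infeasible.

def find_special_edges(X_req):
    """X_req[i] is the set X^i of allowed vertices for position i.
       Returns the special edges and the residual adjacency lists (the 2-regular
       graph whose components are the cycles), or None if the instance is infeasible."""
    left  = {i: set(s) for i, s in X_req.items()}          # positions -> candidate vertices
    right = {}                                             # vertices  -> positions
    for i, s in left.items():
        for x in s:
            right.setdefault(x, set()).add(i)

    special, removed = [], set()
    stack = [('L', i) for i in left if len(left[i]) <= 1] + \
            [('R', x) for x in right if len(right[x]) <= 1]

    while stack:
        side, z = stack.pop()
        A, B = (left, right) if side == 'L' else (right, left)
        if (side, z) in removed:
            continue
        if len(A[z]) == 0:                                 # isolated vertex: no perfect matching
            return None
        if len(A[z]) > 1:
            continue
        (y,) = A[z]                                        # unique neighbour -> special edge {z, y}
        special.append((z, y) if side == 'L' else (y, z))
        removed.add((side, z))
        removed.add(('R' if side == 'L' else 'L', y))
        for w in B[y]:                                     # delete y from its other neighbours
            if w != z and (side, w) not in removed:
                A[w].discard(y)
                stack.append((side, w))                    # re-examine w, its degree dropped
        A[z].clear(); B[y].clear()

    left  = {i: s for i, s in left.items()  if ('L', i) not in removed}
    right = {x: s for x, s in right.items() if ('R', x) not in removed}
    return special, left, right

# instance of Figure 1: yields the special edges {3,x3}, {4,x4}, {5,x5} (in some order)
X_req = {1: {'x1', 'x2'}, 2: {'x1', 'x2'}, 3: {'x3'}, 4: {'x3', 'x4'},
         5: {'x5', 'x6'}, 6: {'x6', 'x7'}, 7: {'x7', 'x8'}, 8: {'x6', 'x8'}}
print(find_special_edges(X_req))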
Algorithm 1 identifies the case when problem I is infeasible. Further we consider only
feasible instances of 2-TSPVR and bipartite graphs corresponding to them.
After the described preprocessing the resulting graph Ḡ′ is 2-regular (the degree of each
vertex equals 2) and its components are even cycles. The cycles of the graph Ḡ′ can be
computed in O(n) time using the Depth-First Search algorithm (see e.g. [6]). Note that there
are no other edges in the perfect matchings of the graph Ḡ, except for the special edges and
edges of the cycles in Ḡ′ . In what follows, q(Ḡ) denotes the number of cycles in the graph Ḡ
(and in the corresponding graph Ḡ′ ).
Each cycle j, j = 1, . . . , q(Ḡ), contains exactly two maximal (edge disjoint) perfect
matchings, so it does not contain any special edges. Every perfect matching in Ḡ is
uniquely defined by a combination of maximal matchings chosen in each of the cycles and the
[Figure 1 shows the bipartite graph Ḡ = (X_8, X, Ū) for this instance: positions 1-8 in the left-hand part, vertices x_1-x_8 in the right-hand part, with cycle 1, cycle 2 and the special edges marked.]
Figure 1: An instance I with n = 8 and system of vertex requisitions: X 1 = {x1 , x2 },
X 2 = {x1 , x2 }, X 3 = {x3 }, X 4 = {x3 , x4 }, X 5 = {x5 , x6 }, X 6 = {x6 , x7 }, X 7 = {x7 , x8 },
X 8 = {x6 , x8 }. Here the edges drawn in bold define one maximal matching of a cycle, and the
rest of the edges in the cycle define another one. The special edges are depicted by dotted lines.
The edges depicted by dashed lines do not belong to any perfect matching. The feasible solutions of the instance are f 1 = (x1 , x2 , x3 , x4 , x5 , x6 , x7 , x8 ), f 2 = (x1 , x2 , x3 , x4 , x5 , x7 , x8 , x6 ),
f 3 = (x2 , x1 , x3 , x4 , x5 , x6 , x7 , x8 ), f 4 = (x2 , x1 , x3 , x4 , x5 , x7 , x8 , x6 ).
set of all special edges (see Fig. 1). Therefore, 2-TSPVR is solvable by the following algorithm.
Algorithm 2. Solving 2-TSPVR
Step 1. Build the bipartite graph Ḡ, identify the set of special edges and cycles and find
all maximal matchings in cycles.
Step 2. Enumerate all perfect matchings W ∈ W of Ḡ by combining the maximal matchings of cycles and joining them with special edges.
Step 3. Assign the corresponding solution f ∈ F to each W ∈ W and compute ρ(f ).
Step 4. Output the result f* ∈ F such that ρ(f*) = min_{f∈F} ρ(f).
To evaluate Algorithm 2, first note that maximal matchings in cycles are found easily in O(n) time. Now |F| = |W| = 2^{q(Ḡ)}, so the time complexity of Algorithm 2 for solving 2-TSPVR is O(n·2^{q(Ḡ)}), where q(Ḡ) ≤ ⌊n/2⌋ and the last inequality is tight.
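For illustration, a brute-force version of Algorithm 2 can be written as follows (our sketch, not code from the paper); the two maximal matchings of every cycle, the special edges and the weight function ρ are assumed to be available, e.g. from walking the cycles of the residual graph returned by the sketch after Algorithm 1.

import itertools

def solve_by_enumeration(cycle_matchings, special, rho, n):
    """cycle_matchings[j] = (w0_j, w1_j), each a dict {position: vertex} for cycle j;
       special = dict {position: vertex} given by the special edges; rho(x, y) = arc weight.
       Brute-force version of Algorithm 2: O(n * 2^q) time for q cycles."""
    best_cost, best_tour = float('inf'), None
    for choice in itertools.product((0, 1), repeat=len(cycle_matchings)):
        f = dict(special)                                  # start from the forced assignments
        for (w0, w1), k in zip(cycle_matchings, choice):
            f.update(w1 if k else w0)                      # pick one maximal matching per cycle
        cost = sum(rho(f[i], f[i % n + 1]) for i in range(1, n + 1))
        if cost < best_cost:
            best_cost, best_tour = cost, [f[i] for i in range(1, n + 1)]
    return best_cost, best_tour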
4 IMPROVED ALGORITHM
In [3], it was shown that almost all feasible instances of 2-TSPVR have not more than n
feasible solutions and may be solved in quadratic time. To describe this result precisely, let
us give the following
Definition 1 [3] A graph Ḡ = (Xn , X, Ū) is called “good” if it satisfies the inequality
q(Ḡ) ≤ 1.1 ln(n).
Note that any problem instance I which corresponds to a “good” graph Ḡ has at most 2^{1.1 ln(n)} < n^{0.77} feasible solutions.
Let χ̄_n denote the set of “good” bipartite graphs Ḡ = (X_n, X, Ū), and let χ_n be the set of all bipartite graphs Ḡ = (X_n, X, Ū). The results of A.I. Serdyukov from [3] imply
Theorem 1 |χ̄_n|/|χ_n| → 1 as n → ∞.
The proof of Theorem 1 from [3] is provided in the appendix for the sake of completeness.
According to the frequently used terminology (see e.g. [7]), this theorem means that almost
all feasible instances I have at most n^{0.77} feasible solutions and thus they are solvable in O(n^{1.77}) time by Algorithm 2.
Using the approach from [8] we will now modify Algorithm 2 for solving 2-TSPVR in O(q(Ḡ)·2^{q(Ḡ)} + n) time. Let us carry out some preliminary computations before enumerating
all possible combinations of maximal matchings in cycles in order to speed up the evaluation
of objective function. We will call a contact between cycle j and cycle j ′ 6= j (or between
cycle j and a special edge) the pair of vertices (i, i + 1) (we assume n + 1 := 1) in the left-hand
part of the graph Ḡ, such that one of the vertices belongs to the cycle j and the other one
belongs to the cycle j ′ (or the special edge). A contact inside a cycle will mean a pair of
vertices in the left-hand part of a cycle, if their indices differ exactly by one, or these vertices
are (n, 1).
Consider a cycle j. If a contact (i, i + 1) is present inside this cycle, then each of the
two maximal matchings w 0,j and w 1,j in this cycle determines the i-th arc of a tour in the
graph G. Also, if the cycle j has a contact (i, i + 1) to a special edge, each of the two maximal
matchings w 0,j and w 1,j also determines the i-th arc of a tour in the graph G. For each of the
matchings w k,j , k = 0, 1, let the sum of the weights of arcs determined by the contacts inside
the cycle j and the contacts to special edges be denoted by Pjk .
If cycle j contacts to cycle j ′ , j ′ 6= j, then each combination of the maximal matchings
of these cycles determines the i-th arc of a tour in the graph G for any contact (i, i + 1)
between the cycles. If a maximal matching is chosen in each of the cycles, one can sum up
the weights of the arcs in G determined by all contacts between cycles j and j ′ . This yields
four values which we denote by P_{jj′}^{(0,0)}, P_{jj′}^{(0,1)}, P_{jj′}^{(1,0)} and P_{jj′}^{(1,1)}, where the superscripts identify
the matchings chosen in each of the cycles j and j ′ respectively.
Parameters P_j^0, P_j^1, P_{jj′}^{(0,0)}, P_{jj′}^{(0,1)}, P_{jj′}^{(1,0)} and P_{jj′}^{(1,1)} can be found as follows. Suppose that intermediate values of P_j^0, P_j^1 for j = 1, . . . , q(Ḡ) are stored in one-dimensional arrays of size q(Ḡ), and intermediate values of P_{jj′}^{(0,0)}, P_{jj′}^{(0,1)}, P_{jj′}^{(1,0)} and P_{jj′}^{(1,1)} for j, j′ = 1, . . . , q(Ḡ) are stored in two-dimensional arrays of size q(Ḡ) × q(Ḡ). Initially, all of these values are assumed
stored in two-dimensional arrays of size q(Ḡ) × q(Ḡ). Initially, all of these values are assumed
to be zero and they are computed in an iterative way by the consecutive enumeration of pairs
of vertices (i, i + 1), i = 1, . . . , n − 1, and (n, 1) in the left-hand part of the graph Ḡ. When
we consider a pair of vertices (i, i + 1) or (n, 1), at most four parameters (partial sums) are
updated depending on whether the vertices belong to different cycles or to the same cycle, or
one of the vertices is special. So the overall time complexity of the pre-processing procedure
is O(q^2(Ḡ) + n).
Now all possible combinations of the maximal matchings in cycles may be enumerated
using a Grey code (see e.g. [9]) so that the next combination differs from the previous one by
altering a maximal matching only in one of the cycles. Let the binary vector δ = (δ1 , . . . , δq(Ḡ) )
define assignments of the maximal matchings in cycles. Namely, δj = 0, if the matching w 0,j is
chosen in the cycle j; otherwise (if the matching w 1,j is chosen in the cycle j), we have δj = 1.
This way every vector δ is bijectively mapped into a feasible solution fδ to 2-TSPVR.
In the process of enumeration, a step from the current vector δ̄ to the next vector δ changes the maximal matching in one of the cycles j. The new value of the objective function ρ(f_δ) may be computed from the current value ρ(f_δ̄) by the formula

ρ(f_δ) = ρ(f_δ̄) − P_j^{δ̄_j} + P_j^{δ_j} − Σ_{j′∈A(j)} P_{jj′}^{(δ̄_j, δ̄_{j′})} + Σ_{j′∈A(j)} P_{jj′}^{(δ_j, δ_{j′})},

where A(j) is the set of cycles contacting the cycle j. Obviously, |A(j)| ≤ q(Ḡ), so updating the objective function value for the next solution requires O(q(Ḡ)) time, and the overall time complexity of the modified algorithm for solving 2-TSPVR is O(q(Ḡ)·2^{q(Ḡ)} + n).
In view of Theorem 1 we conclude that using this modification of Algorithm 2 almost all
feasible instances of 2-TSPVR are solvable in O(n^{0.77} ln n + n) = O(n) time.
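The following sketch (ours; the array names are assumptions, with Pjj[j][jp][k][l] standing for P_{jj′}^{(k,l)} and assumed stored for both orders of the pair so that the first superscript always refers to cycle j) illustrates the Gray-code enumeration with the O(q(Ḡ)) incremental update of the objective.

def delta_flip(delta, j, P0, P1, Pjj, A):
    """Change in rho(f_delta) when the matching of cycle j is flipped,
       following the update formula above; A[j] lists the cycles contacting j."""
    old, new = delta[j], 1 - delta[j]
    change = P1[j] - P0[j] if new == 1 else P0[j] - P1[j]
    for jp in A[j]:
        change += Pjj[j][jp][new][delta[jp]] - Pjj[j][jp][old][delta[jp]]
    return change

def solve_by_gray_code(q, rho_of_zero, P0, P1, Pjj, A):
    """Enumerate all 2^q matching combinations in Gray-code order;
       rho_of_zero is rho(f_delta) for delta = (0, ..., 0)."""
    delta = [0] * q
    best_cost, best_delta, cost = rho_of_zero, list(delta), rho_of_zero
    for step in range(1, 2 ** q):
        j = (step & -step).bit_length() - 1        # index of the bit flipped by the Gray code
        cost += delta_flip(delta, j, P0, P1, Pjj, A)
        delta[j] ^= 1
        if cost < best_cost:
            best_cost, best_delta = cost, list(delta)
    return best_cost, best_delta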
5 LOCAL SEARCH
A local search algorithm starts from an initial feasible solution. It moves iteratively from one
solution to a better neighboring solution and terminates at a local optimum. The number of
steps of the algorithm, the time complexity of one step, and the value of the local optimum
depend essentially on the neighborhood. Note that neighborhoods, often used for the classical
TSP (e.g. k-Opt, city-swap, Lin-Kernighan [10]), will contain many infeasible neighboring
solutions if applied to 2-TSPVR because of the vertex requisition constraints.
A local search method with a specific neighborhood for 2-TSPVR may be constructed
using the relationship between the perfect matchings in the graph Ḡ and the feasible solutions.
The main idea of the algorithm consists in building a neighborhood of a feasible solution to
2-TSPVR on the basis of a Flip neighborhood of the perfect matching, represented by the
maximal matchings in cycles and the special edges.
Let the binary vector δ = (δ1 , . . . , δq(Ḡ) ) denote the assignment of the maximal matchings
to cycles as above. The set of 2q(Ḡ) vectors δ corresponds to the set of feasible solutions by a
one-to-one mapping fδ . We assume that a solution fδ′ belongs to the Exchange neighborhood
of solution fδ iff the vector δ ′ is within Hamming distance 1 from δ, i.e. δ ′ belongs to the Flip
neighborhood of vector δ.
Enumeration of the Exchange neighborhood takes O(q^2(Ḡ)) time if the preprocessing described in Section 4 is carried out before the start of the local search (without the preprocessing it takes O(nq(Ḡ)) operations). Therefore, for almost all feasible instances I, the Exchange neighborhood may be enumerated in O(ln^2(n)) time.
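Reusing the incremental update delta_flip from the sketch in Section 4, a best-improvement local search over the Exchange neighbourhood can be written, purely for illustration, as follows.

def local_search_flip(delta, cost, P0, P1, Pjj, A):
    """Best-improvement local search over the Exchange (Flip) neighbourhood:
       each step flips the matching of one cycle."""
    q = len(delta)
    improved = True
    while improved:
        improved = False
        changes = [delta_flip(delta, j, P0, P1, Pjj, A) for j in range(q)]
        j_best = min(range(q), key=changes.__getitem__)
        if changes[j_best] < 0:                     # strictly improving neighbour found
            delta[j_best] ^= 1
            cost += changes[j_best]
            improved = True
    return cost, delta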
6 MIXED INTEGER LINEAR PROGRAMMING MODEL
The one-to-one mapping between the maximal matchings in cycles of the graph Ḡ and feasible solutions to 2-TSPVR may be also exploited in formulation of a mixed integer linear
programming model.
Recall that P_j^0 (P_j^1) is the sum of weights of all arcs of the graph G determined by the contacts inside the cycle j and the contacts of the cycle j with special edges, when the maximal matching w^{0,j} (w^{1,j}) is chosen in the cycle j, j = 1, . . . , q(Ḡ). Furthermore, P_{jj′}^{(k,l)} is the sum of weights of arcs in the graph G determined by the contacts between cycles j and j′, if the maximal matchings w^{k,j} and w^{l,j′} are chosen in the cycles j and j′ respectively, k, l = 0, 1, j = 1, . . . , q(Ḡ) − 1, j′ = j + 1, . . . , q(Ḡ). These values are computable in O(n + q^2(Ḡ)) time as shown in Section 4.
Let us introduce the following Boolean variables:
d_j = 0, if matching w^{0,j} is chosen in the cycle j; d_j = 1, if matching w^{1,j} is chosen in the cycle j;   j = 1, . . . , q(Ḡ).
The objective function combines the pre-computed arc weights for all cycles, depending
on the choice of matchings in d = (d1 , . . . , dq(Ḡ) ):
Σ_{j=1}^{q(Ḡ)−1} Σ_{j′=j+1}^{q(Ḡ)} [ P_{jj′}^{(0,0)} (1 − d_j)(1 − d_{j′}) + P_{jj′}^{(0,1)} (1 − d_j) d_{j′} + P_{jj′}^{(1,0)} d_j (1 − d_{j′}) + P_{jj′}^{(1,1)} d_j d_{j′} ] + Σ_{j=1}^{q(Ḡ)} [ P_j^0 (1 − d_j) + P_j^1 d_j ] → min,   (1)

d_j ∈ {0, 1}, j = 1, . . . , q(Ḡ).   (2)
Let us define supplementary real variables in order to remove the non-linearity of the objective function: for k ∈ {0, 1} we assume that p_j^{(k)} ≥ 0 is an upper bound on the sum of weights of arcs in the graph G determined by the contacts of the cycle j, if matching w^{k,j} is chosen in this cycle, i.e. d_j = k, j = 1, . . . , q(Ḡ) − 1.
Then the mixed integer linear programming model has the following form:
Σ_{j=1}^{q(Ḡ)−1} ( p_j^{(0)} + p_j^{(1)} ) + Σ_{j=1}^{q(Ḡ)} ( P_j^0 (1 − d_j) + P_j^1 d_j ) → min,   (3)

p_j^{(0)} ≥ Σ_{j′=j+1}^{q(Ḡ)} P_{jj′}^{(0,0)} (1 − d_j − d_{j′}) + Σ_{j′=j+1}^{q(Ḡ)} P_{jj′}^{(0,1)} (d_{j′} − d_j),   j = 1, . . . , q(Ḡ) − 1,   (4)

p_j^{(1)} ≥ Σ_{j′=j+1}^{q(Ḡ)} P_{jj′}^{(1,0)} (d_j − d_{j′}) + Σ_{j′=j+1}^{q(Ḡ)} P_{jj′}^{(1,1)} (d_j + d_{j′} − 1),   j = 1, . . . , q(Ḡ) − 1,   (5)

p_j^{(k)} ≥ 0,   k = 0, 1,   j = 1, . . . , q(Ḡ) − 1,   (6)

d_j ∈ {0, 1},   j = 1, . . . , q(Ḡ).   (7)
Note that if matching w^{0,j} is chosen for the cycle j in an optimal solution of problem (3)-(7), then inequality (4) holds for p_j^{(0)} as equality and p_j^{(1)} = 0. Analogously, if matching w^{1,j} is chosen for the cycle j, then inequality (5) holds for p_j^{(1)} as equality and p_j^{(0)} = 0. Therefore,
problems (1)-(2) and (3)-(7) are equivalent because a feasible solution of one problem corresponds to a feasible solution of another problem, and an optimal solution corresponds to an
optimal solution.
The number of real variables in model (3)-(7) is (2q(Ḡ) − 2), and the number of Boolean variables is q(Ḡ). The number of constraints is O(q(Ḡ)), where q(Ḡ) ≤ ⌊n/2⌋. The proposed model may be used for computing a lower bound on the objective function or in branch-and-bound algorithms, even if the graph Ḡ is not “good”.
Note that there are a number of integer linear programming models in the literature on the classical TSP, involving O(n^2) Boolean variables. Model (3)-(7) for 2-TSPVR has at most ⌊n/2⌋ Boolean variables, and for almost all feasible instances the number of Boolean variables is O(ln(n)).
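As an illustration (ours, not part of the original paper), model (3)-(7) can be stated in a few lines with a generic modelling library; here we use PuLP, and the arrays P0, P1 and Pjj are the pre-computed partial sums of Section 4, with Pjj[j][jp][k][l] = P_{jj′}^{(k,l)} for j < jp.

import pulp

def build_model(q, P0, P1, Pjj):
    """Sketch of model (3)-(7) in PuLP; any MIP solver could be used instead."""
    prob = pulp.LpProblem("2-TSPVR", pulp.LpMinimize)
    d = [pulp.LpVariable("d_%d" % j, cat="Binary") for j in range(q)]
    p = [[pulp.LpVariable("p_%d_%d" % (j, k), lowBound=0) for k in (0, 1)]
         for j in range(q - 1)]

    # objective (3)
    prob += (pulp.lpSum(p[j][0] + p[j][1] for j in range(q - 1))
             + pulp.lpSum(P0[j] * (1 - d[j]) + P1[j] * d[j] for j in range(q)))

    for j in range(q - 1):
        # constraint (4)
        prob += p[j][0] >= pulp.lpSum(
            Pjj[j][jp][0][0] * (1 - d[j] - d[jp]) + Pjj[j][jp][0][1] * (d[jp] - d[j])
            for jp in range(j + 1, q))
        # constraint (5)
        prob += p[j][1] >= pulp.lpSum(
            Pjj[j][jp][1][0] * (d[j] - d[jp]) + Pjj[j][jp][1][1] * (d[j] + d[jp] - 1)
            for jp in range(j + 1, q))
    return prob, d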
7 CONCLUSION
We presented an algorithm for solving 2-TSP with Vertex Requisitions, that reduces
the time complexity bound formulated in [3]. It is easy to see that the same approach is
applicable to the problem 2-Hamiltonian Path of Minimum Weight with Vertex
Requisitions, that asks for a Hamiltonian Path of Minimum Weight in the graph G, assuming the same system of vertex requisitions as in 2-TSP with Vertex Requisitions.
Using the connection to perfect matchings in a supplementary bipartite graph and some
preprocessing we constructed a MIP model with O(n) binary variables and a new efficiently
searchable Exchange neighborhood for the problem under consideration.
Further research might address the existence of approximation algorithms with constant
approximation ratio for 2-TSP with Vertex Requisitions.
References
[1] Garey, M.R., Johnson, D.S.: Computers and intractability. A guide to the theory of NP-completeness. W.H. Freeman and Company, San Francisco, CA (1979)
[2] Serdyukov, A.I.: Complexity of solving the travelling salesman problem with requisitions
on graphs with small degree of vertices. Upravlaemye systemi. 26, 73–82 (1985) (In Russian)
[3] Serdyukov, A.I.: On travelling salesman problem with prohibitions. Upravlaemye systemi.
17, 80–86 (1978) (In Russian)
[4] Garey, M.R., Johnson, D.S.: Strong NP-completeness results: Motivation, examples, and
implications. Journal of the ACM. 25, 499–508 (1978)
[5] Serdyukov A.I.: On finding Hamilton cycle (circuit) problem with prohibitions. Upravlaemye systemi. 19, 57–64 (1979) (In Russian)
[6] Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms, 2nd
edition. MIT Press (2001)
[7] Chvatal, V.: Probabilistic methods in graph theory. Annals of Operations Research. 1,
171–182 (1984)
[8] Eremeev, A., Kovalenko, J.: Optimal recombination in genetic algorithms for combinatorial optimization problems: Part II. Yugoslav Journal of Operations Research. 24 (2),
165-186 (2014)
[9] Reingold, E.M., Nievergelt, J., Deo, N.: Combinatorial algorithms: Theory and Practice.
Englewood Cliffs, Prentice-Hall (1977)
[10] Kochetov, Yu. A.: Computational bounds for local search in combinatorial optimization.
Computational Mathematics and Mathematical Physics. 48 (5), 747-763 (2008)
[11] Feller, W.: An Introduction to Probability Theory and Its Applications. Vol. 1. John
Wiley & Sons, New York, NY (1968)
[12] Riordan, J.: An Introduction to Combinatorial Analysis. John Wiley & Sons, New York,
NY (1958)
APPENDIX
Note that A.I. Serdyukov in [3] used the term block instead of term cycle, employed in Section 3
of the present paper. A block was defined in [3] as a maximal (by inclusion) 2-connected
subgraph of graph Ḡ with at least two edges. However, in each block of the graph Ḡ, the
degree of every vertex equals 2 (otherwise F = Ø because the vertices of degree 1 do not
belong to blocks and the vertex degrees are at most 2 in the right-hand part of Ḡ). So, the
notions block and cycle are equivalent in the case of considered bipartite graph Ḡ. We use the
term block in the proof of Theorem 1 below, as in the original paper [3], in order to avoid a
confusion of cycles in Ḡ with cycles in permutations of the set {1, . . . , n}.
Theorem 1 [3] |χ̄n |/|χn | −→ 1 as n → ∞.
Proof. Let Sn be the set of all permutations of the set {1, . . . , n}. Consider a random
permutation s from S_n. Denote by ξ(s) the number of cycles in permutation s. It is known (see e.g. [11]) that the expectation E[ξ(s)] of the random variable ξ(s) is equal to Σ_{i=1}^{n} 1/i and the variance Var[ξ(s)] equals Σ_{i=1}^{n} (i − 1)/i^2. Let S̄_n denote the set of permutations from S_n where the number of cycles is at most 1.1 ln(n). Then, using Chebyshev's inequality [11], we get

|S̄_n|/|S_n| → 1 as n → ∞.   (8)
Now let S_n′ denote the set of permutations from S_n which do not contain cycles of length 1, and let S_n^{(i)} be the set of permutations from S_n which contain a cycle of length 1 with element i, i = 1, . . . , n. Using the principle of inclusion and exclusion [12], we obtain

|S_n \ S_n′| = |∪_{1≤i≤n} S_n^{(i)}| = Σ_{i=1}^{n} |S_n^{(i)}| − Σ_{1≤i≠j≤n} |S_n^{(i)} ∩ S_n^{(j)}| + Σ_{1≤i≠j≠k≤n} |S_n^{(i)} ∩ S_n^{(j)} ∩ S_n^{(k)}| − . . . = n! − C_n^2 (n − 2)! + C_n^3 (n − 3)! − . . . ≤ n!/2 + n!/6 = (2/3) n! = (2/3) |S_n|.

Therefore,

|S_n′| ≥ (1/3) |S_n|.   (9)
Combining (8) and (9), we get
|S̄_n′|/|S_n′| = 1 − |S_n′ \ S̄_n′|/|S_n′| ≥ 1 − 3|S_n′ \ S̄_n′|/|S_n| ≥ 1 − 3|S_n \ S̄_n|/|S_n| → 1 as n → +∞,   (10)

where S̄_n′ = S_n′ ∩ S̄_n.
The values |χ̄n | and |χn \χ̄n | may be bounded, using the following approach. We assign
any permutation s ∈ Sl′ , l ≤ n, a set of bipartite graphs χn (s) ⊂ χn as follows. First of
all let us assign an arbitrary set of n − l edges to be special. Then the non-special vertices {i1 , i2 , . . . , il } ⊂ Xn of the left-hand part, where ij < ij+1 , j = 1, . . . , l − 1, are now
partitioned into ξ(s) blocks, where ξ(s) is the number of cycles in permutation s. Every
cycle (t1 , t2 , . . . , tr ) in permutation s corresponds to some sequence of vertices with indices
{it1 , it2 , . . . , itr } belonging to the block associated with this cycle. Finally, it is ensured that
for each pair of vertices {itj , itj+1 }, j = 1, . . . , r − 1, as well as for the pair {itr , it1 }, there
exists a vertex in the right-hand part X which is adjacent to both vertices of the pair. Except
for special edges and blocks additional edges are allowed in graphs from class χn (s). These
edges are adjacent to the special vertices of the left-hand part such that the degree of any
vertex of the left-hand part is not greater than two. Moreover, additional edges should not
lead to creating new blocks.
There are n! ways to associate vertices of the left-hand part to vertices of the right-hand part, therefore the number of different graphs from class χ_n(s), s ∈ S_l′, l ≤ n, is |χ_n(s)| = C_n^l · (n!/2^{ξ_1(s)}) · h(n, l), where the function h(n, l) depends only on n and l, and ξ_1(s) is the number of cycles of length two in permutation s. Division by 2^{ξ_1(s)} is here due to the fact
that for each block that corresponds to a cycle of length two in s, there are two equivalent
ways to number the vertices in its right-hand part.
Let s = c1 c2 . . . cξ(s) be a permutation from set Sl′ , represented by cycles ci , i = 1, . . . , ξ(s),
and let cj be an arbitrary cycle of permutation s of length at least three, j = 1, . . . , ξ(s).
Permutation s may be transformed into permutation s1 ,
s_1 = c_1 c_2 . . . c_{j−1} c_j^{−1} c_{j+1} . . . c_{ξ(s)},   (11)
by reversing the cycle cj . Clearly, permutation s1 induces the same subset of bipartite graphs
in class χn as the permutation s does. Thus any two permutations s1 and s2 from set Sl′ ,
l ≤ n, induce the same subset of graphs in χn , if one of these permutations may be obtained
from the other one by several transformations of the form (11). Otherwise the two induced
subsets of graphs do not intersect. Besides that, χ_n(s_1) ∩ χ_n(s_2) = ∅ if s_1 ∈ S_{l_1}′, s_2 ∈ S_{l_2}′, l_1 ≠ l_2.
On one hand, if s ∈ S̄l′ , l ≤ n, then χn (s) ⊆ χ̄n . On the other hand, if s ∈ S̃l′ := Sl′ \S̄l′ ,
l < n, then either χn (s) ⊆ χ̄n or, alternatively, χn (s) ⊆ χn \χ̄n may hold. Therefore,
|χ̄_n| ≥ Σ_{l=2}^{n} Σ_{s∈S̄_l′} C_n^l · n!/(2^{ξ_1(s)} 2^{ξ(s)−ξ_1(s)}) · h(n, l) = Σ_{l=2}^{n} Σ_{s∈S̄_l′} C_n^l · (n!/2^{ξ(s)}) · h(n, l) ≥ Σ_{l=2}^{n} |S̄_l′| · C_n^l · (n!/2^{1.1 ln(l)}) · h(n, l),   (12)

|χ_n \ χ̄_n| ≤ Σ_{l=⌊1.1 ln(n)⌋}^{n} Σ_{s∈S̃_l′} C_n^l · (n!/2^{ξ(s)}) · h(n, l) ≤ Σ_{l=⌊1.1 ln(n)⌋}^{n} |S̃_l′| · C_n^l · (n!/2^{1.1 ln(l)}) · h(n, l).   (13)

Now assuming ψ(n) = max_{l=⌊1.1 ln(n)⌋,...,n} |S̃_l′|/|S̄_l′| and taking into account (12), (13) and (10), we obtain

|χ_n \ χ̄_n| / |χ̄_n| ≤ ψ(n) → 0 as n → +∞.   (14)

Finally, the statement of the theorem follows from (14). Q.E.D.
| 8 |
PyCraters: A Python framework for crater function analysis
Scott A. Norris
arXiv:1410.8489v1 [physics.comp-ph] 28 Oct 2014
October 31, 2014
Abstract
We introduce a Python framework designed to automate the most common tasks associated with
the extraction and upscaling of the statistics of single-impact crater functions to inform coefficients of
continuum equations describing surface morphology evolution. Designed with ease-of-use in mind, the
framework allows users to extract meaningful statistical estimates with very short Python programs.
Wrappers to interface with specific simulation packages, routines for statistical extraction of output,
and fitting and differentiation libraries are all hidden behind simple, high-level user-facing functions. In
addition, the framework is extensible, allowing advanced users to specify the collection of specialized
statistics or the creation of customized plots. The framework is hosted on the BitBucket service under
an open-source license, with the aim of helping non-specialists easily extract preliminary estimates of
relevant crater function results associated with a particular experimental system.
1 Introduction
Irradiation by energetic ions is a ubiquitous materials processing technique, used widely in laboratories
and industry for doping, cleaning, and modification of materials surfaces. Under certain environmental
conditions, ion irradiation is observed to induce the spontaneous formation of nanometer-scale structures
such as ripples, dots, and holes [1]. In some contexts, such as the gradual ion-induced degradation of fusion
reactor components, these structures are an artifact to be avoided [2], but more recent observations of well-ordered, high-aspect-ratio structures [3] have led to the consideration of ion irradiation as an inexpensive
means of inducing such structures deliberately. With sufficient understanding, this process could serve as
the basis of an inexpensive, high-throughput means of creating surfaces with desired mechanical, optical, or
electronic properties, ready for immediate application across the existing installed base of ion beam facilities.
A major barrier to predictive understanding of the ion-induced nanostructuring process has been a large
number of competing physical mechanisms, and an accompanying difficulty in estimating their relative magnitudes in different parameter regimes. To the original modeling of energy deposition and its relationship
to sputter yield by Sigmund [4, 5], and the subsequent discovery of a sputter-erosion-driven instability mechanism identified by Bradley and Harper [6], numerous additional mechanisms have since been added. It
has since been discovered that atoms displaced during the collision cascade, but not sputtered from the
surface, produce contributions to the evolution equations that directly compete with the erosive mechanism.
First studied in 1D by Carter and Vishnyakov, and later in two dimensions by Davidovitch et al. [7, 8],
this ”momentum transfer” or “mass redistribution” contribution effectively doubles the number of unknown
parameters in the problem, as every erosive term has a redistributive counterpart, and the magnitudes of
each must in principle be estimated. Intensifying this problem was the realization that strongly-ordered
structures generating the most excitement seemed to be due to the presence of multiple components in the
target [9, 10], whether through contamination, intentional co-deposition, or the use of a two-component target such as III-V compounds. To accurately model such equations requires the introduction of a second field
to track concentrations [11] which, although it can overcome the barriers to ordered structures exhibited by
one-component models [12, 13], again doubles the number of unknown parameters present in the problem.
Recently, the "Crater Function" framework has emerged as a means of rigorously connecting surface
morphology evolution over long spatial and temporal scales to the statistical properties of single ion impacts
[14, 15, 16]. Given the “Crater Function” ∆h (x, y; S) describing the average surface modification due to an
ion impact (with a parametric dependence on the surface properties via the argument S), the multi-scale
analysis within the framework [14] produces contributions to partial differential equations governing the
surface morphology evolution. It can therefore be viewed as a way of estimating many of the unknown
parameters present in these problems by means of atomistic simulation. Originally applied only to pure
materials and using only data from flat surfaces [15], the framework has since been expanded to the case of
binary materials [16], and to enable incorporation of simulation data from curved targets [17]. It has thus
matured to the point where it may be of value to the general community as a parameter estimation tool.
However, to use the framework to this end, one needs the capability to (a) perform numerical simulations of
ion impacts over a variety of surface parameters, (b) extract the necessary statistics from the output of each
simulations, (c) fit parameter-dependent statistics to various appropriate functional forms, and (d) combine
and report the results.
This manuscript describes a Python library to provide all of the above capabilities through a simple and
user-friendly API. Access to various simulation tools is provided via wrappers that automatically create input
files, run the solver, read the output, and save in a common format. A customizable set of statistical analyses
are then run on the common-format output file, and saved for later use under a unique parameter-dependent
filename. A flexible loading mechanism and general purpose fitting library make it easy to load statistics as
a function of arbitrary parameters, and then fit the resulting data points to appropriate smooth nonlinear
functions. Finally, functions utilizing these capabilities are provided to perform all current mathematical
operations indicated by the literature to extract and plot PDE components. Using this library, example
codes that re-obtain existing results within the literature are as short as 20 lines each, and can easily be
modified by end users to begin studying systems of their choosing.
2 Theoretical Background
Crater Functions. If the dominant effects of the impact-induced collision cascade can be assumed to take
place near to the surface of the evolving film, then the normal surface velocity of an ion-bombarded surface
can be represented by an integro-differential equation of the form [18],
v_n = ∫ I(φ(x′)) ∆h(x − x′; S(x′)) dx′,   (1)
where I = I0 cos (φ) is the projected ion flux depending on the local angle of incidence φ (x), ∆h is the “crater
function” describing the average surface response due to single ion impacts, and S describes an arbitrary
parametric dependence of the crater function on the local surface shape. This form has advantages over more
traditional treatments of irradiation-induced morphology evolution. Instead of separate, simplified models
of the processes of ion-induced sputtering [5, 6] and impact-induced momentum transfer [7, 8] – both of
which break down as the angle of incidence approaches grazing – the crater function ∆h naturally includes
components due to both sputtered atoms and redistributed atoms (thus unifying the two approaches), and
can in principle be obtained empirically (thus avoiding inaccuracy at high angles of incidence).
A Generic Framework. Exploiting the typical experimental observation of a separation of spatial scales
between the size of the impact (direct spatial dependence of ∆h) and the typical size of emergent structures
(spatial dependence of φ and S), a multiple-scale analysis was conducted in which the integral in Eq. (1) is
expanded into an infinite series of terms involving the moments of ∆h [14]:
v_n = [ I M̃^{(0)} ] + ε ∇_S · [ I M̃^{(1)} ] + (1/2) ε^2 ∇_S · ∇_S · [ I M̃^{(2)} ] + . . .   (2)

where I is the projected ion flux (a scalar), the ∇_S are co-ordinate free surface divergences, and the M̃^{(i)} are
“effective” moments of the crater function ∆h in increasing tensor order [14]. This result provides intuition
as to which parts of the crater function ∆h are most important to understand morphology evolution, and
represents a general solution for the multiple-scale expansion of Eq. (1) in the sense that it should apply for
any parametric dependencies of the crater function on the surface shape S. (Note that in this co-ordinate
free form, the effective moments are really combinations of the actual moments as described in Ref. [14];
however, in any linearization, they are equivalent).
Example Applications. Equation (2) is fully nonlinear and independent of the specific form of the crater
function. Therefore, to study surface stability, one must first choose a form for the crater function ∆h, and
then linearize the resulting specific instance of Equation (2) about a flat surface. This process was first
demonstrated in Ref. [15], where for simplicity and consistency with available simulation data, the crater
function
∆h = g(x − x′; φ),   (3)
was chosen, which depends parametrically only on the local angle of incidence φ. Inserting this expression
into the general result Eq. (2), linearizing, and then adopting a moving frame of reference to eliminate
translational and advective terms, one obtains to leading order the PDE
∂h/∂t = S_X(θ) ∂^2h/∂x^2 + S_Y(θ) ∂^2h/∂y^2   (4)
where the angle-dependent coefficients SX (θ) and SY (θ) are related to the crater functions via the expressions
S_X(θ) = ∂/∂θ [ I_0 cos(θ) M_x^{(1)}(θ) ]
S_Y(θ) = I_0 cos(θ) cot(θ) M_x^{(1)}(θ)   (5)

where M_x^{(1)} is the component of the (vector) first moment in the projected direction of the ion beam. More
recently, a similar approach has been applied to an extended crater function of the form [17]
∆h = g(x − x′; φ, K_11, K_12, K_22)   (6)
depending additionally on the surface curvatures Kij near the point of impact. It was found that including
this dependency within the crater function reveals additional terms in the coefficient values, which take the
revised form
S_X(θ) = ∂/∂θ [ I_0 cos(θ) M_x^{(1)}(θ) ] + ∂/∂K_11 [ I_0 cos(θ) M^{(0)} ]
S_Y(θ) = I_0 cos(θ) cot(θ) M_x^{(1)}(θ) + ∂/∂K_22 [ I_0 cos(θ) M^{(0)} ]   (7)
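As an illustration of how such expressions are used in practice (this is our sketch, not the PyCraters implementation), Eqs. (5) can be evaluated by fitting M_x^{(1)}(θ) to a smooth function and differentiating numerically; here a synthetic moment curve stands in for fitted simulation data.

import numpy as np

def coefficients_from_first_moment(M1x, theta, I0=1.0, h=1e-4):
    """Evaluate Eqs. (5): S_X = d/dtheta[ I0 cos(theta) M1x(theta) ],
       S_Y = I0 cos(theta) cot(theta) M1x(theta); derivative by central differences."""
    F = lambda t: I0 * np.cos(t) * M1x(t)
    SX = (F(theta + h) - F(theta - h)) / (2 * h)
    SY = I0 * np.cos(theta) / np.tan(theta) * M1x(theta)
    return SX, SY

# smooth stand-in for a fitted redistributive first-moment curve (purely synthetic)
M1x = lambda t: 0.3 * np.sin(2 * t)
theta = np.linspace(0.05, np.pi / 2 - 0.05, 50)   # avoid the cot(theta) singularity at 0
SX, SY = coefficients_from_first_moment(M1x, theta)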
Implications. The practical consequence of such results is that one can directly connect atomistic simulations over small length- and time-scales to continuum equations governing morphology evolution over much
longer scales. If the crater function ∆h and its moments M (i) can be identified as functions of the local
surface configuration (angle, curvature, etc.) by simulation (e.g. [19]) or even experiment (e.g. [20]), then
the expected continuum evolution of the system can be predicted via Eq. (4) with coefficients supplied by
results such as Eqs. (5) or (7). Early applications of these ideas show significant promise for predicting the
outcome of experiments [15] or determining the likely physical mechanisms driving experimental observations
[16]. However, the steps required to do so represent a non-trivial task in simulation and data analysis: an
effective procedure must address questions of
(1) creation or selection of a simulation tool to perform many single-impact simulations
(2) obtaining statistically converged moments at individual parameter combinations,
(3) estimation of derivative values using data from adjacent parameter combinations
(4) a smoothing mechanism to prevent uncertainties at step (2) from being amplified in step (3)
An approach incorporating such steps was first demonstrated in Ref. [15], where molecular dynamics simulations using the PARCAS code [21] were performed for irradiation of Si by 250 eV Ar+ at 5-degree increments
between 0° and 90°. Smoothing was accomplished by a weighted fitting of the simulation results to a truncated Fourier series, and fitted values of M_x^{(1)}(θ) were then inserted into Eqs. (5). Analyzing the resulting
linear PDE of the form (4) (with additional terms describing ion-enhanced viscous flow), the most-unstable
wavelengths at each angle were compared to the wavelengths of experimentally-observed structures, with reasonable agreement. In the process, the relative sizes of the effects of erosion and redistribution were directly
obtained and compared, and the effects of redistribution were unexpectedly found to be dominant for the
chosen system.
3
Goals and Outline of the Framework
The previous section outlines the general features of the Crater Function approach, including the potential
promise for the general problem of coefficient estimation, and also the technical hurdles associated with
its use. However, while the process of simulation, statistical analysis, fitting, and differentiation is timeconsuming, it is also in principle mechanical, suggesting the utility of an open-source library to centralize
best practices and avoid repeated re-implementations. Here we describe the primary goals of such a library,
and a summary of the structure of the resulting codebase.
Motivation #1: Accessibility. A principal motivation for the present work is to automate the process
described above in a generic and accessible way. As many of the procedures as possible should be performed
automatically, with reasonable default strategies applied for users that do not wish to delve into the details
of atomistic simulation, internal statistical data structures, or the optimal fitting functions for moment
curves. Instead, a first-time user should be able to spend most of their effort on deciding what system
and environmental parameters to study, after which the library should take care of subsequent mechanical
operations. In particular, it should provide simple visualization routines for tasks expected to be common,
such as plotting the calculated angle-dependent coefficient values.
Motivation #2: Extensibility. A second motivation is to accommodate the desires of users who wish to
move beyond the basic capabilities just described. For example, whereas first-time users may wish to call a
high-level function to obtain a graph similar to one already published in the literature, a more advanced user
might like to work directly with the raw statistical data, using the collected moments to extract particular
quantities of interest. Finally, a very advanced user or researcher within the field may wish to customize
the set of statistics to be gathered, or even modify the methods used to calculate and fit those customized
statistical quantities. As much as possible, the goal should be to accommodate users at each of these levels
of detail, and allowing each of them to use built-in utilities to simplify the calculation, loading, fitting, and
plotting of whatever quantities are of interest to a given user.
Motivation #3: Portability. A final motivation is to enable the collection of statistical data from as
many sources as possible, with as little effort as possible. When first demonstrated in Ref. [15], simulations
were performed by Molecular Dynamics (MD), using the PARCAS simulation code [21]. However, in principle
any MD code could be used, such as the open-source LAMMPS code [22]. Furthermore, if the time required
for MD simulation is a significant obstacle, then the simpler, faster Binary Collision Approximation (BCA)
becomes an appealing approach. A variety of BCA codes exist, including many descended from the widely-used SRIM code [23, 24, 25] such as TRIDYN [26, 27], SDTRIMSP [28] and TRI3DST [29]. Importantly,
these various codes have different and complementary capabilities, and so practitioners may need to employ
different codes to answer different types of questions. To facilitate this, the library should in principle be
compatible with as many as possible.
Framework Summary. In response to these motivations, we have therefore implemented our library as
a layered framework. All of the analysis routines are implemented using an internal, standardized set of
data structures for holding the results of simulation output. These structures, themselves, are then hidden
behind a set of commonly-used high-level routines for generating coefficient values, which may be easily used
by practitioners without reproducing any of the underlying work. Furthermore, the framework is designed
to be agnostic with respect to the simulation tool used to obtain the impact data. Any solver capable of
producing such data can be “wrapped” within an input-output object whose job is simply to write the input
files required by that solver, run the simulation, and read the resulting output files. Because input and
output file formats vary widely between codes, the specifics of each wrapper are abstracted from the user.
[Figure 1 schematic: high-level visualization routines for simple display of common results (plot_pure_material_coefficient_summary(), plot_binary_material_coefficient_summary(), plot_phase_diagram()); mid-level analysis routines performing calculations indicated by theory (calculate_PDE_coefficients_pure(), calculate_PDE_coefficients_binary(), calculate_transition_angles()); low-level utility routines for solver-independent data manipulation (extract_statistics_from_output() for basic statistical analysis of simulation output, load_data_slice_as_list() to load an arbitrary parametric data slice into a 1D list, nonlinear_data_fit() as a generic method to fit data to a supplied function); Molecular Dynamics wrappers (PARCAS, LAMMPS, MARLOWE) and Binary Collision Approximation wrappers (TRI3DST, SDTrimSP), annotated with capabilities such as history of use with ion impacts, open source, crystalline targets, curved target surfaces, and concentration depth profile.]
Figure 1: A schematic diagram of the organization of the PyCraters library. (a) At the lowest level are solver-specific wrappers that abstract the performance of individual simulations behind a standardized interface.
Requests for simulations are performed, and results obtained, in a standard format. (b) Above this layer is a
set of generic utilities for the collection of statistics from simulation data, the storage and loading of data in
easy-to-use formats, and the fitting of data points to smooth curves. (c) Next is a layer for performing the
real work associated with the crater function formalism – the loading of appropriate slices of data, fitting that
data to appropriate curves, and differentiating the resulting fits to obtain coefficients of linearized equations.
(d) Finally, a related set of visualization utilities plots commonly-sought coarse-grained quantities. On top
of this framework, small end-user codes to perform basic surveys can easily be written by non-specialists,
while specialists can directly invoke the lower-level routines to suit their particular needs.
Instead, simulation parameters common to all solvers are available in a simplified, standardized, high-level
user interface common to all wrappers, whereas features specific only to certain solvers can be specified on
a solver-by-solver basis.
The overall approach is illustrated in Figure 1, and consists of components at various levels of abstraction.
From lowest-level to highest-level, these may be summarized as consisting of
1. Wrappers around individual Molecular Dynamics or Binary Collision Approximation solvers.
2. Generic, but extensible analysis routines for the extraction of moments and other statistics.
3. A customizable library for the fitting of these statistics to appropriate smooth functional forms.
4. Routines for performing calculations needed to convert fitted functions into PDE coefficients.
5. Plotting utilities to display reasonable summaries of various kinds of data.
It should be stressed, however, that from the user’s perspective, these capabilities are visible in the opposite
order. For example, a user desiring to plot the PDE coefficients for a given material would simply call an
associated plotting function. This function checks to see if fits are available from a prior use of the script,
and if not, requests such fits from a lower-level function. That function checks to see whether the statistical
data are available, and if not, requests them from a still lower-level function. Finally, that function invokes
a BCA or MD simulation tool wrapper, which performs all communication with the solver needed to obtain
the statistics.
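The cascading check-cache-else-recompute pattern just described can be sketched, purely for illustration, as follows; the helper below and the function names in the comment are hypothetical and are not the actual PyCraters internals.

import os, pickle

def cached(filename, compute):
    """Return the pickled result stored under `filename`, recomputing (and caching)
       it only when the file does not yet exist -- the pattern used at every layer."""
    if os.path.exists(filename):
        with open(filename, "rb") as f:
            return pickle.load(f)
    result = compute()
    with open(filename, "wb") as f:
        pickle.dump(result, f)
    return result

# e.g. plotting requests fits, fits request statistics, statistics request simulations:
# fits = cached("fits.pkl", lambda: fit_moments(cached("stats.pkl", lambda: run_impacts(params))))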
Algorithm 1 Common code needed to load the PyCraters libraries.
# import necessary libraries
import sys
import numpy as np
import matplotlib.pyplot as plt
import pycraters.wrappers.TRI3DST as wrap
import pycraters.helpers as help
import pycraters.IO as io

# build solver wrapper and parameter object
exec_location = sys.argv[1]
wrapper = wrap.TRI3DST_Wrapper(exec_location)
params = wrap.TRI3DST_Parameters()
We conclude this section by noting that our goal is not to completely wrap the functionality of sophisticated tools such as LAMMPS and TRIDYN – rather, it is to provide a consistent interface to these tools for
users with the specific aim of using them to gather and apply single-impact crater function statistics. Python
– a high-level scripting language with very mature numerical capabilities – is an ideal language in which
to implement such an interface. We note that although certain kinds of parameter sweeping are built into
some of the solvers we have discussed, such capabilities effectively represent miniature “scripting languages”
unique to each solver. By using a mature language like Python to perform all scripting, the PyCraters
library allows parameter sweeping and statistical analysis to be done in a uniform manner, regardless of the
underlying tool used to perform the simulations, and therefore without having to learn the details of the
scripting capabilities of each solver.
4 Usage Examples
We will now proceed to briefly illustrate several examples of the framework in use. The focus of this section is
not on any particular result obtained herein (which are subject to revision as theoretical approaches improve),
nor on precisely documenting the codebase (which is subject to change in future software versions). Rather, it
is to emphasize the general structure of the code as depicted in Figure 1, and illustrate the kinds of problems
that the framework has been designed to investigate. In all cases, we will estimate PDE coefficients using
the simpler Eqs. (5), procedures for which have been documented in the literature. A report on somewhat
more complicated procedures for using the revised Eqs. (7) will be the topic of future work.
4.1 Preliminaries: Basic library loading and setup
We begin with a very brief introduction to the code typically needed at the beginning of a PyCraters script,
shown in Algorithm 1. It shows the loading of common mathematical libraries, as well as a wrapper and
set of helper routines from the PyCraters library. The first code block simply loads all needed libraries,
including the relevant parts of PyCraters itself. The second code block reads the executable location from
the command line, and creates the two main objects needed to interact with the library – a solver wrapper
and a parameter holder. The latter is tasked with describing the simulation environment, while the former
abstracts each solver behind a uniform interface. Note that in these two code blocks, the user has chosen the
TRI3DST solver [29]. If a different solver were desired, only three lines of code would need to be changed.
4.2 Simplest example: Angle-dependence over one energy
We now present a very simple illustration of the framework, by using it to obtain results of the kind reported
in Ref. [15] – PDE coefficients associated with the low-energy irradiation of a pure material. For narrative
consistency with the next section, we choose 100 eV Ne+ → C. The code needed for this example appears
Algorithm 2 An example program listing using the PyCraters Python framework.
# set parameter values in high-level format
params.target  = [["C", 1.0]]
params.beam    = "Ne"
params.energy  = 100
params.angle   = None
params.impacts = 1000

# perform simulations over a series of angles
angles = np.linspace(0, 85, 18)
for aa in angles:
    params.angle = aa
    wrapper.go(params)

# extract statistics, perform fits, and generate plots
fitdata = help.extract_PDE_coefficients(params, angles)
results = help.plot_coefficient_summary(fitdata)
plt.show()
in Algorithm 2, and should be assumed to be preceded by the code in Algorithm 1. The first code block
specifies the environmental parameters the user wishes to consider, including the ion and target species, the
ion energy and incidence angle, and the number of impacts to perform (the incidence angle is left blank).
The second code block sweeps over the incidence angle in 5-degree increments: for each angle, it updates the
parameter holder, and calls the solver wrapper for the specified set of parameters. Finally, the third code
block calls a routine that plots a simple summary of the results.
The bulk of the work performed by the PyCraters framework is hidden behind just three function calls.
The call to the wrapper.go() routine performs all interaction with the underlying BCA solver, including
writing input files, calling the executable, reading output files, extracting moments, and storing the results
on the hard disk under a unique file name constructed by the Params() object. Next, the call to the
help.extract_PDE_coefficients() routine reads these files for all angles in the sweep, uses a self-contained
library to fit each of the angle-dependent moments to appropriate functions of the incidence angle, performs
differentiations needed to construct the coefficients, and stores both fits and coefficients under additional
reconstructable filenames. Finally, the help.plot_coefficient_summary() routine is a relatively simple
visualization routine containing plots likely to be of interest to a casual user. For the program listing in
Algorithm 2, the output is exhibited in Fig. 2.
4.3 Angle-Energy Phase Diagrams
In this section we present a slightly more complicated example, demonstrating the ease with which additional
parameters may be swept to identify trends in statistical behavior. Specifically, we observe that because
the stability of Eqn. 4 is determined by the signs of the angle-dependent coefficients SX (θ) and SY (θ), the
sixth panel of Figure 2 describes the transition in expected behavior from flat surfaces (both coefficients
positive) to ripples oriented in the x-direction (SX < 0). The points at which these curves cross the
origin, and each other, thus serve to divide domains of different expected behavior. Using the PyCraters
framework, it is only a matter of adding an extra for() loop to repeat this calculation for a variety of
ion energies, and thereby obtain an angle-energy phase diagram. Furthermore, additional sweeps over ion
and target species allow the associated phase diagrams for each ion/target combination to be compared,
enabling the identification of trends in stability with respect to ion and target atom mass. The code used
to perform these simulation is listed as Algorithm 3. We note the great similarity to the code listed in
Algorithm 2 – with the exception of three extra nested for() loops, and the additional calls to functions like
help.find_pattern_transitions() (which calculates curve intersections between {0, SX (θ) , SY (θ)}) and
help.plot_energy_angle_phase_diagram() (which plots the results) – the majority of the code is similar.
[Figure 2 panels (all versus angle θ): Sputter Yield M_eros.^{(0)}(θ); 1st erosive moment M_eros.^{(1)}(θ) (data and fit); 1st redistributive moment M_redist.^{(1)}(θ) (data and fit); S_X components (S_X,eros., S_X,redist., S_X,total); S_Y components (S_Y,eros., S_Y,redist., S_Y,total); combined S_X and S_Y.]
Figure 2: Output of the program listing in Algorithm 2.
Algorithm 3 A program for generating angle-energy phase diagrams over many ion/target combinations.
# identify parameters to sweep over
targets  = [[["C", 1.0]], [["Si", 1.0]], [["Ge", 1.0]], [["Sn", 1.0]]]
beams    = ["Ne", "Ar", "Kr", "Xe"]
energies = 10.**(np.linspace(2.0, 4.0, 21))
angles   = np.linspace(0, 85, 18)
finedeg  = np.linspace(0, 90, 91)
impacts  = 1000

# perform the simulations and store the results
for tt in targets:
    for bb in beams:
        tdatalist = []
        for ee in energies:
            for aa in angles:
                params.target  = tt
                params.beam    = bb
                params.energy  = ee
                params.angle   = aa
                params.impacts = impacts
                wrapper.go(params)

            # extract coefficients and find transition angles
            fitdata = help.extract_PDE_coefficients(params, angles)
            tdata   = help.find_pattern_transitions(fitdata)
            tdatalist.append(tdata)

        # after each energy sweep, plot and save the phase diagram
        help.plot_energy_angle_phase_diagram(tdatalist)
        plt.savefig("%s-%s-phase-diagram.svg" % (ion_species, target_species))
This reflects the framework's aim of providing high-level functionality through easy-to-use functions, allowing the end user to focus on specifying the range of parameters to be explored.
A sampling of the resulting phase diagrams is shown in Figure 3. In the first column, the ion mass is
increased, and a corresponding increase in the size of the region of stable, flat surfaces is observed, as well
as a decrease in the size of the region of perpendicular-mode ripples. Because the stable regions are induced
by redistribution and the perpendicular-mode regions by erosion, this indicates an increasing relative strength of the redistributive
effect as the ion mass increases. By contrast, in the second column, the target mass is increased, and the
trend is reversed – the stable region shrinks, and the region of perpendicular-mode ripples grows. This
indicates that, generically, as the target mass increases, the role of erosion grows relative to that of mass
redistribution. Interestingly, for most ion/target combinations, the effect of the ion energy seems to be small
above around 500 eV. It should be stressed, however, that these results are not presented as a definitive
prediction on behavior. They capture only the effect of the collision cascade – sputter erosion and mass
redistribution – and do not capture phenomena such as stress implantation and relaxation [30, 31, 32, 33].
These stress effects also contribute to the coefficients SX (θ) and SY (θ), and their magnitude relative to collisional
effects is not yet known.
4.4 Binary Materials
The simulation and analysis of single-impact events on binary targets is easily performed within the PyCraters
framework. In Ref. [16], the Crater Function approach was extended to binary compound targets, and
estimates were made of several coefficients within a theory describing the irradiation of such targets [12, 34],
Figure 3: A collection of angle-energy phase diagrams generated using the program listed in Algorithm 3.
Left column, descending: increasing ion mass (Ne+, Ar+, Kr+, Xe+ → C). Right column, descending:
increasing target mass (Xe+ → {C, Si, Ge, Sn}). In each figure, the region between the left edge and the
blue line indicates flat targets, the region between the blue and red lines indicates parallel mode ripples, and
the region between the red line and the right edge indicates perpendicular mode ripples.
Figure 4: Estimates of the relative sputter yields for various amorphous compounds Ga1−x Sbx (a) if the target
is homogeneous, and (b) if the target exhibits a 1 nm surface layer enriched in Sb of the form Ga1−y Sby , with y = 2x − x2 .
The enrichment of the surface layer produces a dramatic shift in the equilibrium film concentration.
for the case of GaSb irradiated by Ar at 250 eV. However, during this process, a notable discrepancy between
simulations and experiments was observed – whereas experiments observe gradual buildup of excess Ga on
the irradiated target, the simulations of impacts on GaSb indicated a slight preferential sputtering of Ga,
which would lead to excess Sb. It was hypothesized that this discrepancy might be due to the effect of
Gibbsian surface segregation, whereby the atom with the lower surface free energy (in this case Sb) migrates
to the surface [35], where it is more easily sputtered due to surface proximity. Because the in-situ composition
profile is unknown, the targets used in the initial studies were homogeneous, and could not capture this
effect.
The PyCraters framework is an ideal tool to explore this hypothesis, and a code listing which performs
the needed simulations is provided in Algorithm 4. In this example, the user must move beyond built-in
functionality, and begin to provide some customization to the default settings. For instance, we specify a
list of amorphous targets with varying stoichiometries of the form Ga1−x Sbx , and a simple user-supplied
function that allows a surface layer 1nm thick to be modified by enriching it in Sb from x to a plausible level
of y = 2x − x2 . The target is then irradiated at 1 keV by Ar+, the relative sputter yields of Ga
and Sb (which appear in the zeroth moment of the crater function) are extracted directly from the storage
files, and a few lines of code plot these yields as functions of the base Sb composition.
The results for both unmodified and modified targets are shown in Figure 4. Unsurprisingly, the modified
target exhibits a greater sputter yield of Sb than the unmodified target – the top monolayer of this target
contains more antimony, and most sputtering occurs from the top monolayer. In fact, the new yields largely
mirror the enriched layer composition. Interestingly, however, this relatively small modification is entirely
sufficient to resolve the discrepancy between simulation and experiment just described. Though the actual
composition profile remains as yet unknown, the results of this kind of experimentation with plausible model
targets suggest that this mechanism is likely sufficient to explain the observed enrichment of Ga over time.
4.5 Future work: Comparison of Methodologies
An important future goal of the framework is to facilitate the comparison of simulations performed using
Molecular Dynamics to those using the Binary Collision Approximation. For instance, using MD, Ref. [15]
found redistributed atoms to contribute far more to the shape and magnitude of coefficients in Eqs. (5)
than did sputtered atoms. However, using the BCA, Ref. [36] reports a reduced redistributive signal for the
environment studied in Ref. [15], and finds that sputtered atoms are dominant for most angles at higher
energies. The use of MD vs BCA may be an important source of these conflicting results – in Ref. [37],
it was found that the BCA reports significantly fewer displacements of small magnitude than molecular
dynamics. Because of the very large number of such displacements, they may contribute significantly to the
effect of mass redistribution, and hence the BCA approach may systematically under-report the strength
of this effect. This question demands further study, and because the extraction of the statistics and the
Algorithm 4 Code to investigate the effect of segregation.
# define a simple function of depth (depths in Angstroms; the top 1 nm is enriched in Sb)
def concentration_function(params, depth):
    cbase = [a[1] for a in params.target]
    if depth > 10:
        return cbase
    else:
        bSb = cbase[1]
        lSb = 2.0*bSb - bSb**2
        return [1 - lSb, lSb]

# basic parameter setup
params.target  = None
params.beam    = "Ar"
params.angle   = 0.0
params.energy  = 1000
params.impacts = 1000
params.set_parameter("cfunc", concentration_function)

# set up targets at different concentrations
targets = []
for phi in np.linspace(0.0, 1.0, 11):
    target = [["Ga", 1.0 - phi], ["Sb", phi]]
    targets.append(target)

# iterate over targets
for tt in targets:
    params.target = tt
    wrapper.go(params)

# make plots
import libcraters.IO as io
m0e_avg = io.array_range('./', params, 'target', targets, 'm0e_avg')
m0e_std = io.array_range('./', params, 'target', targets, 'm0e_std')
Gaval = [a[0] for a in m0e_avg]
Sbval = [a[1] for a in m0e_avg]
Gaerr = [b[0]/np.sqrt(params.impacts) * 1.97 for b in m0e_std]
Sberr = [b[1]/np.sqrt(params.impacts) * 1.97 for b in m0e_std]

plt.figure(1, figsize=(4.5, 3.0))
pctSb_list = [a[1][1] for a in targets]
plt.errorbar(pctSb_list, -np.array(Gaval), yerr=Gaerr, label='Ga', fmt='gs-', linewidth=2)
plt.errorbar(pctSb_list, -np.array(Sbval), yerr=Sberr, label='Sb', fmt='bs-', linewidth=2)
plt.xlabel('at. pct. Sb')
plt.ylabel('fractional sputter yield')
plt.legend()
plt.ylim(0, max(-np.array(Gaval + Sbval)) * 1.7)
plt.tight_layout()
plt.savefig('relative-sputter-yields.svg')
plt.show()
estimation of coefficients are generic processes independent of the solver within the PyCraters library, it will
be straightforward to use those common resources to identify the implications of these observations for the
specific statistics required by crater function analysis.
In a related vein, several different procedures for extracting the crater function ∆h and its moments M (i)
from a list of atomic positions have been suggested in the literature [38, 15, 39, 36], and the importance of
the differences between the strategies has never been investigated. The PyCraters library provides a natural
environment in which to conduct such studies. Because the framework provides the results of simulations in
a common format, variations on the statistical extraction routine can be written in a way that is independent
of both the underlying solver and the methods used to fit and differentiate the results. This should allow
easy comparison of the different extraction methods for a variety of simulation environments, and aid the
identification of best practices for this important procedure.
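As a rough illustration of what such an extraction routine involves, the sketch below computes zeroth and first moments from a list of initial and final atomic positions using one possible convention (sputtered atoms contribute to the erosive moments, retained atoms to the redistributive one); the cited procedures differ precisely in details such as these, so this is not a reference implementation of any of them:

import numpy as np

def crater_moments(initial, final, sputtered, atomic_volume=1.0):
    """Crude single-impact moment estimates from atom positions.
    initial, final: (N, 3) arrays of atomic positions; sputtered: boolean mask."""
    kept = ~sputtered
    # Zeroth erosive moment ~ eroded volume (proportional to the sputter yield).
    M0_eros = -atomic_volume * np.count_nonzero(sputtered)
    # First erosive moment ~ x-weighted removal sites of the sputtered atoms.
    M1_eros = -atomic_volume * np.sum(initial[sputtered, 0])
    # First redistributive moment ~ net x-displacement of the retained atoms.
    M1_redist = atomic_volume * np.sum(final[kept, 0] - initial[kept, 0])
    return M0_eros, M1_eros, M1_redist

Because the inputs are just position arrays in a common format, alternative conventions can be compared simply by swapping this function for another while keeping the rest of the pipeline fixed.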
5 Summary
In conclusion, we have introduced a Python framework designed to automate the most common tasks associated with the extraction and upscaling of the statistics of single-impact crater functions to inform coefficients
of continuum equations describing surface morphology evolution. The framework has been designed to be
compatible with a wide variety of existing atomistic solvers, including Molecular Dynamics and Binary
Collision Approximation codes. However, in order to remain accessible to first-time users, the details of
each of these solvers are abstracted behind a standardized interface, and much functionality can be accessed
via high-level functions and visualization routines. Although the addition of much functionality is still in
progress, the current codebase is able to reproduce many important results from the recent literature, and
examples demonstrating these capabilities are included to facilitate modification and additional exploration
by the community. The project is currently hosted on the BitBucket repository under a suitable open-source
license, and is available for immediate download.
References
[1] M. Navez, D. Chaperot, and C. Sella. Microscopie electronique – etude de l'attaque du verre par
bombardement ionique. Comptes Rendus Hebdomadaires Des Seances De L'Academie Des Sciences,
254:240, 1962.
[2] M. J. Baldwin and R. P. Doerner. Helium induced nanoscopic morphology on tungsten under fusion
relevant plasma conditions. Nucl. Fusion, 48:035001, 2008.
[3] S. Facsko, T. Dekorsy, C. Koerdt, C. Trappe, H. Kurz, A. Vogt, and H. L. Hartnagel. Formation of
ordered nanoscale semiconductor dots by ion sputtering. Science, 285:1551–1553, 1999.
[4] P. Sigmund. Theory of sputtering. I. Sputtering yield of amorphous and polycrystalline targets. Phys.
Rev., 184:383–416, 1969.
[5] P. Sigmund. A mechanism of surface micro-roughening by ion bombardment. J. Mater. Sci., 8:1545–
1553, 1973.
[6] R. M. Bradley and J. M.E. Harper. Theory of ripple topography induced by ion bombardment. J. Vac.
Sci. Technol., 6:2390–2395, 1988.
[7] G. Carter and V. Vishnyakov. Roughening and ripple instabilities on ion-bombarded Si. Phys. Rev. B,
54:17647–17653, 1996.
[8] B. P. Davidovitch, M. J. Aziz, and M. P. Brenner. On the stabilization of ion sputtered surfaces. Phys.
Rev. B, 76:205420, 2007.
[9] G. Ozaydin-Ince and K. F. Ludwig Jr. In situ x-ray studies of native and Mo-seeded surface nanostructuring during ion bombardment of Si(100). J. Phys. Cond. Matt., 21:224008, 2009.
[10] S. Macko, F. Frost, B. Ziberi, D.F. Forster, and T. Michely. Is keV ion-induced pattern formation on
Si(001) caused by metal impurities? Nanotechnology, 21:085301, 2010.
[11] V. B. Shenoy, W. L. Chan, and E. Chason. Compositionally modulated ripples induced by sputtering
of alloy surfaces. Physical Review Letters, 98:256101, 2007.
[12] R. Mark Bradley and Patrick D. Shipman. Spontaneous pattern formation induced by ion bombardment
of binary compounds. Physical Review Letters, 105:145501, 2010.
[13] S. A. Norris. Ion-assisted phase separation in compound films: An alternate route to ordered nanostructures. Journal of Applied Physics, 114:204303, 2013.
[14] S. A. Norris, M. P. Brenner, and M. J. Aziz. From crater functions to partial differential equations: A
new approach to ion bombardment induced nonequilibrium pattern formation. J. Phys. Cond. Matt.,
21:224017, 2009.
[15] S. A. Norris, J. Samela, L. Bukonte, M. Backman, D. F. K. Nordlund, C.S. Madi, M.P. Brenner, and
M.J. Aziz. Molecular dynamics of single-particle impacts predicts phase diagrams for large scale pattern
formation. Nature Communications, 2:276, 2011.
[16] Scott A. Norris, J. Samela, K. Nordlund, M. Vestberg, and M. Aziz. Crater functions for compound
materials: a route to parameter estimation in coupled-pde models of ion bombardment. Nuclear Instruments and Methods in Physics Research B, 318B:245–252, 2014. arXiv:1303.2674.
[17] Matt P. Harrison and R. Mark Bradley. Crater function approach to ion-induced nanoscale pattern
formation: Craters for flat surfaces are insufficient. Physical Review B, 89:245401, 2014.
[18] M. J. Aziz. Nanoscale morphology control using ion beams. Matematisk-fysiske Meddelelser, 52:187–206,
2006.
[19] E. M. Bringa, K. Nordlund, and J. Keinonen. Cratering energy regimes: From linear collision cascades
to heat spikes to macroscopic impacts. Phys. Rev. B, 64:235426, 2001.
[20] G. Costantini, F. Buatier de Mongeot, C. Boragno, and U. Valbusa. Is ion sputtering always a “negative
homoepitaxial deposition”? Phys. Rev. Lett., 86:838–841, 2001.
[21] K. Nordlund. parcas computer code. The main principles of the molecular dynamics algorithms are
presented in [40, 41]. The adaptive time step and electronic stopping algorithms are the same as in [42].
[22] Open-source lammps molecular-dynamics code hosted at http://lammps.sandia.gov/. The original
algorithms of this community-maintained codebase are presented in Ref. [43].
[23] J. P. Biersack and L. G. Haggmark. A monte carlo computer program for the transport of energetic
ions in amorphous targets. Nuclear Instruments and Methods, 174:257–269, 1980.
[24] J. F. Ziegler, J. P. Biersack, and U. Littmark. The Stopping and Range of Ions in Matter. Pergamon
Press, New York, 1985.
[25] SRIM, 2000.
[26] W. Möller and W. Eckstein. TRIDYN – a TRIM simulation code including dynamic composition
changes. Nucl. Inst. Meth. Phys. Res. B, 2:814–818, 1984.
[27] W. Möller. TRIDYN-HZDR.
[28] A. Mutzke, R. Schneider, W. Eckstein, and R. Dohmen. SDTrimSP version 5.0. IPP Report 12/8, Garching, 2011.
[29] M. L. Nietiadi, L. Sandoval, H. M. Urbassek, and W. Möller. Sputtering of Si nanospheres. Physical
Review B, 90:045417, 2014.
[30] Mario Castro and Rodolfo Cuerno. Hydrodynamic approach to surface pattern formation by ion beams.
Applied Surface Science, 258:4171–4178, 2012.
[31] S. A. Norris. Stability analysis of a viscoelastic model for ion-irradiated silicon. Physical Review B,
85:155325, 2012.
[32] S. A. Norris. Stress-induced patterns in ion-irradiated silicon: model based on anisotropic plastic flow.
Phys. Rev. B, 86:235405, 2012. arxiv:1207.5754.
[33] M. Castro, R. Gago, L. Vázquez, J. Muñoz-García, and R. Cuerno. Stress-induced solid flow drives
surface nanopatterning of silicon by ion-beam irradiation. Physical Review B, 86:214107, 2012.
[34] P. D. Shipman and R. M. Bradley. Theory of nanoscale pattern formation induced by normal-incidence
ion bombardment of binary compounds. Physical Review B, 84:085420, 2011.
[35] W. Yu, J. L. Sullivan, and S. O. Saied. XPS and LEISS studies of ion bombarded GaSb, InSb and CdSe
surfaces. Surface Science, 352–354:781–787, 1996.
[36] Hans Hofsäss. Surface instability and pattern formation by ion-induced erosion and mass redistribution.
Applied Physics A, 114:401–422, 2014.
[37] L. Bukonte, F. Djurabekova, J. Samela, K. Nordlund, S.A. Norris, and M.J. Aziz. Comparison of
molecular dynamics and binary collision approximation simulations for atom displacement analysis.
Nuclear Instruments and Methods in Physics Research B, 297:23–28, 2013.
[38] N. Kalyanasundaram, M. Ghazisaeidi, J. B. Freund, and H. T. Johnson. Single impact crater functions
for ion bombardment of silicon. Appl. Phys. Lett., 92:131909, 2008.
[39] M. Z. Hossain, K. Das, J. B. Freund, and H. T. Johnson. Ion impact crater asymmetry determines
surface ripple orientation. Applied Physics Letters, 99:151913, 2011.
[40] K. Nordlund, M. Ghaly, R. S. Averback, M. Caturla, T. Diaz de la Rubia, and J. Tarus. Defect
production in collision cascades in elemental semiconductors and fcc metals. Phys. Rev. B, 57:7556–
7570, 1998.
[41] M. Ghaly, K. Nordlund, and R. S. Averback. Molecular dynamics investigations of surface damage
produced by kiloelectronvolt self-bombardment of solids. Philos. Mag. A, 79:795–820, 1999.
[42] K. Nordlund. Molecular dynamics simulation of ion ranges in the 1–100 keV energy range. Comput.
Mater. Sci., 3:448, 1995.
[43] S. Plimpton. Fast parallel algorithms for short-range molecular dynamics. Journal of Computational
Physics, 117:1–19, 1995.
Implementation of a Direct Coupling Coherent Quantum Observer
including Observer Measurements
arXiv:1603.01436v1 [quant-ph] 4 Mar 2016
Ian R. Petersen and Elanor H. Huntington
Abstract— This paper considers the problem of constructing
a direct coupling quantum observer for a quantum harmonic
oscillator system. The proposed observer is shown to be able to
estimate one but not both of the plant variables and produces
a measurable output using homodyne detection.
I. INTRODUCTION
A number of papers have recently considered the problem
of constructing a coherent quantum observer for a quantum
system; e.g., see [1]–[4]. In the coherent quantum observer
problem, a quantum plant is coupled to a quantum observer
which is also a quantum system. The quantum observer is
constructed to be a physically realizable quantum system
so that the system variables of the quantum observer converge in some suitable sense to the system variables of the
quantum plant. The papers [4]–[7] considered the problem
of constructing a direct coupling quantum observer for a
given closed quantum system. In [4], the proposed observer
is shown to be able to estimate some but not all of the
plant variables in a time averaged sense. Also, the paper
[8] shows that a possible experimental implementation of
the augmented quantum plant and quantum observer system
considered in [4] may be constructed using a non-degenerate
parametric amplifier (NDPA) which is coupled to a beamsplitter by suitable choice of the NDPA and beamsplitter
parameters.
One important limitation of the direct coupled quantum
observer results given in [4]–[8] is that both the quantum
plant and the quantum observer are closed quantum systems.
This means that it is not possible to make an experimental measurement to verify the properties of the quantum observer.
In this paper, we address this difficulty by extending the
results of [4] to allow for the case in which the quantum
observer is an open quantum linear system whose output
can be monitored using homodyne detection. In this case, it
is shown that similar results can be obtained as in [4] except
that now the observer output is subject to a white noise
perturbation. However, by suitably designing the observer,
it is shown that the level of this noise perturbation can
be made arbitrarily small (at the expense of slow observer
This work was supported by the Australian Research Council (ARC)
and the Chinese Academy of Sciences Presidents International Fellowship
Initiative (No. 2015DT006).
Ian R. Petersen is with the School of Engineering and Information Technology, University of New South Wales at the Australian Defence Force Academy, Canberra ACT 2600, Australia.
i.r.petersen@gmail.com
Elanor H. Huntington is with the College of Engineering and Computer
Science, The Australian National University, Canberra, ACT 0200, Australia. Email: Elanor.Huntington@anu.edu.au.
convergence). Also, the results of [8] are extended to show
that a possible experimental implementation of the augmented quantum plant and quantum observer system may
be constructed using a non-degenerate parametric amplifier
(NDPA) which is coupled to a beamsplitter by suitable choice
of the NDPA and beamsplitter parameters. In this case, the
NDPA contains an extra field channel as compared to the
result in [8] and this extra channel is used for homodyne
detection in the observer.
II. DIRECT COUPLING COHERENT QUANTUM OBSERVER WITH OBSERVER MEASUREMENT
In this section, we extend the theory of [4] to the case of a
direct coupled quantum observer which is also coupled to a
field to enable measurements to be made on the observer. In
our proposed direct coupled coherent quantum observer, the
quantum plant is a single quantum harmonic oscillator which
is a linear quantum system (e.g., see [9]–[13]) described by
the non-commutative differential equation
$$\dot{x}_p(t) = 0, \quad x_p(0) = x_{0p}; \qquad z_p(t) = C_p x_p(t) \qquad (1)$$
where $z_p(t)$ denotes the system variable to be estimated by the observer and $C_p \in \mathbb{R}^{1 \times 2}$. This quantum plant corresponds to a plant Hamiltonian $H_p = 0$. Here $x_p = \begin{bmatrix} q_p \\ p_p \end{bmatrix}$, where $q_p$ is the plant position operator and $p_p$ is the plant momentum operator. It follows from (1) that the plant system variables $x_p(t)$ will remain fixed if the plant is not coupled to the observer.
We now describe the linear quantum system which will
correspond to the quantum observer; see also [9]–[13]. This
system is described by a quantum stochastic differential
equation (QSDE) of the form
$$dx_o = A_o x_o\,dt + B_o\,dw, \quad x_o(0) = x_{0o}; \qquad dy_o = C_o x_o\,dt + dw; \qquad z_o = K y_o \qquad (2)$$
where $dw = \begin{bmatrix} dQ \\ dP \end{bmatrix}$ is a $2 \times 1$ vector of quantum noises expressed in quadrature form corresponding to the input field for the observer, and $dy_o$ is the corresponding output field; e.g., see [9], [11]. The observer output $z_o(t)$ will be a real scalar quantity obtained by applying homodyne detection to the observer output field. Here $A_o \in \mathbb{R}^{2\times 2}$, $B_o \in \mathbb{R}^{2\times 2}$, $C_o \in \mathbb{R}^{2\times 2}$. Also, $x_o(t) = \begin{bmatrix} q_o \\ p_o \end{bmatrix}$ is a vector of self-adjoint system variables corresponding to the observer position and momentum operators; e.g., see [9]. We assume that the plant variables commute with the observer variables. The system dynamics (2) are determined by the observer system Hamiltonian and coupling operators, which are operators on the underlying Hilbert space for the observer. For the quantum observer under consideration, this Hamiltonian is a self-adjoint operator given by the quadratic form $H_o = \frac{1}{2} x_o(0)^T R_o x_o(0)$, where $R_o$ is a real symmetric matrix. Also, the coupling operator $L$ is defined by a matrix $W_o \in \mathbb{R}^{2\times 2}$ so that
$$\begin{bmatrix} L + L^* \\ \frac{L - L^*}{i} \end{bmatrix} = W_o x_o. \qquad (3)$$
Then, the corresponding matrices $A_o$, $B_o$ and $C_o$ in (2) are given by
$$A_o = 2 J R_o + \frac{1}{2} J W_o^T J W_o, \quad B_o = J W_o^T J, \quad C_o = W_o \qquad (4)$$
where
$$J = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix};$$
e.g., see [9], [11]. Furthermore, we will assume that the quantum observer is coupled to the quantum plant as shown in Figure 1.
Fig. 1. Plant Observer System: the quantum plant is directly coupled to the quantum observer, whose output field $y_o$ is passed to a homodyne detector producing the measurement $z_o$; $z_p$ denotes the plant variable to be estimated.

In addition, we define a coupling Hamiltonian which defines the coupling between the quantum plant and the quantum observer:
$$H_c = x_p(0)^T R_c\, x_o(0).$$
The augmented quantum linear system consisting of the quantum plant and the quantum observer is then a linear quantum system described by the total Hamiltonian
$$H_a = H_p + H_c + H_o = \frac{1}{2} x_a(0)^T R_a x_a(0) \qquad (5)$$
where
$$x_a = \begin{bmatrix} x_p \\ x_o \end{bmatrix}, \quad R_a = \begin{bmatrix} 0 & R_c \\ R_c^T & R_o \end{bmatrix}, \qquad (6)$$
and the coupling operator $L$ is defined in (3). Extending the approach used in [4], we assume that we can write
$$R_c = \alpha \beta^T, \qquad (7)$$
$R_o = \omega_o I$ and $W_o = \sqrt{\kappa}\, I$, where $\alpha \in \mathbb{R}^2$, $\beta \in \mathbb{R}^2$, $\omega_o > 0$ and $\kappa > 0$. In addition, we assume
$$\alpha = C_p^T. \qquad (8)$$
Then, we can write the QSDEs describing the closed loop system as follows:
$$\begin{aligned}
dx_p &= 2 J \alpha \beta^T x_o\, dt;\\
dx_o &= 2\omega_o J x_o\, dt + 2 J \beta \alpha^T x_p\, dt + \tfrac{1}{2} J W_o^T J W_o\, x_o\, dt + J W_o^T J\, dw\\
&= \begin{bmatrix} -\frac{\kappa}{2} & 2\omega_o \\ -2\omega_o & -\frac{\kappa}{2} \end{bmatrix} x_o\, dt + 2 J \beta \alpha^T x_p\, dt - \sqrt{\kappa}\, dw;\\
dy_o &= W_o x_o\, dt + dw = \sqrt{\kappa}\, x_o\, dt + dw; \qquad (9)
\end{aligned}$$
e.g., see [9], [11]. Now it follows from (1) and (8) that $z_p = \alpha^T x_p$. Hence, it follows from the first equation in (9) that
$$dz_p = 2 \alpha^T J \alpha \beta^T x_o\, dt = 0.$$
That is, the quantity $z_p$ remains constant even after the quantum plant is coupled to the quantum observer. In addition, we can re-write the remaining equations in (9) as
$$\begin{aligned}
dx_o &= \begin{bmatrix} -\frac{\kappa}{2} & 2\omega_o \\ -2\omega_o & -\frac{\kappa}{2} \end{bmatrix} x_o\, dt + 2 J \beta z_p\, dt - \sqrt{\kappa}\, dw;\\
dy_o &= \sqrt{\kappa}\, x_o\, dt + dw. \qquad (10)
\end{aligned}$$
To analyse the system (10), we first calculate the steady state value of the quantum expectation of the observer variables as follows:
$$\langle \bar{x}_o \rangle = -2 \begin{bmatrix} -\frac{\kappa}{2} & 2\omega_o \\ -2\omega_o & -\frac{\kappa}{2} \end{bmatrix}^{-1} J \beta z_p = \frac{4}{\kappa^2 + 16\omega_o^2} \begin{bmatrix} \kappa & 4\omega_o \\ -4\omega_o & \kappa \end{bmatrix} J \beta z_p.$$
Then, we define the quantity
$$\tilde{x}_o = x_o - \langle \bar{x}_o \rangle = x_o - \frac{4}{\kappa^2 + 16\omega_o^2} \begin{bmatrix} \kappa & 4\omega_o \\ -4\omega_o & \kappa \end{bmatrix} J \beta z_p.$$
We can now re-write the equations (10) in terms of $\tilde{x}_o$ as follows:
$$\begin{aligned}
d\tilde{x}_o &= \begin{bmatrix} -\frac{\kappa}{2} & 2\omega_o \\ -2\omega_o & -\frac{\kappa}{2} \end{bmatrix} \tilde{x}_o\, dt - \sqrt{\kappa}\, dw;\\
dy_o &= \sqrt{\kappa}\, \tilde{x}_o\, dt - 2\sqrt{\kappa} \begin{bmatrix} -\frac{\kappa}{2} & 2\omega_o \\ -2\omega_o & -\frac{\kappa}{2} \end{bmatrix}^{-1} J \beta z_p\, dt + dw\\
&= -2\sqrt{\kappa} \begin{bmatrix} -\frac{\kappa}{2} & 2\omega_o \\ -2\omega_o & -\frac{\kappa}{2} \end{bmatrix}^{-1} J \beta z_p\, dt + dw_{out} \qquad (11)
\end{aligned}$$
where
$$dw_{out} = \sqrt{\kappa}\, \tilde{x}_o\, dt + dw.$$
We now look at the transfer function of the system
$$d\tilde{x}_o = \begin{bmatrix} -\frac{\kappa}{2} & 2\omega_o \\ -2\omega_o & -\frac{\kappa}{2} \end{bmatrix} \tilde{x}_o\, dt - \sqrt{\kappa}\, dw; \qquad dw_{out} = \sqrt{\kappa}\, \tilde{x}_o\, dt + dw \qquad (12)$$
which is given by
$$G(s) = I - \kappa \begin{bmatrix} s + \frac{\kappa}{2} & -2\omega_o \\ 2\omega_o & s + \frac{\kappa}{2} \end{bmatrix}^{-1}.$$
It is straightforward to verify that this transfer function is such that
$$G(j\omega)\, G(j\omega)^\dagger = I$$
for all $\omega$. That is, $G(s)$ is all pass. Also, the matrix $\begin{bmatrix} -\frac{\kappa}{2} & 2\omega_o \\ -2\omega_o & -\frac{\kappa}{2} \end{bmatrix}$ is Hurwitz and hence the system (12) will converge to a steady state in which $dw_{out}$ represents a standard quantum white noise with zero mean and unit intensity. Hence, at steady state, the equation
$$dy_o = -2\sqrt{\kappa} \begin{bmatrix} -\frac{\kappa}{2} & 2\omega_o \\ -2\omega_o & -\frac{\kappa}{2} \end{bmatrix}^{-1} J \beta z_p\, dt + dw_{out} \qquad (13)$$
shows that the output field converges to a constant value plus zero mean white quantum noise with unit intensity.
We now consider the construction of the vector $K$ defining the observer output $z_o$. This vector determines the quadrature of the output field which is measured by the homodyne detector. We first re-write equation (13) as
$$dy_o = e\, z_p\, dt + dw_{out}$$
where
$$e = -2\sqrt{\kappa} \begin{bmatrix} -\frac{\kappa}{2} & 2\omega_o \\ -2\omega_o & -\frac{\kappa}{2} \end{bmatrix}^{-1} J \beta \qquad (14)$$
is a vector in $\mathbb{R}^2$. Then
$$dz_o = K e\, z_p\, dt + K\, dw_{out}.$$
Hence, we choose $K$ such that
$$K e = 1 \qquad (15)$$
and therefore
$$dz_o = z_p\, dt + dn$$
where $dn = K\, dw_{out}$ will be a white noise process at steady state with intensity $\|K\|^2$. Thus, to maximize the signal to noise ratio for our measurement, we wish to choose $K$ to minimize $\|K\|^2$ subject to the constraint (15). Note that it follows from (15) and the Cauchy-Schwarz inequality that $1 \le \|K\| \|e\|$, and hence $\|K\| \ge 1/\|e\|$. However, if we choose
$$K = \frac{e^T}{\|e\|^2} \qquad (16)$$
then (15) is satisfied and $\|K\| = 1/\|e\|$. Hence, this value of $K$ must be the optimal $K$.

We now consider the special case of $\omega_o = 0$. In this case, we obtain
$$e = 2\sqrt{\kappa} \begin{bmatrix} \frac{2}{\kappa} & 0 \\ 0 & \frac{2}{\kappa} \end{bmatrix} J \beta = \frac{4}{\sqrt{\kappa}} J \beta.$$
Hence, as $\kappa \to 0$, $\|e\| \to \infty$ and therefore $\|K\| \to 0$. This means that we can make the noise level on our measurement arbitrarily small by choosing $\kappa > 0$ sufficiently small. However, as $\kappa$ gets smaller, the system (12) gets closer to instability and hence takes longer to converge to steady state.
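The properties derived above are easy to check numerically. The following sketch (our own illustration, not part of the original analysis) verifies the all-pass property of G(s), assuming the unit feedthrough from dw to dw_out in (12), and shows how the measurement noise gain ||K|| = 1/||e|| shrinks as κ decreases for ω_o = 0:

import numpy as np

kappa, omega_o = 0.5, 2.0
A = np.array([[-kappa/2, 2*omega_o], [-2*omega_o, -kappa/2]])
B = -np.sqrt(kappa) * np.eye(2)     # noise input matrix of (12)
C = np.sqrt(kappa) * np.eye(2)
D = np.eye(2)                       # unit feedthrough from dw to dw_out

def G(s):
    return C @ np.linalg.inv(s*np.eye(2) - A) @ B + D

for w in [0.0, 0.3, 1.0, 10.0]:
    Gj = G(1j*w)
    assert np.allclose(Gj @ Gj.conj().T, np.eye(2)), "all-pass check failed"

# Noise-gain trade-off for omega_o = 0: ||K|| = 1/||e|| shrinks as kappa -> 0.
beta = np.array([1.0, 0.5])          # arbitrary illustrative coupling vector
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
for kappa in [1.0, 0.1, 0.01]:
    e = (4/np.sqrt(kappa)) * J @ beta
    print(kappa, 1/np.linalg.norm(e))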
III. A POSSIBLE IMPLEMENTATION OF THE PLANT OBSERVER SYSTEM
In this section, we describe one possible experimental
implementation of the plant-observer system given in the previous section. The plant-observer system is a linear quantum
system with Hamiltonian
1
Hc + Ho = xTp Rc xo + xTo Ro xo
(17)
2
and coupling operator defined so that
L + L∗
∗
= Wo xo .
L−L
i
Furthermore,
we assume that Rc = αβ T , Ro = ωo I, Wo =
√
κI where α ∈ R2 , β ∈ R2 , ωo > 0 and κ > 0.
In order to construct a linear quantum system with a
Hamiltonian of this form, we consider an NDPA coupled
to a beamsplitter as shown schematically in Figure 2; e.g.,
see [14].
Fig. 2. NDPA coupled to a beamsplitter. (Schematic: the NDPA modes $a$, $b$ are driven through the field channels $A$, $B_1$ and $B_2$; the outputs $A^{out}$ and $B_1^{out}$ are fed back through the beamsplitter, while $B_2^{out}$ is the remaining free output channel.)
A linearized approximation for the NDPA is defined by a quadratic Hamiltonian of the form
$$H_1 = \frac{\imath}{2}\left(\epsilon a^* b^* - \epsilon^* a b\right) + \frac{1}{2}\omega_o b^* b$$
where $a$ is the annihilation operator corresponding to the first mode of the NDPA and $b$ is the annihilation operator corresponding to the second mode of the NDPA. These modes will be assumed to be of the same frequency but with a different polarization, with $a$ corresponding to the quantum plant and $b$ corresponding to the quantum observer. Also, $\epsilon$ is a complex parameter defining the level of squeezing in the NDPA and $\omega_o$ corresponds to the detuning frequency of the $b$ mode in the NDPA. The $a$ mode in the NDPA is assumed to be tuned. In addition, the NDPA corresponds to a vector of coupling operators
$$L = \begin{bmatrix} \sqrt{\kappa_1}\, a \\ \sqrt{\kappa_2}\, b \\ \sqrt{\kappa_3}\, b \end{bmatrix}.$$
Here $\kappa_1 > 0$, $\kappa_2 > 0$, $\kappa_3 > 0$ are scalar parameters determined by the reflectance of the mirrors in the NDPA.
From the above Hamiltonian and coupling operators, we
can calculate the following quantum stochastic differential
equations (QSDEs) describing the NDPA:
$$\begin{aligned}
\begin{bmatrix} da \\ db \end{bmatrix} &= \begin{bmatrix} 0 & \frac{\epsilon}{2} \\ \frac{\epsilon}{2} & 0 \end{bmatrix} \begin{bmatrix} a^* \\ b^* \end{bmatrix} dt - \begin{bmatrix} \frac{\gamma_1}{2} & 0 \\ 0 & \frac{\gamma_2}{2} + \frac{1}{2}\imath\omega_o \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} dt - \begin{bmatrix} \sqrt{\kappa_1} & 0 & 0 \\ 0 & \sqrt{\kappa_2} & \sqrt{\kappa_3} \end{bmatrix} \begin{bmatrix} dA \\ dB_1 \\ dB_2 \end{bmatrix};\\
\begin{bmatrix} dA^{out} \\ dB_1^{out} \\ dB_2^{out} \end{bmatrix} &= \begin{bmatrix} \sqrt{\kappa_1} & 0 \\ 0 & \sqrt{\kappa_2} \\ 0 & \sqrt{\kappa_3} \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} dt + \begin{bmatrix} dA \\ dB_1 \\ dB_2 \end{bmatrix} \qquad (18)
\end{aligned}$$
where $\gamma_1 = \kappa_1$ and $\gamma_2 = \kappa_2 + \kappa_3$.
We now consider the equations defining the beamsplitter
$$\begin{bmatrix} A \\ B_1 \end{bmatrix} = \begin{bmatrix} \cos\theta & e^{-\imath\phi}\sin\theta \\ -e^{\imath\phi}\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} A^{out} \\ B_1^{out} \end{bmatrix}$$
where $\theta$ and $\phi$ are angle parameters defining the beamsplitter; e.g., see [15]. This implies
$$\begin{bmatrix} A^{out} \\ B_1^{out} \end{bmatrix} = \begin{bmatrix} \cos\theta & -e^{-\imath\phi}\sin\theta \\ e^{\imath\phi}\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} A \\ B_1 \end{bmatrix}.$$
Substituting this into the second equation in (18), we obtain
$$\begin{bmatrix} \cos\theta - 1 & -e^{-\imath\phi}\sin\theta \\ e^{\imath\phi}\sin\theta & \cos\theta - 1 \end{bmatrix} \begin{bmatrix} dA \\ dB_1 \end{bmatrix} = \begin{bmatrix} \sqrt{\kappa_1} & 0 \\ 0 & \sqrt{\kappa_2} \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} dt.$$
We now assume that $\cos\theta \neq 1$. It follows that we can write
$$\begin{bmatrix} dA \\ dB_1 \end{bmatrix} = \frac{1}{2(1-\cos\theta)} \begin{bmatrix} \cos\theta - 1 & e^{-\imath\phi}\sin\theta \\ -e^{\imath\phi}\sin\theta & \cos\theta - 1 \end{bmatrix} \begin{bmatrix} \sqrt{\kappa_1} & 0 \\ 0 & \sqrt{\kappa_2} \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} dt.$$
Substituting this into the first equation in (18), we obtain
$$\begin{aligned}
\begin{bmatrix} da \\ db \end{bmatrix} = {}&\begin{bmatrix} 0 & \frac{\epsilon}{2} \\ \frac{\epsilon}{2} & 0 \end{bmatrix} \begin{bmatrix} a^* \\ b^* \end{bmatrix} dt - \begin{bmatrix} \frac{\gamma_1}{2} & 0 \\ 0 & \frac{\gamma_2}{2} + \frac{1}{2}\imath\omega_o \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} dt\\
&- \frac{1}{2(1-\cos\theta)} \begin{bmatrix} \sqrt{\kappa_1} & 0 \\ 0 & \sqrt{\kappa_2} \end{bmatrix} \begin{bmatrix} \cos\theta - 1 & e^{-\imath\phi}\sin\theta \\ -e^{\imath\phi}\sin\theta & \cos\theta - 1 \end{bmatrix} \begin{bmatrix} \sqrt{\kappa_1} & 0 \\ 0 & \sqrt{\kappa_2} \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} dt - \begin{bmatrix} 0 \\ \sqrt{\kappa_3} \end{bmatrix} dB_2;\\
dB_2^{out} = {}&\ \sqrt{\kappa_3}\, b\, dt + dB_2.
\end{aligned}$$
These QSDEs can be written in the form
$$\begin{aligned}
\begin{bmatrix} da \\ db \\ da^* \\ db^* \end{bmatrix} &= F \begin{bmatrix} a \\ b \\ a^* \\ b^* \end{bmatrix} dt + G \begin{bmatrix} dB_2 \\ dB_2^* \end{bmatrix};\\
\begin{bmatrix} dB_2^{out} \\ dB_2^{out*} \end{bmatrix} &= H \begin{bmatrix} a \\ b \\ a^* \\ b^* \end{bmatrix} dt + \begin{bmatrix} dB_2 \\ dB_2^* \end{bmatrix}
\end{aligned}$$
where the matrix $F$ is given by
$$F = \begin{bmatrix} F_1 & F_2 \\ F_2^\# & F_1^\# \end{bmatrix}, \quad
F_1 = \begin{bmatrix} 0 & -\dfrac{\sqrt{\kappa_1\kappa_2}\, e^{-\imath\phi}\sin\theta}{2(1-\cos\theta)} \\ \dfrac{\sqrt{\kappa_1\kappa_2}\, e^{\imath\phi}\sin\theta}{2(1-\cos\theta)} & -\dfrac{\kappa_3}{2} - \dfrac{1}{2}\imath\omega_o \end{bmatrix}, \quad
F_2 = \begin{bmatrix} 0 & \frac{\epsilon}{2} \\ \frac{\epsilon}{2} & 0 \end{bmatrix}.$$
Also, the matrix $G$ is given by
$$G = -\begin{bmatrix} 0 & 0 \\ \sqrt{\kappa_3} & 0 \\ 0 & 0 \\ 0 & \sqrt{\kappa_3} \end{bmatrix}$$
and the matrix $H$ is given by
$$H = \begin{bmatrix} 0 & \sqrt{\kappa_3} & 0 & 0 \\ 0 & 0 & 0 & \sqrt{\kappa_3} \end{bmatrix},$$
and $J = \begin{bmatrix} I & 0 \\ 0 & -I \end{bmatrix}$.
It now follows from the proof of Theorem 1 in [16] that we can construct a Hamiltonian for this system of the form
$$H = \frac{1}{2}\begin{bmatrix} a^* & b^* & a & b\end{bmatrix} M \begin{bmatrix} a \\ b \\ a^* \\ b^*\end{bmatrix}$$
where the matrix $M$ is given by
$$M = \begin{bmatrix} M_1 & M_2 \\ M_2^\# & M_1^\#\end{bmatrix} = \frac{\imath}{2}\left(JF - F^\dagger J\right).$$
Then, we calculate
$$M_1 = \frac{\imath}{2}\begin{bmatrix} 0 & -\bar{\delta} \\ \delta & -\imath\omega_o \end{bmatrix}, \quad M_2 = \frac{\imath}{2}\begin{bmatrix} 0 & \epsilon \\ \epsilon & 0\end{bmatrix}, \quad \text{where } \delta = \frac{\sqrt{\kappa_1\kappa_2}\, e^{\imath\phi}\sin\theta}{1-\cos\theta}.$$
Also, we can construct the coupling operator for this system in the form
$$L = \begin{bmatrix} N_1 & N_2 \end{bmatrix}\begin{bmatrix} a \\ b \\ a^* \\ b^* \end{bmatrix}$$
where the matrix $N = \begin{bmatrix} N_1 & N_2 \\ N_2^\# & N_1^\# \end{bmatrix}$ is given by $N = H$. Hence,
$$N_1 = \begin{bmatrix} 0 & \sqrt{\kappa_3}\end{bmatrix}, \quad N_2 = 0.$$
We now wish to calculate the Hamiltonian $H$ in terms of the quadrature variables defined such that
$$\begin{bmatrix} a \\ b \\ a^* \\ b^* \end{bmatrix} = \Phi \begin{bmatrix} q_p \\ p_p \\ q_o \\ p_o\end{bmatrix}$$
where the matrix $\Phi$ is given by
$$\Phi = \begin{bmatrix} 1 & \imath & 0 & 0 \\ 0 & 0 & 1 & \imath \\ 1 & -\imath & 0 & 0 \\ 0 & 0 & 1 & -\imath \end{bmatrix}.$$
Then we calculate
$$H = \frac{1}{2}\begin{bmatrix} q_p & p_p & q_o & p_o \end{bmatrix} R \begin{bmatrix} q_p \\ p_p \\ q_o \\ p_o \end{bmatrix} = \frac{1}{2}\begin{bmatrix} x_p^T & x_o^T\end{bmatrix} R \begin{bmatrix} x_p \\ x_o \end{bmatrix}$$
where the matrix $R$ is given by
$$R = \Phi^\dagger M \Phi = \begin{bmatrix} 0 & R_c \\ R_c^T & \omega_o I \end{bmatrix}, \quad R_c = \begin{bmatrix} -\Im(\epsilon)-\Im(\delta) & \Re(\epsilon)+\Re(\delta) \\ \Re(\epsilon)-\Re(\delta) & \Im(\epsilon)-\Im(\delta) \end{bmatrix}.$$
Hence,
$$H = \frac{1}{2}\omega_o x_o^T x_o + x_p^T R_c x_o.$$
Comparing this with equation (7), we require that
$$\begin{bmatrix} -\Im(\epsilon)-\Im(\delta) & \Re(\epsilon)+\Re(\delta) \\ \Re(\epsilon)-\Re(\delta) & \Im(\epsilon)-\Im(\delta) \end{bmatrix} = \alpha\beta^T \qquad (19)$$
and the condition (15) to be satisfied in order for the system shown in Figure 2 to provide an implementation of the augmented plant-observer system.

We first observe that the matrix on the right hand side of equation (19) is a rank one matrix and hence, we require that
$$\det\begin{bmatrix} -\Im(\epsilon)-\Im(\delta) & \Re(\epsilon)+\Re(\delta) \\ \Re(\epsilon)-\Re(\delta) & \Im(\epsilon)-\Im(\delta) \end{bmatrix} = |\delta|^2 - |\epsilon|^2 = 0.$$
That is, we require that
$$\frac{\sqrt{\kappa_1\kappa_2}\,\sin\theta}{1-\cos\theta} = |\epsilon|.$$
Note that the function $\frac{\sin\theta}{1-\cos\theta}$ takes on all values in $(-\infty,\infty)$ for $\theta\in(0,2\pi)$ and hence this condition can always be satisfied for a suitable choice of $\theta$. This can be seen in Figure 3, which shows a plot of the function $f(\theta)=\frac{\sin\theta}{1-\cos\theta}$.

Fig. 3. Plot of the function $f(\theta) = \sin\theta/(1-\cos\theta)$ for $\theta$ (rad) in $(0, 2\pi)$.
Furthermore, we will assume without loss of generality
that θ ∈ (0, π) and hence we obtain our first design equation
$$\frac{\sin\theta}{1-\cos\theta} = \frac{|\epsilon|}{\sqrt{\kappa_1\kappa_2}}. \qquad (20)$$
In practice, this ratio would be chosen in the range $\frac{|\epsilon|}{\sqrt{\kappa_1\kappa_2}} \in (0, 0.6)$ in order to ensure that the linearized model which is being used is valid.
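For a given ratio |ε|/√(κ1κ2), the design equation (20) can be solved in closed form using the identity sin θ/(1 − cos θ) = cot(θ/2). The short sketch below (an illustration only; the numerical values are arbitrary) returns the corresponding θ in (0, π):

import numpy as np

def beamsplitter_angle(eps_abs, kappa1, kappa2):
    """Solve sin(theta)/(1 - cos(theta)) = |eps|/sqrt(kappa1*kappa2) for theta in (0, pi)."""
    r = eps_abs / np.sqrt(kappa1 * kappa2)
    return 2.0 * np.arctan(1.0 / r)        # since sin(t)/(1 - cos(t)) = cot(t/2)

theta = beamsplitter_angle(eps_abs=0.3, kappa1=1.0, kappa2=1.0)   # example numbers
print(theta, np.sin(theta) / (1.0 - np.cos(theta)))               # reproduces the ratio 0.3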
We now construct the vectors α and β so that condition
(19) is satisfied. Indeed, we let
$$\alpha = \begin{bmatrix} 1 \\ \dfrac{\Re(\epsilon)-\Re(\delta)}{-\Im(\epsilon)-\Im(\delta)} \end{bmatrix}, \qquad \beta = \begin{bmatrix} -\Im(\epsilon)-\Im(\delta) \\ \Re(\epsilon)+\Re(\delta) \end{bmatrix}.$$
For these values of $\alpha$ and $\beta$, it is straightforward to verify that (19) is satisfied provided that $|\epsilon| = |\delta|$. With this value of $\beta$, we now calculate the quantity $e$ defined in (14) as follows:
$$e = \frac{4}{\kappa_3^2 + 16\omega_o^2} \begin{bmatrix} \kappa_3 & 4\omega_o \\ -4\omega_o & \kappa_3 \end{bmatrix} J\beta = \frac{4}{\kappa_3^2 + 16\omega_o^2} \begin{bmatrix} \kappa_3 & 4\omega_o \\ -4\omega_o & \kappa_3 \end{bmatrix} \begin{bmatrix} \Re(\epsilon)+\Re(\delta) \\ \Im(\epsilon)+\Im(\delta) \end{bmatrix}.$$
Then, the vector $K$ defining the quadrature measured by the homodyne detector is constructed according to the equation (16).

In the special case that $\omega_o = 0$, this reduces to
$$e = \frac{4}{\kappa_3} \begin{bmatrix} \Re(\epsilon)+\Re(\delta) \\ \Im(\epsilon)+\Im(\delta) \end{bmatrix}.$$
In terms of complex numbers $e = e(1) + \imath e(2)$, we can write this as
$$e = \frac{4}{\kappa_3}(\epsilon + \delta).$$
Then, in terms of complex numbers $K = K(1) + \imath K(2)$, the formula (16) becomes
$$K = \frac{e}{|e|^2} = \frac{\kappa_3(\epsilon+\delta)}{4(\epsilon+\delta)(\bar{\epsilon}+\bar{\delta})} = \frac{\kappa_3}{4(\bar{\epsilon}+\bar{\delta})}$$
where $\bar{(\cdot)}$ denotes the complex conjugate. Also, as noted in Section II, the steady state measurement noise intensity is given by
$$\frac{1}{\|e\|} = \frac{\kappa_3}{4|\epsilon+\delta|}$$
which approaches zero as κ3 → 0. However, this is at the
expense of increasingly slower convergence to steady state.
IV. CONCLUSIONS
In this paper, we have shown that a direct coupling observer for a linear quantum system can be implemented in the case that the observer can be measured using homodyne detection. This would allow the plant observer system to be constructed experimentally, and the performance of the observer could then be verified using the measured data.
REFERENCES
[1] Z. Miao and M. R. James, “Quantum observer for linear quantum
stochastic systems,” in Proceedings of the 51st IEEE Conference on
Decision and Control, Maui, December 2012.
[2] I. Vladimirov and I. R. Petersen, “Coherent quantum filtering for
physically realizable linear quantum plants,” in Proceedings of the
2013 European Control Conference, Zurich, Switzerland, July 2013,
arXiv:1301.3154.
[3] Z. Miao, L. A. D. Espinosa, I. R. Petersen, V. Ugrinovskii, and M. R.
James, “Coherent quantum observers for finite level quantum systems,”
in Australian Control Conference, Perth, Australia, November 2013.
[4] I. R. Petersen, “A direct coupling coherent quantum observer,” in Proceedings of the 2014 IEEE Multi-conference on Systems and Control,
Antibes, France, October 2014, also available arXiv 1408.0399.
[5] ——, “A direct coupling coherent quantum observer for a single
qubit finite level quantum system,” in Proceedings of 2014 Australian
Control Conference, Canberra, Australia, November 2014, also arXiv
1409.2594.
[6] ——, “Time averaged consensus in a direct coupled distributed
coherent quantum observer,” in Proceedings of the 2015 American
Control Conference, Chicago, IL, July 2015.
[7] ——, “Time averaged consensus in a direct coupled coherent quantum
observer network for a single qubit finite level quantum system,” in
Proceedings of the 10th ASIAN CONTROL CONFERENCE 2015, Kota
Kinabalu, Malaysia, May 2015.
[8] I. R. Petersen and E. H. Huntington, “A possible implementation
of a direct coupling coherent quantum observer,” in Proceedings of
2015 Australian Control Conference, Gold Coast, Australia, November
2015.
[9] M. R. James, H. I. Nurdin, and I. R. Petersen, “H ∞ control of
linear quantum stochastic systems,” IEEE Transactions on Automatic
Control, vol. 53, no. 8, pp. 1787–1803, 2008, arXiv:quant-ph/0703150.
[10] I. R. Petersen, “Quantum linear systems theory,” in Proceedings of the
19th International Symposium on Mathematical Theory of Networks
and Systems, Budapest, Hungary, July 2010.
[11] H. I. Nurdin, M. R. James, and A. C. Doherty, “Network synthesis
of linear dynamical quantum stochastic systems,” SIAM Journal on
Control and Optimization, vol. 48, no. 4, pp. 2686–2718, 2009.
[12] J. Gough and M. R. James, “The series product and its application to
quantum feedforward and feedback networks,” IEEE Transactions on
Automatic Control, vol. 54, no. 11, pp. 2530–2544, 2009.
[13] G. Zhang and M. James, “Direct and indirect couplings in coherent
feedback control of linear quantum systems,” IEEE Transactions on
Automatic Control, vol. 56, no. 7, pp. 1535–1550, 2011.
[14] H. Bachor and T. Ralph, A Guide to Experiments in Quantum Optics,
2nd ed. Weinheim, Germany: Wiley-VCH, 2004.
[15] L. Mandel and E. Wolf, Optical Coherence and Quantum Optics.
Cambridge, UK: Cambridge University Press, 1995.
[16] A. J. Shaiju and I. R. Petersen, “A frequency domain condition for the
physical realizability of linear quantum systems,” IEEE Transactions
on Automatic Control, vol. 57, no. 8, pp. 2033 – 2044, 2012.
Multi-point Codes from the GGS Curves
arXiv:1706.00313v3 [] 29 Jun 2017
Chuangqiang Hu · Shudi Yang
Received: date / Accepted: date
Abstract This paper is concerned with the construction of algebraic geometric codes defined from GGS curves. Describing bases for
the Riemann-Roch spaces associated with some totally ramified places is of significant use, as it
enables us to study multi-point AG codes. Along this line, we characterize
explicitly the Weierstrass semigroups and pure gaps by an exhaustive computation of the bases for Riemann-Roch spaces from GGS curves. Additionally,
we determine the floor of a certain type of divisor and investigate the properties of the resulting AG codes. Finally, we apply these results to find multi-point codes
with excellent parameters. As one of the examples, a presented code with
parameters [216, 190, ≥ 18] over F64 yields a new record.
Keywords Algebraic geometric codes · GGS curve · Weierstrass semigroup ·
Weierstrass pure gap
Mathematics Subject Classification (2010) 14H55 · 11R58
1 Introduction
In the early 1980s, Goppa [12] constructed algebraic geometric codes (AG
codes for short) from algebraic curves. Since then, the study of AG codes
has become an important instrument in the theory of error-correcting codes.
Roughly speaking, the parameters of an AG code are good when the underlying curve has many rational points with respect to its genus. For this reason
C. Hu
Yau Mathematical Science Center, Tsinghua University, Peking 100084, P.R. China
E-mail: huchq@mail2.sysu.edu.cn
S. Yang
School of Mathematical Sciences, Qufu Normal University, Shandong 273165, P.R.China
E-mail: yangshudi7902@126.com
maximal curves, that is curves attaining the Hasse-Weil upper bound, have
been widely investigated in the literature: for example the Hermitian curve
and its quotients, the Suzuki curve, the Klein quartic and the GK curve. In
this work we will study multi-point AG codes on the GGS curves.
In order to construct good AG codes we need to study Weierstrass semigroups and pure gaps. Their use dates back to the theory of one-point codes.
For example, the authors in [26,14,28,29] examined one-point codes from Hermitian curves and develop efficient methods to decode them. Korchmáros and
Nagy [19] computed the Weierstrass semigroup of a degree-three closed point
of the Hermitian curve. Matthews [24] determined the Weierstrass semigroup
of any r-tuple rational points on the quotient of the Hermitian curve. As is
known, Weierstrass pure gap is also a useful tool in coding theory. Garcia,
Kim and Lax improved the Goppa bound using arithmetical structure of the
Weierstrass gaps at one place in [9,10]. The concept of pure gaps of a pair
of points on a curve was initiated by Homma and Kim [15], and it had been
pushed forward by Carvalho and Torres [4] to several points. Maharaj and
Matthews [21] extended this construction by introducing the notion of the
floor of a divisor and obtained improved bounds on the parameters of AG
codes.
We mention that Maharaj [20] showed that Riemann-Roch spaces of divisors from fiber products of Kummer covers of the projective line, can be
decomposed as a direct sum of Riemann-Roch spaces of divisors of the projective line. Maharaj, Matthewsa and Pirsic [22] determined explicit bases for
large classes of Riemann-Roch spaces of the Hermitian function field. Along
this research line, Hu and Yang [16] gave other explicit bases for RiemannRoch spaces of divisors over Kummer extensions, which makes it convenient
to determine the pure gaps.
In this work, we focus our attention on the GGS curves, which are maximal
curves constructed by Garcia, Güneri and Stichtenoth [8] over Fq2n defined by
the equations
(
xq + x = y q+1 ,
2
yq − y = z m,
where q is a prime power and m = (q n + 1)/(q + 1) with n > 1 to be an
odd integer. Obviously the GGS curve is a generalization of the GK curve
initiated by Giulietti and Korchmáros [11] where we take n = 3. Recall that
Fanali and Giulietti [7] have investigated one-point AG codes over the GK
curves and obtained linear codes with better parameters with respect to those
known previously. Two-point and multi-point AG codes on the GK maximal
curves have been studied in [6] and [3], respectively. Bartoli, Montanucci and
Zini [2] examined one-point AG codes from the GGS curves. Inspired by the
above work and [16,5], here we will examine multi-point AG codes arising from
GGS curves. To be precise, an explicit basis for the corresponding RiemannRoch space is determined by constructing a related set of lattice points. The
properties of AG codes from GGS curves are also considered. Then the basis is
utilized to characterize the Weierstrass semigroups and pure gaps with respect
to several totally ramified places. In addition, we give an effective algorithm
to compute the floor of divisors. Finally, our results will lead us to find new
codes with better parameters in comparison with the existing codes in MinT’s
Tables [25]. A new record-giving [216, 190, > 18]-code over F64 is presented as
one of the examples.
The remainder of the paper is organized as follows. Section 2 focuses on the
construction of bases for the Riemann-Roch space from GGS curves. Section
3 studies the properties of the related AG codes. In Section 4 we determine
the Weierstrass semigroups and the pure gaps. Section 5 devotes to the floor
of divisors from GGS curves. Finally, in Section 6 we employ the results in the
previous sections to construct multi-point codes with excellent parameters.
2 Bases for Riemann-Roch spaces from GGS curves
Throughout this paper, we always let q be a prime power and n > 1 be an
odd integer. The GGS curve GGS(q, n) over Fq2n is defined by the equations
(
xq + x = y q+1 ,
(1)
2
yq − y = z m,
where m = (q n +1)/(q +1). The genus of GGS(q, n) is (q −1)(q n+1 +q n −q 2 )/2
and there are q 2n+2 − q n+3 + q n+2 + 1 rational places, see [8] for more details.
Especially when n = 3, the equation (1) gives the well-known maximal curve
introduced by Giulietti and Korchmáros [11], the so-called GK curve, which
is not a subcover of the corresponding Hermitian curve.
Denote by Pα,β,γ the rational place
P of this curve except for the one centered at infinity P∞ . Take Qβ := αq +α=β q+1 Pα,β,0 where β ∈ Fq2 . Then
deg(Qβ ) = q. For later use, we write P0 := P0,0,0 and Q0 := P0 +P1 +· · ·+Pq−1 .
The following proposition describes some principle divisors from GGS curves.
Proposition 1 Let the curve GGS(q, n) be given in (1) and assume that αµ
with 0 6 µ < q are the solutions of xq + x = 0. Then we obtain
(1) div(x − αµ ) = m(q + 1)Pµ − m(q + 1)P∞ ,
(2) div(y − β)
P= mQβ − mqP∞ for β ∈ Fq2 ,
(3) div(z) = β∈F 2 Qβ − q 3 P∞ .
q
For convenience, we use Qν (0 6 ν 6 q 2 − 1) to represent the divisors Qβ
(β ∈ Fq2 ). In particular, Qν |ν=0 = Qβ |β=0 .
For a function field F , the Riemann-Roch vector space with respect to a
divisor G is defined by
n
o
L(G) = f ∈ F div(f ) + G > 0 ∪ {0}.
Let ℓ(G) be the dimension of L(G). From the famous Riemann-Roch Theorem,
we know that
ℓ(G) − ℓ(W − G) = 1 − g + deg(G),
where W is a canonical divisor and g is the genus of the associated curve.
Pq−1
P 2 −1
In this section, we consider a divisor G := µ=0
rµ Pµ + qν=1
sν Qν + tP∞
from the GGS curve GGS(q, n). Actually, we can show that the Riemann-Roch
space L(G) is generated by some elements, say Ei,j,k for some i, j, k, and the
number of such elements equals ℓ(G). To see this, we proceed as follows.
Let j = (j1 , j2 , · · · , jq−1 ) and k = (k1 , k2 , · · · , kq2 −1 ). For (i, j, k) ∈
q2 +q−1
Z
, we define
Ei,j,k := z
i
q−1
Y
µ=1
jµ
(x − αµ )
2
qY
−1
(y − βν )kν .
(2)
ν=1
Pq−1
Pq2 −1
Set |j| = µ=1 jµ and |k| = ν=1 kν . Here and thereafter, we denote |v| to
be the sum of all the coordinates of a given vector v. By Proposition 1, one
can compute the divisor of Ei,j,k :
div(Ei,j,k ) =iP0 +
q−1
X
(i + m(q + 1)jµ )Pµ +
µ=1
2
qX
−1
(i + mkν )Qν
ν=1
− q 3 i + m(q + 1)|j| + mq|k| P∞ .
(3)
For later use, we denote by ⌊x⌋ the largest integer not greater thanx and
α
if
by ⌈x⌉ the smallest integer not less than x. It is easy to show that j =
β
and only if 0 6 βj − α < β, where β ∈ N and α ∈ Z.
Put r = (r0 , r1 , · · · , rq−1 ) and s = (s1 , s2 , · · · , sq2 −1 ). Let us define a set
2
of lattice points for (r, s, t) ∈ Zq +q ,
n
Ωr,s,t := (i, j, k) i + r0 > 0,
0 6 i + m(q + 1)jµ + rµ < m(q + 1) for µ = 1, · · · , q − 1,
or equivalently,
0 6 i + mkν + sν < m for ν = 1, · · · , q 2 − 1,
o
q 3 i + m(q + 1)|j| + mq|k| 6 t ,
n
Ωr,s,t := (i, j, k) i + r0 > 0,
−i − rµ
for µ = 1, · · · , q − 1,
jµ =
m(q + 1)
−i − sν
kν =
for ν = 1, · · · , q 2 − 1,
m
Multi-point Codes from the GGS Curves
5
o
q 3 i + m(q + 1)|j| + mq|k| 6 t .
(4)
The following lemma is crucial for the proof of our key result. However,
the proof of this lemma is technical, and will be completed later.
Lemma 1 The number of lattice points in Ωr,s,t can be expressed as:
#Ωr,s,t = 1 − g + t + |r| + q|s|,
o
n
for t > 2g − 1 − q 3 w, where w = min 06µ6q−1 rµ , sν .
16ν6q2 −1
Pq−1
Pq2 −1
Let G :=
µ=0 rµ Pµ +
ν=1 sν Qν + tP∞ . It is trivial that deg(G) =
|r| + q|s| + t. Now we can easily prove the main result of this section.
Pq−1
P 2 −1
sν Qν + tP∞ . The elements Ei,j,k
Theorem 1 Let G := µ=0
rµ Pµ + qν=1
with (i, j, k) ∈ Ωr,s,t constitute a basis for the Riemann-Roch space L(G). In
particular ℓ(G) = #Ωr,s,t .
Proof Let (i, j, k) ∈ Ωr,s,t . It follows from the definition that Ei,j,k ∈ L(G),
Pq−1
Pq2 −1
where G =
µ=0 rµ Pµ +
ν=1 sν Qν + tP∞ . From Equation (3), we have
vP0 (Ei,j,k ) = i, which indicates that the valuation of Ei,j,k at the rational
place P0 uniquely depends on i. Since lattice points in Ωr,s,t provide distinct
values of i, the elements Ei,j,k are linearly independent of each other, with
(i, j, k) ∈ Ωr,s,t . In order to indicate that they constitute a basis for L(G),
the only thing is to prove that
ℓ(G) = #Ωr,s,t .
For the case of r0 sufficiently large, it follows from the Riemann-Roch Theorem
and Lemma 1 that
ℓ(G) = 1 − g + deg(G)
= 1 − g + |r| + q|s| + t = #Ωr,s,t .
This implies that L(G) is spanned by elements Ei,j,k with (i, j, k) in the set
Ωr,s,t .
For the general case, we choose r0′ > r0 large enough and set G′ :=
Pq−1
Pq2 −1
′
r0 P0 + µ=1 rµ Pµ + ν=1 sν Qν + tP∞ , r ′ = (r0′ , r1 , · · · , rq−1 ). From above
argument, we know that the elements Ei,j,k with (i, j, k) ∈ Ωr′ ,s,t span the
whole space of L(G′ ). Remember that L(G) is a linear subspace of L(G′ ),
which can be written as
o
n
L(G) = f ∈ L(G′ ) vP0 (f ) > −r0 .
Thus, we choose f ∈ L(G) and suppose that
X
ai Ei,j,k ,
f=
(i,j,k)∈Ωr′ ,s,t
6
Chuangqiang Hu, Shudi Yang
since f ∈ L(G′ ). The valuation of f at P0 is vP0 (f ) = minai 6=0 {i}. Then the
inequality vP0 (f ) > −r0 gives that, if ai 6= 0, then i > −r0 . Equivalently, if
i < −r0 , then ai = 0. From the definition of Ωr,s,t and Ωr′ ,s,t , we get that
X
f=
ai Ei,j,k .
i,j,k∈Ωr,s,t
Then the theorem follows.
⊓
⊔
We now turn to prove Lemma 1 which requires a series of results listed as
follows.
Definition 1 Let (a1 , · · · , ak ) be a sequence of positive integers such that
the greatest common divisor is 1. Define di = gcd(a1 , · · · , ai ) and Ai =
{a1 /di , · · · , ai /di } for i = 1, · · · , k. Let d0 = 0. Let Si be the semigroup generated by Ai . If ai /di ∈ Si−1 for i = 2, · · · , k, we call the sequence (a1 , · · · , ak )
telescopic. A semigroup is called telescopic if it is generated by a telescopic
sequence.
Lemma 2 (Lemma 6.4, [18]) If (a1 , · · · , ak ) is telescopic and M ∈ Sk ,
then there exist uniquely determined non-negative integers 0 6 xi < di−1 /di
for i = 2, · · · , k, such that
M=
k
X
xi ai .
i=1
We call this representation the normal representation of M .
Lemma 3 (Lemma 6.5, [18]) For the semigroup generated by the telescopic
sequence (a1 , · · · , ak ) we have
lg (Sk ) =
k
X
(di−1 /di − 1)ai ,
i=1
g(Sk ) = (lg (Sk ) + 1)/2,
where lg (Sk ) and g(Sk ) denote the largest gap and the number of gaps of Sk ,
respectively.
Lemma 4 Let m = (q n + 1)/(q + 1), g = (q − 1)(q n+1 + q n − q 2 )/2 for an odd
integer n > 1. Let t ∈ Z. Consider the lattice point set Ψ (t) defined by
n
o
(α, β, γ) 0 6 α < m, 0 6 β 6 q, γ > 0, q 3 α + mqβ + m(q + 1)γ 6 t ,
If t > 2g − 1, then Ψ (t) has cardinality
#Ψ (t) = 1 − g + t.
Proof Let a1 = q 3 , a2 = mq, a3 = m(q + 1). It is easily verified that the
sequence (a1 , a2 , a3 ) is telescopic. By Lemma 2 every element M in S3 has
a unique representation M = a1 α + a2 β + a3 γ, where S3 is the semigroup
generated by (a1 , a2 , a3 ). One obtains from Lemma 3 that
lg (S3 ) = (q − 1)(q n + 1)(q + 1) − q 3 ,
1
1
g(S3 ) = (lg (S3 ) + 1) = (q − 1)(q n+1 + q n − q 2 ) = g.
2
2
It follows that the set Ψ (t) has cardinality 1 − g + t provided that t > 2g − 1 =
lg (S3 ), which finishes the proof.
⊓
⊔
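For the smallest case q = 2 and n = 3 (so m = 3 and g = 10), the gap count of the semigroup generated by (q^3, mq, m(q+1)) = (8, 6, 9) can be checked directly; the short script below is only a sanity check of Lemmas 3 and 4 and plays no role in the proofs:

# Numerical check of the gap structure of the telescopic semigroup (illustrative only).
q, n = 2, 3
m = (q**n + 1) // (q + 1)
g = (q - 1) * (q**(n+1) + q**n - q**2) // 2
gens = [q**3, m*q, m*(q+1)]                      # the telescopic generators (a1, a2, a3)

bound = 2 * g + max(gens)
reachable = {0}
for a in gens:
    for s in sorted(reachable):                  # add all multiples of a to existing sums
        x = s + a
        while x <= bound:
            reachable.add(x)
            x += a
gaps = [x for x in range(1, 2*g) if x not in reachable]
print(len(gaps) == g, max(gaps) == 2*g - 1)      # expects: True True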
From Lemma 4, we get the number of lattice points in Ω0,0,t .
Lemma 5 If t > 2g − 1, then the number of lattice points in Ω0,0,t is
#Ω0,0,t = 1 − g + t.
Proof Note that
n
Ω0,0,t := (i, j, k) i > 0,
−i
for µ = 1, · · · , q − 1,
jµ =
m(q + 1)
−i
for ν = 1, · · · , q 2 − 1,
kν =
m
o
q 3 i + m(q + 1)|j| + mq|k| 6 t .
(5)
Set i := α + m(β + (q + 1)γ) with 0 6 α < m, 0 6 β 6 q and γ > 0. Then
Equation (5) gives that
n
o
Ω0,0,t ∼
= (α, β, γ) 0 6 α < m, 0 6 β 6 q, γ > 0, q 3 α + mqβ + (q n + 1)γ 6 t .
∼ B means that two lattice point sets A,
Here and thereafter, the notation A =
B are bijective. Thus the assertion #Ω0,0,t = 1 − g + t is derived from Lemma
4.
⊓
⊔
Lemma 6 The lattice point set Ωr,s,t as defined above is symmetric with respect to r0 , r1 , · · · , rq−1 and s1 , · · · , sq2 −1 , respectively. In other words, we
q2 −1
q−1
have #Ωr,s,t = #Ωr′ ,s′ ,t , where the sequences ri i=0 and si i=1 are equal
q2 −1
q−1
to ri′ i=0 and s′i i=1 up to permutation, respectively.
Proof Recall that Ωr,s,t is defined by
n
Ωr,s,t := (i′ , j ′ , k′ ) i′ + r0 > 0,
′
−i − rµ
jµ′ =
for µ = 1, · · · , q − 1,
m(q + 1)
8
Chuangqiang Hu, Shudi Yang
−i′ − sν
=
m
for ν = 1, · · · , q 2 − 1,
o
q 3 i′ + m(q + 1)|j ′ | + mq|k′ | 6 t ,
kν′
′
where j ′ = (j1′ , · · · , jq−1
) and k′ = (k1′ , · · · , kq′ 2 −1 ). It is important to write
′
i = i + m(q + 1)l with 0 6 i < m(q + 1). Let jµ′ = jµ − l for µ > 1,
kν′ = kν − (q + 1)l for ν > 1. Then
n
Ωr,s,t ∼
= (i, l, j, k) i + m(q + 1)l > −r0 , 0 6 i < m(q + 1),
−i − rµ
jµ =
for µ = 1, · · · , q − 1,
m(q + 1)
−i − sν
for ν = 1, · · · , q 2 − 1,
kν =
m
o
q 3 i + m(q + 1) l + |j| + mq|k| 6 t ,
where j = (j1 , · · · , jq−1
) and k = (k1 , · · · , kq2 −1 ). The first inequality in Ωr,s,t
−i − r0
. So we write l = j0 + ι with ι > 0. Then
gives that l > j0 :=
m(q + 1)
n
Ωr,s,t ∼
= (i, ι, j0 , j, k) 0 6 i < m(q + 1), ι > 0,
−i − rµ
jµ =
for µ = 0, 1, · · · , q − 1,
m(q + 1)
−i − sν
for ν = 1, · · · , q 2 − 1,
kν =
m
o
q 3 i + m(q + 1) j0 + ι + |j| + mq|k| 6 t .
The right hand side means that the number of the lattice points does not
depend on the order of rµ , 0 6 µ 6 q − 1, and the order of sν , 1 6 ν 6 q 2 − 1,
which concludes the desired assertion.
⊓
⊔
Lemma 7 Let r = (r0 , r1 , · · · , rq−1 ) and s = (s1 , s2 , · · · , sq2 −1 ). The following equality holds:
#Ωr,s,t = #Ω0,s,t + |r|,
where rµ > 0 for µ > 0, sν > 0 for ν > 1, and t > 2g − 1.
Proof Let us take the sets Ωr,s,t and Ωr′ ,s,t into consideration, where r ′ =
(0, r1 , · · · , rq−1 ). It follows from the definition that the complement set ∆ :=
Ωr,s,t \Ωr′ ,s,t is given by
n
(i, j, k) − r0 6 i < 0,
−i − rµ
jµ =
for µ = 1, · · · , q − 1,
m(q + 1)
Multi-point Codes from the GGS Curves
−i − sν
kν =
m
9
for ν = 1, · · · , q 2 − 1,
o
q 3 i + m(q + 1)|j| + mq|k| 6 t .
It is trivial that ∆ = ∅ if r0 = 0. To determine the cardinality of ∆ with
r0 > 0, we denote i := α + m(β + (q + 1)γ) with α, β, γ satisfying 0 6 α < m,
0 6 β 6 q and γ 6 −1. Then jµ 6 −γ for µ > 1, kν 6 −β − (q + 1)γ for ν > 1.
A straightforward computation shows
q 3 i + m(q + 1)|j| + mq|k| 6 q 3 α + mqβ + m(q + 1)γ
6 q 3 (m − 1) + mq 2 − m(q + 1) = 2g − 1.
So the last inequality in ∆ always holds for all t > 2g − 1, which means that
the cardinality of ∆ is determined by the first inequality, that is #∆ = r0 .
Then we must have
#Ωr,s,t = #Ωr′ ,s,t + r0 ,
whenever r0 > 0. Repeating the above routine and using Lemma 6, we get
#Ωr,s,t = #Ω0,s,t + |r|,
where r = (r0 , r1 , · · · , rq−1 ).
⊓
⊔
Lemma 8 Let s = (s1 , s2 , · · · , sq2 −1 ). The following identity holds:
#Ω0,s,t = #Ω0,0,t + q|s|,
where si > 0 for i = 1, 2, · · · , q 2 − 1 and t > 2g − 1.
Proof For convenience, let us denote r := (s0 , s0 , · · · , s0 ) to be the q-tuple
with all entries equal s0 , where s0 > 0, and write Ωr,s,t as Γs0 ,(s1 ,··· ,sq2 −1 ),t .
To get the desired conclusion, we first claim that
#Γs0 ,(s1 ,··· ,sq2 −1 ),t = #Γs′0 ,(s′1 ,··· ,s′ 2
q −1
),t ,
(6)
q2 −1
q2 −1
where the sequence si i=0 is equal to s′i i=0 up to permutation.
Note that Γs0 ,(s1 ,··· ,sq2 −1 ),t is equivalent to
n
(i′ , j ′ , k1′ , · · · , kq′ 2 −1 ) i′ + s0 > 0,
0 6 i′ + m(q + 1)j ′ + s0 < m(q + 1),
0 6 i′ + mkν′ + sν < m for ν = 1, · · · , q 2 − 1,
o
q 3 i′ + m(q + 1)(q − 1)j ′ + mq|k′ | 6 t ,
Pq2 −1
where |k′ | = ν=1 kν′ . By setting i′ := i + mκ where 0 6 i < m and kν′ :=
kν − κ for ν > 1, we obtain
n
Γs0 ,(s1 ,··· ,sq2 −1 ),t ∼
= (i, κ, j ′ , k1 , · · · , kq2 −1 ) 0 6 i < m, i + mκ + s0 > 0,
10
Chuangqiang Hu, Shudi Yang
0 6 i + mκ + m(q + 1)j ′ + s0 < m(q + 1),
0 6 i + mkν + sν < m for ν = 1, · · · , q 2 − 1,
o
q 3 i + m(q + 1)(q − 1)j ′ + mq(κ + |k|) 6 t .
−i − s0
.
Put κ := k0 + ε, where ε := −(q + 1)j0 + η, 0 6 η < q + 1 and k0 :=
m
One gets that 0 6 i + mk0 + s0 < m, which leads to 0 6 i + mκ + m(q + 1)j0 −
mη+s0 < m. So the inequality 0 6 i+mκ+m(q+1)j0 +s0 < m+mη 6 m(q+1)
holds because ε = −(q + 1)j0 + η. Thus we must have j0 = j ′ 6 0. Therefore
n
Γs0 ,(s1 ,··· ,sq2 −1 ),t ∼
= (i, j0 , η, k0 , k1 , · · · , kq2 −1 ) 0 6 i < m, j0 6 0,
0 6 η < q + 1,
−i − sν
kν =
for ν = 0, 1, · · · , q 2 − 1,
m
o
q 3 i − m(q + 1)j0 + mq(k0 + η + |k|) 6 t .
The right hand side means that the lattice points do not depend on the order
of sν with 0 6 ν 6 q 2 − 1, by observing that kν is determined by sν . In other
words, we have shown that the number of lattice points in Γs0 ,(s1 ,··· ,sq2 −1 ),t
does not depend on the order of sν with 0 6 ν 6 q 2 − 1, concluding the claim
we presented by (6). So it follows from (6) and Lemma 7 that
#Γ0,(s1 ,s2 ,··· ,sq2 −1 ),t = #Γs1 ,(0,s2 ,··· ,sq2 −1 ),t
= #Γ0,(0,s2 ,··· ,sq2 −1 ),t + qs1 .
By repeatedly using Lemma 7, we get
#Γ0,(s1 ,s2 ,··· ,sq2 −1 ),t = #Γ0,(0,0,··· ,0),t + q(s1 + s2 + · · · + sq2 −1 ),
concluding the desired formula #Ω0,s,t = #Ω0,0,t + q|s|.
⊓⊔
With the above preparations, we are now in a position to give the proof of
Lemma 1.
Proof [Proof of Lemma 1] By taking w := min{ r_µ, s_ν : 0 ≤ µ ≤ q−1, 1 ≤ ν ≤ q²−1 }, we obtain from the definition that Ω_{r,s,t} is equivalent to Ω_{r′,s′,t′}, where r′ = (r_0 − w, ..., r_{q−1} − w), s′ = (s_1 − w, ..., s_{q²−1} − w) and t′ = t + q³w. Hence #Ω_{r,s,t} = #Ω_{r′,s′,t′}. On the other hand, by observing that r_µ − w ≥ 0, s_ν − w ≥ 0 and t′ ≥ 2g − 1, we establish from Lemmas 5, 7 and 8 that

#Ω_{r′,s′,t′} = #Ω_{0,s′,t′} + |r′|
             = #Ω_{0,0,t′} + q|s′| + |r′|
             = 1 − g + t′ + q|s′| + |r′|
             = 1 − g + t + q|s| + |r|.

It then follows that

#Ω_{r,s,t} = 1 − g + t + q|s| + |r|,

completing the proof of Lemma 1. ⊓⊔
We finish this section with a result that allows us to give a new form of the basis for our Riemann-Roch space L(G) with G = Σ_{µ=0}^{q−1} r_µ P_µ + Σ_{ν=1}^{q²−1} s_ν Q_ν + tP_∞. Denote λ := (λ_1, ..., λ_{q−1}) and γ := (γ_1, ..., γ_{q²−1}). For (u, λ, γ) ∈ Z^{q²+q−1}, we define

Λ_{u,λ,γ} := τ^u ∏_{µ=1}^{q−1} f_µ^{λ_µ} ∏_{ν=1}^{q²−1} h_ν^{γ_ν},

where τ := z^{q^{n−3}}/(x − α_0), f_µ := (x − α_µ)/(x − α_0) for µ ≥ 1, and h_ν := (y − β_ν)/(y − β_0) for ν ≥ 1. The divisor of Λ_{u,λ,γ} is computed from Proposition 1 as

div(Λ_{u,λ,γ}) = Σ_{µ=1}^{q−1} (q^{n−3}u + m(q+1)λ_µ − m|γ|) P_µ + Σ_{ν=1}^{q²−1} (q^{n−3}u + mγ_ν) Q_ν
                − ((m(q+1) − q^{n−3})u + m(q+1)|λ| + m|γ|) P_0 + uP_∞.
There is a close relationship between the elements Λu,λ,γ and Ei,j,k explored as follows.
Corollary 1 Let G := Σ_{µ=0}^{q−1} r_µ P_µ + Σ_{ν=1}^{q²−1} s_ν Q_ν + tP_∞. Then the elements Λ_{u,λ,γ} with (u, λ, γ) ∈ Θ_{r,s,t} form a basis for the Riemann-Roch space L(G), where the set Θ_{r,s,t} is given by

{ (u, λ, γ) : u ≥ −t,
  0 ≤ q^{n−3}u + mγ_ν + s_ν < m for ν = 1, ..., q²−1,
  0 ≤ q^{n−3}u + m(q+1)λ_µ − m|γ| + r_µ < m(q+1) for µ = 1, ..., q−1,
  (m(q+1) − q^{n−3})u + m(q+1)|λ| + m|γ| ≤ r_0 }.    (7)

In addition we have #Θ_{r,s,t} = #Ω_{r,s,t}.
Proof It suffices to prove that the set { Λ_{u,λ,γ} : (u, λ, γ) ∈ Θ_{r,s,t} } equals the set { E_{i,j,k} : (i, j, k) ∈ Ω_{r,s,t} }. In fact, for fixed (u, λ, γ) ∈ Z^{q²+q−1}, we obtain that Λ_{u,λ,γ} equals E_{i,j,k} with

i = −(m(q+1) − q^{n−3})u − m(q+1)|λ| − m|γ|,
j_µ = u + |λ| + λ_µ for µ = 1, ..., q−1,
k_ν = (q+1)(u + |λ|) + |γ| + γ_ν for ν = 1, ..., q²−1.

Conversely, if we set

u = −q³i − m(q+1)|j| − mq|k|,
λ_µ = q²i + q^{n−1}|j| + m|k| + j_µ for µ = 1, ..., q−1,
γ_ν = (q+1)(i + q^{n−3}|j|) + q^{n−2}|k| + k_ν for ν = 1, ..., q²−1,

then E_{i,j,k} is exactly the element Λ_{u,λ,γ}. Therefore, if we restrict (i, j, k) to Ω_{r,s,t}, then (u, λ, γ) lies in Θ_{r,s,t}, and vice versa. This completes the proof of the claim and hence of this corollary. ⊓⊔
In the following, we will demonstrate an interesting property of #Ωr,s,t for
GK curves with a specific vector s.
Corollary 2 Let n = 3 and the vectors r, s be given by

r := (r_0, r_1, ..., r_{q−1}),
s := (s′_1, s′_2, ..., s′_{q²−1}) = (s_1, ..., s_1, s_2, ..., s_2, ..., s_{q−1}, ..., s_{q−1}),

where each value s_µ is repeated q+1 times. Then the lattice point set Ω_{r,s,t} is symmetric with respect to r_0, r_1, ..., r_{q−1}, t. In other words, we have #Ω_{r,s,t} = #Ω_{r′,s,t′}, where the sequence (r_i)_{i=0}^{q} is equal to (r′_i)_{i=0}^{q} up to permutation by putting r_q := t and r′_q := t′.
Proof Denote r := (r_0, ṙ) = (r_0, r_1, ..., r_{q−1}) and Ω_{(r_0,ṙ),s,t} := Ω_{r,s,t}. By Lemma 6, it suffices to prove that

#Ω_{(r_0,ṙ),s,t} = #Θ_{(r_0,ṙ),s,t} = #Ω_{(t,ṙ),s,r_0}.

The first identity follows directly from Corollary 1. Applying Corollary 1 again gives the set Θ_{(r_0,ṙ),s,t} as

{ (u, λ, γ) : u + t ≥ 0,
  0 ≤ u + mγ_ν + s′_ν < m for ν = 1, ..., q²−1,
  0 ≤ u + m(q+1)λ_µ − m|γ| + r_µ < m(q+1) for µ = 1, ..., q−1,
  q³u + m(q+1)|λ| + m|γ| ≤ r_0 }.

From our assumption, it is obvious that |γ| is divisible by q+1. So if we take i := u, j_µ := λ_µ − |γ|/(q+1) for µ ≥ 1 and k_ν := γ_ν for ν ≥ 1, then Θ_{(r_0,ṙ),s,t} is equivalent to

{ (i, j, k) : i + t ≥ 0,
  0 ≤ i + mk_ν + s′_ν < m for ν = 1, ..., q²−1,
  0 ≤ i + m(q+1)j_µ + r_µ < m(q+1) for µ = 1, ..., q−1,
  q³i + m(q+1)|j| + mq|k| ≤ r_0 }.

The last set is exactly Ω_{(t,ṙ),s,r_0} by definition. Hence the second identity is just shown, completing the whole proof. ⊓⊔
3 The AG codes from GGS curves
This section settles the properties of AG codes from GGS curves. Generally
speaking, there are two classical ways of constructing AG codes associated
with divisors D and G, where G is a divisor of arbitrary function field F
and D := Q1 + · · · + QN is another divisor of F such that Q1 , · · · , QN are
pairwise distinct rational places, each not belonging to the support of G. One
construction is based on the Riemann-Roch space L(G),

C_L(D, G) := { (f(Q_1), ..., f(Q_N)) : f ∈ L(G) } ⊆ F_q^N.

The other one depends on the space of differentials Ω(G − D),

C_Ω(D, G) := { (res_{Q_1}(η), ..., res_{Q_N}(η)) : η ∈ Ω(G − D) }.

It is well known that the codes C_L(D, G) and C_Ω(D, G) are dual to each other. Further, C_Ω(D, G) has parameters [N, k_Ω, d_Ω] with k_Ω = N − k and d_Ω ≥ deg(G) − (2g − 2), where k = ℓ(G) − ℓ(G − D) is the dimension of C_L(D, G). If moreover 2g − 2 < deg(G) < N, then

k_Ω = N + g − 1 − deg(G).

We refer the reader to [26] for more information.
In this section, we will study the linear code C_L(D, G) with D := Σ_{α,β,γ, γ≠0} P_{α,β,γ} and G := Σ_{µ=0}^{q−1} r_µ P_µ + Σ_{ν=1}^{q²−1} s_ν Q_ν + tP_∞. The length of C_L(D, G) is

N := deg(D) = q^{n+2}(q^n − q + 1) − q³.

It is well known that the dimension of C_L(D, G) is given by

dim C_L(D, G) = ℓ(G) − ℓ(G − D).    (8)

Set R := N + 2g − 2. If deg(G) > R, we deduce from the Riemann-Roch Theorem and (8) that

dim C_L(D, G) = (1 − g + deg(G)) − (1 − g + deg(G − D)) = deg(G) − deg(G − D) = N,

which implies that C_L(D, G) is trivial. So we only consider the case 0 ≤ deg(G) ≤ R.
Now, we use the following lemmas to calculate the dual of CL (D, G).
Lemma 9 (Proposition 2.2.10, [27]) Let τ be an element of the function
field of the curve X such that vPi (τ ) = 1 for all rational places Pi contained
in the divisor D. Then the dual of CL (D, G) is
CL (D, G)⊥ = CL (D, D − G + div(dτ ) − div(τ )).
Lemma 10 (Proposition 2.2.14, [27]) Suppose that G1 and G2 are divisors
with G1 = G2 + div(ρ) for some ρ ∈ F \{0} and supp G1 ∩ supp D = supp G2 ∩
supp D = ∅. Let N := deg(D) and ̺ := (ρ(P1 ), · · · , ρ(PN )) with Pi ∈ D.
Then the codes CL (D, G1 ) and CL (D, G2 ) are equivalent and
CL (D, G2 ) = ̺ · CL (D, G1 ).
Theorem 2 Let A := (q^n + 1)(q − 1) − 1, B := mq²(q^n − q³) + (q^n + 1)(q² − 1) − 1 and ρ := 1 + Σ_{i=1}^{(n−3)/2} z^{(q^n+1)(q−1) Σ_{j=1}^{i} q^{2j}}. Then the dual code of C_L(D, G) is given as follows.
(1) The dual of C_L(D, G) is represented as

C_L(D, G)^⊥ = ϱ · C_L(D, Σ_{µ=0}^{q−1} (A − r_µ)P_µ + Σ_{ν=1}^{q²−1} (A − s_ν)Q_ν + (B − t)P_∞),

where ϱ := (ρ(P_{α_1,β_1,γ_1}), ..., ρ(P_{α_N,β_N,γ_N})) with P_{α_i,β_i,γ_i} ∈ D.
(2) In particular, for n = 3, we have ρ = 1 and

C_L(D, G)^⊥ = C_L(D, Σ_{µ=0}^{q−1} (A − r_µ)P_µ + Σ_{ν=1}^{q²−1} (A − s_ν)Q_ν + (B − t)P_∞).
Proof Define

H := { z ∈ F*_{q^{2n}} : ∃ y ∈ F_{q^{2n}} with y^{q²} − y = z^m }.

Consider the element τ := ∏_{γ∈H} (z − γ). Then τ is a prime element for all places P_{α,β,γ} in D and its divisor is

div(τ) = Σ_{γ∈H} div(z − γ) = D − deg(D)P_∞,

where D = Σ_{α,β,γ, γ≠0} P_{α,β,γ} and N = deg(D) = q^{n+2}(q^n − q + 1) − q³. Moreover, by the same discussion as in the proof of Lemma 2 in [1], we have

τ = 1 + Σ_{i=0}^{k−1} w^{Σ_{j=0}^{i} q^{2j} + Σ_{j=0}^{k−1} q^{2j+1}} + Σ_{i=0}^{k−1} w^{Σ_{j=0}^{i} q^{2j+1}},

where n = 2k + 1 (note that n > 1 is odd) and w = z^{(q^n+1)(q−1)}. Then a straightforward computation shows

dτ = w^{Σ_{j=0}^{k−1} q^{2j+1}} (1 + Σ_{i=1}^{k−1} w^{Σ_{j=1}^{i} q^{2j}}) dw
   = w^{(q^n−q)/(q²−1)} (1 + Σ_{i=1}^{k−1} w^{Σ_{j=1}^{i} q^{2j}}) dw,   with dw = −z^{(q^n+1)(q−1)−1} dz.

Let ρ := 1 + Σ_{i=1}^{k−1} w^{Σ_{j=1}^{i} q^{2j}} and denote its divisor by div(ρ). Set

A := m(q^n − q) + (q^n + 1)(q − 1) − 1,    S := q³A − 2g + 2.

Since div(dz) = (2g−2)P_∞ (see Lemma 3.8 of [13]), it follows from Proposition 1 that

div(dτ) = A · div(z) + div(dz) + div(ρ)
        = A Σ_{β∈F_{q²}} Q_β − (q³A − 2g + 2) P_∞ + div(ρ)
        = A Σ_{µ=0}^{q−1} P_µ + A Σ_{ν=1}^{q²−1} Q_ν − SP_∞ + div(ρ).

Let η := dτ/τ be a Weil differential. The divisor of η is

div(η) = div(dτ) − div(τ)
       = A Σ_{µ=0}^{q−1} P_µ + A Σ_{ν=1}^{q²−1} Q_ν − D + (deg(D) − S) P_∞ + div(ρ).

By writing B := deg(D) − S = mq²(q^n − q³) + (q^n + 1)(q² − 1) − 1, we establish from Lemma 9 that the dual of C_L(D, G) is

C_L(D, G)^⊥ = C_L(D, D − G + div(η))
            = C_L(D, Σ_{µ=0}^{q−1} (A − r_µ)P_µ + Σ_{ν=1}^{q²−1} (A − s_ν)Q_ν + (B − t)P_∞ + div(ρ)).

Denote ϱ := (ρ(P_{α_1,β_1,γ_1}), ..., ρ(P_{α_N,β_N,γ_N})) with P_{α_i,β_i,γ_i} ∈ D. Then we deduce the first statement from Lemma 10. The second statement then follows immediately. ⊓⊔
Theorem 3 Suppose that 0 ≤ deg(G) ≤ R. Then the dimension of C_L(D, G) is given by

dim C_L(D, G) = #Ω_{r,s,t}          if 0 ≤ deg(G) < N,
dim C_L(D, G) = N − #Ω^⊥_{r,s,t}    if N ≤ deg(G) ≤ R,

where Ω^⊥_{r,s,t} := Ω_{r′,s′,B−t} with r′ = (A − r_0, ..., A − r_{q−1}) and s′ = (A − s_1, ..., A − s_{q²−1}).
Proof For 0 ≤ deg(G) < N, we have by Theorem 1 and Equation (8) that

dim C_L(D, G) = ℓ(G) = #Ω_{r,s,t}.

For N ≤ deg(G) ≤ R, Theorem 2 yields that

dim C_L(D, G) = N − dim C_L(D, G)^⊥ = N − #Ω^⊥_{r,s,t}.

So the proof is completed. ⊓⊔
We mention that the minimum distance of CL (D, G) follows from Goppa
bounds and Theorem 3. Techniques for improving the Goppa bounds will be
dealt with in the next two sections.
4 Weierstrass semigroups and pure gaps
In this section, we will characterize the Weierstrass semigroups and pure gaps over GGS curves, which will enable us to obtain improved bounds on the parameters of AG codes.
We first briefly introduce some corresponding definitions and notations [23]. For an arbitrary function field F, let Q_1, ..., Q_l be distinct rational places of F; then the Weierstrass semigroup H(Q_1, ..., Q_l) is defined by

{ (s_1, ..., s_l) ∈ N_0^l : ∃ f ∈ F with (f)_∞ = Σ_{i=1}^{l} s_i Q_i },

and the Weierstrass gap set G(Q_1, ..., Q_l) is defined by N_0^l \ H(Q_1, ..., Q_l), where N_0 := N ∪ {0} denotes the set of nonnegative integers.
Homma and Kim [15] introduced the concept of pure gap set with respect to a pair of rational places. This was generalized by Carvalho and Torres [4] to several rational places, denoted by G_0(Q_1, ..., Q_l), which is given by

{ (s_1, ..., s_l) ∈ N^l : ℓ(G) = ℓ(G − Q_j) for 1 ≤ j ≤ l, where G = Σ_{i=1}^{l} s_i Q_i }.

In addition, they showed that (s_1, ..., s_l) is a pure gap at (Q_1, ..., Q_l) if and only if

ℓ(s_1 Q_1 + ... + s_l Q_l) = ℓ((s_1 − 1)Q_1 + ... + (s_l − 1)Q_l).
A useful way to calculate the Weierstrass semigroups is given as follows, which can be regarded as an easy generalization of a result due to Kim [17].

Lemma 11 For rational places Q_1, ..., Q_l with 1 ≤ l ≤ r, the set H(Q_1, ..., Q_l) is given by

{ (s_1, ..., s_l) ∈ N_0^l : ℓ(G) ≠ ℓ(G − Q_j) for 1 ≤ j ≤ l, where G = Σ_{i=1}^{l} s_i Q_i }.
In the rest of this section, we will restrict our study to the divisor G := Σ_{µ=0}^{q−1} r_µ P_µ + tP_∞ and denote r = (r_0, r_1, ..., r_{q−1}). Our main task is to determine the Weierstrass semigroups and the pure gaps at totally ramified places P_0, P_1, ..., P_{q−1}, P_∞. Before we proceed, some auxiliary results are presented in the following. Denote Ω_{r,0,t} by Ω_{(r_0,r_1,...,r_{q−1}),t} for the clarity of description.
Lemma 12 For the lattice point set Ω_{(r_0,r_1,...,r_{q−1}),t}, we have the following assertions.
(1) #Ω_{(r_0,r_1,...,r_{q−1}),t} = #Ω_{(r_0−1,r_1,...,r_{q−1}),t} + 1 if and only if

Σ_{µ=1}^{q−1} ⌈(r_0 − r_µ)/(m(q+1))⌉ + q(q−1)⌈r_0/m⌉ ≤ (t + q³r_0)/(m(q+1)).

(2) #Ω_{(r_0,r_1,...,r_{q−1}),t} = #Ω_{(r_0,r_1,...,r_{q−1}),t−1} + 1 if and only if

Σ_{µ=1}^{q−1} ⌈(q^{n−3}t − r_µ)/(m(q+1))⌉ + q(q−1)⌈q^{n−3}t/m⌉ ≤ t + (r_0 − q^{n−3}t)/(m(q+1)).
Proof Consider two lattice point sets Ω_{(r_0,r_1,...,r_{q−1}),t} and Ω_{(r_0−1,r_1,...,r_{q−1}),t}, which are given in Equation (4). Clearly, the latter one is a subset of the former one, and the complement set Φ of Ω_{(r_0−1,r_1,...,r_{q−1}),t} in Ω_{(r_0,r_1,...,r_{q−1}),t} is given by

Φ := { (i, j, k) : i + r_0 = 0,
       j_µ = ⌈(−i − r_µ)/(m(q+1))⌉ for µ = 1, ..., q−1,
       k_ν = ⌈−i/m⌉ for ν = 1, ..., q²−1,
       q³i + m(q+1)|j| + mq|k| ≤ t },

where j = (j_1, j_2, ..., j_{q−1}) and k = (k_1, k_2, ..., k_{q²−1}). It follows immediately that the set Φ is not empty if and only if

−q³r_0 + m(q+1) Σ_{µ=1}^{q−1} ⌈(r_0 − r_µ)/(m(q+1))⌉ + mq(q²−1)⌈r_0/m⌉ ≤ t,

which concludes the first assertion.
For the second assertion, we obtain from Corollary 1 that the difference between #Ω_{(r_0,r_1,...,r_{q−1}),t} and #Ω_{(r_0,r_1,...,r_{q−1}),t−1} is exactly the same as the one between #Θ_{r,0,t} and #Θ_{r,0,t−1}. In an analogous way, we define Ψ as the complementary set of Θ_{r,0,t−1} in Θ_{r,0,t}, namely

Ψ := { (u, λ, γ) : u = −t,
       0 ≤ q^{n−3}u + mγ_ν < m for ν = 1, ..., q²−1,
       0 ≤ q^{n−3}u + m(q+1)λ_µ − m|γ| + r_µ < m(q+1) for µ = 1, ..., q−1,
       r_0 − (m(q+1) − q^{n−3})u − m(q+1)|λ| − m|γ| ≥ 0 }.

The set Ψ is not empty if and only if

m(q+1) Σ_{µ=1}^{q−1} ⌈(q^{n−3}t − r_µ)/(m(q+1))⌉ + mq(q²−1)⌈q^{n−3}t/m⌉ ≤ r_0 + (q^n + 1 − q^{n−3})t,

completing the proof of the second assertion. ⊓⊔
We are now ready for the main results of this section dealing with Weierstrass semigroups and pure gap sets, which play an interesting role in finding codes with good parameters. For simplicity, we define

W_j(r_l, t, l) := Σ_{i=0, i≠j}^{l} ⌈(r_j − r_i)/(m(q+1))⌉ + (q − 1 − l)⌈r_j/(m(q+1))⌉ + q(q−1)⌈r_j/m⌉ − (t + q³r_j)/(m(q+1)),

for j = 1, ..., l, and

W_∞(r_l, t, l) := Σ_{i=1}^{l} ⌈(q^{n−3}t − r_i)/(m(q+1))⌉ + (q − 1 − l)⌈q^{n−3}t/(m(q+1))⌉ + q(q−1)⌈q^{n−3}t/m⌉ − t − (r_0 − q^{n−3}t)/(m(q+1)),

where r_l = (r_0, r_1, ..., r_l).
Theorem 4 Let P_0, P_1, ..., P_l be rational places as defined previously. For 0 ≤ l < q, the following assertions hold.
(1) The Weierstrass semigroup H(P_0, P_1, ..., P_l) is given by

{ (r_0, r_1, ..., r_l) ∈ N_0^{l+1} : W_j(r_l, 0, l) ≤ 0 for 0 ≤ j ≤ l }.

(2) The Weierstrass semigroup H(P_0, P_1, ..., P_l, P_∞) is given by

{ (r_0, r_1, ..., r_l, t) ∈ N_0^{l+2} : W_j(r_l, t, l) ≤ 0 for 0 ≤ j ≤ l and j = ∞ }.

(3) The pure gap set G_0(P_0, P_1, ..., P_l) is given by

{ (r_0, r_1, ..., r_l) ∈ N^{l+1} : W_j(r_l, 0, l) > 0 for 0 ≤ j ≤ l }.

(4) The pure gap set G_0(P_0, P_1, ..., P_l, P_∞) is given by

{ (r_0, r_1, ..., r_l, t) ∈ N^{l+2} : W_j(r_l, t, l) > 0 for 0 ≤ j ≤ l and j = ∞ }.

Proof The desired conclusions follow from Theorem 1, Lemmas 6, 11 and 12. ⊓⊔
The following corollary states the characterizations of the Weierstrass semigroup and gaps at only one point.
Corollary 3 With notation as before, we have the following statements.
(1) H(P_0) = { k ∈ N_0 : (q−1)⌈k/(m(q+1))⌉ + q(q−1)⌈k/m⌉ ≤ q³k/(m(q+1)) }.
(2) Let α, β, γ ∈ Z. Then α + m(β + (q+1)γ) ∈ N is a gap at P_0 if and only if exactly one of the following two conditions is satisfied:
(i) α = 0, 0 < β ≤ q − 1, 0 ≤ γ ≤ q − 1 − β;
(ii) 0 < α < m, 0 ≤ β ≤ q, 0 ≤ γ ≤ q² − 1 − β + β/(q+1) − q³α/(m(q+1)) and β − β/(q+1) − q³α/(m(q+1)) ≤ q² − 1.
Proof The first statement is an immediate consequence of Theorem 4 (1).
We now focus on the second statement. It follows from Theorem 4 (3) that the Weierstrass gap set at P_0 is

G(P_0) = { k ∈ N : (q−1)⌈k/(m(q+1))⌉ + q(q−1)⌈k/m⌉ > q³k/(m(q+1)) }.

Let k ∈ G(P_0) and write k = α + m(β + (q+1)γ), where 0 ≤ α < m, 0 ≤ β ≤ q and γ ≥ 0. We find that the case α + mβ = 0 does not occur, since otherwise we have k = m(q+1)γ and

(q−1)⌈k/(m(q+1))⌉ + q(q−1)⌈k/m⌉ − q³k/(m(q+1)) = (q−1)γ + q(q−1)(q+1)γ − q³γ = −γ ≤ 0,

which contradicts the fact that k ∈ G(P_0). So α + mβ ≠ 0. There are two possibilities.
(i) If α = 0, then 0 < β ≤ q. In this case, k = mβ + m(q+1)γ is a gap at P_0 if and only if

(q−1)⌈k/(m(q+1))⌉ + q(q−1)⌈k/m⌉ > q³k/(m(q+1)),

or equivalently,

(q−1)(γ + 1) + q(q−1)(β + (q+1)γ) > q³γ + q³β/(q+1),

leading to the first condition 0 ≤ γ ≤ q − 1 − β and 0 < β ≤ q − 1.
(ii) If 0 < α < m, then 0 ≤ β ≤ q. In this case, we have similarly that k = α + mβ + m(q+1)γ is a gap at P_0 if and only if

(q−1)(γ + 1) + q(q−1)(1 + β + (q+1)γ) > q³γ + q³α/(m(q+1)) + q³β/(q+1),

which gives the second condition 0 ≤ γ ≤ q² − 1 − β + β/(q+1) − q³α/(m(q+1)). Note that

q² − 1 − β + β/(q+1) − q³α/(m(q+1)) ≥ q² − 1 − q − 1 ≥ 0.

The proof is finished. ⊓⊔
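A short Python sketch can be used to check Corollary 3 (1) numerically in the GK case q = 2, n = 3. The sketch assumes m = (q^n + 1)/(q + 1) (so m = 3) and genus g = 10, as in Example 1 below; these values are not restated here, so treat them as assumptions. It verifies that exactly g gaps occur at P_0 and that the largest gap is at most 2g − 1, as the Weierstrass gap theorem requires.

from math import ceil
from fractions import Fraction

# Assumed parameters for the GK curve (q = 2, n = 3): m = (q^n + 1)/(q + 1), genus g = 10.
q, n = 2, 3
m = (q**n + 1) // (q + 1)          # m = 3
g = 10

def in_H_P0(k):
    """Membership test of Corollary 3 (1): k is a pole number at P_0 iff
    (q-1)*ceil(k/(m(q+1))) + q(q-1)*ceil(k/m) <= q^3 * k / (m(q+1))."""
    lhs = (q - 1) * ceil(Fraction(k, m * (q + 1))) + q * (q - 1) * ceil(Fraction(k, m))
    rhs = Fraction(q**3 * k, m * (q + 1))
    return lhs <= rhs

gaps = [k for k in range(1, 2 * g + 1) if not in_H_P0(k)]
print(gaps)                # expected: 10 gaps, the largest at most 2g - 1 = 19
assert len(gaps) == g
assert max(gaps) <= 2 * g - 1

Running this sketch yields ten gaps (1, 2, 3, 4, 5, 7, 10, 11, 13, 19), consistent with g = 10.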
5 The floor of divisors
In this section, we investigate the floor of divisors from GGS curves. The
significance of this concept is that it provides a useful tool for evaluating
parameters of AG codes. We begin with general function fields.
Definition 2 ([22]) Given a divisor G of a function field F/Fq with ℓ(G) > 0,
the floor of G is the unique divisor G′ of F of minimum degree such that
L(G) = L(G′ ). The floor of G will be denoted by ⌊G⌋.
The floor of a divisor can be used to characterize Weierstrass semigroups
and pure gap sets. Let G = s1 Q1 + · · · + sl Ql . It is not hard to see that
(s1 , · · · , sl ) ∈ H(Q1 , · · · , Ql ) if and only if ⌊G⌋ = G. Moreover, (s1 , · · · , sl ) is
a pure gap at (Q1 , · · · , Ql ) if and only if
⌊G⌋ = ⌊(s1 − 1)Q1 + · · · + (sl − 1)Ql ⌋.
Maharaj, Matthews and Pirsic in [22] defined the floor of a divisor and
characterized it by the basis of the Riemann-Roch space.
Theorem 5 ([22]) Let G be a divisor of a function field F/F_q and let b_1, ..., b_t ∈ L(G) be a spanning set for L(G). Then

⌊G⌋ = − gcd{ div(b_i) : i = 1, ..., t }.
The next theorem extends Theorem 3.4 of [4] by determining the lower bound of minimum distance in a more general situation.

Theorem 6 ([22]) Assume that F/F_q is a function field with genus g. Let D := Q_1 + ... + Q_N, where Q_1, ..., Q_N are distinct rational places of F, and let G := H + ⌊H⌋ be a divisor of F such that H is an effective divisor whose support does not contain any of the places Q_1, ..., Q_N. Then the minimum distance of C_Ω(D, G) satisfies

d_Ω ≥ 2 deg(H) − (2g − 2).
The following theorem provides a characterization of the floor over GGS curves, which can be viewed as a generalization of Theorem 3.9 in [22] related to Hermitian function fields.

Theorem 7 Let H := Σ_{µ=0}^{q−1} r_µ P_µ + Σ_{ν=1}^{q²−1} s_ν Q_ν + tP_∞ be a divisor of the GGS curve given by (1). Then the floor of H is given by

⌊H⌋ = Σ_{µ=0}^{q−1} r′_µ P_µ + Σ_{ν=1}^{q²−1} s′_ν Q_ν + t′P_∞,

where
r′_0 = max{ −i : (i, j, k) ∈ Ω_{r,s,t} },
r′_µ = max{ −i − m(q+1)j_µ : (i, j, k) ∈ Ω_{r,s,t} } for µ = 1, ..., q−1,
s′_ν = max{ −i − mk_ν : (i, j, k) ∈ Ω_{r,s,t} } for ν = 1, ..., q²−1,
t′ = max{ q³i + m(q+1)|j| + mq|k| : (i, j, k) ∈ Ω_{r,s,t} }.
Proof Let H = Σ_{µ=0}^{q−1} r_µ P_µ + Σ_{ν=1}^{q²−1} s_ν Q_ν + tP_∞. It follows from Theorem 1 that the elements E_{i,j,k} of Equation (2) with (i, j, k) ∈ Ω_{r,s,t} form a basis for the Riemann-Roch space L(H). Note that the divisor of E_{i,j,k} is

iP_0 + Σ_{µ=1}^{q−1} (i + m(q+1)j_µ) P_µ + Σ_{ν=1}^{q²−1} (i + mk_ν) Q_ν − (q³i + m(q+1)|j| + mq|k|) P_∞.

By Theorem 5, we get that

⌊H⌋ = − gcd{ div(E_{i,j,k}) : (i, j, k) ∈ Ω_{r,s,t} }.

The desired conclusion then follows. ⊓⊔
6 Examples of codes on GGS curves
In this section we treat several examples of codes to illustrate our results. The codes in the next example will give new records, with better parameters than the corresponding ones in MinT's tables [25].
Example 1 Now, we study codes arising from GK curves, that is, we let q = 2
and n = 3 in (1) given at the beginning of Section 2. This curve has 225
F64 -rational points and its genus is g = 10. Here we will study multi-point
AG codes from this curve by employing our previous results. Let us take
H = 3P0 + 4P1 + 11P∞ for example. Then it can be computed from Equation
(4) that the elements (−i, −i − m(q + 1)j1 , −i − mk1 , −i − mk2 , −i − mk3 , q 3 i +
m(q + 1)j1 + mq(k1 + k2 + k3 )), with (i, j1 , k1 , k2 , k3 ) ∈ Ω3,4,0,0,0,11 , are as
follows
( 3, 3, 0, 0, 0, −6 ),
( 2, 2, −1, −1, −1, 2 ),
( 1, 1, −2, −2, −2, 10 ),
( 0, 0, 0, 0, 0, 0 ),
(−1, −1, −1, −1, −1, 8 ),
(−3, −3, 0, 0, 0, 6 ),
(−6, 3, 0, 0, 0, 3 ),
(−7, 2, −1, −1, −1, 11 ),
(−9, 0, 0, 0, 0, 9 ).
So we obtain from Theorem 7 that ⌊H⌋ = 3P0 + 3P1 + 11P∞ . Let D be a
divisor consisting of N = 216 rational places away from the places P0 , P1 ,
Q1 , Q2 , Q3 and P∞ . According to Theorem 6, if we let G = H + ⌊H⌋ =
6P0 + 7P1 + 22P∞ , then the code CΩ (D, G) has minimum distance at least
2 deg(H) − (2g − 2) = 18. Since 2g − 2 < deg(G) < N , the dimension of
CΩ (D, G) is kΩ = N +g−1−deg(G) = 190. In other words, the code CΩ (D, G)
has parameters [216, 190, ≥ 18]. One can verify that our resulting code improves the minimum distance with respect to MinT's Tables. Moreover, C_Ω(D, G) is
equivalent to CL (D, G′ ), where G′ = 2P0 + P1 + 8Q1 + 8Q2 + 8Q3 + 4P∞ , and
its generating matrix can be determined by Theorem 2.
Additionally, we remark that more AG codes with excellent parameters
can be found by taking H = aP0 + bP1 + 7P∞ , where a, b ∈ {4, 5, 6} and
9 ≤ a + b ≤ 12. The floor of such H is computed to be ⌊H⌋ = aP_0 + bP_1 + 6P_∞. Let D be as before. If we take G = H + ⌊H⌋ = 2aP_0 + 2bP_1 + 13P_∞, then we can produce AG codes C_Ω(D, G) with parameters [216, 212 − 2a − 2b, ≥ 2a + 2b − 4].
All of these codes improve the records of the corresponding ones found on
MinT’s Tables.
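To make the bookkeeping in Example 1 easy to reproduce, here is a small Python sketch that recomputes the code length, dimension, and designed distance bound from the formulas quoted above (N = deg(D), k_Ω = N + g − 1 − deg(G), and d_Ω ≥ 2 deg(H) − (2g − 2) from Theorem 6). The helper names are ours, not the paper's, and the relation deg(⌊H⌋) = deg(H) − 1 used below is specific to this example.

# Parameters of the GK curve used in Example 1 (q = 2, n = 3, genus g = 10).
q, n, g = 2, 3, 10

# Length of C_L(D, G) / C_Omega(D, G): N = deg(D) = q^(n+2) * (q^n - q + 1) - q^3.
N = q**(n + 2) * (q**n - q + 1) - q**3
assert N == 216

def omega_code_params(deg_H):
    """[N, k, d >=] of C_Omega(D, G) for G = H + floor(H), where in Example 1
    H = 3P0 + 4P1 + 11Pinf and floor(H) = 3P0 + 3P1 + 11Pinf, so deg(floor(H)) = deg(H) - 1."""
    deg_G = 2 * deg_H - 1                 # deg(H) + deg(floor(H)) in this example
    k_omega = N + g - 1 - deg_G           # valid since 2g - 2 < deg(G) < N
    d_bound = 2 * deg_H - (2 * g - 2)     # Theorem 6
    return N, k_omega, d_bound

print(omega_code_params(3 + 4 + 11))      # expected (216, 190, 18)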
Example 2 Consider the curve GGS(q, n) of (1) with q = 2 and n = 5. This curve has 3969 F_{2^{10}}-rational points and its genus is g = 46. It follows from Theorem 4 that

{ (57, j, 3) : 1 ≤ j ≤ 3 } ⊆ G_0(P_0, P_1, P_∞).
Let D be a divisor consisting of N = 3960 rational places except P0 , P1 , Q1 ,
Q2 , Q3 and P∞ . Applying Theorem 3.4 of [4] (see also Theorem 1, [16]), if we
take G = 113P0 + 3P1 + 5P∞ , then the three-point code CΩ (D, G) has length
N = 3960, dimension N + g − 1 − deg(G) = 3884 and minimum distance at
least 36. Thus we produce an AG code with parameters [3960, 3884, ≥ 36].
Unfortunately, this F_{2^{10}}-code cannot be compared with the one on MinT's Tables because the alphabet size given is at most 256.
Acknowledgements This work is partially supported by China Postdoctoral Science Foundation Funded Project (Project No. 2017M611801). This work is also partially supported
by Guangdong Natural Science Foundation (Grant No. 2014A030313161) and the Natural
Science Foundation of Shandong Province of China (ZR2016AM04).
References
1. Abdón, M., Bezerra, J., Quoos, L.: Further examples of maximal curves. Journal of
Pure and Applied Algebra 213, 1192–1196 (2009)
2. Bartoli, D., Montanucci, M., Zini, G.: AG codes and AG quantum codes from the GGS
curve. arXiv:1703.03178 (2017)
3. Bartoli, D., Montanucci, M., Zini, G.: Multi point AG codes on the GK maximal curve.
Des. Codes Cryptogr (2017). DOI 10.1007/s10623-017-0333-9
4. Carvalho, C., Torres, F.: On Goppa codes and Weierstrass gaps at several points. Designs, Codes and Cryptography 35, 211–225 (2005)
5. Castellanos, A.S., Masuda, A.M., Quoos, L.: One-and two-point codes over Kummer
extensions. IEEE Transactions on Information Theory 62(9), 4867–4872 (2016)
6. Castellanos, A.S., Tizziotti, G.C.: Two-Point AG Codes on the GK Maximal Curves.
IEEE Transactions on Information Theory 62(2), 681–686 (2016)
7. Fanali, S., Giulietti, M.: One-Point AG Codes on the GK Maximal Curves. IEEE
Transactions on Information Theory 56(1), 202–210 (2010)
8. Garcia, A., Güneri, C., Stichtenoth, H.: A generalization of the Giulietti-Korchmáros
maximal curve. Advances in Geometry 10(3), 427–434 (2010)
9. Garcia, A., Kim, S.J., Lax, R.F.: Consecutive Weierstrass gaps and minimum distance
of Goppa codes. Journal of Pure and Applied Algebra 84, 199–207 (1993)
10. Garcia, A., Lax, R.F.: Goppa codes and Weierstrass gaps. In: Coding Theory and
Algebraic Geometry, pp. 33–42. Springer Berlin (1992)
11. Giulietti, M., Korchmáros, G.: A new family of maximal curves over a finite field.
Mathematische Annalen 343, 229–245 (2008)
12. Goppa, V.D.: Codes associated with divisors. Problemy Peredachi Informatsii 13, 33–39
(1977)
13. Güneri, C., Özdemiry, M., Stichtenoth, H.: The automorphism group of the generalized
Giulietti-Korchmáros function field. Advances in Geometry 13, 369–380 (2013)
14. Guruswami, V., Sudan, M.: Improved decoding of Reed-Solomon and algebraic-geometric codes. IEEE Transactions on Information Theory 45(6), 1757–1768 (1999)
15. Homma, M., Kim, S.J.: Goppa codes with Weierstrass pairs. Journal of Pure and
Applied Algebra 162, 273–290 (2001)
16. Hu, C., Yang, S.: Multi-point codes over Kummer extensions. Des. Codes Cryptogr
(2017). DOI 10.1007/s10623-017-0335-7
17. Kim, S.J.: On the index of the Weierstrass semigroup of a pair of points on a curve.
Archiv der Mathematik 62, 73–82 (1994)
18. Kirfel, C., Pellikaan, R.: The minimum distance of codes in an array coming from
telescopic semigroups. IEEE Transactions on Information Theory 41(6), 1720–1732
(1995)
19. Korchmáros, G., Nagy, G.: Hermitian codes from higher degree places. Journal of Pure
and Applied Algebra 217, 2371–2381 (2013)
20. Maharaj, H.: Code construction on fiber products of Kummer covers. IEEE Transactions
on Information Theory 50(9), 2169–2173 (2004)
21. Maharaj, H., Matthews, G.L.: On the floor and the ceiling of a divisor. Finite Fields
and Their Applications 12, 38–55 (2006)
22. Maharaj, H., Matthews, G.L., Pirsic, G.: Riemann-Roch spaces of the Hermitian function field with applications to algebraic geometry codes and low-discrepancy sequences.
Journal of Pure and Applied Algebra 195, 261–280 (2005)
23. Matthews, G.L.: The Weierstrass semigroup of an m-tuple of collinear points on a
Hermitian curve. Finite Fields and Their Applications pp. 12–24 (2004)
24. Matthews, G.L.: Weierstrass semigroups and codes from a quotient of the Hermitian
curve. Designs, Codes and Cryptography 37, 473–492 (2005)
25. MinT: Online database for optimal parameters of (t, m, s)-nets, (t, s)-sequences, orthogonal arrays, and linear codes. (Accessed on 2017-06-13.). URL http://mint.sbg.ac.at.
26. Stichtenoth, H.: Algebraic Function Fields and Codes, vol. 254. Springer-Verlag, Berlin,
Heidelberg (2009)
27. Stichtenoth, H.: Algebraic function fields and codes, vol. 254. Springer Science & Business Media (2009)
28. Yang, K., Kumar, P.V.: On the true minimum distance of Hermitian codes. In: Coding
Theory and Algebraic Geometry, pp. 99–107. Springer (1992)
29. Yang, K., Kumar, P.V., Stichtenoth, H.: On the weight hierarchy of geometric Goppa
codes. IEEE Transactions on Information Theory 40(3), 913–920 (1994)
Overcoming Exploration in Reinforcement Learning
with Demonstrations
arXiv:1709.10089v2 [cs.LG] 25 Feb 2018
Ashvin Nair^{1,2}, Bob McGrew^{1}, Marcin Andrychowicz^{1}, Wojciech Zaremba^{1}, Pieter Abbeel^{1,2}
Abstract— Exploration in environments with sparse rewards
has been a persistent problem in reinforcement learning (RL).
Many tasks are natural to specify with a sparse reward, and
manually shaping a reward function can result in suboptimal
performance. However, finding a non-zero reward is exponentially more difficult with increasing task horizon or action
dimensionality. This puts many real-world tasks out of practical
reach of RL methods. In this work, we use demonstrations
to overcome the exploration problem and successfully learn to
perform long-horizon, multi-step robotics tasks with continuous
control such as stacking blocks with a robot arm. Our method,
which builds on top of Deep Deterministic Policy Gradients and
Hindsight Experience Replay, provides an order of magnitude
of speedup over RL on simulated robotics tasks. It is simple
to implement and makes only the additional assumption that
we can collect a small set of demonstrations. Furthermore, our
method is able to solve tasks not solvable by either RL or
behavior cloning alone, and often ends up outperforming the
demonstrator policy.
I. INTRODUCTION
RL has found significant success in decision making for
solving games, so what makes it more challenging to apply
in robotics? A key difference is the difficulty of exploration,
which comes from the choice of reward function and complicated environment dynamics. In games, the reward function
is usually given and can be directly optimized. In robotics,
we often desire behavior to achieve some binary objective
(e.g., move an object to a desired location or achieve a certain
state of the system) which naturally induces a sparse reward.
Sparse reward functions are easier to specify and recent work
suggests that learning with a sparse reward results in learned
policies that perform the desired objective instead of getting
stuck in local optima [1], [2]. However, exploration in an
environment with sparse reward is difficult since with random
exploration, the agent rarely sees a reward signal.
The difficulty posed by a sparse reward is exacerbated
by the complicated environment dynamics in robotics. For
example, system dynamics around contacts are difficult to
model and induce a sensitivity in the system to small errors.
Many robotics tasks also require executing multiple steps
successfully over a long horizon, involve high dimensional
control, and require generalization to varying task instances.
These conditions further result in a situation where the agent
so rarely sees a reward initially that it is not able to learn at
all.
All of the above means that random exploration is not a
tenable solution. Instead, in this work we show that we can
use demonstrations as a guide for our exploration. To test our
1 OpenAI, 2 University of California, Berkeley.
method, we solve the problem of stacking several blocks at
a given location from a random initial state. Stacking blocks
has been studied before in the literature [3], [4] and exhibits
many of the difficulties mentioned: long horizons, contacts,
and requires generalizing to each instance of the task. We
limit ourselves to 100 human demonstrations collected via
teleoperation in virtual reality. Using these demonstrations,
we are able to solve a complex robotics task in simulation
that is beyond the capability of both reinforcement learning
and imitation learning.
The primary contribution of this paper is to show that
demonstrations can be used with reinforcement learning
to solve complex tasks where exploration is difficult. We
introduce a simple auxiliary objective on demonstrations, a
method of annealing away the effect of the demonstrations
when the learned policy is better than the demonstrations,
and a method of resetting from demonstration states that
significantly improves and speeds up training policies. By
effectively incorporating demonstrations into RL, we shortcircuit the random exploration phase of RL and reach
nonzero rewards and a reasonable policy early on in training.
Finally, we extensively evaluate our method against other
commonly used methods, such as initialization with learning
from demonstrations and fine-tuning with RL, and show that
our method significantly outperforms them.
II. RELATED WORK
Learning methods for decision making problems such as
robotics largely divide into two classes: imitation learning
and reinforcement learning (RL). In imitation learning (also
called learning from demonstrations) the agent receives behavior examples from an expert and attempts to solve a task
by copying the expert’s behavior. In RL, an agent attempts
to maximize expected reward through interaction with the
environment. Our work combines aspects of both to solve
complex tasks.
Imitation Learning: Perhaps the most common form of
imitation learning is behavior cloning (BC), which learns a
policy through supervised learning on demonstration stateaction pairs. BC has seen success in autonomous driving
[5], [6], quadcopter navigation [7], locomotion [8], [9]. BC
struggles outside the manifold of demonstration data. Dataset
Aggregation (DAGGER) augments the dataset by interleaving
the learned and expert policy to address this problem of
accumulating errors [10]. However, DAGGER is difficult to
use in practice as it requires access to an expert during all
of training, instead of just a set of demonstrations.
Fig. 1: We present a method using reinforcement learning to solve the task of block stacking shown above. The robot starts
with 6 blocks labelled A through F on a table in random positions and a target position for each block. The task is to move each
block to its target position. The targets are marked in the above visualization with red spheres which do not interact with the
environment. These targets are placed in order on top of block A so that the robot forms a tower of blocks. This is a complex,
multi-step task where the agent needs to learn to successfully manage multiple contacts to succeed. Frames from rollouts of
the learned policy are shown. A video of our experiments can be found at: http://ashvin.me/demoddpg-website
Fundamentally, BC approaches are limited because they
do not take into account the task or environment. Inverse
reinforcement learning (IRL) [11] is another form of imitation learning where a reward function is inferred from the
demonstrations. Among other tasks, IRL has been applied
to navigation [12], autonomous helicopter flight [13], and
manipulation [14]. Since our work assumes knowledge of a
reward function, we omit comparisons to IRL approaches.
Reinforcement Learning: Reinforcement learning methods have been harder to apply in robotics, but are heavily
investigated because of the autonomy they could enable.
Through RL, robots have learned to play table tennis [15],
swing up a cartpole, and balance a unicycle [16]. A renewal
of interest in RL cascaded from success in games [17], [18],
especially because of the ability of RL with large function
approximators (i.e., deep RL) to learn control from raw pixels.
Robotics has been more challenging in general but there
has been significant progress. Deep RL has been applied to
manipulation tasks [19], grasping [20], [21], opening a door
[22], and locomotion [23], [24], [25]. However, results have been attained predominantly in simulation, owing to high sample complexity that is typically caused by exploration challenges.
Robotic Block Stacking: Block stacking has been studied
from the early days of AI and robotics as a task that
encapsulates many difficulties of more complicated tasks we
want to solve, including multi-step planning and complex
contacts. SHRDLU [26] was one of the pioneering works,
but studied block arrangements only in terms of logic and
natural language understanding. More recent work on task
and motion planning considers both logical and physical
aspects of the task [27], [28], [29], but requires domainspecific engineering. In this work we study how an agent
can learn this task without the need of domain-specific
engineering.
One RL method, PILCO [16] has been applied to a simple
version of stacking blocks where the task is to place a
block on a tower [3]. Methods such as PILCO based on
learning forward models naturally have trouble modelling
the sharply discontinuous dynamics of contacts; although
they can learn to place a block, it is a much harder problem
to grasp the block in the first place. One-shot Imitation [4]
learns to stack blocks in a way that generalizes to new target
configurations, but uses more than 100,000 demonstrations
to train the system. A heavily shaped reward can be used
to learn to stack a Lego block on another with RL [30]. In
contrast, our method can succeed from fully sparse rewards
and handle stacking several blocks.
Combining RL and Imitation Learning: Previous work
has combined reinforcement learning with demonstrations.
Demonstrations have been used to accelerate learning on
classical tasks such as cart-pole swing-up and balance [31].
This work initialized policies and (in model-based methods)
initialized forward models with demonstrations. Initializing
policies from demonstrations for RL has been used for
learning to hit a baseball [32] and for underactuated swingup [33]. Beyond initialization, we show how to extract more
knowledge from demonstrations by using them effectively
throughout the entire training process.
Our method is closest to two recent approaches —
Deep Q-Learning From Demonstrations (DQfD) [34] and
DDPG From Demonstrations (DDPGfD) [2] which combine
demonstrations with reinforcement learning. DQfD improves
learning speed on Atari, including a margin loss which
encourages the expert actions to have higher Q-values than
all other actions. This loss can make improving upon the
demonstrator policy impossible which is not the case for
our method. Prior work has previously explored improving
beyond the demonstrator policy in simple environments by
introducing slack variables [35], but our method uses a
learned value to actively inform the improvement. DDPGfD
solves simple robotics tasks akin to peg insertion using
DDPG with demonstrations in the replay buffer. In contrast
to this prior work, the tasks we consider exhibit additional
difficulties that are of key interest in robotics: multi-step
behaviours, and generalization to varying goal states. While
previous work focuses on speeding up already solvable tasks,
we show that we can extend the state of the art in RL with
demonstrations by introducing new methods to incorporate
demonstrations.
III. BACKGROUND
A. Reinforcement Learning
We consider the standard Markov Decision Process framework for picking optimal actions to maximize rewards over
discrete timesteps in an environment E. We assume that the
environment is fully observable. At every timestep t, an agent
is in a state xt , takes an action at , receives a reward rt ,
and E evolves to state xt+1 . In reinforcement learning, the
agent must learn a policy a_t = π(x_t) to maximize expected returns. We denote the return by R_t = Σ_{i=t}^{T} γ^{(i−t)} r_i, where T is the horizon that the agent optimizes over and γ is a discount factor for future rewards. The agent's objective is to maximize the expected return from the start distribution, J = E_{r_i, s_i∼E, a_i∼π}[R_0].
A variety of reinforcement learning algorithms have been
developed to solve this problem. Many involve constructing
an estimate of the expected return from a given state after
taking an action:
Q^π(s_t, a_t) = E_{r_i, s_i∼E, a_i∼π}[R_t | s_t, a_t]    (1)
             = E_{r_t, s_{t+1}∼E}[r_t + γ E_{a_{t+1}∼π}[Q^π(s_{t+1}, a_{t+1})]]    (2)
We call Qπ the action-value function. Equation 2 is a
recursive version of equation 1, and is known as the Bellman equation. The Bellman equation allows for methods to
estimate Q that resemble dynamic programming.
B. DDPG
Our method combines demonstrations with one such
method: Deep Deterministic Policy Gradients (DDPG) [23].
DDPG is an off-policy model-free reinforcement learning
algorithm for continuous control which can utilize large
function approximators such as neural networks. DDPG
is an actor-critic method, which bridges the gap between
policy gradient methods and value approximation methods
for RL. At a high level, DDPG learns an action-value
function (critic) by minimizing the Bellman error, while
simultaneously learning a policy (actor) by directly maximizing the estimated action-value function with respect to
the parameters of the policy.
Concretely, DDPG maintains an actor function π(s) with
parameters θπ , a critic function Q(s, a) with parameters θQ ,
and a replay buffer R as a set of tuples (st , at , rt , st+1 )
for each transition experienced. DDPG alternates between
running the policy to collect experience and updating the
parameters. Training rollouts are collected with extra noise
for exploration: at = π(s) + N , where N is a noise process.
During each training step, DDPG samples a minibatch
consisting of N tuples from R to update the actor and critic
networks. DDPG minimizes the following loss L w.r.t. θQ
to update the critic:
y_i = r_i + γ Q(s_{i+1}, π(s_{i+1}))    (3)

L = (1/N) Σ_i (y_i − Q(s_i, a_i | θ_Q))²    (4)
The actor parameters θπ are updated using the policy
gradient:
∇_{θ_π} J = (1/N) Σ_i ∇_a Q(s, a | θ_Q)|_{s=s_i, a=π(s)} ∇_{θ_π} π(s | θ_π)|_{s_i}    (5)
To stabilize learning, the Q value in equation 3 is usually
computed using a separate network (called the target network) whose weights are an exponential average over time
of the critic network. This results in smoother target values.
Note that DDPG is a natural fit for using demonstrations. Since DDPG can be trained off-policy, we can use
demonstration data as off-policy training data. We also take
advantage of the action-value function Q(s, a) learned by
DDPG to better use demonstrations.
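For concreteness, the following PyTorch sketch shows one DDPG update in the spirit of Equations (3)-(5): a Bellman-error critic step, an actor step that ascends the critic, and an exponential-average target network. It is a minimal illustration rather than the authors' implementation; the network interfaces, `tau`, and the optimizer settings are placeholders.

import torch
import torch.nn.functional as F

def ddpg_update(actor, critic, target_actor, target_critic,
                actor_opt, critic_opt, batch, gamma=0.98, tau=0.05):
    s, a, r, s_next = batch  # tensors of shape (N, ...)

    # Critic: regress Q(s, a) toward y = r + gamma * Q'(s', pi'(s'))  (Eqs. 3-4).
    with torch.no_grad():
        y = r + gamma * target_critic(s_next, target_actor(s_next))
    critic_loss = F.mse_loss(critic(s, a), y)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor: maximize Q(s, pi(s)), i.e. follow the policy gradient of Eq. 5.
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Target networks: exponential moving average of the online weights.
    with torch.no_grad():
        for net, target in ((actor, target_actor), (critic, target_critic)):
            for p, p_t in zip(net.parameters(), target.parameters()):
                p_t.mul_(1.0 - tau).add_(tau * p)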
C. Multi-Goal RL
Instead of the standard RL setting, we train agents with
parametrized goals, which lead to more general policies
[36] and have recently been shown to make learning with
sparse rewards easier [1]. Goals describe the task we expect
the agent to perform in the given episode, in our case
they specify the desired positions of all objects. We sample
the goal g at he beginning of every episode. The function
approximators, here π and Q, take the current goal as an
additional input.
D. Hindsight Experience Replay (HER)
To handle varying task instances and parametrized goals,
we use Hindsight Experience Replay (HER) [1]. The key
insight of HER is that even in failed rollouts where no
reward was obtained, the agent can transform them into
successful ones by assuming that a state it saw in the rollout
was the actual goal. HER can be used with any off-policy
RL algorithm assuming that for every state we can find a
goal corresponding to this state (i.e. a goal which leads to a
positive reward in this state).
For every episode the agent experiences, we store it in
the replay buffer twice: once with the original goal pursued
in the episode and once with the goal corresponding to the
final state achieved in the episode, as if the agent intended
on reaching this state from the very beginning.
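A minimal sketch of this final-state relabeling strategy is given below; the transition layout and the `goal_from_state` helper are assumptions of the sketch, and other HER variants (e.g. sampling future states as goals) are not shown.

def store_episode_with_her(replay_buffer, episode, reward_fn, goal_from_state):
    """episode: list of (state, action, next_state, goal) tuples from one rollout.
    Each transition is stored twice: with the original goal, and with the goal
    corresponding to the final state achieved in the episode."""
    final_goal = goal_from_state(episode[-1][2])          # hindsight goal
    for state, action, next_state, goal in episode:
        # Original goal, original (possibly always negative) reward.
        replay_buffer.add(state, action, reward_fn(next_state, goal), next_state, goal)
        # Hindsight goal: the same behaviour now receives a positive signal.
        replay_buffer.add(state, action, reward_fn(next_state, final_goal),
                          next_state, final_goal)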
IV. M ETHOD
Our method combines DDPG and demonstrations in several ways to maximally use demonstrations to improve
learning. We describe our method below and evaluate these
ideas in our experiments.
A. Demonstration Buffer
First, we maintain a second replay buffer RD where we
store our demonstration data in the same format as R. In
each minibatch, we draw an extra ND examples from RD
to use as off-policy replay data for the update step. These
examples are included in both the actor and critic update.
This idea has been introduced in [2].
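As a sketch, each training minibatch can simply be the concatenation of N agent transitions and N_D demonstration transitions; the buffer interface below is an assumption, not taken from the paper.

import numpy as np

def sample_minibatch(replay_buffer, demo_buffer, n_agent=1024, n_demo=128):
    """Draw N transitions from the agent replay buffer R and N_D transitions from
    the demonstration buffer R_D; a mask marks which rows are demonstrations."""
    agent = replay_buffer.sample(n_agent)        # dict of arrays: s, a, r, s_next, g
    demo = demo_buffer.sample(n_demo)
    batch = {k: np.concatenate([agent[k], demo[k]], axis=0) for k in agent}
    batch["is_demo"] = np.concatenate(
        [np.zeros(n_agent, dtype=bool), np.ones(n_demo, dtype=bool)])
    return batch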
B. Behavior Cloning Loss
Second, we introduce a new loss computed only on the
demonstration examples for training the actor.
L_BC = Σ_{i=1}^{N_D} ||π(s_i | θ_π) − a_i||²    (6)
This loss is a standard loss in imitation learning, but we show
that using it as an auxiliary loss for RL improves learning
significantly. The gradient applied to the actor parameters θπ
is:
λ_1 ∇_{θ_π} J − λ_2 ∇_{θ_π} L_BC    (7)
(Note that we maximize J and minimize LBC .) Using this
loss directly prevents the learned policy from improving
significantly beyond the demonstration policy, as the actor is
always tied back to the demonstrations. Next, we show how
to account for suboptimal demonstrations using the learned
action-value function.
C. Q-Filter
We account for the possibility that demonstrations can be
suboptimal by applying the behavior cloning loss only to
states where the critic Q(s, a) determines that the demonstrator action is better than the actor action:
L_BC = Σ_{i=1}^{N_D} ||π(s_i | θ_π) − a_i||² · 1_{Q(s_i, a_i) > Q(s_i, π(s_i))}    (8)
The gradient applied to the actor parameters is as in equation
7. We label this method using the behavior cloning loss and
Q-filter “Ours” in the following experiments.
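A PyTorch sketch of the resulting actor objective (Equations 6-8) is shown below; `lambda1` and `lambda2` follow Equation 7, and the demonstration tensors `s_demo`, `a_demo` are assumed to come from the demonstration buffer. This is an illustration of the objective, not the authors' code.

import torch

def actor_loss_with_demos(actor, critic, s, s_demo, a_demo,
                          lambda1=1e-3, lambda2=1.0 / 128, use_q_filter=True):
    # RL term: maximize Q(s, pi(s)) over the full minibatch (i.e. maximize J).
    rl_term = critic(s, actor(s)).mean()

    # Behavior cloning term on demonstration pairs (Eq. 6).
    pi_demo = actor(s_demo)
    bc_per_sample = ((pi_demo - a_demo) ** 2).sum(dim=-1)

    if use_q_filter:
        # Q-filter (Eq. 8): only clone where the critic says the demo action is better.
        with torch.no_grad():
            mask = (critic(s_demo, a_demo) > critic(s_demo, pi_demo)).float().squeeze(-1)
        bc_term = (bc_per_sample * mask).sum()
    else:
        bc_term = bc_per_sample.sum()

    # Minimize the negative of Eq. 7: lambda1 * J - lambda2 * L_BC.
    return -(lambda1 * rl_term) + lambda2 * bc_term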
D. Resets to demonstration states
To overcome the problem of sparse rewards in very long
horizon tasks, we reset some training episodes using states
and goals from demonstration episodes. Restarts from within
demonstrations expose the agent to higher reward states during training. This method makes the additional assumption
that we can restart episodes from a given state, as is true in
simulation.
To reset to a demonstration state, we first sample a
demonstration D = (x0 , u0 , x1 , u1 , ...xN , uN ) from the set
of demonstrations. We then uniformly sample a state xi
from D. As in HER, we use the final state achieved in the
demonstration as the goal. We roll out the trajectory with the
given initial state and goal for the usual number of timesteps.
At evaluation time, we do not use this procedure.
We label our method with the behavior cloning loss, Qfilter, and resets from demonstration states as “Ours, Resets”
in the following experiments.
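The reset procedure can be sketched as follows; the episode format, the `reset_to_state` simulator hook, and the reset probability are assumptions of the sketch (resetting to an arbitrary state is only available in simulation, as noted above).

import random

def sample_demo_reset(demonstrations, goal_from_state):
    """Pick a demonstration D = (x_0, u_0, ..., x_N, u_N), a uniformly random state
    x_i from it, and use the final demonstrated state as the goal."""
    demo_states = random.choice(demonstrations)          # list of states x_0 ... x_N
    start_state = random.choice(demo_states)
    goal = goal_from_state(demo_states[-1])
    return start_state, goal

def begin_training_episode(env, demonstrations, goal_from_state, p_demo_reset=0.5):
    # With some probability, restart from a demonstration state instead of env.reset().
    # p_demo_reset is a placeholder; the paper does not use this procedure at evaluation time.
    if random.random() < p_demo_reset:
        start_state, goal = sample_demo_reset(demonstrations, goal_from_state)
        return env.reset_to_state(start_state, goal)     # assumed simulator hook
    return env.reset()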
V. EXPERIMENTAL SETUP
A. Environments
We evaluate our method on several simulated MuJoCo [37]
environments. In all experiments, we use a simulated 7-DOF
Fetch Robotics arm with parallel grippers to manipulate one
or more objects placed on a table in front of the robot.
The agent receives the positions of the relevant objects
on the table as its observations. The control for the agent is
continuous and 4-dimensional: 3 dimensions that specify the
desired end-effector position1 and 1 dimension that specifies
the desired distance between the robot fingers. The agent is
controlled at 50Hz frequency.
We collect demonstrations in a virtual reality environment.
The demonstrator sees a rendering of the same observations
as the agent, and records actions through a HTC Vive
interface at the same frequency as the agent. We have
the option to accept or reject a demonstration; we only
accept demonstrations we judge to be mostly correct. The
demonstrations are not optimal. The most extreme example
is the “sliding” task, where only 7 of the 100 demonstrations
are successful, but the agent still sees rewards for these
demonstrations with HER.
B. Training Details
To train our models, we use Adam [38] as the optimizer with learning rate 10^{-3}. We use N = 1024, N_D = 128, λ_1 = 10^{-3}, λ_2 = 1.0/N_D. The discount factor γ is 0.98. We use 100 demonstrations to initialize R_D. The function approximators π and Q are deep neural networks with ReLU activations and L2 regularization with the coefficient 5×10^{-3}. The
final activation function for π is tanh, and the output value
is scaled to the range of each action dimension. To explore
during training, we sample random actions uniformly within
the action space with probability 0.1 at every step, and the
noise process N is uniform over ±10% of the maximum
value of each action dimension. Task-specific information,
including network architectures, is provided in the next
section.
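The exploration scheme described above can be written out as a short NumPy sketch (action bounds normalized to [-1, 1]; the 0.1 random-action probability and the ±10% uniform noise are the values quoted in the text, while the helper names are ours).

import numpy as np

def exploration_action(policy, state, goal, action_dim, rng,
                       random_eps=0.1, noise_frac=0.1):
    """Epsilon-uniform exploration plus uniform action noise, for actions in [-1, 1]."""
    if rng.random() < random_eps:
        # Sample a completely random action from the (normalized) action space.
        return rng.uniform(-1.0, 1.0, size=action_dim)
    action = policy(state, goal)
    # Uniform noise over +/- 10% of the maximum value of each action dimension.
    action = action + rng.uniform(-noise_frac, noise_frac, size=action_dim)
    return np.clip(action, -1.0, 1.0)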
C. Overview of Experiments
We perform three sets of experiments. In Sec. VI, we
provide a comparison to previous work. In Sec. VII we
solve block stacking, a difficult multi-step task with complex
contacts that the baselines struggle to solve. In Sec. VIII
we do ablations of our own method to show the effect of
individual components.
VI. COMPARISON WITH PRIOR WORK
A. Tasks
We first show the results of our method on the simulated
tasks presented in the Hindsight Experience Replay paper
[1]. We apply our method to three tasks:
1) Pushing. A block placed randomly on the table must
be moved to a target location on the table by the robot
(fingers are blocked to avoid grasping).
2) Sliding. A puck placed randomly on the table must be
moved to a given target location. The target is outside
the robot’s reach so it must apply enough force that
the puck reaches the target and stops due to friction.
3) Pick-and-place. A block placed randomly on the table
must be moved to a target location in the air. Note
1 In the 10cm x 10cm x 10cm cube around the current gripper position.
[Figure 2: three panels (Pushing, Sliding, Pick and Place) plotting Success Rate against Timesteps (0M-10M) for Ours, HER, and BC.]
Fig. 2: Baseline comparisons on tasks from [1]. Frames from the learned policy are shown above each task. Our method significantly outperforms the baselines. On the right plot, the HER baseline always fails.
that the original paper used a form of initializing from
favorable states to solve this task. We omit this for our
experiment but discuss and evaluate the initialization
idea in an ablation.
As in the prior work, we use a fully sparse reward for this
task. The agent is penalized if the object is not at its goal
position:
r_t = 0 if ||x_i − g_i|| < δ, and r_t = −1 otherwise,    (9)
where the threshold δ is 5cm.
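Equation (9) amounts to the following check (a sketch; the 5 cm threshold is the value quoted above):

import numpy as np

def sparse_reward(achieved_position, goal_position, threshold=0.05):
    """Fully sparse reward of Eq. (9): 0 within 5 cm of the goal, -1 otherwise."""
    distance = np.linalg.norm(achieved_position - goal_position)
    return 0.0 if distance < threshold else -1.0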
B. Results
Fig. 2 compares our method to HER without demonstrations and behavior cloning. Our method is significantly faster
at learning these tasks than HER, and achieves significantly
better policies than behavior cloning does. Measuring the
number of timesteps to get to convergence, we exhibit a 4x
speedup over HER in pushing, a 2x speedup over HER in
sliding, and our method solves the pick-and-place task while
HER baseline cannot solve it at all.
The pick-and-place task showcases the shortcoming of RL
in sparse reward settings, even with HER. In pick-and-place,
the key action is to grasp the block. If the robot could manage
to grasp it a small fraction of the time, HER discovers
how to achieve goals in the air and reinforces the grasping
behavior. However, grasping the block with random actions
is extremely unlikely. Our method pushes the policy towards
demonstration actions, which are more likely to succeed.
In the HER paper, HER solves the pick-and-place task
by initializing half of the rollouts with the gripper grasping
the block. With this addition, pick-and-place becomes the
easiest of the three tasks tested. This initialization is similar
in spirit to our initialization idea, but takes advantage of the
fact that pick-and-place with any goal can be solved starting
from a block grasped at a certain location. This is not always
true (for example, if there are multiple objects to be moved)
and finding such a keyframe for other tasks would be difficult, requiring some engineering and sacrificing autonomy.
Instead, our method guides the exploration towards grasping
the block through demonstrations. Providing demonstrations
does not require expert knowledge of the learning system,
which makes it a more compelling way to provide prior
information.
VII. MULTI-STEP EXPERIMENTS
A. Block Stacking Task
To show that our method can solve more complex tasks
with longer horizon and sparser reward, we study the task
of block stacking in a simulated environment as shown in
Fig. 1 with the same physical properties as the previous
experiments. Our experiments show that our approach can
solve the task in full and learn a policy to stack 6 blocks
with demonstrations and RL. To measure and communicate
various properties of our method, we also show experiments
on stacking fewer blocks, a subset of the full task.
We initialize the task with blocks at 6 random locations
x1 ...x6 . We also provide 6 goal locations g1 ...g6 . To form a
tower of blocks, we let g1 = x1 and gi = gi−1 + (0, 0, 5cm)
for i ∈ 2, 3, 4, 5.
By stacking N blocks, we mean N blocks reach their
target locations. Since the target locations are always on top
of x1 , we start with the first block already in position. So
stacking N blocks involves N −1 pick-and-place actions. To
solve stacking N , we allow the agent 50 ∗ (N − 1) timesteps.
This means that to stack 6 blocks, the robot executes 250
actions or 5 seconds.
We recorded 100 demonstrations to stack 6 blocks, and
use subsets of these demonstrations as demonstrations for
stacking fewer blocks. The demonstrations are not perfect;
they include occasionally dropping blocks, but our method
can handle suboptimal demonstrations. We still rejected more
than half the demonstrations and excluded them from the
demonstration data because we knocked down the tower of
blocks when releasing a block.
B. Rewards
Two different reward functions are used. To test the
performance of our method under fully sparse reward, we
Task            | Ours | Ours, Resets | BC  | HER | BC+HER
Stack 2, Sparse | 99%  | 97%          | 65% | 0%  | 65%
Stack 3, Sparse | 99%  | 89%          | 1%  | 0%  | 1%
Stack 4, Sparse | 1%   | 54%          | 0%  | 0%  | 0%
Stack 4, Step   | 91%  | 73%          | -   | -   | -
Stack 5, Step   | 49%  | 50%          | -   | -   | -
Stack 6, Step   | 4%   | 32%          | -   | -   | -

Fig. 3: Comparison of our method against baselines. The value reported is the median of the best performance (success rate) of all randomly seeded runs of each method.
reward the agent only if all blocks are at their goal positions:
r_t = min_i 1_{||x_i − g_i|| < δ}    (10)
The threshold δ is the size of a block, 5cm. Throughout the
paper we call this the “sparse” reward.
To enable solving the longer horizon tasks of stacking 4
or more blocks, we use the “step” reward:

r_t = −1 + Σ_i 1_{||x_i − g_i|| < δ}    (11)
Note the step reward is still very sparse; the robot only
sees the reward change when it moves a block into its
target location. We subtract 1 only to make the reward more
interpretable, as in the initial state the first block is already
at its target.
Regardless of the reward type, an episode is considered
successful for computing success rate if all blocks are at
their goal position in their final state.
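Both stacking rewards reduce to counting the blocks that are within the threshold of their targets; a NumPy sketch (with the 5 cm block-size threshold quoted above) follows.

import numpy as np

def block_rewards(block_positions, goal_positions, threshold=0.05):
    """Sparse (Eq. 10) and step (Eq. 11) rewards for the stacking task.
    block_positions, goal_positions: arrays of shape (num_blocks, 3)."""
    at_goal = np.linalg.norm(block_positions - goal_positions, axis=-1) < threshold
    sparse_reward = float(at_goal.min())          # 1 only if every block is placed
    step_reward = -1.0 + float(at_goal.sum())     # counts correctly placed blocks
    success = bool(at_goal.all())                 # used for the success-rate metric
    return sparse_reward, step_reward, success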
C. Network architectures
We use 4-layer networks with 256 hidden units per layer
for π and Q for the HER tasks and stacking 3 or fewer
blocks. For stacking 4 blocks or more, we use an attention
mechanism [39] for the actor and a larger network. The
attention mechanism uses a 3 layer network with 128 hidden
units per layer to query the states and goals with one shared
head. Once a state and goal is extracted, we use a 5 layer
network with 256 hidden units per layer after the attention
mechanism. Attention speeds up training slightly but does
not change training outcomes.
D. Baselines
We include the following methods to compare our method
to baselines on stacking 2 to 6 blocks. 2
Ours: Refers to our method as described in section IV-C.
Ours, Resets: Refers to our method as described in section
IV-C with resets from demonstration states (Sec. IV-D).
BC: This method uses behavior cloning to learn a policy.
Given the set of demonstration transitions RD , we train the
2 Because of computational constraints, we were limited to 5 random seeds
per method for stacking 3 blocks, 2 random seeds per method for stacking 4
and 5 blocks, and 1 random seed per method for stacking 6 blocks. Although
we are careful to draw conclusions from few random seeds, the results are
consistent with our collective experience training these models. We report
the median of the random seeds everywhere applicable.
[Figure 4: Success Rate against Timesteps on Stack 3 with the fully sparse reward, comparing Ours, Ours Resets, No Q-Filter, No BC, and No HER.]
Fig. 4: Ablation results on stacking 3 blocks with a fully
sparse reward. We run each method 5 times with random
seeds. The bold line shows the median of the 5 runs while
each training run is plotted in a lighter color. Note “No
HER” is always at 0% success rate. Our method without
resets learns faster than the ablations. Our method with resets
initially learns faster but converges to a worse success rate.
policy π by supervised learning. Behavior cloning requires
much less computation than RL. For a fairer comparison,
we performed a large hyperparameter sweep over various
networks sizes, attention hyperparameters, and learning rates
and report the success rate achieved by the best policy found.
HER: This method is exactly the one described in Hindsight
Experience Replay [1], using HER and DDPG.
BC+HER: This method first initializes a policy (actor) with
BC, then finetunes the policy with RL as described above.
E. Results
We are able to learn much longer horizon tasks than
the other methods, as shown in Fig. 3. The stacking task
is extremely difficult using HER without demonstrations
because the chance of grasping an object using random
actions is close to 0. Initializing a policy with demonstrations
and then running RL also fails since the actor updates depend
on a reasonable critic and although the actor is pretrained,
the critic is not. The pretrained actor weights are therefore
destroyed in the very first epoch, and the result is no better
than BC alone. We attempted variants of this method where
initially the critic was trained from replay data. However,
this also fails without seeing on-policy data.
The results with sparse rewards are very encouraging. We
are able to stack 3 blocks with a fully sparse reward without
resetting to the states from demonstrations, and 4 blocks with
a fully sparse reward if we use resetting. With resets from
demonstration states and the step reward, we are able to learn
a policy to stack 6 blocks.
VIII. ABLATION EXPERIMENTS
In this section we perform a series of ablation experiments
to measure the importance of various components of our
method. We evaluate our method on stacking 3 to 6 blocks.
We perform the following ablations on the best performing
of our models on each task:
No BC Loss: This method does not apply the behavior
cloning gradient during training. It still has access to demonstrations through the demonstration replay buffer.
[Figure 5 plots: panels "Stack 4, Step Reward", "Stack 5, Step Reward", "Stack 6, Step Reward"; top row y-axis: Success Rate, bottom row y-axis: Reward; x-axis: Timesteps; curves: Ours, Ours Resets, No Q-Filter, No BC.]
Fig. 5: Ablation results on longer horizon tasks with a step reward. The upper row shows the success rate while the lower
row shows the average reward at the final step of each episode obtained by different algorithms. For stacking 4 and 5 blocks,
we use 2 random seeds per method. The median of the runs is shown in bold and each training run is plotted in a lighter
color. Note that for stacking 4 blocks, the “No BC” method is always at 0% success rate. As the number of blocks increases,
resets from demonstrations becomes more important to learn the task.
No Q-Filter: This method uses standard behavioral cloning
loss instead of the loss from Eq. 8, which means
that the actor tries to mimic the demonstrator’s behaviour
regardless of the critic.
No HER: Hindsight Experience Replay is not used.
A. Behavior Cloning Loss
Without the behavior cloning loss, the method is significantly worse in every task we try. Fig. 4 shows the training
curve for learning to stack 3 blocks with a fully sparse
reward. Without the behavior cloning loss, the system is
about 2x slower to learn. On longer horizon tasks, we do
not achieve any success without this loss.
To see why, consider the training curves for stacking 4
blocks shown in Fig. 5. The “No BC” policy learns to stack
only one additional block. Without the behavior cloning loss,
the agent only has access to the demonstrations through the
demonstration replay buffer. This allows it to view high-reward states and incentivizes the agent to stack more blocks,
but there is a stronger disincentive: stacking the tower higher
is risky and could result in lower reward if the agent knocks
over a block that is already correctly placed. Because of
this risk, which is fundamentally just another instance of
the agent finding a local optimum in a shaped reward, the
agent learns the safer behavior of pausing after achieving a
certain reward. Explicitly weighting behavior cloning steps
into gradient updates forces the policy to continue the task.
B. Q-Filter
The Q-Filter is effective in accelerating learning and
achieving optimal performance. Fig. 4 shows that the method
without filtering is slower to learn. One issue with the
behavior cloning loss is that if the demonstrations are suboptimal, the learned policy will also be suboptimal. Filtering
by Q-value gives a natural way to anneal the effect of the
demonstrations as it automatically disables the BC loss when
a better action is found. However, it gives mixed results
on the longer horizon tasks. One explanation is that in the
step reward case, learning relies less on the demonstrations
because the reward signal is stronger. Therefore, the training
is less affected by suboptimal demonstrations.
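To make the mechanism concrete, here is a minimal sketch of a Q-filtered behavior-cloning term in the spirit of the filter described here (not the paper's exact Eq. 8); the pi/q call signatures and the squared-error imitation form are assumptions:

```python
import torch

def q_filtered_bc_loss(pi, q, demo_obs, demo_act):
    """Behavior-cloning loss applied only where the critic prefers the
    demonstrated action over the current policy's action (the Q-filter)."""
    pi_act = pi(demo_obs)
    with torch.no_grad():
        # Mask is 1 where the demonstrator's action looks better to the critic.
        mask = (q(demo_obs, demo_act) > q(demo_obs, pi_act)).float()
    # Per-transition squared-error imitation term, masked by the filter.
    per_sample = ((pi_act - demo_act) ** 2).sum(dim=-1, keepdim=True)
    return (mask * per_sample).sum() / mask.sum().clamp(min=1.0)
```

The filter is what lets the BC term fade out automatically once the learned policy improves on the demonstrations.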
C. Resets From Demonstrations
We find that initializing rollouts from within demonstration states greatly helps to learn to stack 5 and 6 blocks but
hurts training with fewer blocks, as shown in Fig. 5. Note that
even where resets from demonstration states helps the final
success rate, learning takes off faster when this technique
is not used. However, since stacking the tower higher is
risky, the agent learns the safer behavior of stopping after
achieving a certain reward. Resetting from demonstration
states alleviates this problem because the agent regularly
experiences higher rewards.
This method changes the sampled state distribution, biasing it towards later states. It also inflates the Q values
unrealistically. Therefore, on tasks where the RL algorithm
does not get stuck in solving a subset of the full problem, it
could hurt performance.
IX. DISCUSSION AND FUTURE WORK
We present a system to utilize demonstrations along with
reinforcement learning to solve complicated multi-step tasks.
We believe this can accelerate learning of many tasks,
especially those with sparse rewards or other difficulties in
exploration. Our method is very general, and can be applied
on any continuous control task where a success criterion can
be specified and demonstrations obtained.
An exciting future direction is to train policies directly on
a physical robot. Fig. 2 shows that learning the pick-and-place task takes about 1 million timesteps, which is about
6 hours of real world interaction time. This can realistically
be trained on a physical robot, short-cutting the simulation-reality gap entirely. Many automation tasks found in factories
and warehouses are similar to pick-and-place but without the
variation in initial and goal states, so the samples required
could be much lower. With our method, no expert needs
to be in the loop to train these systems: demonstrations
can be collected by users without knowledge about machine
learning or robotics and rewards could be directly obtained
from human feedback.
A major limitation of this work is sample efficiency
on solving harder tasks. While we could not solve these
tasks with other learning methods, our method requires a
large amount of experience which is impractical outside
of simulation. To run these tasks on physical robots, the
sample efficiency will have to be improved considerably. We
also require demonstrations which are not easy to collect
for all tasks. If demonstrations are not available but the
environment can be reset to arbitrary states, one way to learn
goal-reaching but avoid using demonstrations is to reuse
successful rollouts as in [40].
Finally, our method of resets from demonstration states
requires the ability to reset to arbitrary states. Although we
can solve many long-horizon tasks without this ability, it is
very effective for the hardest tasks. Resetting from demonstration rollouts resembles curriculum learning: we solve a
hard task by first solving easier tasks. If the environment
does not afford setting arbitrary states, then other curriculum
methods will have to be used.
X. ACKNOWLEDGEMENTS
We thank Vikash Kumar and Aravind Rajeswaran for valuable discussions. We thank Sergey Levine, Chelsea Finn, and
Carlos Florensa for feedback on initial versions of this paper.
Finally, we thank OpenAI for providing a supportive research
environment.
REFERENCES
[1] M. Andrychowicz et al., “Hindsight experience replay,” in Advances
in neural information processing systems, 2017.
[2] M. Večerı́k et al., “Leveraging Demonstrations for Deep Reinforcement Learning on Robotics Problems with Sparse Rewards,” arXiv
preprint arxiv:1707.08817, 2017.
[3] M. P. Deisenroth, C. E. Rasmussen, and D. Fox, “Learning to Control a
Low-Cost Manipulator using Data-Efficient Reinforcement Learning,”
Robotics: Science and Systems, vol. VII, pp. 57–64, 2011.
[4] Y. Duan et al., “One-shot imitation learning,” in NIPS, 2017.
[5] D. A. Pomerleau, “Alvinn: An autonomous land vehicle in a neural
network,” NIPS, pp. 305–313, 1989.
[6] M. Bojarski et al., “End to End Learning for Self-Driving Cars,” arXiv
preprint arXiv:1604.07316, 2016.
[7] A. Giusti et al., “A Machine Learning Approach to Visual Perception
of Forest Trails for Mobile Robots,” in IEEE Robotics and Automation
Letters., 2015, pp. 2377–3766.
[8] J. Nakanishi et al., “Learning from demonstration and adaptation of
biped locomotion,” in Robotics and Autonomous Systems, vol. 47, no.
2-3, 2004, pp. 79–91.
[9] M. Kalakrishnan et al., “Learning Locomotion over Rough Terrain
using Terrain Templates,” in The 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009.
[10] S. Ross, G. J. Gordon, and J. A. Bagnell, “A Reduction of Imitation
Learning and Structured Prediction to No-Regret Online Learning,”
in Proceedings of the 14th International Conference on Artificial
Intelligence and Statistics (AISTATS), 2011.
[11] A. Ng and S. Russell, “Algorithms for Inverse Reinforcement Learning,” International Conference on Machine Learning (ICML), 2000.
[12] B. D. Ziebart et al., “Maximum Entropy Inverse Reinforcement
Learning.” in AAAI Conference on Artificial Intelligence, 2008, pp.
1433–1438.
[13] P. Abbeel and A. Y. Ng, “Apprenticeship learning via inverse reinforcement learning,” in ICML, 2004, p. 1.
[14] C. Finn, S. Levine, and P. Abbeel, “Guided Cost Learning: Deep
Inverse Optimal Control via Policy Optimization,” in ICML, 2016.
[15] J. Peters, K. Mülling, and Y. Altün, “Relative Entropy Policy Search,”
Artificial Intelligence, pp. 1607–1612, 2010.
[16] M. P. Deisenroth and C. E. Rasmussen, “Pilco: A model-based and
data-efficient approach to policy search,” in ICML, 2011, pp. 465–472.
[17] V. Mnih et al., “Human-level control through deep reinforcement
learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.
[18] D. Silver et al., “Mastering the game of Go with deep neural networks
and tree search,” Nature, vol. 529, no. 7587, pp. 484–489, Jan 2016.
[19] S. Levine et al., “End-to-end training of deep visuomotor policies,”
CoRR, vol. abs/1504.00702, 2015.
[20] L. Pinto and A. Gupta, “Supersizing self-supervision: Learning
to grasp from 50k tries and 700 robot hours,” arXiv preprint
arXiv:1509.06825, 2015.
[21] S. Levine et al., “Learning hand-eye coordination for robotic grasping
with deep learning and large-scale data collection,” arXiv preprint
arXiv:1603.02199, 2016.
[22] S. Gu et al., “Deep Reinforcement Learning for Robotic Manipulation with Asynchronous Off-Policy Updates,” arXiv preprint
arXiv:1610.00633, 2016.
[23] T. P. Lillicrap et al., “Continuous control with deep reinforcement
learning,” arXiv preprint arXiv:1509.02971, 2015.
[24] V. Mnih et al., “Asynchronous methods for deep reinforcement learning,” in ICML, 2016.
[25] J. Schulman et al., “Trust region policy optimization,” in Proceedings
of the twenty-first international conference on Machine learning, 2015.
[26] T. Winograd, Understanding Natural Language. Academic Press,
1972.
[27] L. P. Kaelbling and T. Lozano-Perez, “Hierarchical task and motion
planning in the now,” IEEE International Conference on Robotics and
Automation, pp. 1470–1477, 2011.
[28] L. Kavraki et al., “Probabilistic roadmaps for path planning in highdimensional configuration spaces,” IEEE transactions on Robotics and
Automation, vol. 12, no. 4, pp. 566–580, 1996.
[29] S. Srivastava et al., “Combined Task and Motion Planning Through
an Extensible Planner-Independent Interface Layer,” in International
Conference on Robotics and Automation, 2014.
[30] I. Popov et al., “Data-efficient Deep Reinforcement Learning for
Dexterous Manipulation,” arXiv preprint arXiv:1704.03073, 2017.
[31] S. Schaal, “Robot learning from demonstration,” Advances in Neural
Information Processing Systems, no. 9, pp. 1040–1046, 1997.
[32] J. Peters and S. Schaal, “Reinforcement learning of motor skills with
policy gradients,” Neural Networks, vol. 21, no. 4, pp. 682–697, 2008.
[33] J. Kober and J. Peters, “Policy search for motor primitives in robotics,”
in Advances in neural information processing systems, 2008.
[34] T. Hester et al., “Learning from Demonstrations for Real World
Reinforcement Learning,” arXiv preprint arxiv:1704.03732, 2017.
[35] B. Kim et al., “Learning from Limited Demonstrations,” Neural
Information Processing Systems., 2013.
[36] T. Schaul et al., “Universal Value Function Approximators,” Proceedings of The 32nd International Conference on Machine Learning, pp.
1312–1320, 2015.
[37] E. Todorov, T. Erez, and Y. Tassa, “MuJoCo: A physics engine for
model-based control,” in The IEEE/RSJ International Conference on
Intelligent Robots and Systems, 2012.
[38] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,”
International Conference on Learning Representations (ICLR), 2015.
[39] D. Bahdanau, K. Cho, and Y. Bengio, “Neural Machine Translation
by Jointly Learning to Align and Translate,” in ICLR, 2015.
[40] C. Florensa et al., “Reverse Curriculum Generation for Reinforcement
Learning,” in Conference on robot learning, 2017.
Minimizing the Maximum End-to-End Network
Delay: Hardness, Algorithm, and Performance
arXiv:1707.02650v3 [] 16 Feb 2018
Qingyu Liu, Lei Deng, Haibo Zeng, Minghua Chen
Abstract—We consider the scenario where a source streams a
flow at a fixed rate to a receiver across the network, possibly
using multiple paths. Each link has a finite capacity constraint.
Transmission over a link incurs an integer delay if the rate is
within the link capacity, and an unbounded delay otherwise.
The objective is to minimize the maximum end-to-end delay
experienced by the flow. The problem, denoted as Min-Max-Delay, appears in various practical scenarios, e.g., delay-critical
video conferencing using inter-datacenter networks. In this paper,
we first show that Min-Max-Delay is NP-hard in the weak sense
and develop an exact algorithm with pseudo-polynomial time
complexity. We then propose a Fully Polynomial Time Approximation Scheme (FPTAS) that obtains a (1 + ε)-approximate
solution in polynomial time. These results reveal a fundamental difference between the Min-Max-Delay problem and a similar maximum latency problem studied in the literature, which is APX-hard and for which no PTAS¹ exists unless P = NP. Moreover, there exists no exact pseudo-polynomial-time algorithm or constant-approximation algorithm for the maximum latency problem. We
demonstrate the effectiveness of our algorithms in the scenario of routing delay-critical video-conferencing traffic over
multiple paths of inter-datacenter networks, using simulations
based on Amazon EC2 inter-datacenter topology. Both of our
algorithms achieve the optimal maximum delay performance
in all simulation instances, consistently outperforming all state-of-the-art solutions which only obtain sub-optimal maximum
delay performance in certain instances. Furthermore, simulation
results show that our achieved optimal delay performance always meets the end-to-end delay requirement for video conferencing applications, while the sub-optimal delay performance obtained by the alternatives fails to satisfy the video-conferencing delay requirement for up to 15% of simulation instances between a certain cross-continental source-receiver pair.
Index Terms—Delay-aware network flow, maximum delay optimization, video traffic, inter-datacenter networks, exact pseudo-polynomial time algorithm, approximate algorithm.
I. INTRODUCTION
A. Motivation
We consider the scenario where a source streams a flow at
a fixed rate to a receiver across a multi-hop network, possibly
Manuscript received ...
Part of this work has been presented at the IEEE Information Theory
Workshop (ITW), Kaohsiung, Taiwan, November 6 - 10, 2017 [1].
Q. Liu, and H. Zeng are with the Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, USA (e-mail: qyliu14@vt.edu,
haibo.zeng@gmail.com).
L. Deng is with the School of Electrical Engineering & Intelligentization,
Dongguan University of Technology, China (e-mail: denglei@dgut.edu.cn).
M. Chen is with the Department of Information Engineering,
The Chinese University of Hong Kong, Hong Kong, China (e-mail:
minghua@ie.cuhk.edu.hk).
1 Unless P = NP, it holds that FPTAS ⊊ PTAS in that the runtime of a PTAS is required to be polynomial in the problem input but not 1/ε, while the runtime of an FPTAS is polynomial in both the problem input and 1/ε [2].
using multiple paths, to minimize the maximum end-to-end
delay. We denote the problem as Min-Max-Delay. Transmission over a link is assumed to experience an integer delay
if the rate is within the finite link capacity, and unbounded
delay otherwise. This models many practical applications,
particularly the routing of delay-critical video conferencing
traffic over inter-datacenter networks. We note that using
multiple paths for delay minimization is necessary when the
shortest path is insufficient to support the fixed-rate flow due to
the bandwidth limitation.
According to recent reports from Microsoft [3] and
Google [4], most real-world inter-datacenter networks are
characterized by sharing link bandwidth for different applications with over-provisioned link capacities. (i) Real-world
inter-datacenter networks nowadays are utilized to simultaneously support traffic from various services, some of which have
stringent delay requirements (e.g., video conferencing) while
others are bandwidth-hungry and less sensitive to delay (e.g.,
data backup and data maintenance). Link capacity is often
reserved separately for different types of services depending on their characteristics, e.g., bandwidth-hungry (delay-insensitive) services are reserved with a larger capacity. (ii)
Cloud providers typically over-provision their inter-datacenter
link capacity by 2 − 3 times on a dedicated backbone to
guarantee reliability, and the average link-capacity utilizations
(the aggregate utilization of applications, not the bandwidth utilization of individual applications) for busy links are 30 −
60% [5]. As such, in most real-world inter-datacenter networks
queuing delays are negligible and the constant propagation
delays dominate the end-to-end delay, as evaluated by [5] in a
realistic network of Amazon EC2². These observations justify our link capacity and delay model, especially for the problem of routing delay-critical video-conferencing traffic over inter-datacenter networks.
Our optimization objective of minimizing the end-to-end
maximum delay is motivated by increasing interest in supporting delay-critical traffic to serve various communication
applications, e.g., the video conferencing services [5], [6].
It is reported that 51 million users per month attend WebEx meetings [7], 3 billion minutes of calls per day use
Skype [8], and 75% of high-growth innovators use video
collaboration [9]. Low cross-network delay is vital for the
video conferencing applications. As recommended by the
2 Note again that there may not be over-provisioning for individual applications/services of their respective reserved link capacities. Since their traffic peaks may not appear at the same time, the overall effect is that within the allocated (reserved) bandwidth, the traffic of individual applications experiences constant propagation delays; otherwise, it experiences unbounded delay.
International Telecommunication Union (ITU) [10], for video
conferencing, it is desirable to keep the cross-network one-way delay as low as possible. A delay less than 150ms can provide transparent interactivity while delays above 400ms
are unacceptable.
In the delay-sensitive communication field, it is well-known
that flow problems with objectives/constraints related to both
throughput and maximum delay are highly non-trivial. In
fact, it still remains open to characterize the hardness of the
problem Min-Max-Delay and design efficient algorithms with
performance guarantee, in spite of the practical relevance of
Min-Max-Delay for delay-sensitive communications.
The problem Min-Max-Delay models the link delay as a
constant if the link rate is below the capacity and unbounded
otherwise. It is an important special case of the maximum
latency problem in the literature [11]–[13] which assumes the
link delay as a general function of the link traffic rate. It
is known that the maximum latency problem is NP-hard, in
particular APX-hard [12]. This implies that unless P = NP, no
PTAS exists, and there is even no polynomial time algorithm
that can obtain a constant approximation ratio.
In contrast, we show that the problem Min-Max-Delay
is NP-hard in the weak sense, and we design a pseudo-polynomial time exact algorithm as well as an FPTAS. Hence,
while Min-Max-Delay is a special case of the maximum
latency problem, they are fundamentally different in terms of
hardness and admitting good exact/approximation algorithms.
We summarize the differences in Tab. I.
B. Contributions
In this paper we focus on the Min-Max-Delay problem and
make the following contributions:
▶ We prove Min-Max-Delay is NP-hard (Thm. 1, Sec. III), but only in the weak sense (Thm. 3, Sec. IV). Specifically, we first prove it is NP-hard, and then develop a pseudo-polynomial time algorithm (Algorithm 1, Sec. IV) to solve Min-Max-Delay optimally. The algorithm has a time complexity of O((N² d_max)^{3.5} log(N² d_max) log R), where N ≜ max{|V|, |E|}, R is the flow rate (or throughput, equivalently) requirement, and d_max is the maximum link delay. The time complexity is pseudo-polynomial because it is polynomial in the numeric value of the input d_max, but is exponential in the bit length of d_max, i.e., log(d_max) [14].
▶ We further propose an FPTAS (Algorithm 3, Sec. V) to solve Min-Max-Delay approximately. The algorithm achieves a (1 + ε)-approximation ratio for any ε > 0 with a time complexity of O((NM)^{3.5} log R log(NM)(L + log N)), where M = |E|(1 + 1/ε) and L is the bit complexity for representing d_max. The time complexity is polynomial in the problem input and 1/ε.
▶ We demonstrate the effectiveness of our algorithms in the scenario of delay-critical inter-datacenter video-conferencing traffic routing over multiple paths, using simulations based on the Amazon EC2 inter-datacenter topology. Both of our algorithms achieve optimal maximum delay performance in all simulation instances, which always meets the practically acceptable end-to-end delay requirement for video conferencing. In contrast, the state-of-the-art solutions only obtain sub-optimal maximum delay performance, which fails to satisfy the video-conferencing delay requirement for up to 15% of simulation instances between a certain cross-continental source-receiver pair.
C. Paper organization
The rest of the paper is organized as follows. Sec. II gives
our system model and defines the problem Min-Max-Delay.
Sec. III proves Min-Max-Delay to be NP-hard. We propose a
pseudo-polynomial time algorithm in Sec. IV and an FPTAS
in Sec. V. The NP-hardness proof together with the proposed
pseudo-polynomial time algorithm show that the problem Min-Max-Delay is NP-hard in the weak sense. Sec. VI presents our
experiments and Sec. VII reviews existing studies, followed by
the conclusion in Sec. VIII.
II. SYSTEM MODEL AND PROBLEM FORMULATION
We consider a multi-hop network modeled as a directed graph G ≜ (V, E) with |V| nodes and |E| links. Each link e ∈ E has a non-negative capacity c_e and a non-negative integer delay d_e. We define N ≜ max{|V|, |E|} to describe the size of the network, and d_max ≜ max_{e∈E} d_e as the maximum link delay. A source s ∈ V needs to stream a flow at a positive rate R to a receiver t ∈ V\{s}, possibly using multiple paths.
We denote P as the set of all paths from s to t. For any path p ∈ P, we denote its path delay as
    d^p ≜ Σ_{e∈E: e∈p} d_e,    (1)
i.e., the summation of link delays along the path. A flow solution f is defined as the assigned flow rate over P, i.e., f ≜ {x^p : x^p ≥ 0, p ∈ P}. For a flow solution f, we define
    x_e ≜ Σ_{p∈P: e∈p} x^p    (2)
as the link rate of link e ∈ E. We further denote the total flow rate of a flow solution f by |f|, namely
    |f| = Σ_{p∈P} x^p.    (3)
The maximum delay of a flow solution f is defined as
    D(f) ≜ max_{p∈P: x^p>0} d^p,    (4)
i.e., the maximum delay among paths with positive rates. Another well-known end-to-end delay metric for a flow in the literature is the total delay, defined as
    T(f) ≜ Σ_{p∈P} (x^p · d^p) = Σ_{e∈E} (x_e · d_e),    (5)
i.e., the total delay experienced by all flow units.
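To make definitions (1)–(5) concrete, here is a small self-contained sketch (ours, with made-up toy numbers) that computes path delays, link rates, |f|, D(f), and T(f) for a path-based flow:

```python
from collections import defaultdict

# A path-based flow: each entry is (path, rate), a path being a tuple of links.
# Link delays are given per link; all numbers below are made-up toy values.
link_delay = {('s', 'a'): 2, ('a', 't'): 3, ('s', 'b'): 1, ('b', 't'): 1}
flow = [((('s', 'a'), ('a', 't')), 4.0),   # x^p = 4 on the upper path
        ((('s', 'b'), ('b', 't')), 2.0)]   # x^p = 2 on the lower path

path_delay = lambda p: sum(link_delay[e] for e in p)          # d^p, Eq. (1)

link_rate = defaultdict(float)                                 # x_e, Eq. (2)
for p, x in flow:
    for e in p:
        link_rate[e] += x

total_rate = sum(x for _, x in flow)                           # |f|, Eq. (3)
max_delay = max(path_delay(p) for p, x in flow if x > 0)       # D(f), Eq. (4)
total_delay = sum(x * path_delay(p) for p, x in flow)          # T(f), Eq. (5)

print(total_rate, max_delay, total_delay)   # 6.0, 5, 24.0
```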
We consider the problem of finding a flow solution f to minimize the maximum delay D(f) while satisfying the flow rate requirement and the link capacity constraints.
TABLE I: Summary of the difference in hardness and admitting exact/approximation algorithms of the Maximum Latency Problem studied in the literature and problem Min-Max-Delay studied in this paper.
Problem                  | Hardness                          | Exact algorithm with pseudo-polynomial time complexity | Polynomial-time algorithm with constant approximation ratio
Maximum latency problem  | APX-hard [12]                     | ✗                                                       | ✗
Min-Max-Delay            | Weakly NP-hard (Thm. 3, Sec. IV)  | ✓ (Algorithm 1, Sec. IV)                                | ✓ (Algorithm 3, Sec. V)
We denote the problem as Min-Max-Delay which can be
formulated as
    min D    (6a)
    s.t. Σ_{p∈P} x^p = R,    (6b)
         x_e = Σ_{p∈P: e∈p} x^p ≤ c_e, ∀e ∈ E,    (6c)
         x^p (d^p − D) ≤ 0, ∀p ∈ P,    (6d)
    vars. x^p ≥ 0, ∀p ∈ P,    (6e)

[Fig. 1: Reduced network graph from partition problem. Source w_0, receiver w_n, intermediate nodes v_1, ..., v_n and w_1, ..., w_{n−1}; dashed links have capacity 1 and positive delay, solid links have capacity 1 and delay 0.]
where (6a) together with (6d) define our objective to minimize
the maximum path delay for s−t paths that carry positive flow
rates (called flow-carrying paths). Constraint (6b) restricts that
the source s sends R rate to the receiver t, and constraint (6c)
requires that the flow rate on link e does not exceed its capacity
ce . In this paper we use fMM to denote the Min-Max-Delay
flow, namely the optimal flow to our problem Min-Max-Delay,
and hence D(fMM ) is the optimal maximum delay supporting
a flow rate of R.
From formulation (6) we observe two difficulties to solve
Min-Max-Delay: (i) the number of paths (number of variables)
is in general exponential in the network size, and (ii) formulation (6) is non-convex due to constraint (6d). In fact, later
in Sec. III we prove Min-Max-Delay is NP-hard. Therefore,
we cannot obtain fMM within polynomial time unless P = NP.
III. NP-HARDNESS
In this section, we analyze the computational complexity of
Min-Max-Delay. Specifically, we prove that Min-Max-Delay
is NP-hard based on the reduction from the well-known NP-complete partition problem [14]. Thus, even as a special case
of the NP-hard maximum latency problem, Min-Max-Delay
is still NP-hard and it is impossible to solve in polynomial
time unless P = NP.
Definition 1 (Partition). Given a non-empty set A, its partition is a set of non-empty subsets such that each element in A is in exactly one of these subsets.
Definition 2 (Partition Problem [14]). Given a set of n positive integers A = {a_1, a_2, ..., a_n} with sum
    Σ_{a_i ∈ A} a_i = 2b,    (7)
the partition problem asks whether there is a partition {A_1, A_2} of A such that
    Σ_{a_i ∈ A_1} a_i = Σ_{a_j ∈ A_2} a_j = b.    (8)
The partition problem is known to be NP-complete [14] (in the weak sense). We now leverage it to prove that Min-Max-Delay is NP-hard.
Theorem 1. Min-Max-Delay problem is NP-hard.
Proof: For any partition problem (Definition 2), we can
construct a graph G0 with (2n + 1) nodes and 3n links as
in Fig. 1. All links in the graph have unit capacity. Each
dashed link (wi−1 , wi ) has a delay of ai for any i = 1, · · · , n,
and solid links (wi−1 , vi ) and (vi , wi ) have a delay of zero.
Obviously it takes polynomial time to construct the graph G0
from any given partition problem.
Now we consider the decision version of a Min-Max-Delay
problem instance: in graph G0 with source s = w0 , receiver
t = wn , and flow rate requirement R = 2, is there any feasible
flow f such that the maximum delay D(f ) ≤ b?
In the following we prove the partition problem answers
“Yes” if and only if the decision version of above Min-MaxDelay problem instance answers “Yes”.
If Part. If the decision problem of Min-Max-Delay answers
“Yes”, then there exists a flow f such that D(f ) ≤ b. Since f
is feasible, the total rate from w0 to wn in f is R = 2. Now
due to the capacity constraint and flow conservation, any link
must exactly be assigned a flow rate of 1 to satisfy the total
requirement R = 2. The total delay in flow f is
    Σ_{p∈P} x^p d^p = Σ_{e∈E} x_e d_e = Σ_{e∈E} 1 · d_e = Σ_{i=1}^{n} a_i = 2b.    (9)
Since D(f) ≤ b, we have
    d^p ≤ D(f) ≤ b, ∀p ∈ P with x^p > 0.    (10)
Also, because the total flow rate is equal to 2, we have
    2b = Σ_{p∈P} x^p d^p = Σ_{p∈P: x^p>0} x^p d^p ≤ b × Σ_{p∈P: x^p>0} x^p = 2b.    (11)
As both ends in (11) are the same, it must be
    d^p = b, ∀p ∈ P with x^p > 0.    (12)
Therefore, all flow-carrying paths have a path delay of b.
We choose an arbitrary flow-carrying path p. Since all solid
links have a delay of 0, the path delay of p is the delay of
all dashed links belonging to p. We consider the set A1 that
contains ai if (wi−1 , wi ) ∈ p. Clearly, it holds that
    Σ_{a_i ∈ A_1} a_i = b.    (13)
We then define A_2 = A \ A_1. It follows that
    Σ_{a_j ∈ A_2} a_j = Σ_{a_k ∈ A} a_k − Σ_{a_i ∈ A_1} a_i = 2b − b = b.    (14)
A1 and A2 are thus a partition of set A and meet the
requirement of the partition problem. Hence, the partition
problem answers “Yes”.
Only If Part. If the partition problem answers “Yes”, then there exists a partition {A_1, A_2} such that
    Σ_{a_i ∈ A_1} a_i = Σ_{a_j ∈ A_2} a_j = b.    (15)
We now construct two paths p_1 and p_2.
• ∀i ∈ [1, n], if a_i ∈ A_1, we put (w_{i−1}, w_i) into path p_1; otherwise, we put (w_{i−1}, v_i) and (v_i, w_i) into p_1.
• Similarly, for any i, if a_i ∈ A_2, we put (w_{i−1}, w_i) into p_2; otherwise, we put (w_{i−1}, v_i) and (v_i, w_i) into p_2.
Due to the definition of a partition (see Definition 1), A_1 and A_2 are two disjoint sets, i.e., A_1 ∩ A_2 = ∅. Thus, we can easily see that p_1 and p_2 are two disjoint s − t paths, i.e., p_1 and p_2 do not share any common link. Furthermore, it clearly holds that
    d^{p_1} = Σ_{a_i ∈ A_1} a_i = b,   d^{p_2} = Σ_{a_j ∈ A_2} a_j = b.    (16)
We then construct the flow f with only two flow-carrying paths p_1 and p_2 and set x^{p_1} = x^{p_2} = 1. Since p_1 and p_2 are disjoint, the capacity constraint is satisfied. Also, since x^{p_1} + x^{p_2} = R = 2, the rate requirement is satisfied. Thus f is a feasible flow with maximum delay D(f) ≤ b. Therefore, the decision problem of Min-Max-Delay answers “Yes”.
Since the partition problem is NP-complete [14] and the reduction can be done in polynomial time, the Min-Max-Delay problem is NP-hard.
Similar to the maximum latency problem, Min-Max-Delay is NP-hard. However, since in the following section a pseudo-polynomial time algorithm is proposed to solve Min-Max-Delay optimally, Min-Max-Delay is actually NP-hard in the weak sense, which is fundamentally different from the maximum latency problem that has been proved to be APX-hard.

IV. EXACT PSEUDO-POLYNOMIAL TIME ALGORITHM

In this section we propose an exact pseudo-polynomial time algorithm to solve Min-Max-Delay optimally. This, combined with the NP-hardness result in Thm. 1, shows that Min-Max-Delay is NP-hard in the weak sense. This result, combined with the existence of the FPTAS described later in our Sec. V, shows that the Min-Max-Delay problem is fundamentally different from the maximum latency problem studied in the literature, which is APX-hard and admits no PTAS unless P = NP. Moreover, in our experiments (Sec. VI), extensive simulations show that the proposed exact algorithm always generates the optimal solution within 0.5 second empirically, implying that the exact algorithm is practically efficient.

A. DCMF: maximize the flow subject to delay constraint

A closely-related problem to Min-Max-Delay is the Delay-Constrained Maximum Flow problem [15], denoted as DC-Max-Flow: given a graph G, a source s, a receiver t, and a deadline T which is a path delay upper bound for all flow-carrying paths from s to t, the problem requires to find a feasible s − t flow with a maximized flow rate. If we denote P^T as the set of all s − t paths whose path delay does not exceed T, the DC-Max-Flow problem can be formulated as
    max Σ_{p ∈ P^T} x^p,    (17a)
    s.t. x_e = Σ_{p ∈ P^T: e ∈ p} x^p ≤ c_e, ∀e ∈ E,    (17b)
    vars. x^p ≥ 0, ∀p ∈ P^T.    (17c)
Since the size of P T could increase exponentially in the
network size, formulation (17) can have an exponential number
of variables. However, Wang and Chen [15] show that DC-Max-Flow can be solved efficiently by an edge-based flow
formulation which is a linear program with at most |E|·T variables. They implicitly use the idea of hop-expanded graph [15]
by converting a delay-constrained max-flow problem in the
original graph into a delay-unconstrained max-flow problem
in the hop-expanded graph. In [15], the links are assumed
to have unit-delay, but it is easy to generalize the results to
integer-delay links. Overall the algorithm to solve problem
DC-Max-Flow optimally with input (G, s, t, T ), denoted by
DCMF(G, s, t, T ) in this paper, is to solve the following linear
program [15, Proposition 1],
    max Σ_{e∈In(t)} Σ_{d=0}^{T} x_e^{(d)}    (18a)
    s.t. Σ_{e∈Out(s)} x_e^{(d_e)} = Σ_{e∈In(t)} Σ_{d=0}^{T} x_e^{(d)},    (18b)
         Σ_{e∈In(v)} x_e^{(d)} = Σ_{e∈Out(v)} x_e^{(d+d_e)}, ∀v ∈ V\{s, t}, d ∈ [0, T]    (18c)
         Σ_{d=0}^{T} x_e^{(d)} ≤ c_e, ∀e ∈ E    (18d)
    vars. x_e^{(d)} ≥ 0, ∀e ∈ E, d ∈ [0, T]    (18e)
where In(v) ≜ {e = (w, v) : e ∈ E, w ∈ V} is the set containing all incoming links of node v. Similarly, Out(v) ≜ {e = (v, w) : e ∈ E, w ∈ V} is the set of outgoing links of node v, and x_e^{(d)} is the total flow rate that experiences a delay of d after passing link e from the source s. The objective (18a) is the total flow rate that arrives at the receiver t within the deadline T. Constraint (18b) requires that the rate entering the network equals the rate leaving the network. Constraints (18c) are the flow conservation constraints in the expanded graph. Note that by convention, for any link e ∈ E, we set x_e^{(d)} = 0 for d < 0 and d > T. Constraints (18d) are the link capacity constraints, which describe that flow rates assigned on the same link but experiencing different delays from the source should jointly respect the link capacity constraint.
B. MMD: minimize the delay subject to flow requirement
Comparing the problem Min-Max-Delay to the problem
DC-Max-Flow, from a source s to a receiver t in a graph G, if
we denote d∗ (R) as the minimized maximum delay subject to
a rate requirement R (the optimal value of Min-Max-Delay),
and we denote r∗ (T ) as the maximized flow rate subject to a
deadline constraint T (the optimal value of DC-Max-Flow),
we have the following lemma which is helpful to design an
exact algorithm to solve Min-Max-Delay.
Lemma 1. d∗ (R) ≤ T if and only if r∗ (T ) ≥ R.
Proof: If Part. If r∗ (T ) ≥ R, then there exists a flow
solution over P T , i.e., f = {xp : p ∈ P T } such that
    Σ_{p∈P^T} x^p ≥ R.    (19)
We can thus decrease the flow solution f to construct another
flow solution f˜ such that
    Σ_{p∈P^T} x̃^p = R.    (20)
Since f satisfies the capacity constraints, f˜ must also satisfy
the capacity constraints. Thus, f˜ is a feasible solution to
Min-Max-Delay with rate requirement R. In addition, since
all flow-carrying paths in f˜ belong to the set P T , we have
d∗ (R) ≤ D(f˜) ≤ T .
Only If Part. If d∗ (R) ≤ T , then there exists a flow solution
f where the path delay of any flow-carrying path does not
exceed T . Thus all flow-carrying paths belong to P T and f is
also a feasible solution to DC-Max-Flow with a delay bound
T . Thus, it holds that
    r*(T) ≥ Σ_{p∈P^T} x^p = R.    (21)
Lem. 1 suggests a binary search scheme to solve Min-Max-Delay optimally, namely our Algorithm 1. Given a lower
bound Tl (= 0 initially) and an upper bound Tu (= U
initially and U can be the maximum delay of any feasible flow
satisfying rate requirement R) of the optimal maximum delay,
in each iteration we solve the problem DC-Max-Flow with
input (G, s, t, T) where T = ⌈(T_l + T_u)/2⌉. We compare the
optimal value of (18), i.e., r∗ (T ) with the rate requirement R.
If r∗ (T ) ≥ R, we decrease the upper bound as Tu = T − 1.
Otherwise, if r∗ (T ) < R, we increase the lower bound as
Tl = T + 1. In the end we can achieve a feasible s − t flow
with maximum flow-carrying path delay optimized and the
given flow rate requirement satisfied.
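The binary search is short to express in code; the sketch below (ours, not the paper's implementation) assumes a solver dcmf(G, s, t, T) for the linear program (18) that returns a flow object exposing its total rate — the solver itself is not shown:

```python
def mmd(G, R, s, t, U, dcmf):
    """Binary search for the smallest deadline T with r*(T) >= R.
    `dcmf(G, s, t, T)` is assumed to return a flow object with a `.rate` field."""
    T_lo, T_hi = 0, U
    best_flow, best_T = None, U
    while T_lo <= T_hi:
        T = (T_lo + T_hi + 1) // 2          # ceil((T_lo + T_hi) / 2)
        f = dcmf(G, s, t, T)
        if f.rate >= R:                      # feasible: try a smaller deadline
            best_flow, best_T = f, T
            T_hi = T - 1
        else:                                # infeasible: deadline too tight
            T_lo = T + 1
    return best_flow, best_T
```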
Theorem 2. Algorithm 1 solves Min-Max-Delay optimally and has a pseudo-polynomial time complexity of
O((NU)^{3.5} log(NU) log R) where N ≜ max{|V|, |E|}.
Algorithm 1 MMD(G, R, s, t, U ): solve Min-Max-Delay optimally in pseudo-polynomial time
1: Input: G = (V, E), R, s, t, U
2: Output: f_MM
3: procedure
4:    T_l = 0, T_u = U, T* = T_u
5:    while T_u ≥ T_l do
6:        T = ⌈(T_l + T_u)/2⌉
7:        f = DCMF(G, s, t, T)
8:        r*(T) = |f|
9:        if r*(T) ≥ R then
10:           f_MM = f, T* = T
11:           T_u = T − 1
12:       else
13:           T_l = T + 1
14:       end if
15:   end while
16:   return f_MM
17: end procedure
Proof: The optimality of the binary search scheme directly follows from Lem. 1.
Our binary search scheme terminates in O(log U ) iterations
where U must be upper bounded by |E| · dmax even in the
worst case. In each iteration given a T ≤ U , we need to solve
the linear program (18). Obviously the number of variables
in (18) is O(|E| · T ). The conservation constraint (18c) is
formulated for v ∈ V \{s, t} and d ∈ [0, T ]. Thus the number
of conservation constraints is O(|V| · T). Clearly the number of capacity constraints (18d) is |E|. Overall, the number of constraints in formulation (18) is O(NT).
[16] has proposed an algorithm to solve a linear program within time O((g + h)^{1.5} h L) where g is the number of constraints, h is the number of variables, and L denotes the standard “bit complexity” of the linear program. Therefore, it takes O((NT)^{2.5} (log R + NT log(NT))) time to solve formulation (18) based on the calculation of L [17] and Hadamard’s inequality. Since T ≤ U, the overall time complexity is O((NU)^{2.5} (log R + NU log(NU)) log U), which can be described by O((NU)^{3.5} log(NU) log R).
Without loss of generality, it clearly holds that U ≤ |E| · d_max, which is an upper bound for the path delay of any simple source-receiver path, and hence the time complexity in Thm. 2 can be described by O((N² d_max)^{3.5} log(N² d_max) log R). The time complexity is pseudo-polynomial in the sense that it is polynomial in the numeric value of the input d_max, but it is exponential in the bit length of d_max. In the next section we design an FPTAS for our problem Min-Max-Delay. For any ε > 0, the FPTAS can find a (1 + ε)-approximate solution and the time complexity is polynomial in the problem input and 1/ε.
A direct result of Thm. 2 is as follows.
Theorem 3. Min-Max-Delay is NP-hard in the weak sense.
Proof: It follows from Thm. 1 and Thm. 2.
Note that at the end of Algorithm 1, solving linear program (18) gives an edge-based flow with the optimal maximum
delay d∗ (R). We need to do a flow decomposition on the
hop-expanded graph with a deadline constraint T = d∗ (R) in
order to get a path-based flow. The number of nodes and links
in the expanded graph are both O(N d∗ (R)). And since the
flow decomposition in a graph Ĝ(V̂ , Ê) has a time complexity
of O(|Ê|(|V̂ | + |Ê|)) [18], the time complexity to get the
path-based flow by decomposition in the expanded graph will
be O((NU)²), which does not affect the time complexity of
Algorithm 1 presented in Thm. 2.
V. FULLY-POLYNOMIAL TIME APPROXIMATION SCHEME
In this section, we propose an FPTAS to solve Min-Max-Delay (1 + ε)-approximately in time polynomial in the problem input and 1/ε. The existence of an FPTAS for the Min-Max-Delay problem reveals its fundamental difference to the
maximum latency problem, which is APX-hard and admits no
PTAS unless P = NP.
Recall that our exact algorithm solves Min-Max-Delay
optimally by pursuing the minimum T ∗ = D(fMM ) ∈
{0, 1, 2, ..., U} iteratively such that the problem DC-Max-Flow with input (G, s, t, T*) can return a feasible s − t flow
with a flow rate r∗ (T ∗ ) ≥ R. Due to the binary search scheme,
the number of iterations to achieve the optimal solution is
O(log U ), which is not a problem to achieve a polynomial time
complexity considering that U ≤ |E| · dmax . The difficulty of
developing polynomial time algorithms in fact is the pseudo-polynomial size (O(NT), T ∈ {0, 1, 2, ..., U}) of the linear program (18), namely in each iteration, it takes pseudo-polynomial time to solve DC-Max-Flow with deadline T.
In order to solve Min-Max-Delay approximately in polynomial time, a natural idea is to follow a similar binary search procedure to our proposed exact algorithm MMD(·), but in each iteration quantize T to make the size of the problem DC-Max-Flow independent of T and polynomial in the problem input and 1/ε, which is challenging since we also need to maintain a certain maximum delay performance guarantee simultaneously. Our proposed FPTAS in this section utilizes an adaptive quantization technique on the link delays d_e and T to address the challenge, and in the end a (1 + ε)-approximate solution is guaranteed in fully polynomial time.
A. Quantized DC-Max-Flow algorithm
Given a deadline T , we first introduce a procedure QDCMF(·) in Algorithm 2 which solves a DC-Max-Flow problem after the quantization of de and T . The procedure must
return a feasible flow with a total flow rate no smaller than
R in polynomial time, if T is an upper bound of the optimal
solution to Min-Max-Delay, i.e. our Algorithm 2 must return
a flow f̂ in polynomial time with |f̂| ≥ R if T ≥ D(f_MM) (see
our Lem. 3). The procedure will be used iteratively in a binary
search scheme later in our FPTAS, similar to the structure of
our exact algorithm MMD(·) (Algorithm 1).
First, each link delay is quantized (line 5) to be the multiple
of an appropriately defined ∆T (line 4). Then we define the
quantized network Ĝ as the network G with each link delay
de replaced by dˆe (line 7). Besides, T is also quantized to
Algorithm 2 QDCMF(G, s, t, T, ε): quantized DC-Max-Flow algorithm
1: input: G = (V, E), s, t, T, ε
2: output: f̂
3: procedure
4:    ∆T = εT/|E|
5:    d̂_e = ⌈d_e/∆T⌉, ∀e ∈ E
6:    T̂ = ⌈T/∆T⌉ + |E|
7:    Ĝ(V, E): G(V, E) with each d_e replaced by d̂_e
8:    f̂ = DCMF(Ĝ, s, t, T̂)
9:    return f̂
10: end procedure
be T̂ based on ∆T (line 6). Since T̂ = ⌈|E|/ε⌉ + |E|, which is independent of the link delays, the DC-Max-Flow problem in the graph Ĝ with the input deadline T̂ is clearly polynomial-time-solvable. Moreover, such T̂ can guarantee that the quantized
deadline T̂ is still an upper bound for the maximum delay
of fMM in the quantized network Ĝ, if the deadline T is an
upper bound for the maximum delay of fMM in G, which will
be proved later in our Lem. 3. Finally in line 8, the returned
solution fˆ is the optimal solution to problem DC-Max-Flow
in the quantized network Ĝ with the quantized deadline T̂ .
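A minimal sketch (ours) of the quantization in lines 4–6 of Algorithm 2, assuming the scaling ∆T = εT/|E|; the numbers are purely illustrative:

```python
import math

def quantize(link_delays, T, eps):
    """Quantize link delays and the deadline as in Algorithm 2 (sketch).
    Returns (quantized delays, quantized deadline T_hat)."""
    E = len(link_delays)
    delta_T = eps * T / E                     # line 4: Delta_T = eps * T / |E|
    d_hat = {e: math.ceil(d / delta_T) for e, d in link_delays.items()}   # line 5
    T_hat = math.ceil(T / delta_T) + E        # line 6: equals ceil(|E|/eps) + |E|
    return d_hat, T_hat

d_hat, T_hat = quantize({'e1': 3, 'e2': 7, 'e3': 2}, T=10, eps=0.5)
# T_hat = ceil(3/0.5) + 3 = 9, independent of the actual link delays.
```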
Since Ĝ and G share the same network topology and link
capacity, obviously a flow f is feasible in Ĝ if and only if it is
feasible in G. Due to the difference of link delay in those two
networks, the maximum delay of f in Ĝ may be different from
the maximum delay of the same flow f in G. In order to avoid
possible confusion in the following lemmas and theorems, we
use DĜ (f ) to denote the maximum delay of f in the quantized
network Ĝ. Similarly, DG (f ) is the maximum delay of f in
the original network G. Without loss of generality, D(f) is the same as D_G(f) since G is the input to Min-Max-Delay.
We first prove Lem. 2, which will be used later to prove the (1 + ε)-approximation ratio of our FPTAS. Then we give Lem. 3 to show that the solution f̂ of algorithm QDCMF(G, s, t, T, ε) (Algorithm 2) must satisfy |f̂| ≥ R if
D(fMM ) ≤ T .
Lemma 2. It holds that DG (f ) ≤ ∆T · DĜ (f ) where Ĝ and
∆T are defined in Algorithm 2.
Proof: Suppose Pf is the flow-carrying path set of f .
Since the only difference between G and Ĝ is the link delay,
clearly Pf does not change in G and in Ĝ.
Now the lemma holds due to the following proof:
    D_G(f) = max_{p∈P_f} Σ_{e∈p} d_e ≤ max_{p∈P_f} Σ_{e∈p} [∆T ⌈d_e/∆T⌉] = ∆T · max_{p∈P_f} Σ_{e∈p} d̂_e = ∆T · D_Ĝ(f).
Lemma 3. If T ≥ D(f_MM), QDCMF(G, s, t, T, ε) must return
a feasible flow solution fˆ and |fˆ| ≥ R.
Proof: Since T ≥ D(fMM ), by Lem. 1, the solution to
problem DCMF(G, s, t, T ), denoted as f , must satisfy
    |f| ≥ R and D_G(f) ≤ T,    (22)
due to the assumption d∗ (R) = D(fMM ) ≤ T , namely there
must exist a feasible flow f in G with a total flow rate lower
bounded by R and a maximum delay upper bounded by T ,
assuming T is an upper bound for the optimum of Min-Max-Delay. As long as we can prove D_Ĝ(f) ≤ T̂, namely the
maximum delay of f in Ĝ is no larger than T̂ , we can prove
Lem. 3 since f is a feasible solution to DCMF(Ĝ, s, t, T̂ )
with |f | ≥ R and furthermore fˆ is the optimal solution to
DCMF(Ĝ, s, t, T̂ ) which maximizes the flow rate from s to t,
leading to the conclusion of |fˆ| ≥ |f | ≥ R.
The detailed proof is shown below, assuming P_f is the flow-carrying path set of f:
    D_Ĝ(f) = max_{p∈P_f} Σ_{e∈p} ⌈d_e/∆T⌉
           ≤ max_{p∈P_f} Σ_{e∈p} (d_e/∆T + 1)
           ≤(a) max_{p∈P_f} Σ_{e∈p} (d_e/∆T) + |E|
           = (1/∆T) · max_{p∈P_f} Σ_{e∈p} d_e + |E|
           = (1/∆T) · D_G(f) + |E|
           ≤ ⌈D_G(f)/∆T⌉ + |E| ≤(b) T̂.
Inequality (a) holds since the number of links on any flow-carrying path in f is upper bounded by |E| without loss of generality. Inequality (b) holds due to inequality (22).
Next we prove the polynomial time complexity of QDCMF(G, s, t, T, ε).

Lemma 4. Algorithm 2 has a fully polynomial time complexity of O((NM)^{3.5} log R log(NM)) where M = |E|(1 + 1/ε) + 1.

Proof: It is straightforward that the overall time complexity of QDCMF(G, s, t, T, ε) is dominated by the time complexity of DCMF(Ĝ, s, t, T̂) (line 8). Now considering that
    T̂ = ⌈T/∆T⌉ + |E| = ⌈|E|/ε⌉ + |E| ≤ |E|(1 + 1/ε) + 1,
following a similar procedure as in Thm. 2 we can get the proposed time complexity.

Algorithm 3 FPTAS to solve Min-Max-Delay
1: input: G = (V, E), R, s, t, U, ε
2: output: f_FPTAS
3: procedure
4:    T_u = U, T_l = 0, T* = T_u
5:    while T_u ≥ T_l do
6:        T = ⌈(T_u + T_l)/2⌉
7:        f̂ = QDCMF(G, s, t, T, ε)
8:        if |f̂| < R then
9:            T_l = T + 1
10:       else
11:           T* = T, T_u = T − 1
12:       end if
13:   end while
14:   f_FPTAS = QMMD(G, R, s, t, T*, ε)
15:   return f_FPTAS
16: end procedure

Algorithm 4 QMMD(G, R, s, t, T, ε): quantized Min-Max-Delay algorithm
1: input: G = (V, E), R, s, t, T, ε
2: output: f̂
3: procedure
4:    ∆T = εT/|E|
5:    d̂_e = ⌈d_e/∆T⌉, ∀e ∈ E
6:    T̂ = ⌈T/∆T⌉ + |E|
7:    Ĝ(V, E): G(V, E) with each d_e replaced by d̂_e
8:    f̂ = MMD(Ĝ, R, s, t, T̂)
9:    return f̂
10: end procedure

B. FPTAS to solve Min-Max-Delay

We describe our FPTAS in Algorithm 3, which follows a similar structure as our MMD(G, R, s, t, U) (Algorithm 1) that solves Min-Max-Delay optimally in pseudo-polynomial time, except for two differences:
(i) in each iteration, the FPTAS solves a quantized DC-Max-Flow problem optimally (line 7) while MMD solves the exact DC-Max-Flow problem optimally, and
(ii) in the algorithm MMD, when the binary search terminates, the associated T* is the optimal maximum delay, and thus
the solution to problem DC-Max-Flow with deadline T ∗ is the
optimal solution to our Min-Max-Delay. However, when the
binary search terminates in the FPTAS, the associated T* may not be the optimal maximum delay, and the (1 + ε)-approximate solution is the solution to problem QMMD(G, R, s, t, T*, ε) (line 14), which is the problem Min-Max-Delay (not the problem DC-Max-Flow) in the quantized network Ĝ with a maximum delay upper bound T̂* = ⌈T*/∆T*⌉ + |E|.
The quantization approach in QMMD(·) (Algorithm 4) is the same as in QDCMF(·) (Algorithm 2), with the size of formulation (18) being ⌈|E|/ε⌉ + |E|, which is independent of d_max and polynomial in the problem input and 1/ε, clearly leading to a polynomial time complexity of our FPTAS. Next we prove the (1 + ε)-approximate performance guarantee.
Lemma 5. At the end of Algorithm 3, it holds that T* ≤
D(fMM ).
Proof: Suppose Topt is the maximum delay performance
of the optimal solution to Min-Max-Delay, namely Topt =
D(f_MM). According to Lem. 3, QDCMF(G, s, t, T, ε) always
returns a feasible flow fˆ and |fˆ| ≥ R for any T ∈ [Topt , U ],
and may or may not return such a flow for T ∈ [0, Topt ).
Considering the property of the binary search structure of our FPTAS, in the end clearly T* ≤ T_opt = D(f_MM).
Note that at the end of our FPTAS, T* may not be the minimum integer among T ∈ [0, U] such that QDCMF(G, s, t, T, ε) returns a feasible flow with a total rate no smaller than R. In fact, our FPTAS only requires that T* ≤ D(f_MM) to guarantee a (1 + ε) approximation ratio,
as proved in the following theorem.
Theorem 4. Algorithm 3 is a (1 + ε)-approximate algorithm for the Min-Max-Delay problem, namely
    D(f_FPTAS) ≤ (1 + ε) D(f_MM).
Proof: The theorem holds due to the following derivation:
    D_G(f_FPTAS) ≤(a) ∆T* · D_Ĝ(f_FPTAS) ≤(b) ∆T* · D_Ĝ(f_MM)
    = ∆T* · max_{p∈P_{f_MM}} Σ_{e∈p} ⌈d_e/∆T*⌉
    ≤ ∆T* · max_{p∈P_{f_MM}} Σ_{e∈p} (d_e/∆T* + 1)
    ≤ ∆T* · max_{p∈P_{f_MM}} Σ_{e∈p} (d_e/∆T*) + ∆T* |E|
    = D_G(f_MM) + ∆T* |E| = D_G(f_MM) + εT*
    ≤(c) (1 + ε) D_G(f_MM).
Inequality (a) holds due to Lem. 2. Inequality (b) is true because f_FPTAS is the optimal solution to problem Min-Max-Delay in the graph Ĝ (our Algorithm 4). Inequality (c) is true due to Lem. 5.

[Fig. 2: Topology of the 6 Amazon EC2 datacenters.]

TABLE II: Link delays (in ms) for Amazon EC2 [6], (OR: Oregon, VA: Virginia, IR: Ireland, TO: Tokyo, SI: Singapore, SP: Sao Paulo).
      OR    VA    IR    TO    SI    SP
OR    N/A   41    86    68    117   104
VA    -     N/A   54    101   127   82
IR    -     -     N/A   138   117   120
TO    -     -     -     N/A   45    151
SI    -     -     -     -     N/A   182
SP    -     -     -     -     -     N/A
Theorem 5. Algorithm 3 has a fully polynomial time complexity of O((NM)^{3.5} log R log(NM)(L + log N)) where M = |E|(1 + 1/ε) + 1 and L is the bit complexity for representing d_max.
Proof: Assume we need L bits to represent d_max, namely d_max = O(2^L). According to Lem. 4, in each iteration of Algorithm 3, the corresponding time complexity is O((NM)^{3.5} log R log(NM)). Due to the fact that the initial
upper bound U for the optimal maximum delay is definitely
upper bounded by |E| · dmax , namely U ≤ |E| · dmax , in
the binary search scheme in our FPTAS we need at most
O(log(|E| · dmax )) iterations, leading to the proposed time
complexity in the end. Note that the running time of QMMD(·) (line 14) of Algorithm 3 is not a dominating part
according to Thm. 2 and does not affect the time complexity.
Thm. 4 together with Thm. 5 shows that our proposed
Algorithm 3 can guarantee an approximation ratio of (1 + ε) within polynomial time for any ε > 0, and hence Algorithm 3 is an FPTAS for the problem Min-Max-Delay. In contrast, as a generalization of our Min-Max-Delay, the maximum latency problem studied in the literature has been proved to be APX-hard [12] and admits no PTAS unless P = NP. Thus, although
Min-Max-Delay is an important special case of the maximum
latency problem which has been proved to be quite difficult to
approximate, there exist efficient approximate algorithms, e.g.
our proposed Algorithm 3, to solve the Min-Max-Delay with
an approximation ratio arbitrarily close to one in polynomial
time. This result reveals a fundamental difference between the
two problems.
VI. PERFORMANCE EVALUATION
In this section we evaluate our proposed algorithms and see
how the empirically achieved maximum delay performance
meets the video-conferencing end-to-end delay requirement
(400ms as suggested by [10]), comparing our algorithms to
state-of-the-art approaches using a real-world continent-scale
inter-datacenter network topology of 6 globally distributed
Amazon EC2 datacenters as shown in Fig. 2, modeled as a
complete undirected graph. Each undirected link is treated
as two directed links that operate independently and have
identical capacities, a common way to model an undirected
graph by a directed one, e.g. in [19]. We set integer link delays
(see Tab. II) and link capacities (see Tab. III) according to
practical evaluations on Amazon EC2 from studies [5], [6].
We do extensive simulations to solve Min-Max-Delay problem instances belonging to 4 different experimental cases (see
Tab. IV). For each case characterized by a specific source-receiver pair, we solve Min-Max-Delay instances with flow
rate requirement enumerated from Rmin to Rmax with a unit
step. In this paper, Rmin is set to be the minimum flow rate
when multiple paths (≥ 2 paths) must be used for delay
minimization while supporting the required flow rate, since
otherwise the Min-Max-Delay problem can be reduced to the
simple shortest path problem that is polynomial-time solvable
by Dijkstra’s algorithm [20] and is not our focus. We set Rmax
to be the maximum flow rate that can be sent from the source
TABLE III: Link capacities (in Mbps) for Amazon EC2 [5],
(OR: Oregon, VA: Virginia, IR: Ireland, TO: Tokyo, SI:
Singapore, SP: Sao Paulo).
      OR    VA    IR    TO    SI    SP
OR    N/A   82    86    138   74    67
VA    -     N/A   72    41    52    70
IR    -     -     N/A   56    44    61
TO    -     -     -     N/A   166   41
SI    -     -     -     -     N/A   33
SP    -     -     -     -     -     N/A
to the receiver.
Our test environment is an Intel Core i5 (2.40 GHz) processor with 8 GB memory running Windows 64-bit operating
system. All the experiments are implemented in C++ and
linear programs are solved using CPLEX [21].
A. Baseline algorithms
In order to evaluate the performance of our exact algorithm
and FPTAS, we compare them to three existing approaches.
1) the System-Optimal flow algorithm (SO): the system-optimal flow is a γ(L)-approximate solution to the maximum latency problem of minimizing the maximum delay with a flow-dependent link delay model, but theoretically it is not a good
solution to Min-Max-Delay with link capacity constraint and
integer link delay because its maximum delay performance
can become infinitely large compared to the optimal. In the
experiments we obtain the system-optimal flow by solving its
well-known edge-based linear programming formulation:
    min Σ_{e∈E} (d_e · x_e)    (23a)
    s.t. Σ_{e∈Out(s)} x_e = Σ_{e∈In(t)} x_e = R,    (23b)
         Σ_{e∈Out(v)} x_e = Σ_{e∈In(v)} x_e, ∀v ∈ V\{s, t},    (23c)
         x_e ≤ c_e, ∀e ∈ E,    (23d)
    vars. x_e ≥ 0, ∀e ∈ E.    (23e)
In order to get the path-based system-optimal flow and associated maximum delay result, we do flow decomposition [18]
on the optimal solution to the linear program (23).
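For reference, the edge-based linear program (23) can be assembled directly for an off-the-shelf LP solver; the sketch below uses scipy.optimize.linprog on a made-up four-node network (our illustration, not the paper's implementation):

```python
import numpy as np
from scipy.optimize import linprog

# Made-up network: edges as (u, v, capacity, delay).
edges = [('s', 'a', 5, 2), ('a', 't', 5, 3), ('s', 'b', 4, 1), ('b', 't', 4, 1)]
nodes = ['s', 'a', 'b', 't']
s, t, R = 's', 't', 6.0

c = np.array([d for (_, _, _, d) in edges], dtype=float)     # objective (23a)

# Flow conservation (23b)-(23c): net outflow is R at s, 0 at intermediate nodes.
A_eq, b_eq = [], []
for v in nodes:
    if v == t:
        continue                      # conservation at t is implied by the others
    row = np.zeros(len(edges))
    for j, (u, w, _, _) in enumerate(edges):
        if u == v:
            row[j] += 1.0
        if w == v:
            row[j] -= 1.0
    A_eq.append(row)
    b_eq.append(R if v == s else 0.0)

bounds = [(0.0, cap) for (_, _, cap, _) in edges]             # (23d)-(23e)
res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
print(res.x)   # system-optimal link rates x_e
```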
2) the heuristic Iterative Shortest Path algorithm
(ISP) [22]: the ISP approach is proposed by [22] to
solve the maximum latency problem, and it can be used
to handle our Min-Max-Delay directly with additional link
capacity constraint involved. However, ISP is a heuristic
approach with no maximum delay performance guarantee both
in the maximum latency problem and in our Min-Max-Delay.
ISP finds the shortest path from source to receiver iteratively, and in each iteration it assigns as much flow rate as
possible to the corresponding returned path. In our experiments
we use Dijkstra’s algorithm [20] to get the shortest path.
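A compact sketch of the ISP heuristic as described above (ours; the residual-capacity handling and attribute names are assumptions), using networkx:

```python
import networkx as nx

def isp(G, s, t, R):
    """Iterative Shortest Path heuristic (sketch): route demand R by repeatedly
    saturating the current delay-shortest s-t path. G is a DiGraph whose edges
    carry 'delay' and 'cap' attributes."""
    H = G.copy()
    flow, remaining = [], R
    while remaining > 1e-9 and nx.has_path(H, s, t):
        path = nx.shortest_path(H, s, t, weight='delay')
        links = list(zip(path[:-1], path[1:]))
        amount = min(min(H[u][v]['cap'] for u, v in links), remaining)
        flow.append((path, amount))
        remaining -= amount
        for u, v in links:                      # update residual capacities
            H[u][v]['cap'] -= amount
            if H[u][v]['cap'] <= 1e-9:
                H.remove_edge(u, v)
    return flow, remaining                      # remaining > 0 means R not met
```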
3) the heuristic Iterative Linear Relaxation algorithm
(ILR) [5]: the study in [5] optimizes three different end-to-end-delay-related objectives in a general multi-commodity multi-cast scenario with constant link delay and constant link capacity. Our Min-Max-Delay problem in the single-unicast scenario can be cast as one of the three problems in [5] with the following formulation:
    min max_{p∈P} {d^p φ(x^p)} + β Σ_{p∈P} φ(x^p)    (24a)
    s.t. Σ_{p∈P} x^p = R,    (24b)
         Σ_{p∈P: e∈p} x^p ≤ c_e, ∀e ∈ E,    (24c)
    vars. x^p ≥ 0, ∀p ∈ P.    (24d)
TABLE IV: Respective failure (%) of baseline algorithms
to satisfy the video-conferencing delay requirement for 4
different experimental cases.
          Source   Receiver   Flow rate [R_min, R_max]   Failure of Algo. (%): ISP   ILR   SO
Case 1    OR       TO         [139, 442]                  7     7     7
Case 2    VA       IR         [73, 317]                   2     1     2
Case 3    VA       SI         [53, 317]                   0     0     0
Case 4    IR       SI         [45, 319]                   9     15    0
where φ(·) is an indicator function defined as
    φ(x) = 1 if x > 0, and φ(x) = 0 if x = 0.
The work in [5] defines variables as the flow rate assigned
on the source-receiver paths with upper bounded length to
reduce the time complexity. But in our experiments in order to
minimize the maximum delay, we consider all source-receiver
paths, namely the path set P . In the objective (24a), β is used
by [5] as a penalty when too many paths are utilized to carry
flow rates because the authors of [5] try to get a sparse network
flow solution. However, in our experiments β is set to be 0 to
optimize the maximum delay performance.
Formulation (24) is hard to solve because of the indicator function φ(·). The study in [5] proposes the ILR heuristic approach to overcome the difficulty by iteratively solving linear programs which are relaxations of formulation (24). We refer the reader to [5] for the details of ILR. As discussed later in our
related work section, ILR has neither maximum delay performance guarantee nor running time performance guarantee
theoretically.
B. Experiments of our pseudo-polynomial time algorithm
We first evaluate the performance of our pseudo-polynomial
time algorithm MMD (Algorithm 1) which can solve Min-Max-Delay optimally. In the experiments, T_u is set to be
the maximum delay of the flow solution to ISP, and thus
the running time of the heuristic approach ISP is part of the
total running time of MMD. We do simulations for extensive
Min-Max-Delay instances as described in our Tab. IV, using
algorithms including SO, ISP, ILR and our MMD.
Recall that, as discussed in the introduction, to guarantee real-time interaction for video conferencing, 400ms is the maximum acceptable end-to-end delay budget according to the suggestion from ITU [10]. Note that the end-to-end delay is in fact the delay experienced by the video and audio during a video conference, i.e., from the source camera to the receiver screen, which is the sum of the coding delay at the source, the decoding delay at the receiver, and the network transmission delay. In this paper, however, we only target minimizing the transmission delay. According to [23], [24], a typical delay for the source (receiver) to process the video data can be 70ms, and such a processing delay can be reduced to 35ms for devices with advanced low-latency technology, e.g., the Sensoray Model 2253 audio/video codec [24].
TABLE V: Average empirical maximum delay improvement (%) comparing the optimal solution of Min-Max-Delay obtained by our MMD (Algorithm 1) to each of the baselines, in non-trivial problem instances with relatively large flow rate requirement.

          Source  Receiver  Flow rate     Max-delay imprv. (%)
                                           ISP   ILR   SO
  Case 1    OR      TO        [422, 442]    13    11    13
  Case 2    VA      IR        [305, 317]    11     6    11
  Case 3    VA      SI        [296, 317]     8     8     0
  Case 4    IR      SI        [296, 319]     7     8     0
We reasonably assume a 35ms processing delay at both the source and the receiver, and hence the transmission delay upper bound that guarantees an acceptable video-conferencing service is 330ms. In every simulation instance, the optimal maximum delay obtained by MMD is always below the 330ms budget, while the maximum delay achieved by the baseline algorithms violates the 330ms upper bound, and hence fails to provide acceptable video conferencing, in multiple instances; detailed failure percentages are provided in Tab. IV. As shown in the table, the baseline algorithms fail to satisfy the 330ms video-conferencing delay budget for up to 15% of the simulation instances between certain source-receiver pairs (the maximum failure percentages of the baselines are 7%, 2%, 0% and 15%, respectively, for the 4 experimental cases). Moreover, the baseline ISP (resp. ILR and SO) fails to route the video-conferencing traffic within the acceptable delay budget from certain sources to certain receivers for up to 9% (resp. 15% and 7%) of all simulation instances.
In our simulations, for trivial problem instances with small/medium rate requirements, all algorithms achieve the same, optimal maximum delay performance since only two or three paths are used. We provide the detailed maximum delay and running time results for non-trivial problem instances with relatively large flow rate requirements in Fig. 3, where each of the three baselines clearly fails to solve our problem optimally in certain instances. In Tab. V, we give the average maximum delay improvement, namely the average of (D(f) − D(f_MM))/D(f), where f_MM is the optimal solution to Min-Max-Delay returned by our MMD and f is the solution of the individual baseline, over those non-trivial problem instances.
The running time results are also presented in Fig. 3. According to the figure, the running time of SO and ISP is so small that it can be ignored compared with the results of MMD and ILR. Although our MMD, which always finds the optimal solution, consumes more time than SO and ISP, it in fact runs quite fast in our experiments (less than 0.5 seconds) for all simulation instances.
C. Experiments of our FPTAS
For our problem Min-Max-Delay, because the exact algorithm (Algorithm 1) has a pseudo-polynomial time complexity theoretically, in Sec. V-B we develop an FPTAS (Algorithm 3) which guarantees a (1 + ε)-approximate solution within fully polynomial time. In this section, we evaluate the performance of our FPTAS for traffic from Oregon to Tokyo (case 1 in Tab. V) and for traffic from North Virginia to Singapore (case 3 in Tab. V). In both cases, the approximation parameter ε is increased from 0 to 3 with a step of 0.1, and we obtain the maximum delay and running time results of the FPTAS for problem instances with different flow rate requirements, as shown in Fig. 4.
According to the figure, in both experimental cases, either from Oregon to Tokyo or from North Virginia to Singapore, a larger flow rate requirement obviously results in a solution flow with a larger optimal maximum delay. Although theoretically the maximum delay of the FPTAS is only upper bounded by a ratio of (1 + ε) compared to the optimal solution, empirically our FPTAS achieves the optimal maximum delay in all experimental cases for any ε ∈ (0, 3] (the maximum delay result remains constant with ε, as shown in the figure). According to the running time results, a smaller rate requirement and a larger ε clearly lead to better running times; this is particularly relevant for ε < 0.5, where theoretically our FPTAS always gives a feasible flow with a maximum delay no worse than 1.5 × OPT, where OPT denotes the optimal maximum delay of the Min-Max-Delay problem instance.
VII. RELATED WORK
The maximum latency problem models the link delay as a function of the aggregated link flow rate. Since our Min-Max-Delay problem, which is characterized by an integer link delay within a finite link capacity, can be viewed as a special case of the maximum latency problem, existing algorithms for the maximum latency problem can be applied to solve our problem by assuming the delay function to be an integer within capacity and to go to infinity otherwise. Although most existing algorithms can solve the maximum latency problem efficiently, none of them achieves a theoretically bounded maximum delay, compared to the optimal delay performance, for our Min-Max-Delay problem.
Correa et al. [11], [12] prove that the maximum latency problem is NP-hard, in particular APX-hard, and show that, compared to the optimal maximum delay, the tight maximum delay gap of the well-known system-optimal flow is γ(L) and the tight maximum delay gap of the Nash equilibrium flow is α(L). Both γ(L) and α(L) are constants that depend on the link delay function. However, for the link delay model in our Min-Max-Delay problem, both γ(L) and α(L) are infinitely large, implying that neither the system-optimal flow nor the Nash equilibrium flow can guarantee a maximum delay within a finite-ratio gap to the optimal solution of our problem. Roughgarden [13] shows that the Nash equilibrium also admits a maximum delay no worse than a network-dependent ratio of (|V| − 1) for the maximum latency problem; obviously the gap (|V| − 1) is too large for practical applications. Devetak et al. [22] have introduced fast heuristic algorithms to optimize the maximum delay, but the maximum delay of the returned solutions has no theoretical performance guarantee compared to the optimal solution.
Fig. 3: Running time and achieved maximum delay results of problem instances with large flow rate requirement for 4 experimental cases, comparing our pseudo-polynomial time algorithm (MMD) to three baselines.
Fig. 4: Running time and achieved maximum delay results of our FPTAS for problem instances with different flow rate requirements and approximation parameter ε.
To our best knowledge, [5] is the only existing study targeting the maximum delay optimization scenario with constant link delay and finite link capacity involved. The heuristic
approach in [5] for minimizing the maximum delay has two
limitations: (i) the time complexity can be high because the number of variables in the approach increases exponentially with the network size, and (ii) there is no theoretical performance guarantee for the achieved solution.
VIII. C ONCLUSION
We study a delay-critical information flow problem which
optimizes the maximum end-to-end cross-network delay in
the single-unicast scenario, denoted as the Min-Max-Delay
problem. Transmission over a link incurs an integer delay if
the rate is within the finite link capacity, and an unbounded
delay otherwise. Many practical applications, e.g. the routing
of video conferencing traffic over inter-datacenter networks,
can be formulated as such problems.
We prove that Min-Max-Delay is NP-hard in the weak sense and propose two algorithms: (i) an exact algorithm that finds the optimal solution with a pseudo-polynomial time complexity of O((N²·d_max)^{3.5} · log(N²·d_max) · log R), where N = max{|V|, |E|} describes the network size, R is the flow rate requirement, and d_max is the maximum link delay, and (ii) an FPTAS with a (1 + ε)-approximation ratio and a polynomial time complexity of O((N·M)^{3.5} · log R · log(N·M) · (L + log N)) for any ε > 0, where M = |E|(1 + 1/ε) + 1 and L is the bit complexity for representing d_max. Our results reveal
fundamental differences between the Min-Max-Delay problem and the maximum latency problem studied in the literature, which is APX-hard and admits no PTAS unless P = NP. Moreover, there is not yet an exact pseudo-polynomial-time algorithm or a constant-factor approximation algorithm for the maximum latency problem.
We demonstrate the effectiveness of our algorithms in the
scenario of delay-critical inter-datacenter video-conferencing
traffic routing over multiple paths, using simulations based
on the Amazon EC2 inter-datacenter topology. Both of our
algorithms achieve the optimal maximum delay performance in all simulation instances, and they always meet the end-to-end delay requirement for video conferencing applications. On the contrary, the state-of-the-art solutions only obtain suboptimal maximum delay performance in certain instances, and fail to satisfy the video-conferencing delay requirement for up to 15% of the simulation instances between certain cross-continental source-receiver pairs.
R EFERENCES
[1] Q. Liu, L. Deng, H. Zeng, and M. Chen, “On the min-max-delay
problem: Np-completeness, algorithm, and integrality gap,” in IEEE
Information Theory Workshop, 2017.
[2] WIKI. Polynomial time approximation scheme. [Online]. Available:
https://en.wikipedia.org/wiki/Polynomial-time approximation scheme
[3] C.-Y. Hong, S. Kandula, R. Mahajan, M. Zhang, V. Gill, M. Nanduri,
and R. Wattenhofer, “Achieving high utilization with software-driven
wan,” in ACM SIGCOMM Computer Communication Review, vol. 43,
no. 4. ACM, 2013, pp. 15–26.
[4] S. Jain, A. Kumar, S. Mandal, J. Ong, L. Poutievski, A. Singh,
S. Venkata, J. Wanderer, J. Zhou, M. Zhu et al., “B4: Experience with
a globally-deployed software defined wan,” ACM SIGCOMM Computer
Communication Review, vol. 43, no. 4, pp. 3–14, 2013.
[5] Y. Liu, D. Niu, and B. Li, “Delay-optimized video traffic routing
in software-defined interdatacenter networks,” IEEE Transactions on
Multimedia, vol. 18, no. 5, pp. 865–878, 2016.
[6] M. H. Hajiesmaili, L. T. Mak, Z. Wang, C. Wu, M. Chen, and
A. Khonsari, “Cost-effective low-delay design for multi-party cloud
video conferencing,” IEEE Transactions on Multimedia, 2017.
[7] WebEx. [Online]. Available: https://blog.webex.com/2016/01/fivereasons-to-join-a-webex-now/
[8] Skype. [Online]. Available: https://news.microsoft.com/bythenumbers/
skype-calls
[9] Cisco. [Online]. Available: http://www.cisco.com/c/dam/en/us/solutions/
collateral/collaboration/midsize-collaboration-solutions/high-growthinnovative-companies.pdf
[10] ITU, “Series g: Transmission systems and media, digital systems and
networks,” 2003.
[11] J. Correa, A. Schulz, and N. Moses, “Computational complexity, fairness, and the price of anarchy of the maximum latency problem,” in Inter.
Conf. Integer Programming and Combinatorial Optimization, 2004.
[12] J. Correa, A. Schulz, and N. Stier-Moses, “Fast, fair, and efficient flows
in networks,” Operations Research, vol. 55, no. 2, pp. 215–225, 2007.
[13] T. Roughgarden, "The maximum latency of selfish routing," Cornell University, Dept. of Computer Science, Tech. Rep., 2004.
[14] M. R. Garey and D. S. Johnson, ““strong” np-completeness results:
motivation, examples, and implications,” Journal of the ACM, vol. 25,
no. 3, pp. 499–508, 1978.
[15] C.-C. Wang and M. Chen, “Sending perishable information: Coding
improves delay-constrained throughput even for single unicast,” in IEEE
International Symposium on Information Theory, 2014.
[16] P. M. Vaidya, “Speeding-up linear programming using fast matrix
multiplication,” in IEEE Symp. Foundations of Computer Science, 1989.
[17] Y. Lee and A. Sidford, "Path finding methods for linear programming: Solving linear programs in Õ(√rank) iterations and faster algorithms for maximum flow," in Symp. Foundations of Computer Science, 2014.
[18] L. Ford and D. Fulkerson, “Constructing maximal dynamic flows from
static flows,” Operations Research, vol. 6, no. 3, pp. 419–433, 1958.
[19] B. Grimmer and S. Kapoor, “Nash equilibrium and the price of anarchy
in priority based network routing,” in Computer Communications, IEEE
INFOCOM 2016-The 35th Annual IEEE International Conference on.
IEEE, 2016, pp. 1–9.
[20] E. W. Dijkstra, “A note on two problems in connexion with graphs,”
Numerische mathematik, vol. 1, no. 1, pp. 269–271, 1959.
[21] IBM, “Cplex optimizer,” 2017. [Online]. Available: https://www01.ibm.com/software/commerce/optimization/cplex-optimizer/
[22] F. Devetak, J. Shin, T. Anjali, and S. Kapoor, “Minimizing path delay
in multipath networks,” in IEEE Conf. Communications, 2011.
[23] P. Eberlein. Understanding video latency: what is video latency and why do we care about it? [Online]. Available: http://www.vision-systems.com/content/dam/VSD/solutionsinvision/Resources/Sensoray video-latency article FINAL.pdf
[24] I. Sensoray Company. Usb audio/video codec model 2253 hardware
manual. [Online]. Available: http://www.sensoray.com/downloads/man
2253 hw 1.2.1.pdf
PROTEIN THREADING BASED ON NONLINEAR INTEGER PROGRAMMING
Wajeb Gharib Gharibi
Department of Computer Engineering & Networks
Jazan University
Jazan 82822-6694, Saudi Arabia
Gharibi@jazanu.edu.sa
Marwah Mohammed Bakri
Department of Biology
Jazan University
Jazan 755, Saudi Arabia
Marwah890@gmail.com
Abstract
Protein threading is a method of computational
protein structure prediction used for protein sequences
which have the same fold as proteins of known structures
but do not have homologous proteins with known
structure. The most popular algorithm is based on linear
integer programming.
In this paper, we consider methods based on
nonlinear integer programming. Actually, the existing
linear integer programming is directly linearized from the
original quadratic integer programming. We then develop
corresponding efficient algorithms.
Keywords: Protein Threading, Protein Structure
Alignment, Integer Programming, Relaxation
1
INTRODUCTION
Protein structure prediction from amino acid
sequence is a fundamental scientific problem and it is
regarded as a grand challenge in computational biology
and chemistry.
The protein threading problem, also referred to as the holy grail of molecular biology and the second half of the genetic code, is to determine the three-dimensional folded shape (protein structure prediction) of a protein (a sequence of characters drawn from an alphabet of 20 letters). It is important because the biological function of proteins underlies all life, their function is determined by their three-dimensional shape, and their shape is determined by their one-dimensional sequence.
The prediction is made by "threading" (i.e. placing,
aligning) each amino acid contained in the target
sequence to a position in the template structure, and
evaluating how well the target fits the template. After the
best-fit template is selected, the structural model of the
sequence is built based on the alignment with the chosen
template. The protein threading method is based on two
basic observations. One is that the number of different
folds in nature is fairly small (approximately 1000), and
the other is that according to the statistics of the Protein
Data Bank (PDB), 90% of the new structures submitted to the PDB in the past three years have similar structural folds to the ones already in the PDB.
A general paradigm of protein threading consists of
the following four steps: the construction of a structure
template database, the design of the scoring function,
threading alignment and threading prediction. The third step is one of the major tasks of all threading-based structure prediction programs; it is mainly dedicated to solving the optimal alignment problem derived from a scoring function considering pairwise contacts.
As a formal presentation of the problem, let C, called the core, be a set of m items, called segments, each of a given length. This set must be aligned to a sequence L of N characters from some finite alphabet. Let t_i be the position in L where segment i starts. An alignment t = (t_1, ..., t_m) is called a feasible threading if:
1) the segments preserve their order and do not overlap in L, and
2) the length (called gap or loop) of the uncovered characters between two consecutive segments is bounded.
Each feasible threading t = (t_1, ..., t_m) is scored by a function f(t) that combines terms scoring the placement of each segment i at a given position and, in some experiments, terms scoring the gap between two consecutive segments. If the problem now is to minimize f(t) over the set F of feasible threadings, one can show the equivalence with a shortest path problem between two vertices of a very structured graph.
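As an illustration of this formalization, the following sketch checks the feasibility of a threading and evaluates its score. The identifiers (start positions t[i], segment lengths, and the two score callbacks) are illustrative placeholders of our own, not the paper's notation:

```python
def is_feasible(t, lengths, N, max_gap):
    """Check conditions 1)-2): segments fit in L, keep their order without
    overlapping, and each gap between consecutive segments is at most max_gap."""
    m = len(t)
    for i in range(m):
        if t[i] < 0 or t[i] + lengths[i] > N:
            return False
        if i + 1 < m:
            gap = t[i + 1] - (t[i] + lengths[i])
            if gap < 0 or gap > max_gap:
                return False
    return True

def score(t, lengths, place_score, gap_score):
    """f(t): placement scores plus (optionally) gap scores."""
    total = sum(place_score(i, t[i]) for i in range(len(t)))
    total += sum(gap_score(i, t[i + 1] - (t[i] + lengths[i]))
                 for i in range(len(t) - 1))
    return total
```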
The protein threading problem can thus be modeled as minimizing this objective function subject to the feasibility constraints above, where m is the number of segments, the number of possible placements of each segment relative to the end of the previous one is determined by the segment lengths increased by the minimal number of gaps between segments k and k+1, and the decision variables are binary, a variable being equal to 1 when segment i starts from the corresponding absolute position of the position sequence L.
Many different algorithms have been proposed for
finding the correct threading of a sequence onto a
structure, though many make use of dynamic
programming in some form. For full 3-D threading, the
problem of identifying the best alignment is very difficult
(it is an NP-hard problem). Researchers have made use of
many combinatorial optimization methods to arrive at
solutions. There are many algorithms, for example, the
protein threading software RAPTOR, which is based on
linear integer programming.
In this paper, we focus on developing efficient algorithms. We notice that the mathematical models used in the literature are normally linear integer programs, which can actually be regarded as linearizations of a quadratic integer programming problem. This motivates us to study the original quadratic integer program directly. Recently, quadratic integer programming has become a hot research topic in the optimization community. Many mathematical tools, such as conic programming, have been developed, with which we can construct corresponding efficient algorithms.
Now, consider the zero-one quadratic programming problem

    P:  min  c^T x + x^T Q x                          (1.1)
        s.t. h^T x + x^T G x ≥ g,                     (1.2)
             x ∈ X ⊆ {0,1}^n,                         (1.3)

where Q and G are general symmetric matrices of dimension n × n. This problem is a generalization of unconstrained zero-one quadratic problems, zero-one quadratic knapsack problems, quadratic assignment problems and so on. It is clearly NP-hard.
Linearization strategies reformulate the zero-one quadratic program (1.1)-(1.3) as an equivalent mixed-integer programming problem with additional binary and/or continuous variables and continuous constraints, see [1, 2, 3, 6, 7, 8, 9, 10, 12, 13]. Recently, Sherali and Smith [14] developed small linearizations for (1.1)-(1.3), which are more general with structure. The linearization generated by our approach is smaller. More tight linearization strategies are proposed in this article for further improvement.
This article is organized as follows. In Section 2, we shortly describe the existing efficient linearization approach. In Section 3, we introduce our approach and present the linearized model. We conclude the paper in Section 4.

2  THE EXISTING EFFICIENT LINEARIZATION APPROACH

Define
    θ_i^{min/max} = min/max { Q_i x : x ∈ X̄ },   ∀i,              (2.1)
where Q_i is the i-th row of Q, and X̄ is any suitable relaxation of X such that problem (2.1) can be solved relatively easily. Let θ^{min/max} be the vector with components θ_i^{min/max}, i = 1, 2, ..., n, and Θ^{min/max} = diag(θ_i^{min/max}). Similarly, define
    δ_i^{min/max} = min/max { G_i x : x ∈ X̄ },   ∀i,              (2.2)
and δ^{min/max} = (δ_i^{min/max}, i = 1, ..., n)^T, Δ^{min/max} = diag(δ_i^{min/max}, i = 1, ..., n).
Sherali and Smith [14] reformulated Problem P as an equivalent bilinearly constrained bilinear program by introducing φ = Qx and ψ = Gx. Linearizing the terms x_i φ_i and x_i ψ_i by s_i and z_i respectively, they obtained

    BP: min  c^T x + e^T s                                          (2.3)
        s.t. Qx = φ,                                                (2.4)
             h^T x + e^T z ≥ g,                                     (2.5)
             Gx = ψ,                                                (2.6)
             θ_i^min x_i ≤ s_i ≤ θ_i^max x_i,                   ∀i, (2.7)
             θ_i^min (1 − x_i) ≤ φ_i − s_i ≤ θ_i^max (1 − x_i), ∀i, (2.8)
             δ_i^min x_i ≤ z_i ≤ δ_i^max x_i,                   ∀i, (2.9)
             δ_i^min (1 − x_i) ≤ ψ_i − z_i ≤ δ_i^max (1 − x_i), ∀i, (2.10)
             x ∈ X,                                                 (2.11)

where e is a conformable vector of ones and the constraints (2.7)-(2.10) come from multiplying the bounds θ^min ≤ Qx ≤ θ^max, δ^min ≤ Gx ≤ δ^max (2.12) by x_i and (1 − x_i).
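As a small illustration, when the relaxation X̄ is taken to be the box [0,1]^n, the row-wise bounds in (2.1)-(2.2) have a closed form: the minimum activates only the negative entries of the row and the maximum only the positive ones. The sketch below is our own illustrative code, not part of [14]:

```python
import numpy as np

def row_bounds(M):
    """For X-bar = [0,1]^n: min/max of M_i x per row, in closed form."""
    lo = np.minimum(M, 0).sum(axis=1)   # sum of negative entries per row
    hi = np.maximum(M, 0).sum(axis=1)   # sum of positive entries per row
    return lo, hi

# Example: bounds for the rows of Q (and similarly for G).
Q = np.array([[2.0, -1.0], [-3.0, 4.0]])
theta_min, theta_max = row_bounds(Q)
```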
It was shown in [14] that Problems BP and P are equivalent in the sense that for each feasible solution to one problem, there exists a feasible solution to the other problem having the same objective value. Furthermore, let x̄ be part of an optimal solution to Problem BP; then x̄ solves Problem P.
Via the linear transformation
    s_i ← s_i − θ_i^min x_i,   y_i = φ_i − s_i − θ_i^min (1 − x_i),   z_i ← z_i − δ_i^min x_i,   ∀i,     (2.22)
BP (2.3)-(2.11) has an equivalent compact formulation (2.13)-(2.21) in the new variables. Since the optimization and constraint senses of BP tend to push the variables s to their lower bounds and z to their upper bounds, the final relaxed version of BP was written as

    BP: min  c^T x + e^T s + (θ^min)^T x                (2.23)
        s.t. Qx = y + s + θ^min,                        (2.24)
             0 ≤ y ≤ [Θ^max − Θ^min](e − x),            (2.25)
             s ≥ 0,                                     (2.26)
             h^T x + e^T z + (δ^min)^T x ≥ g,           (2.27)
             Gx ≥ z + δ^min,                            (2.28)
             0 ≤ z ≤ [Δ^max − Δ^min] x,                 (2.29)
             x ∈ X,                                     (2.30)

by deleting the upper bounding inequalities for s and z in (2.17) and (2.20), and combining (2.16) with (2.20). Besides, BP can be improved by the additional cuts
    (θ_i^min + δ_i^min − w_i^max) x_i + s_i + z_i ≤ 0,   ∀i,     (2.31)
which are derived from multiplying (G_i + Q_i)x ≤ w_i^max by x_i, where w_i^max = max{ (G_i + Q_i)x : x ∈ X̄ }.

3  A REPRESENTATION APPROACH

Motivated by [15], we first reveal the relation between general quadratic terms and piece-wise linear terms for zero-one variables.

Lemma 3.1. Let x ∈ X ⊆ {0,1}^n. For all i = 1, ..., n,
    x_i Q_i x = max{ θ_i^min x_i ,  Q_i x − θ_i^max (1 − x_i) },     (3.1)
    x_i Q_i x = min{ θ_i^max x_i ,  Q_i x − θ_i^min (1 − x_i) }.     (3.2)

Proof. Suppose x_i = 0; then the left hand side of (3.1) is 0 and the right hand side becomes max{ 0, Q_i x − θ_i^max } = 0. On the other hand, if x_i ≠ 0, it must hold that x_i = 1, and the right hand side of (3.1) reads max{ θ_i^min, Q_i x } = Q_i x, which is equal to the left hand side. The proof of (3.1) is completed, and (3.2) can be similarly verified.

Corollary 3.1. Let x ∈ X ⊆ {0,1}^n. For all i = 1, ..., n,
    max{ θ_i^min x_i , Q_i x − θ_i^max (1 − x_i) } ≤ s_i ≤ min{ θ_i^max x_i , Q_i x − θ_i^min (1 − x_i) }     (3.3)
if and only if
    s_i = x_i Q_i x.                                                 (3.4)
Combining (3.1) with (3.2), we have
    x_i Q_i x = max{ θ_i^min x_i , Q_i x − θ_i^max (1 − x_i) } = min{ θ_i^max x_i , Q_i x − θ_i^min (1 − x_i) }.   (3.5)

The above results hold true for G_i and δ defined before. The linearization based on Corollary 3.1 is just BP (2.3)-(2.11), where the linear inequalities (2.7)-(2.8) are nothing but (3.3). We remark here that the four inequalities implied by (3.3) were first introduced in [8]. Actually, not all inequalities in (3.3) are necessary in the final linearized model. To see this, below we first introduce the principle of reformulating zero-one quadratic programs into piece-wise linear programs. Generally, for continuous programs, we have
Proposition 3.1. Any convex program with linear or piece-wise linear objective function and constraints is equivalent to a linear program, in the sense that there is a one-to-one projection between the feasible solutions of both problems.

Proof. We notice that min f(x) is equivalent to
    min t   s.t.  t − f(x) ≥ 0.
Without loss of generality we may therefore assume that the objective function is linear. The constraint set is convex and characterized by piece-wise linear inequalities. It follows that it is convex polyhedral, and hence has a linear description.

It is easy to see that the equivalence of Proposition 3.1 still holds if we restrict the variables to be zeros or ones. Next we show the existence of such an equivalent 'convex' piece-wise linear program for the zero-one quadratic minimization problem.

Proposition 3.2. For any zero-one quadratic minimization problem, there is an equivalent zero-one piece-wise linear program with convex objective function and constraints.

Proof. Clearly, the maximum of several linear functions is convex and the minimum is concave. Then (3.1) and (3.2) in Lemma 3.1 provide the convex and concave formulations, respectively. Therefore, for any given zero-one quadratic minimization problem, we can obtain an equivalent convex piece-wise linear program by using (3.1) and/or (3.2). Note that we use (3.1) and (3.2) simultaneously only when handling equality constraints; see also Corollary 3.1.

Now we can see that (1.1)-(1.3) has the following equivalent formulation:
    min  c^T x + Σ_{i=1}^{n} max{ θ_i^min x_i , Q_i x − θ_i^max (1 − x_i) }        (3.6)
    s.t. h^T x + Σ_{i=1}^{n} min{ δ_i^max x_i , G_i x − δ_i^min (1 − x_i) } ≥ g,   (3.7)
         x ∈ X ⊆ {0,1}^n.                                                          (3.8)

Linearizing (3.6)-(3.8) becomes very easy. For example, (3.7) is equivalent to
    h^T x + Σ_{i=1}^{n} z_i ≥ g,                       (3.9)
    z_i ≤ δ_i^max x_i,                           ∀i,   (3.10)
    z_i ≤ G_i x − δ_i^min (1 − x_i),             ∀i,   (3.11)
since (3.9)-(3.11) is a relaxation of (3.7) and (3.9)-(3.11) also implies (3.7). Now we can obtain a linearization for (3.6)-(3.8) which is similar to BP, except that we do not require y ≥ 0 and z ≥ 0; in other words, these inequalities are redundant in BP.
Finally, we point out that the non-necessity of inequalities such as y ≥ 0 and z ≥ 0 was also observed in [1, 2]. Actually, the linearization generated by our convex piece-wise approach coincides with theirs.
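The identities (3.1)-(3.2) underlying this construction are easy to check numerically. The following illustrative sketch (our own code) verifies them on random binary vectors, using the box bounds from Section 2:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
Q = rng.integers(-5, 6, size=(n, n)).astype(float)
theta_min = np.minimum(Q, 0).sum(axis=1)   # row-wise min of Q_i x over [0,1]^n
theta_max = np.maximum(Q, 0).sum(axis=1)   # row-wise max

for _ in range(1000):
    x = rng.integers(0, 2, size=n).astype(float)
    Qx = Q @ x
    lhs = x * Qx                                                 # x_i * Q_i x
    cvx = np.maximum(theta_min * x, Qx - theta_max * (1 - x))    # (3.1), convex form
    ccv = np.minimum(theta_max * x, Qx - theta_min * (1 - x))    # (3.2), concave form
    assert np.allclose(lhs, cvx) and np.allclose(lhs, ccv)
```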
4  CONCLUSIONS
In this article, we considered the protein threading problem and discussed its solution by presenting small linearizations for the zero-one quadratic minimization problem.
We presented the equivalence of quadratic terms and piece-wise linear terms for zero-one variables. There are two piece-wise formulations, a convex one and a concave one. We showed that the smaller linearization is based on the convex piece-wise objective function and constraints. The linearization generated by our approach is smaller than that in [14]. Our approach can easily be extended to linearize polynomial zero-one minimization problems, which have many applications, particularly in biological computing problems.
5  REFERENCES
[1]. Adams W., and Forrester R., “A Simple Approach
for Generating Concise Linear Representations of
Mixed 0-1 Polynomial Programs,” Operations
Research Letters, vol. 33, no. 1, pp. 55-61, 2005.
[2]. Adams W., and Forrester R., “Linear Forms of
Nonlinear Expressions: New Insights on Old Ideas,”
Operations Research Letters, vol. 35, no. 4, pp. 510-518, 2007.
[3]. Adams W., Forrester R., and Glover F.,
“Comparisons and Enhancement Strategies for
Linearizing Mixed 0-1 Quadratic Programs,”
Discrete Optimization, vol.1, no.2, pp. 99-120,
2004.
[4]. Adams W.P.,and Sherali H.D, “A tight linearization
and an algorithm for zero-one quadratic
programming problems,” Manage. Sci., vol.32, no.
10, pp. 1274-1290, 1986.
[5]. Al-Khayyal A. and Falk J.E., “Jointly constrained
biconvex
programming,”
Mathematics
of
Operations Research, vol.8, no. 2, pp. 273, 1983.
[6]. Chaovalitwongse W., Pardalos P. M. and Prokopyev
O.A., “A new linearization technique for multiquadratic 0-1 programming problems,” Operations
Research Letters, vol.32, pp.517-522, 2004.
[7]. Fortet R., “L'algebre de boole et ses applications en
recherche
operationnelle,”
Cahiers
du
Centred'Etudes de Recheche Operationnelle, vol.1,
pp. 5-36, 1959.
[8]. Glover F., "Improved linear integer programming formulations of nonlinear integer problems," Manage. Sci., vol. 22, no. 4, pp. 455-460, 1975.
[9]. Glover F. and Woolsey E., ”Further reduction of
zero-one polynomial programming problems to
zero-one linear programming problems,” Oper. Res.,
vol. 21, no. 1, pp. 156-161, 1973.
[10]. Glover F. and Woolsey E., “Converting the 0-1
polynomial programming problem to a 0-1 linear
program,” Oper. Res., vol.22, no. 1, pp.180-182,
1974.
[11]. McCormick P., ”Computability of global solution to
factorable nonconvex problems: Part I - Convex
underestimating
problems,”
Mathematical
Programming, vol.10, pp.147-175, 1976.
[12]. Oral M. and Kettani O., “A linearization procedure
for quadratic and cubic mixed-integer problems,”
Operations Research vol.40, pp.109-116, 1990.
[13]. Oral M. and Kettani O., “Reformulating nonlinear
combinatorial optimization problems for higher
computational efficiency,” European Journal of
Operational Research, vol.58, pp. 236-249, 1992.
[14]. Sherali H.D., Smith J.C., “An improved
linearization strategy for zero-one quadratic
programming problems,” Optimization Letters,
vol.1, pp. 33-47, 2007.
[15]. Xia Y., Yuan Y., “A new linearization method for
quadratic assignment problems,” Optimization
Methods and Software, vol. 21, no. 5, pp. 803-816,
2006.
[16]. Gharibi W., Xia Y., “A Tight Linearization Strategy
for Zero-One Quadratic Programming Problems”,
International Journal of Computer Science Issues
(IJCSI), Volume 9, Issue 3, April 2012.
JOURNAL OF COMPUTING, VOLUME 3, ISSUE 7, JULY 2011, ISSN 2151-9617, WWW.JOURNALOFCOMPUTING.ORG
Timing and Code Size Optimization on
Achieving Full Parallelism in Uniform Nested
Loops
Y. Elloumi, M.Akil and M.H. Bedoui
Abstract— Multidimensional retiming is one of the most important optimization techniques for improving the timing parameters of nested loops. It consists in exploring the iterative and recursive structures of loops to redistribute computation nodes over cycle periods, and thus to achieve full parallelism. However, this technique introduces a large overhead in the generated loop due to the loop transformation. The provided solutions are generally characterized by a large cycle count and a great code size, which are the most limiting factors when implementing them in embedded systems.
In this paper, we present a new multidimensional retiming technique, called "Optimal Multidimensional Retiming" (OMDR). It exploits the timing and data dependency characteristics of nodes to minimize the overhead. The experimental results show that the average improvement in the execution time of the nested loops obtained by our technique is 19.31% compared to the experiments provided by an existing multidimensional retiming technique. The average code size is reduced by 43.53% compared to the previous experiments.
Index Terms— Graph Theory, Multidimensional Applications, Optimization, Parallelism and concurrency.
1 INTRODUCTION
The design of real-time systems should respect many constraints, such as the execution time and code size, which require the use of optimization techniques. Retiming is one of these techniques; it can be used to add and remove registers in order to provide a more efficient circuit [1].
The increased complexity of such applications leads to the frequent use of nested iterative and recursive loops. Such applications can be modeled as a "Multidimensional Data Flow Graph" (MDFG). Standard software pipelining techniques can only be used to optimize a one-dimensional loop; when they are applied to optimize nested loops, the performance improvement is very limited [8].
Other works propose an optimization technique that takes advantage of the multiple nested loops, called "Multi-Dimensional Retiming" (MDR). It aims at achieving full parallelism of uniform nested loops. It consists in scheduling the MDFG with the minimum cycle period and modifying the execution order of nodes, such that each one is executed in a separate cycle.
However, achieving full parallelism requires adding a large code overhead [2], [3]. It dramatically increases the whole code size of the provided MDFG. Furthermore, this extra code requires a significant number of cycles to be executed outside the loop body. Thus, the provided solution does not achieve an adequate execution time and code size, which limits the implementation of the provided MDFG in a real-time embedded system.
We propose in this paper a new technique of MDR, called "Optimal Multidimensional Retiming". It allows
We propose in this paper a new technique of MDR,
called “Optimal Multidimensional Retiming”. It allows
————————————————
• Y. Elloumi is a P.H.D. student in Paris-Est University, 93162 Noisy le
Grand Cedex, France.
• M. Akil is Professor at computer science department, ESIEE, Paris, 93162
Noisy le Grand Cedex, France.
• M.H. Bedoui is Professor in faculty of medicine of Monastir, University of
Monastir, 5019, Monastir, Tunisia,
redistributing the nodes optimally over cycle periods, while scheduling the MDFG with the minimal cycle period. This technique allows significantly optimizing the number of cycle periods (and thereby the execution time) and the code size, by exploiting the execution times and data dependencies between nodes belonging to the MDFG. Thus, it provides enhanced solutions compared to the existing techniques.
The rest of the paper is organized as follows. In Section 2, we give an overview of the MDFG formalism. In Section 3, we list the existing MDR techniques and their constraints and limits. In Section 4, we present the theory of the "Optimal Multidimensional Retiming" technique by describing its principles and basic concepts, and by proposing the corresponding algorithms. Experimental results are presented in Section 5, followed by concluding remarks in Section 6.
2 MULTIDIMENSIONAL DATA FLOW GRAPH
The Multidimensional Data Flow Graph (MDFG) is an extension of the classic data flow graph that allows representing nested iterative and recursive structures. It is modeled as a node-weighted and edge-weighted directed graph G = (V, E, d, t), where V is the set of computation nodes, E ⊆ V × V is the set of edges, d(e) is a function from E to Z^n representing the multidimensional delay between two nodes, with n the number of dimensions (loops), and t(v) is a function from V to the positive integers representing the computation time of node v.
For a MDFG with n dimensions, each edge 𝑒 ∶ 𝑣𝑖 → 𝑣𝑗 is
characterized by a delay where 𝑑(𝑒) = (𝑐1 , 𝑐2 , … , 𝑐𝑛 ). The
value 𝑐𝑘 represents the difference between the execution
iteration of 𝑣𝑗 and the execution iteration of 𝑣𝑖 of the loop 𝑘.
We show in Fig.1.a a two-dimensional Data Flow Graph
(2DFG) corresponding the Wave Digital Filter described in
Algorithm 1, which is composed of two nested loops. The
execution of each node in V exactly represents one iteration,
which is the execution of one instance of the loop body. Each
edge belonging to the 2DFG shown in Fig.1.a is labeled by a
delay 𝑑(𝑒) = (𝑑. 𝑥, 𝑑. 𝑦). Both terms « 𝑑. 𝑥 » and « 𝑑. 𝑦 »
JOURNAL OF COMPUTING, VOLUME 3, ISSUE 7, JULY 2011, ISSN 2151-9617
HTTPS://SITES.GOOGLE.COM/SITE/JOURNALOFCOMPUTING/
WWW.JOURNALOFCOMPUTING.ORG
represent the difference between the iteration number
executing 𝑣𝑗 and the iteration number executing 𝑣𝑖 , in the
outermost loop as well as in the innermost loop [9].
For an edge 𝑒 : 𝑣𝑖 → 𝑣𝑗 , the delay 𝑑(𝑒) = (0, 𝑥) consists
in the execution of 𝑣𝑖 and 𝑣𝑗 in the same iteration of the
outermost loop. For the innermost loop, if the node vi is
executed in the iteration k, the node vj is executed in the iteration (k − x).
ALGORITHM 1
WAVE DIGITAL FILTER
0: For i from 0 to m do
1:   For j from 0 to n do
2:     D(i,j) = B(i-1, j+1) × C(i-1, j-1)
3:     A(i,j) = D(i,j) × 5
4:     B(i,j) = A(i,j) + 1
5:     C(i,j) = A(i,j) + 2
6:   End for
7: End for
An edge with zero delay d(e) = (0,0) represents a data dependency within the same iteration, such as the edges D → A, A → B and A → C shown in Fig.1.a.
We use the notation v_i →(e) v_j to indicate that e is an edge from node v_i to node v_j, and v_i ⇒(p) v_j to mean that p is a path from node v_i to node v_j. The delay vector of a path p: v_i →(e_m) v_{i+1} →(e_{m+1}) ⋯ →(e_n) v_j is d(p) = Σ_{k=m}^{n} d(e_k), and the total computation time of the path p is t(p) = Σ_{k=i}^{j} t(v_k).
during which all computation nodes in iteration are
executed, according to existing data dependencies and
without resource constraints, is called a cycle period. The
cycle period C(G) of an MDFG is the maximum computation
time among paths that have a zero delay. For example,
assuming that each node is executed in one time unit
𝑡(𝐴) = 𝑡(𝐵) = 𝑡(𝐶) = 𝑡(𝐷) = 1, the MDFG of Fig.1.a
has C(G) = 3. It can be measured through the paths 𝑝: 𝐷 →
𝐴 → 𝐵 or 𝑝: 𝐷 → 𝐴 → 𝐶, as shown in the iteration scheduling
illustrated in Fig.1.b. Each set of nodes belonging to the same
iteration are modeled by a different motif.
The execution pattern of a nested loop can be illustrated
by iteration space as shown in Fig.2.a. Each cell in the
iteration space is a copy of the MDFG. The marked cell,
labeled by (0,0), is the first iteration to be executed. This
Fig.1. (a) MDFG of Wave Digital Filter; (b) Iteration scheduling of the
MDFG in Fig.1.a.
graph is transformed into an acyclic graph, called the cell dependency graph (CDG), which clearly shows the execution sequence of a nested loop. The CDG of an MDFG G illustrates the dependencies between the copies of nodes representing the MDFG G, such as the CDG shown in Fig.2.b, which corresponds to the MDFG G in Fig.1.a. A node in the CDG is a computational cell that represents a complete iteration. The CDG of a nested loop is bounded by the loop indexes.
A schedule vector s defines a sequence of execution in the cell dependency graph. The CDG shown in Fig.2.b can be executed by a row-wise execution sequence, i.e., the schedule vector s = (1,0). An MDFG G = (V, E, d, t) is realizable if there exists a schedule vector s for the cell dependency graph with respect to G, i.e., s × d(e) ≥ 0 for all e ∈ E, and no cycle exists in its corresponding CDG. Note that delay vectors (0,1) and (0,−1) are both legal with respect to the schedule vector s = (1,0), but together they create cycles in the cell dependency graph.
Fig.2. (a) Iteration space of the MDFG in Fig.1; (b) the cell dependency graph.
3 MULTIDIMENSIONAL RETIMING
3.1 Principles
The retiming technique consists in redistributing delays in the graph. This technique can be applied on a data flow
graph to minimize the cycle period in a polynomial time. The
delays are moved around in the graph in the following way:
a delay unit is drawn from each of the incoming edges of 𝑣,
and then added to each of the outgoing edges of 𝑣, or vice
versa [1]. In the case of MDFG, it consists in redistributing
the execution of nodes on the iterations. The retiming vector
𝑟(𝑢) of a node 𝑢 ∈ 𝐺 represents the offset between the
original iteration containing 𝑢, and the one after retiming.
Note that the retiming technique preserves data
dependencies of the original MDFG. Therefore, we have
𝑑𝑟 (𝑒) = 𝑑(𝑒) + 𝑟(𝑢) − 𝑟(𝑣) for every edge and 𝑑𝑟 (𝑙) = 𝑑(𝑙)
for every cycle 𝑙 ∈ 𝐺. After retiming, the execution of the
node u in the iteration i is moved to the iteration 𝑖 − 𝑟(𝑢).
We show in Fig.3.a the MDFG 𝐺𝑟 = (𝑉, 𝐸, 𝑑𝑟 , 𝑡) of the
wave digital filter after applying the retiming function
𝑟(𝐷) = (0,1). When a delay is pushed through node 𝐷 to its
outgoing edge as shown in Fig.3.a, the actual effect on the
Algorithm 2 of the new MDFG is that the 𝑖 th copy of 𝐷 is
shifted up and is executed with (𝑖 − (0,1)) th copy of nodes 𝐴,
𝐵, and 𝐶. The original zero-delay edge 𝐷 → 𝐴 in Fig.1.a now
has a delay (0,1) after retiming as shown in Fig.3.a. Node 𝐷
in the new loop body has not any data dependency with
other nodes executed in the same cycle. So, node 𝐷 can be
executed in parallel to node 𝐴, as shown in the iteration
scheduling of Fig.3.b. Thus, the cycle period is reduced
from three to two time units.
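The effect of a retiming function on the delays can be sketched directly from the rule d_r(e) = d(e) + r(u) − r(v). The illustrative snippet below reuses the MDFG d and the cycle_period() helper from the previous sketch and reproduces the reduction of the cycle period from three to two when r(D) = (0,1):

```python
def retime(d, r):
    """d: dict (u, v) -> delay vector; r: dict node -> retiming vector (default 0)."""
    def vec(node, dim):
        return r.get(node, (0,) * dim)
    retimed = {}
    for (u, v), delay in d.items():
        ru, rv = vec(u, len(delay)), vec(v, len(delay))
        retimed[(u, v)] = tuple(de + a - b for de, a, b in zip(delay, ru, rv))
    return retimed

d_r = retime(d, {"D": (0, 1)})
# D -> A now carries delay (0, 1), so D can execute in parallel with A,
# and the cycle period drops from 3 to 2, as in Fig.3.
assert cycle_period(t, d_r) == 2
```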
In fact, every retiming operation corresponds to a
software pipelining operation. When 𝑟(𝑢) delay units are
pushed through a node u, every copy of this node is moved
by 𝑟(𝑢) iterations. Hence, a new iteration consists in
redistributing the execution nodes into different iterations.
ALGORITHM 2
WAVE DIGITAL FILTER AFTER RETIMING BY THE FUNCTION
R(D)=(0,1)
0: For i from 0 to m do
1: D(i,0) = B(i-1 , 1) × C(i-1 , -1)
2: For j from 0 to n-1 do
3:
D(i,j+1) = B(i-1 , j+2) × C(i-1 , j)
4:
A(i,j) = D(i,j) × 5
5:
B(i,j) = A(i,j) + 1
6:
C(i,j) = A(i,j) + 2
7: End for
8: A(i,n) = D(i,n) × 5
9: B(i,n) = A(i,n) + 1
10: C(i,n) = A(i,n) + 2
11:End for
Fig.4. (a) The iteration space of the retimed MDFG in Fig.3; (b) the cell dependency graph.
Some nodes are shifted out of the loop body to provide the necessary data for the iterative process; they form the prologue. Correspondingly, some nodes will be executed after the loop body to complete the process; they form the epilogue.
Using the MD retiming function r, we can trace the pipelined nodes and also measure the size of the prologue and epilogue. For a node v with retiming r(v) = (i, j), there are i copies of node v in the prologue of the outer loop and j copies of node v in the prologue of the innermost loop. The number of copies of a node in the epilogue can be derived in a similar way. The iteration space of the retimed MDFG shown in Fig.4.a, with retiming r(D) = (0,1), clearly shows that one copy of node D is pushed out of the loop body in the j-dimension and becomes the prologue of the innermost loop. The corresponding cell dependency graph is shown in Fig.4.b.
Fig.3. (a) MDFG of Algorithm 2; (b) Iteration scheduling of the MDFG in Fig.3.a.
It is known that an MDFG can always be fully parallelized by applying successively the MD retiming functions r(D) = (0,2) and r(A) = (0,1), which is illustrated in Fig.5.a. We note that the retimed MDFG has a non-zero delay on each edge. It implies that the nodes belonging to the same iteration in the original loop body are distributed into three cycle periods. The MDFG is then scheduled with the minimal cycle period equal to one time unit, as schematized in the iteration scheduling of Fig.5.b.
Fig.5. (a) Fully parallelized graph of the MDFG in Fig.1; (b) Iteration scheduling of the MDFG in Fig.5.a.
To achieve a realizable MDFG after retiming, the legality condition s × d(e) ≥ 0 has to be satisfied, and there must not exist any cycle in the cell dependency graph of the MDFG. Hence, the MDR technique aims to transform a realizable MDFG G into an MDFG Gr such that Gr is still realizable. Using such concepts, the basic conditions for a legal multidimensional retiming are defined in the following lemma [2].
Lemma 1. Let G = (V, E, d, t) be a realizable MDFG, r a multidimensional retiming, and s a schedule vector for the retimed graph Gr = (V, E, dr, t). Then:
1. for any path p: u ⇒ v, we have dr(p) = d(p) + r(u) − r(v);
2. for any cycle l ∈ G, we have dr(l) = d(l);
3. for any edge e: u → v, dr(e) × s ≥ 0;
4. there is no cycle in the CDG equivalent to the MDFG G.
The selection of a legal multidimensional Retiming
function is based on the edge delay of the MDFG. The
approach proposed in [2],[3] consists in defining a
scheduling subspace S for a realizable MDFG 𝐺 = (𝑉, 𝐸, 𝑑, 𝑡).
It represents the space region where there exist schedule
vectors that realize 𝐺; i.e., if schedule 𝑠 ∈ 𝑆 then 𝑠 × 𝑑(𝑒) ≥ 0
for any 𝑒 ∈ 𝐸. In fact, the multidimensional retiming
technique means to decrease zero-delay edges. Thus, a
strictly positive scheduling subspace 𝑠 + is the set al all
vectors 𝑠 ∈ 𝑆 where 𝑠 × 𝑑(𝑒) > 0 for every 𝑑(𝑒) ≠ (0,0, … ,0).
The method of predicting a legal multidimensional retiming
is introduced in the next theorem.
Theorem 1. let 𝐺 = (𝑉, 𝐸, 𝑑, 𝑡) be a realizable MDFG, 𝑠 + a
strictly positive scheduling sub-space of G, s a scheduling vector
in 𝑠 + , and 𝑢 ∈ 𝑉 a node with all incoming edge having nonzero
delay. A legal MD retiming r of u is any vector orthogonal to s.
3.2 Multidimensional Retiming techniques
We describe in this section the existent multidimensional
retiming techniques. They are characterized by achieving full
parallelism by providing the MDFG with no zero-delay edge
[2],[3],[7].
a) Incremental Multidimensional Retiming
This technique is based on selecting a set of nodes that
can be retimed by the same multidimensional retiming
function, as described in the following corollary.
Corollary 1. Given a realizable MDFG 𝐺 = (𝑉, 𝐸, 𝑑, 𝑡), 𝑠 + a
strictly positive scheduling sub-space of G, s a scheduling vector
ALGORITHM 5
CHAINED MULTIDIMENSIONAL RETIMING
Input : a realizable MDFG G =(V,E,d,t),
Output : a realizable MDFG Gr =(V,E,dr,t) without d(e)=(0,0,
… ,0)
0: Begin
1: Find a legal MDR function r as describedin steps 2 and 3
in algorithm 3
2: Provide the multi -chain graph and the maximal length
of chain K, as indicated in algorithm 4
3: For each node v with label i do
4:
Apply the MDR function (k-i)× (r)
5: End for
6: End
Fig.7 illustrates the fully parallelized MDFG of the IIR filter after applying the Incremental Multidimensional Retiming. The steps of Algorithm 3 are repeated four times, where in each one a different MD retiming function is applied. The fully parallelized MDFG is shown in Fig.7, where all edges have non-zero delay.
b) Chained Multidimensional Retiming [2],[3]
This technique allows obtaining the full parallelism
solution by defining just one MDR function. It is based on the
following corollary.
in 𝑠 + , and an MD retiming function r orthogonal to s, if a set
𝑋𝐶𝑉has all incoming edges nonzero, then 𝑟(𝑋) is a legal MD
retiming.
Thus, this technique consists in defining a schedule vector s as described in Definition 1, and choosing an MDR function orthogonal to s. The chosen function is applied to each node that respects the previous corollary. These steps are
ALGORITHM 3
INCREMENTAL MULTIDIMENSIONAL RETIMING
Input : a realizable MDFG G =(V,E,d,t)
Output : a realizable MDFG Gr=(V,E,dr ,t) without d(e) =
(0,0, … ,0)
0: Begin
1: While exist zero-delay edge in the graph Do
2: Find a scheduling vector s=(s.x,s.y), that s.x+s.y is
minimum
3: Choose a MDR function
4: Apply the selected MDR function to any nodes that
has all incoming edges with nonzero delays and at
least one outgoing edge with zero delay
5: End while
6: End
repeated incrementally, until all zero-delay edges are
transformed, as described in algorithm 3.
Fig.6. MDFG of IIR Filter.
We apply the algorithm above to the 2DFG of the Infinite Impulse Response (IIR) filter illustrated in Fig.6. It is composed of multiplier nodes denoted Mi and adder nodes denoted Aj.
Fig.7. IIR Filter MDFG after Incremental MDR.
Corollary 2. [2] Given G = (V, E, d, t), S+ a strictly positive scheduling subspace for G, s a scheduling vector in S+, r an MD retiming function orthogonal to s, a set X ⊆ V whose nodes have all incoming edges with nonzero delay, and an integer value k > 1, then (k × r)(X) is a legal MD retiming.
Thus, the technique applies the MD retiming to successive nodes of a path, where each node receives a retiming that is a multiple of the selected retiming value smaller than that of its predecessor nodes.
ALGORITHM 4
MULTI-CHAIN GRAPH CONSTRUCTION
Input: a realizable MDFG G = (V, E, d, t)
Output: labeled multi-chain graph
0: Begin
1: Remove all non-zero delay edges from the MDFG
2: For each chain CH do
3:   Compute the length L of CH
4:   For each node, starting from the last to the first, do
5:     Label the node by L
6:     L = L − 1
7:   End For
8: End For
9: End
This technique starts by transforming the MDFG into a multi-chain graph, as described in Algorithm 4. Each chain represents a succession of nodes in which all interconnecting edges are zero-delay. Each node is labeled by a level whose value is greater than that of its predecessor node and smaller than that of its successors.
In the case of MDFG of IIR filter, the red integers above each
node of Fig.8 represent the level values that are labeled after
executing algorithm 4. Therefore, the multi-chain maximum
length of 𝐺 is 4.
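The multi-chain labelling of Algorithm 4 can be sketched as a longest-chain computation over the zero-delay edges. The code below is our own illustration (not the authors' implementation), applied to the wave digital filter MDFG of Fig.1; combined with r = (0,1), it recovers the retimings r(D) = (0,2) and r(A) = (0,1) used in Section 3.1:

```python
d = {("D", "A"): (0, 0), ("A", "B"): (0, 0), ("A", "C"): (0, 0),
     ("B", "D"): (1, -1), ("C", "D"): (1, 1)}
nodes = ["A", "B", "C", "D"]

def chain_labels(nodes, d):
    """Label each node by its position along its zero-delay chain: the last node
    of a chain of length L gets L, its predecessor L-1, and so on (Algorithm 4)."""
    zero_pred = {u: [a for (a, v), dl in d.items()
                     if v == u and all(c == 0 for c in dl)] for u in nodes}
    memo = {}
    def label(u):                  # 1 + longest zero-delay chain arriving at u
        if u not in memo:
            memo[u] = 1 + max((label(p) for p in zero_pred[u]), default=0)
        return memo[u]
    labels = {u: label(u) for u in nodes}
    return labels, max(labels.values())     # K = maximal chain length

labels, K = chain_labels(nodes, d)
assert labels == {"D": 1, "A": 2, "B": 3, "C": 3} and K == 3
# Chained MDR retimes node u by (K - label(u)) * r; with r = (0, 1) this gives
# r(D) = (0, 2), r(A) = (0, 1), r(B) = r(C) = (0, 0).
```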
So, the technique proceeds to retime all the labeled nodes by an MD retiming function (k − i) × r, as described in Algorithm 5.
We present in Fig.8 the full parallelized graph of the IIR
filter after applying the chained multidimensional retiming.
The algorithm starts by finding the MDR function which is
equal to (1, −1). We note that all zero delay edges in the
original MDFG are assigned by a delay vector equal to the
MDR function.
Until now, no research work has compared the results provided by the techniques described above, and the random choice of the scheduling vector does not allow determining which technique provides the optimal solution. However, based on their approaches, Chained MDR generally performs better than Incremental MDR: the first consists in defining just one scheduling vector, which is done in O(|E|), while the second requires defining a scheduling vector at each iteration, which is done in O(|V|).
c) SPINE (Software PIpelining of NEsted loops) Multidimensional Retiming
This technique tries to provide a more optimal MDFG, in terms of execution time and code size, than those described above. It proceeds to remove all delays of the form (0, k) by merging them into a delay of the form (i, j). This modification is applicable only if the MDFG contains at least one edge with a
loop. Since this limit is directly associated to the size of the
iteration space, it is called spatial constraint and it is formally
defined as follows.
ALGORITHM 6
SPINE-FULL ALGORITHM
Input : a realizable MDFG G =(V,E,d,t)
Output : a realizable MDFG
Gr =(V,E,dr,t) fully
parallelized with minimum code size
1: Begin
2: If s=(1,0) is legal then
3:
Apply s=(1,0)
4: Else If s=(0,1) is legal and d(e)×(0,1)>=0 then
5:
Apply s=(0,1)
6: Else If s=(1,1) is legal and d(e)×(1,1)>=0 then
7:
Apply s=(1,1)
8: Else
9: Choose a legal scheduling vector s such as d(e)×s>=0,
for any edge and | sx| +| sy| is minimal
10: End
Definition 1. Let an MDFG G contain a k-level nested loop N, controlled by the set of indices I = {i_0, i_1, …, i_k}, whose values vary, in unitary increments, from L = {l_0, l_1, …, l_k} to U = {u_0, u_1, …, u_k}, where L is the set of lower boundaries of the indices and U is the set of maximum values, such that l_j ≤ i_j ≤ u_j. Then the spatial constraint Sc of the loop is defined as:
Sc = [(u_0 − l_0 + 1), (u_1 − l_1 + 1), …, (u_k − l_k + 1)]
This definition allows establishing the relation between
the maximum retiming operation and the spatial constraint
according to the following lemma.
Lemma 2. Given a k-level loop N with spatial constraint Sc = [s_0, s_1, …, s_k], the multi-dimensional retiming technique is able to achieve full parallelism of the loop body instructions if the maximum retiming vector r applied to any node u, r(u) = (r_0, r_1, …, r_k), satisfies the following condition:
r_j < s_j for all 0 ≤ j ≤ k
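The spatial constraint and the feasibility test of Lemma 2 are straightforward to express. The following minimal sketch (our own illustration) computes Sc from the loop bounds and checks a candidate retiming vector against it:

```python
def spatial_constraint(lower, upper):
    """Sc_j = u_j - l_j + 1 for each loop dimension j (Definition 1)."""
    return [u - l + 1 for l, u in zip(lower, upper)]

def retiming_within_bounds(r, lower, upper):
    """Lemma 2's condition: r_j < Sc_j in every dimension."""
    return all(rj < sj for rj, sj in zip(r, spatial_constraint(lower, upper)))

# Example: a 2-level loop with 0 <= i <= 9 and 0 <= j <= 9.
assert spatial_constraint([0, 0], [9, 9]) == [10, 10]
assert retiming_within_bounds((0, 2), [0, 0], [9, 9])
```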
Fig.8. IIR Filter MDFG after Chained MDR.
delay equal to (i, j) such that i > 0.
It consists in finding a scheduling vector 𝑠 and a retiming
function 𝑟 orthogonal to 𝑠, as described in algorithm 6, to
provide a minimal overhead.
3.3 Multidimensional Retiming Constraints [5]
We describe in this paragraph the algorithmic constraints
which must be taken into account to achieve full parallelism.
These constraints come from the ratio between loop bounds
and a number of MDR functions to apply.
For example, consider the multi-dimensional data flow
graph in Fig.1; it is easy to verify that if 𝑑(𝑒4) = (0, 𝑘) where
𝑘 < 3, retiming 𝐷 and 𝐴 by some vector (0, 𝑝) will not
satisfy the goal of re-distributing the delays among all edges
in the graph. The same will happen if 𝑑(𝑒5) = (𝑚, 0)
where 𝑚 < 3 for any retiming vectors of the form (𝑞, 0).
Thus, if the loop has only one occurrence, i.e., the loop
boundaries are both 1, then no parallelism can be obtained.
This last constraint is equally applicable to a software or
hardware implementation of the retimed loop.
This study begins by evaluating the constraints imposed
by the limitation on the number of iterations comprising the
3.4 Limitations of existing techniques
We have shown that nested loops can always be fully
parallelized using MD retiming. The presented MD retiming techniques are polynomial time algorithms that fully parallelize a given MDFG G = (V, E, d, t) by selecting a legal
schedule vector s with 𝑠 × 𝑑(𝑒) > 0, 𝑒 ∈ 𝐸, and a retiming
vector 𝑟 where 𝑟 is orthogonal to 𝑠. Each MD retiming
techniques presented above shows that the selected 𝑠 is a
legal schedule vector for the retimed graph where 𝑑𝑟 (𝑒) ≠
𝑑(𝑒) ± 𝑘. 𝑟.
However, multidimensional retiming techniques imply a large overhead in the generated code, caused by several aspects of the loop transformation. First, the code size is increased because of the large code sections of the prologue and epilogue produced in all the loop dimensions. Second, the new loop bounds and loop indexes need to be recomputed [10].
Moreover, the execution of the prologue and epilogue sections is not fully parallel, which requires a considerable number of cycle periods compared to that required by the loop body. These disadvantages are aggravated as the retiming vector values and the number of multidimensional retiming functions grow.
JOURNAL OF COMPUTING, VOLUME 3, ISSUE 7, JULY 2011, ISSN 2151-9617
HTTPS://SITES.GOOGLE.COM/SITE/JOURNALOFCOMPUTING/
WWW.JOURNALOFCOMPUTING.ORG
Each multidimensional retiming technique has a specific approach to choosing the retiming function in order to decrease the overhead size. The chained and incremental multidimensional retiming techniques choose a scheduling vector s = (s.x, s.y) such that s.x + s.y is minimal. The SPINE technique tries to modify the MDFG with the intention of applying an MDR function that skews the minimum column-wise and/or row-wise. Despite providing an optimal solution, it is reliable only for particular MDFGs; in the other cases, it applies the same approach as the other techniques.
However, all existing techniques retime each node of the MDFG having an outgoing edge with zero delay: if a path p with d(p) = (0,0,…,0) is composed of n nodes, any technique applies (n − 1) MDR functions to achieve full parallelism. But the overhead grows after applying each MDR function: the more MDR functions are applied, the more dramatic the consequences.
As a result, the provided solution becomes very complicated and is not suitable for implementation in embedded systems. Therefore, the existing MD retiming techniques, although achieving full parallelism, are not suitable for software nested loops.
4 THEORY OF OPTIMAL MULTIDIMENSIONAL RETIMING
In this section, we present the theoretical foundation of our proposed MDR technique, "Optimal Multidimensional Retiming". It aims at minimizing the number of MDR functions by exploiting the execution times and data dependencies of nodes, while achieving full parallelism.
4.1 Principle
Multidimensional retiming techniques allow scheduling the MDFG with a minimal cycle period. For any path p: v_i → v_{i+1} → ⋯ → v_j of the MDFG, they proceed to execute each node v_k, i ≤ k ≤ j, in a separate cycle period. These approaches can be generalized to general-time cases [2], [4].
M5
M6
M7
M8
M1
M2
M3
A3
M5
M6
M7
M8
M1
M2
M3
Theorem 2. Let 𝐺 = (𝑉, 𝐸, 𝑑, 𝑡) a MDFG, the minimal value of
cycle 𝑐𝑚𝑖𝑛 for the G graph is:
𝑐𝑚𝑖𝑛 = 𝑚𝑎𝑥{𝑡(𝑣𝑖 ), 𝑣𝑖 ∈ 𝑉}
We choose to model the MDFG of IIR filter with different
execution time of nodes such as of 𝑡(𝑀𝑖 ) = 3 and 𝑡(𝐴𝑗 ) = 1.
The minimal cycle 𝑐𝑚𝑖𝑛 of this graph is equal to the execution
time of the multiplication node.
Applying any MDR techniques to the IIR filter results in
the fact that each iteration belonging to the original loop
A4
A1
A2
A6
A5
A7
A8
Cycle
(i-3 , j+3)
M4
𝑒𝑛
�� 𝑣𝑖+1 �⎯⎯� … → 𝑣𝑗 of MDFG, they proceed to execute each
node 𝑣𝑘 where 𝑖 ≤ 𝑘 ≤ 𝑗 in a period cycle separately. These
approaches can be generalized into general-time cases [2] [4].
In fact, the computation nodes belonging to a data flow
graph have not generally the same execution times. These
depend on the kind of operation to be done; for example, a
multiplication node needs usually more clock period than an
adder node.
In this case, the minimal cycle period should be fixed
differently. A cycle period represents a time interval leading
to execute computation nodes. The minimal value of a period
cycle can be defined as the smaller time interval that allows
executing any node belonging to the MDFG. Thus, the
minimal cycle period should be equal to the maximal
execution time of node, as described in theorem 2 [1].
Cycle
(i-4 , j+4)
M4
A3
A4
A1
A2
A6
A5
A7
A8
T
im
e
𝑒𝑚
73
M5
M6
M7
M8
M1
M2
M3
Cycle
(i-2 , j+2)
M4
A3
M5
M6
M7
M8
M1
M2
M3
M6
M7
M8
M1
M2
M3
A1
A2
A6
A5
A7
A8
Cycle
(i-1 , j+1)
M4
A3
M5
A4
A4
A1
A2
A6
A5
A7
A8
Cycle
(i , j)
M4
A3
A4
A1
A2
A6
A5
A7
A8
Fig.9. Iteration scheduling after chained MDR.
and the loop bound and index instructions. Moreover, it
results in decreasing the number of cycle periods required to
execute any iteration from 5 to 4, while respecting a fully
parallelized execution. This minimization of cycle periods
implies a similar minimization on the execution time of the
whole application. Thus, this minimization of MDR functions
leads to improve the performance of the provided full
parallel solution.
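To make theorem 2 concrete, the following short Java sketch computes c_min from a table of node execution times. The Map-based representation and the node names are illustrative assumptions made for the example, not the data structures used by the authors.

import java.util.Map;

// Minimal sketch of Theorem 2: the minimal cycle period of an MDFG equals the
// largest node execution time. The Map-based representation and node names are
// illustrative assumptions, not the paper's data structures.
public class MinimalCyclePeriod {
    // t maps each node name to its execution time t(v).
    static int cmin(Map<String, Integer> t) {
        int max = 0;
        for (int time : t.values()) {
            max = Math.max(max, time);   // c_min = max{t(v_i), v_i in V}
        }
        return max;
    }

    public static void main(String[] args) {
        // IIR filter example from the text: t(Mi) = 3, t(Aj) = 1.
        Map<String, Integer> t = Map.of("M1", 3, "M2", 3, "A1", 1, "A2", 1);
        System.out.println("c_min = " + cmin(t));   // prints c_min = 3
    }
}

Running it on the IIR filter model used above (t(M_i) = 3, t(A_j) = 1) returns c_min = 3, the execution time of a multiplication node.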
Fig.10. Iteration scheduling after collecting nodes in cycle (i-2, j+2).
Fig.11. MDFG of Fig.10.

Such a modification can be defined as applying the MDR to a path p that can be composed of
several nodes, where t(p) is smaller than c_min. The more nodes the paths contain, the more the
number of MDR functions decreases. Thus, we propose a new MDR approach which consists in
applying the MDR function to a path of nodes that can be executed in the same cycle period,
instead of applying the MDR to each node separately. Our approach is based on selecting the
paths with the maximal number of nodes, to achieve full parallelism with a minimal number of
MDR functions. We start by computing the minimal cycle period c_min, extracting the paths
with maximal numbers of nodes from the MDFG while keeping their execution within c_min,
and applying the MDR function to the extracted paths.
Our technique shares the MDFG into zero-delay paths. It maximizes the number of nodes in such
a path, while keeping its execution within the minimal cycle period. We define our idea of
optimal multidimensional retiming as described in definition 2.

Definition 2. Let G = (V, E, d, t) be an MDFG and c_min the minimal cycle period of G. The
optimal multidimensional retiming consists in retiming a set of paths p such that:
1. max(card(p))
2. d(p) = (0, ..., 0)
3. t(p) ≤ c_min
where p: v_i --e_m--> v_{i+1} --e_{m+1}--> ... --e_n--> v_j with v_k ∈ V for all k ∈ [i, j], and
card(p) = j − i + 1 is the number of nodes belonging to p.

4.2 Basic concepts
Our technique consists in retiming a path of nodes that can be executed in the same cycle period.
It means that those nodes are executed in the same iteration of the original loop body, i.e., all
edges belonging to the path have zero delay. We call such a path a "zero-delay path", as
characterized in theorem 3.

Theorem 3. Let p: v_i --e_m--> v_{i+1} --e_{m+1}--> ... --e_n--> v_j. The path p is zero-delay,
d(p) = (0, ..., 0), if and only if:
∀ e_l: v_k → v_{k+1}, d(e_l) = (0, ..., 0), such that e_l ∈ E, v_k ∈ V, k ∈ [i, (j − 1)], l ∈ [m, n].

To preserve the data dependencies of the MDFG, each selected path p: v_i → ... → v_j should
not contain any cycle, in the sense that there is no cycle between v_p and v_m for any
p, m ∈ [i, j]. Furthermore, optimal multidimensional retiming consists in executing several paths
in the same cycle period; thus, such a path should not have any cycle between the nodes that
belong to it.
Our technique is based on retiming a zero-delay path. It means that all edges belonging to this
path preserve the same delay (zero delay); only the delays of the in-coming and out-going edges
of the whole path are changed. Thus, it amounts to retiming the last node belonging to the path.
Referring to [2] and [3], the multidimensional retiming function is defined from the edges of the
MDFG having a non-zero delay. However, there is no constraint that requires applying the MDR
function to nodes with non-zero in-coming edges. In the case of zero-delay paths, it can be
applied to any node belonging to them, as stated in theorem 4.

Theorem 4. Let G = (V, E, d, t) be an MDFG and p: v_i --e(0,...,0)--> ... --e(0,...,0)--> v_j a
zero-delay path. If r is a legal MDR function of v_i, then r is a legal MDR function of v_k, where
v_k ∈ p and k ∈ [(i + 1), j].

Proof. A strictly positive scheduling subspace S+ contains all scheduling vectors s where
d(e) × s > 0 for each d(e) ≠ (0, 0, ..., 0), as described in definition 2. This implies that v_i and
v_k have the same subspace S+. But a legal MDR r of v_i is any vector orthogonal to s, where
s ∈ S+, as indicated in theorem 1. This means that a legal MDR r of v_i is a legal MDR of v_k.

So, we provide an MDR function as indicated in theorem 2. This function is applied to the last
node of the zero-delay path to retime.
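As a minimal illustration of the zero-delay path test of theorem 3, the Java sketch below checks that every edge delay along a candidate path is the zero vector. The Edge record and the int[] delay representation are assumptions made for the example only, not the paper's implementation.

import java.util.List;

// Minimal sketch of the zero-delay path test of Theorem 3: a path is zero-delay
// iff every edge along it carries the delay vector (0,...,0). The Edge record
// and int[] delay representation are illustrative assumptions.
public class ZeroDelayPath {
    record Edge(String from, String to, int[] delay) {}

    static boolean isZeroDelay(List<Edge> path) {
        for (Edge e : path) {
            for (int d : e.delay()) {
                if (d != 0) return false;   // one non-zero component breaks d(p) = (0,...,0)
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<Edge> p = List.of(
            new Edge("A1", "A2", new int[]{0, 0}),
            new Edge("A2", "A5", new int[]{0, 0}));
        System.out.println(isZeroDelay(p));   // true: every edge has delay (0,0)
    }
}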
We concluded in section 3 that it is more efficient to select a single MDR function and apply it to
all the nodes to be retimed in order to achieve full parallelism, as described in corollary 1. Thus,
we proceed by selecting the last node of every zero-delay path to be retimed. Afterwards, we find
a legal MDR function to be applied successively to the selected nodes while respecting the data
dependencies of the MDFG.

ALGORITHM 8: OPTIMAL MULTIDIMENSIONAL RETIMING
Input: a realizable MDFG G = (V, E, d, t)
Output: a realizable MDFG Gr = (V, E, dr, t)
0: Begin
1:   Find a legal MDR function r
2:   Provide the LMDFG and the maximal length M, as described in algorithm 7
3:   For i from M down to 1 do
4:     Select nodes v_k that are labeled by i
5:     For each v_k do
6:       Apply the MDR retiming (i × r)
7:     End for
8:   End for
9: End
4.3 Path Extraction
This step is based on exploring the nodes belonging to the MDFG, testing their data dependencies
and execution times, in order to share them onto paths. We proceed by sweeping the MDFG
incrementally in the opposite direction of the edges. For each node, we verify that it respects the
previous conditions, to execute it in the suitable cycle.

ALGORITHM 7: LABELED MULTIDIMENSIONAL DATA FLOW GRAPH CONSTRUCTION
Input: a realizable MDFG G = (V, E, d, t)
Output: Labeled Multidimensional Data Flow Graph (LMDFG), maximal length M
0: Begin
1:   Compute cmin
2:   Extract all nonzero delay edges from G
3:   i = 1
4:   Add the elements (v_j, t(v_j)) to R, such that v_j is a node without outgoing edge
5:   While R is not empty do
6:     For each v_j of R do
7:       Collect all predecessors of v_j
8:       For each predecessor v_p of v_j do
9:         If t(v_p) + t(v_j) <= cmin and data dependency is respected then
10:          Add (v_p, t(v_p) + t(v_j)) to R
11:        Else
12:          Add (v_p, t(v_p)) to NE
13:        End If
14:      End for
15:    End for
16:    Label all nodes of NE by i
17:    i = i + 1
18:    R = NE
19:  End while
20:  M = i
21: End
Our process consists in extracting the last node of each zero-delay path to be retimed and
labeling it in increasing order starting from 1. The result is illustrated in a "Labeled
Multidimensional Data Flow Graph" (LMDFG) derived from the MDFG, as described in
algorithm 7.
We proceed by exploring the MDFG in the opposite direction of the data dependencies. We start
by extracting the non-zero delay edges from the MDFG and collecting all nodes v_k without
outgoing edge in the list R. For each node belonging to R, we determine all the predecessor nodes
v_p and verify whether they can be executed in the same cycle time as the nodes in R. For each
v_p, if t(v_p) + t(v_k) ≤ c_min and v_p respects the data dependency conditions described above,
we consider v_p as a node belonging to the path and we add it to the list R. Otherwise, v_k is the
last node of the previous path that should be retimed, and we add the node v_k to the list NE to
label it. This test is repeated for all predecessor nodes of the elements of the list R. The next step
consists in labeling all the nodes of the list NE by 1 (the first value of i), before replacing the
elements of R by the elements of NE. These steps are repeated until all nodes of the MDFG have
been tested.
We show in Fig.12 the LMDFG of the IIR filter with t(M_i) = 3 and t(A_j) = 1. The first iteration
of algorithm 7 labels the nodes M1, M2, M3, M4, A3 and A4 with 1. The last iteration labels the
nodes M5, M6, M7 and M8 with 2. Therefore, the maximal length of the multi-chain is 2.
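The following Java sketch mirrors the labeling loop of algorithm 7 under simplifying assumptions: the graph is given as execution times plus, for each node, its predecessors through zero-delay edges, and the data-dependency test of line 9 is omitted. It is a rough illustration, not the authors' implementation.

import java.util.*;

// Rough sketch of Algorithm 7 (Labeled MDFG construction). The graph is given as
// execution times t(v) and, for each node, its predecessors through zero-delay
// edges; the data-dependency test of line 9 is omitted for brevity. All names
// are illustrative assumptions, not the authors' code.
public class LmdfgSketch {
    /** Labels the last node of each zero-delay path; returns node -> label. */
    static Map<String, Integer> buildLmdfg(Map<String, Integer> t,
                                           Map<String, List<String>> zeroDelayPreds,
                                           List<String> sinks, int cmin) {
        Map<String, Integer> label = new HashMap<>();
        // R holds (node, accumulated execution time of the path grown so far).
        Deque<Map.Entry<String, Integer>> r = new ArrayDeque<>();
        for (String v : sinks) r.add(Map.entry(v, t.get(v)));
        int i = 1;
        while (!r.isEmpty()) {
            List<Map.Entry<String, Integer>> ne = new ArrayList<>();
            while (!r.isEmpty()) {
                Map.Entry<String, Integer> cur = r.poll();
                for (String p : zeroDelayPreds.getOrDefault(cur.getKey(), List.of())) {
                    if (t.get(p) + cur.getValue() <= cmin) {
                        r.add(Map.entry(p, t.get(p) + cur.getValue())); // p extends the path
                    } else {
                        ne.add(Map.entry(p, t.get(p)));                 // p ends the previous path
                    }
                }
            }
            for (Map.Entry<String, Integer> e : ne) label.put(e.getKey(), i);   // label NE by i
            i++;
            for (Map.Entry<String, Integer> e : ne) r.add(e);                   // R = NE
        }
        return label;   // the maximal label used is i - 1
    }
}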
4.4 Optimal Multidimensional Algorithm
Our technique starts by finding a legal MDR function r of the MDFG, as indicated in theorem 2.
After that, it provides the LMDFG and the maximal label by running algorithm 7. Then, it selects
the nodes with the maximal label and applies the MDR function (i × r). These steps are repeated,
decreasing the label of the selected nodes, until all labeled nodes have been retimed, as described
in algorithm 8.

Fig.12. Labeled Multidimensional Data Flow Graph (LMDFG) of IIR filter.
As an example, we apply this algorithm to the MDFG of the IIR filter. It starts by defining an
MDR function r = (1, −1) and providing the LMDFG shown in Fig.12. The first iteration of the
algorithm retimes the nodes labeled by 2, applying the retiming function 2 × r = (2, −2), as
illustrated in Fig.13. The second iteration retimes the other labeled nodes by r = (1, −1),
providing the fully parallelized MDFG of the IIR filter shown in Fig.14. The nodes
interconnected by a zero-delay edge are executed in the same cycle period. It means that the
nodes belonging to the same iteration in the original MDFG are executed in three cycle periods,
as illustrated by the iteration scheduling of the fully parallelized MDFG in Fig.15.

Fig.13. Retiming labeled nodes by 2.
Fig.14. Full parallelized IIR filter after applying optimal MDR.
Fig.15. Iteration scheduling of MDFG shown in Fig.14.
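A compact Java sketch of algorithm 8 is given below: nodes are swept from the maximal label M down to 1 and each labeled node is retimed by (label × r). The Edge class and the sign convention used to update the delays are assumptions made for illustration, not the paper's definitions.

import java.util.List;
import java.util.Map;

// Minimal sketch of Algorithm 8: nodes are retimed by (label x r), sweeping the
// labels from the maximal value M down to 1. The edge representation and the
// sign convention for updating delays are illustrative assumptions.
public class OptimalMdrSketch {
    static class Edge { String from, to; int[] delay; }

    static void apply(Map<String, Integer> label, List<Edge> edges, int[] r, int maxLabel) {
        for (int i = maxLabel; i >= 1; i--) {
            for (Map.Entry<String, Integer> e : label.entrySet()) {
                if (e.getValue() == i) retime(e.getKey(), edges, r, i);
            }
        }
    }

    // Retiming v by (i x r): delays move from the incoming to the outgoing edges of v.
    static void retime(String v, List<Edge> edges, int[] r, int i) {
        for (Edge e : edges) {
            for (int k = 0; k < r.length; k++) {
                if (e.to.equals(v))   e.delay[k] -= i * r[k];
                if (e.from.equals(v)) e.delay[k] += i * r[k];
            }
        }
    }
}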
5 EXPERIMENTAL RESULTS
In this section, we validate our MDR technique by comparing the MDFGs it provides to those
generated by the chained MDR. Four parameters are compared in our experimentation: the cycle
period, the number of MDR functions, the execution time and the code size. We choose as
applications the IIR filter graph and the wave digital filter after applying the Fettweis
transformation [2], as illustrated in Fig.16. These two graphs are both composed of addition and
multiplication nodes.

Fig.16. MDFG of wave digital filter.

Our experiments consist in modeling each MDFG repeatedly with different cases of the node
execution times t(v_i), v_i ∈ V, whose values are indicated in Table 1. The last column gives the
minimal cycle period, whose values are defined according to theorem 2.

TABLE 1
CYCLE PERIOD IN TERMS OF NODE EXECUTION TIMES

Application   MDFG   t(Ai)   t(Mj)   Cmin
IIR Filter    G1     1       2       2
              G2     1       3       3
              G3     1       4       4
              G4     2       5       5
WD filter     G5     1       2       2
              G6     1       3       3

For each MDFG in Table 1, we apply both the chained MDR and the optimal MDR techniques.
Each one is characterized by a number of MDR functions applied to achieve full parallelism. The
chained MDR generates the same fully parallelized MDFG with the same number of MDR
functions, whatever the set of node execution times is. Contrariwise, the optimal MDR provides a
specific MDFG for each MDFG of Table 1, with different MDR function numbers and retimed
nodes. To guarantee a reliable comparison, we use the same MDR function for both techniques:
we apply the functions r1 = (1, −1) and r2 = (0, 1) respectively to the IIR filter and to the wave
digital filter. Table 2 illustrates the number of MDR functions required to achieve full
parallelism, for each MDFG, using both techniques.

TABLE 2
EVOLUTION OF MDR FUNCTION NUMBER IN TERMS OF CYCLE PERIOD

        MDR function number
MDFG    Chained MDR   Optimal MDR
G1      4             2
G2      4             2
G3      4             1
G4      4             2
G5      2             1
G6      2             1

After providing the fully parallelized MDFGs, we generate their respective algorithms to extract
their time and code size parameters. We present in Table 3 the values of the cycle number and
the execution time of each MDFG of Table 1 in terms of MDR techniques. The column
"Improve" presents the improvement of the execution time of the results generated by our
approach compared to those generated by the chained MDR, which accounts for an average
improvement of 19.31%.

TABLE 3
EVOLUTION OF CYCLE NUMBER AND EXECUTION TIME IN TERMS OF CYCLE PERIOD

        Cycle number                Execution time
MDFG    Optimal MDR  Chained MDR    Optimal MDR  Chained MDR    Improve
G1      258          316            516          632            18.35%
G2      258          316            774          948            18.35%
G3      229          316            916          1264           27.53%
G4      258          316            1290         1580           18.35%
G5      500          600            1000         1200           16.66%
G6      500          600            1500         1800           16.66%

The code size of each MDFG provided in terms of the two MDR techniques is shown in Table 4.
Each value presents the number of instructions of the loop body and of the overhead caused by
the MDR transformation. The code size values show that our technique provides an average
improvement of 43.53% of the code size.

TABLE 4
EVOLUTION OF THE CODE SIZE IN TERMS OF CYCLE PERIOD

        Code size
MDFG    Optimal MDR   Chained MDR   Improve
G1      144           400           48%
G2      144           400           48%
G3      64            400           84%
G4      144           400           48%
G5      60            72            16.66%
G6      60            72            16.66%
6 CONCLUSION
In this paper, we have proposed a new multidimensional retiming technique to achieve full
parallelism of MDFGs. It provides an optimized MDFG compared to those provided by the
existing techniques. It minimizes the number of multidimensional retiming functions by
exploring the execution times and the data dependencies between the nodes.
In the section above, we have applied our technique and the chained MDR to different cases of
MDFG. The results have shown that our technique generates a more efficient MDFG solution
than those generated by the chained MDR, in terms of cycle number, execution time and code
size. We have concluded that our technique provides a more efficient solution, which allows
respecting the timing and code constraints while implementing nested loops in embedded
systems.
As an optimization technique, we will try in our future work to study the use of our MDR
technique together with other optimization approaches such as unrolling and loop fusion. This
consists in defining the order of application and the evolution of the performance parameters in
terms of both approaches.
However, MDR techniques are based on scheduling the MDFG with a minimal cycle period.
This cycle period value does not necessarily provide an MDFG with minimal execution time. We
will be interested in extending our MDR technique to provide the adequate cycle period and a
scheduling approach which allows providing an MDFG with minimal execution time.
Also, in the case of real-time embedded systems, the design consists in respecting the code size
constraint, which should not be exceeded. This principle implies reducing the execution time
while staying within a limit value of the code size. Thus, based on the opposite evolution of the
timing parameters and the code size in terms of MDR functions, we will be interested in
proposing an optimization approach using the MDR technique: it requires finding the set of
MDR functions that provides a retimed MDFG with the best ratio between the mentioned
parameters.
REFERENCES
[1] C. E. Leiserson and J. B. Saxe, "Retiming Synchronous Circuitry", Algorithmica, Springer New York, Volume 6, Numbers 1-6, June 1991.
[2] N. L. Passos and E. H.-M. Sha, "Achieving Full Parallelism Using Multidimensional Retiming", IEEE Transactions on Parallel and Distributed Systems, Volume 7, Issue 11, November 1996.
[3] N. L. Passos and E. H.-M. Sha, "Full Parallelism in Uniform Nested Loops Using Multi-Dimensional Retiming", International Conference on Parallel Processing, 1994.
[4] Q. Zhuge, C. Xue, M. Qiu, J. Hu and E. H.-M. Sha, "Timing Optimization via Nest-Loop Pipelining Considering Code Size", Journal of Microprocessors and Microsystems, Volume 32, Issue 7, October 2008.
[5] N. L. Passos, D. C. Defoe, R. J. Bailey, R. H. Halverson, and R. P. Simpson, "Theoretical Constraints on Multi-Dimensional Retiming Design Techniques", Proceedings of the AeroSense - Aerospace/Defense Sensing, Simulation and Controls, Orlando, FL, April 2001.
[6] Q. Zhuge, C. J. Xue, M. Qiu, J. Hu and E. H.-M. Sha, "Timing Optimization via Nested Loop Pipelining Considering Code Size", Journal of Microprocessors and Microsystems 32, pp. 351-363, 2008.
[7] Q. Zhuge, Z. Shao and E. H.-M. Sha, "Timing Optimization of Nested Loops Considering Code Size for DSP Applications", Proceedings of the 2004 International Conference on Parallel Processing, 2004.
[8] Q. Zhuge, E. H.-M. Sha and C. Chantrapornchai, "CRED: Code Size Reduction Technique and Implementation for Software-Pipelined Applications", Proceedings of the IEEE Workshop on Embedded System Codesign (ESCODES), September 2002, pp. 50-56.
[9] T. C. Denk and K. K. Parhi, "Two-Dimensional Retiming", IEEE Transactions on VLSI, Vol. 7, No. 2, June 1999, pp. 198-211.
[10] N. L. Passos and E. H.-M. Sha, "Scheduling of Uniform Multi-Dimensional Systems under Resource Constraints", IEEE Transactions on VLSI Systems, Volume 6, Number 4, December 1998, pp. 719-730.
Yaroub Elloumi has been a PhD student since September 2009, registered at both Paris-Est
University (France) and Sfax University (Tunisia). He is a member of Institut Gaspard-Monge,
unité mixte de recherche CNRS-UMLPE-ESIEE, UMR 8049. His research interests are
high-level design of real-time systems, optimization techniques, and high-level parameter
estimation.
Mohamed Akil received his PhD degree from the Montpellier University (France) in 1981 and
his doctorat d'état from the Pierre et Marie Curie University (Paris, France) in 1985. He currently
teaches and does research with the position of Professor at the computer science department,
ESIEE, Paris. He is a member of Institut Gaspard-Monge, unité mixte de recherche
CNRS-UMLPE-ESIEE, UMR 8049. His research interests are architectures for image processing,
image compression, reconfigurable architectures and FPGA, high-level design methodology for
multi-FPGA, mixed architectures (DSP/FPGA), System on Chip (SoC) and parallel programming
of 2D/3D topological operators. Dr. Akil has more than 80 research papers in the above areas.
Mohamed Hedi Bedoui received his PhD degree from Lille University in 1993. He currently
teaches with the position of Professor of biophysics in the Faculty of Medicine of Monastir
(FMM), Tunisia. He is a member of the Medical Technology and Image Processing team (TIM),
UR 08-27. His research interests are real-time and embedded systems, image and signal
processing and hardware/software design in the medical field, and electronic applications in
biomedical instrumentation. He is the president of the Tunisian Association of Promotion of
Applied Research.
International Journal of Distributed and Parallel Systems (IJDPS) Vol.2, No.6, November 2011
DISTRIBUTED EVOLUTIONARY COMPUTATION: A
NEW TECHNIQUE FOR SOLVING LARGE NUMBER
OF EQUATIONS
Moslema Jahan, M. M. A Hashem and Gazi Abdullah Shahriar
Department of Computer Science and Engineering,
Khulna University of Engineering and Technology, Khulna 9203, Bangladesh
mjahan.cse@gmail.com, mma_hashem@hotmail.com and rana_kuet@yahoo.com
ABSTRACT
Evolutionary computation techniques have mostly been used to solve various optimization and learning
problems successfully. Evolutionary algorithms are more effective than traditional methods at obtaining
optimal solution(s) to complex problems. In the case of problems with a large set of parameters,
evolutionary computation techniques incur a huge computational burden on a single processing unit.
Taking this limitation into account, this paper presents a new distributed evolutionary computation
technique, which decomposes the decision vectors into smaller components and achieves the optimal
solution in a short time. In this technique, a Jacobi-based Time Variant Adaptive (JBTVA) Hybrid
Evolutionary Algorithm is distributed incorporating cluster computation. Moreover, two new selection
methods, named Best All Selection (BAS) and Twin Selection (TS), are introduced for selecting the
best-fit solution vectors. Experimental results show that the optimal solution is achieved for different
kinds of problems having huge numbers of parameters, and a considerable speedup is obtained in the
proposed distributed system.
KEYWORDS
Master-Slave Architecture, Linear Equations, Evolutionary Algorithms, Hybrid Algorithm, BAS selection
method, TS selection method, Speedup.
DOI : 10.5121/ijdps.2011.2604
1. INTRODUCTION
In recent years, the application of evolutionary algorithms has been increasing to a greater extent.
Evolutionary algorithms (EAs) are stochastic search methods that have been applied successfully
to many search, optimization and machine learning problems. Evolutionary algorithms have been
used successfully for solving linear equations in [1], [2], [3]. However, it is often very difficult to
estimate the optimal relaxation factor, which is the key parameter of the successive over
relaxation (SOR) method. An optimal solution is achieved quickly when the relaxation factors are
adapted automatically in an evolutionary algorithm. The equation-solving abilities were extended
in [2], [3], [4] by using time-variant parameters. The invention of hybrid evolutionary algorithms
[5], [6] brought a great benefit for solving linear equations within a very short time. Many
problems with huge numbers of parameters, such as numerical weather forecasting, chain
reactions, astrophysics (modelling of black holes), astronomical formation, semiconductor
simulation, sequencing of the human genome and oceanography, need a high computational cost
on a single processor. One approach to overcome this kind of limitation is to formulate the
problem in a distributed computing structure.
The main parallel achievements in the algorithmic families, including evolutionary computation,
parallel models and parallel implementations, are discussed in [7]. A distributed cooperative
coevolutionary algorithm is developed in [8], which is beneficial for solving complex problems.
As there are "no free lunch" theorems for optimization algorithms, graceful convergence is the
key challenge when designing an optimization algorithm. A number of "no free lunch" (NFL)
theorems [9] state that any two algorithms are equivalent when their performance is averaged
across all possible problems. On the other hand, there are coevolutionary free lunch theorems in
[10]. The proposed technique follows the coevolutionary theme. A distributed technique [11] is
proposed for parallelizing the fitness evaluation time; however, the other operations of an
evolutionary algorithm also take considerable time. This paper proposes a Distributed
Evolutionary Computation (DEC) technique in which the mutation and adaptation processes are
also distributed. This is a champion selection technique, where the best champion is selected
within a short period of time.
Figure 1. Distributed Master-Slave Architecture
A basic master-slave architecture (Figure 1) is used in the proposed distributed technique, which
follows a server-client paradigm where connections are closed after each request. Slaves are
connected to the master through a local area network (LAN) to take advantage of the distributed
processing power of the slaves. The basic approach of this system is to split a large problem into
many subproblems and to evolve the subproblems separately. These subproblems are then
combined and the actual solution is obtained. This process continues until a solution whose error
is below a threshold level comes out.
The remainder of this paper is structured as follows: Section 2 presents the previous work related
to the proposed work. Section 3 describes the Jacobi method for solving linear equations and the
distributed model. Timing calculation is discussed in Section 4. Section 5 examines the results of
the experiments on various problems and provides a comprehensive comparison of the single and
distributed systems on the basis of the BAS and TS selection mechanisms. Finally, Section 6
provides our concluding remarks.
2. RELATED WORK
The intrinsically parallel and distributed nature of EAs did not escape the attention of early
researchers. Grefenstette [12] was one of the first to examine, in 1981, a number of issues
pertaining to the parallel implementation of GAs. Grosso [13] is another early attempt to
introduce parallelism using a spatial multipopulation model. Several attempts were made to build
better and faster systems capable of doing evolutionary computations in a parallel fashion.
DREAM (Distributed Resource Evolutionary Algorithm Machine) [14] is such a system that used
an island model architecture over peer-to-peer connections. Both the island model and the
master-slave architecture have been combined in ParadisEO (PARAllel and DIStributed Evolving
Objects) [15]. But in either case, not all genetic operations are done in a distributed manner.
Among the different models and architectures, this paper follows the master-slave architecture to
develop a parallel and distributed environment.
JDEAL (Java Distributed Evolutionary Algorithms Library) [16] is a master-slave architecture
coded on the Java platform. Paladin-Dec [17] was another Java implementation of genetic
algorithms, genetic programming and evolution strategies, with dynamic load balancing and fault
tolerance. Still, the communication among the nodes of a distributed architecture remains an
issue. ECJ [18] is a Java-based framework that does its computation using Java TCP/IP sockets.
MPI (Message Passing Interface) is used in [19] with a C++ framework. The developed
distributed EC system was integrated transparently with the C++ Open BEAGLE framework
[20] in 2002. The parallel performance of MPI sorting algorithms is presented in [21]. Gathering
all these ideas, this paper implements a hybrid algorithm combining the Jacobi-based successive
relaxation (SR) method with evolutionary computation techniques, using a Java-based framework
with socket programming in a distributed manner, and the related speedup is calculated. The
same algorithm was implemented on a single processing system [4] using a C++ framework and
the related speedup was calculated. Finally, the experimental results show that the distributed
system is faster than the single system at solving problems with huge numbers of parameters.
The selection mechanism of an evolutionary computation technique has been a key part, which
brings a significant computational cost. In 1982, Hector Garcia-Molina provided an election
algorithm [22] for two categories of failure environments, by which a coordinator is selected
when a failure occurs in a distributed computing system. A sum-bottleneck path algorithm is
developed in [23] that allows the efficient solution of many variants of the problem of optimally
assigning the modules of a parallel program over the processors of a multiple-computer system
under some constraints on the structure of the partitions. An individual selection method is
provided in [24] to select efficient individuals, and the whole system follows a distributed
technique. In 2010, an improved system of an abstraction, all-pairs, that fits the needs of several
applications in biometrics, bioinformatics and data mining was implemented in [25]; it shows the
effect of a campus grid system, which is more secure than a single system because it follows a
distributed manner. Different papers followed various election mechanisms for choosing the
right population. Likewise, our paper provides two new selection mechanisms: BAS, which
chooses all the best individuals with the lowest error rates, and TS, which chooses one individual
out of each adjacent pair and develops a twin copy of the individual with the lower error.
3. PROPOSED MODEL OF DISTRIBUTED EVOLUTIONARY ALGORITHM
The new distributed technique is built on the Jacobi-Based Time-Variant Adaptive (JBTVA)
Hybrid Evolutionary Algorithm [3] in a cluster computing environment. The proposed algorithm
initializes a relaxation factor in a given domain, which is adapted with time and with the fitness
of the solution vector.
3.1. Jacobi Method of Solving Linear Equations
Consider the following linear equations:

Ax = b  or  Ax − b = 0                                                     (1)

where A ∈ R^(n×n) and x, b ∈ R^n. Let A be an n × n matrix, where a_ii are the diagonal
elements and a_ij the other elements of the matrix A, and b_i are the elements of the vector b.
The Jacobi method is used for solving the linear equations [26]. Let the linear equations be
Ax = b with A ≠ 0, where A = (D + U + L). Then

(D + U + L)x = b,
Dx = b − (U + L)x,  or  x = D^(−1)b − D^(−1)(U + L)x,
or  x = H_j x + V_j,

where H_j = D^(−1)(−L − U) and V_j = D^(−1)b. The linear equations can be written as
Σ_{j=1}^{n} a_ij x_j = b_i ,   (i = 1, 2, 3, ..., n)                       (2)

In the Jacobi method, the SR technique [27] gives

x_i^(k+1) = x_i^(k) + (ω / a_ii) ( b_i − Σ_{j=1}^{n} a_ij x_j^(k) ) ,   (i = 1, 2, 3, ..., n)      (3)

In matrix form, the matrix-vector equation is

x^(k+1) = H_ω x^(k) + V_ω                                                  (4)

where H_ω, called the Jacobi iteration matrix, and V_ω are given by

H_ω = D^(−1) { (1 − ω) I − ω (L + U) }                                     (5)
V_ω = ω D^(−1) b                                                           (6)
If ω is set to a value between 0 and 1, the result is a weighted average of the corresponding
previous result and the sum of the other (present or previous) results. This is typically employed
to make a non-convergent system converge, or to hasten convergence by dampening out
oscillations; this approach is called successive under relaxation. For values of ω from 1 to 2,
extra weight is placed on the new value. In this instance, there is an implicit assumption that the
new value is moving in the correct direction towards the true solution but at a very slow rate;
thus, the added weight ω is intended to improve the estimate by pushing it closer to the truth.
This type of modification, which is called over relaxation, is designed to accelerate the
convergence of an already convergent system, and the approach is called successive over
relaxation (SOR). The combined approach, i.e. for values of ω from 0 to 2, is called successive
relaxation or the SR technique [26].
The iterative process is continued using equation (3) until a satisfactory result is achieved. Based
on this method, different hybrid evolutionary algorithms are developed in [1], [2]. A parallel
iterative solution method for large sparse linear equation systems is developed in [28].
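As an illustration of equation (3), the following Java sketch performs one Jacobi-SR sweep over a dense system; the double[][] storage and the method name are assumptions made for the example, not the paper's implementation.

// Minimal sketch of one Jacobi-SR sweep, eq. (3): each component is relaxed by
// omega towards the Jacobi update. Dense double[][] storage is an illustrative
// assumption; the paper's implementation details are not shown.
public class JacobiSr {
    static double[] sweep(double[][] a, double[] b, double[] x, double omega) {
        int n = b.length;
        double[] next = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int j = 0; j < n; j++) sum += a[i][j] * x[j];   // includes a_ii * x_i
            next[i] = x[i] + omega * (b[i] - sum) / a[i][i];     // eq. (3)
        }
        return next;
    }
}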
3.2. Proposed Distributed Technique
The availability of powerful networked computers represents a wealth of computing resources
for solving problems with a large computational effort. The proposed distributed technique uses
a master-slave architecture and a classical numerical method with an evolutionary algorithm for
solving complex problems.
In this system, large problems are decomposed into smaller subproblems and mapped onto the
computers available in the distributed system. The communication process between the master
and the slaves follows a message passing paradigm identical to that of [29]. This technique
introduces high-performance cluster computation, linking the slaves through a LAN to exploit
the advantage of distributed processing of the subproblems. Here each slave in a cluster always
solves equally weighted subproblems, although the machine configurations are different. In this
cluster computing environment, the master processor defines the number of subproblems
according to the number of slaves in the cluster. The workflow diagram of the proposed
algorithm is portrayed in Figure 2.
Figure 2. The workflow diagram of DEC system
The proposed workflow diagram can be mapped onto the master-slave paradigm. In Figure 3, all
the steps of the proposed DEC are indicated by a number at each position. Steps 1, 2 and 7 are
completed in the master processor and steps 3, 4 and 5 occur in the slave processors, while the
working principle of step 7 depends on the selection method. For both methods, step 6 is
dedicated to checking whether the selected offspring from all slaves have returned to the master
processor or not.
Figure 3. Working model of proposed technique
3.2.1. Initialization
In step 1, the master processor initializes the problem with a population and relaxation factors.
The initial population X^(0) = { x_1^(0), x_2^(0), ..., x_N^(0) } is generated randomly at the
master using a normal distribution. Here N is the population size. Let k ← 0, where k is the
generation counter. The relaxation factors ω^(0) = { ω_1^(0), ω_2^(0), ..., ω_N^(0) } are also
initialized, one per individual.
3.2.2. Recombination
In step 2, the recombination operation is performed on the parents at the master and the
population

X^(k+c) = R X^(k)

is obtained, where R = (r_ij)_{N×N} with Σ_{j=1}^{N} r_ij = 1 and r_ij ≥ 0 for 1 ≤ i ≤ N. The
matrix R is a stochastic matrix.
The population generated after the recombination operation is distributed among the slaves.
Then the mutation, fitness evaluation and adaptation operations are performed on the distributed
subpopulations.
3.2.3. Mutation
After the recombination operation, step 3 applies the mutation operation to the subpopulation in
the slave processors, and the mutated subpopulation X^(k+m) is obtained. For each individual of
the subpopulation, the mutation is

x_i^(k+m) = H_ω x_i^(k+c) + V_ω ,   i = 1, 2, ..., N_sub                   (7)
3.2.4. Fitness Evaluation
After the mutation operation, the fitness evaluation is performed in step 4. The error function is

E = Σ_{i=1}^{N_sub} E_i                                                    (8)

where E_i = Σ_{j=1}^{N_sub} | a_ij x_j − b_i | ,   i = 1, 2, ..., N_sub.

The fitness evaluation of an individual measures the accuracy of the individual for a particular
problem and calculates the error rate, which is used for selecting the best individuals.
3.2.5. Adaptation
In step 5, adaptation is performed on the mutated offspring according to the fitness evaluation.
Consider two individuals X and Y, their corresponding errors e_x and e_y, and relaxation factors
ω_x and ω_y. If e_x > e_y, then ω_x is adapted towards ω_y:

ω_x^m = (0.5 + p_x)(ω_x + ω_y)

Similarly, if e_y > e_x, then ω_y is adapted towards ω_x, and if e_x = e_y, no adaptation is
performed. P_x denotes the adaptive (TVA) probability parameter of ω_x.
In step 6, the controller checks whether the offspring from all slaves have reached the master
processor or not. In the BAS method, step 6 starts its task after completion of the adaptation
operation, but in the TS method it starts after completion of the partial selection of the best
offspring in each slave.
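A minimal Java sketch of this adaptation step is given below, assuming the TVA probabilities P_x and P_y are already computed (equations (13)-(20)); the symmetric update used for ω_y is inferred from the text and is therefore an assumption.

// Minimal sketch of the adaptation step of Section 3.2.5: the relaxation factor
// of the worse individual is pulled towards the better one using
// omega_x^m = (0.5 + p_x)(omega_x + omega_y). The TVA probabilities are taken
// as precomputed arguments; the symmetric case for omega_y is an assumption.
public class AdaptationSketch {
    static double[] adapt(double ex, double ey, double wx, double wy, double px, double py) {
        if (ex > ey) {
            wx = (0.5 + px) * (wx + wy);   // x is worse: adapt omega_x towards omega_y
        } else if (ey > ex) {
            wy = (0.5 + py) * (wx + wy);   // y is worse: adapt omega_y towards omega_x
        }                                   // ex == ey: no adaptation
        return new double[]{wx, wy};
    }
}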
3.2.6. Selection
The DEC system provides two selection methods, named BAS and TS. In the BAS method, the
best individuals are selected from among the parents and the mutated offspring. At position 7,
the mutated offspring from all slaves are combined and the selection mechanism is performed on
the parents and the newly generated offspring produced by mutation. Mathematically,

Selection(i) = err_min { x_1, x_2, x_3, ..., x_{2N−i+1} }                  (9)

Selection = Σ_{i=1}^{N} Selection(i)                                       (10)

Individuals compete among themselves and the best individuals are selected based on their error
values. The overall system continues this process until the optimized solution is found.
The TS method performs a partial selection operation on the mutated offspring in each slave,
where the better individual of each pair of consecutive individuals is selected and a twin copy of
the selected offspring is developed. Mathematically,

Selection(i) = err_min { x_i, x_{i+1} }                                    (11)

Selection = Σ_{i=1}^{N_sub} Selection(i)                                   (12)

These selected offspring are combined at the server, and the best one is selected. The selected
copy is the champion among all individuals, i.e. the optimized solution for the particular
problem, if it fulfils the desired condition; otherwise it fills up the whole archive and the next
generation is continued. There is no direct communication among the subpopulations. All
communications are performed between the subpopulations and the central server. Such
communications take place in every generation until the accurate result is reached.
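The following Java sketch illustrates the two selection rules, reducing each individual to its error value; the list-based representation is an assumption made for the example, and the relaxation-factor bookkeeping is omitted.

import java.util.*;

// Minimal sketch of the two selection rules of Section 3.2.6. An individual is
// reduced here to its error value; BAS keeps the N best among parents and
// offspring (eqs. 9-10), TS keeps the better of each consecutive pair and
// duplicates it (eqs. 11-12). The representation is an illustrative assumption.
public class SelectionSketch {
    /** BAS: merge parents and offspring, keep the N lowest-error individuals. */
    static List<Double> bas(List<Double> parentErrors, List<Double> offspringErrors) {
        List<Double> all = new ArrayList<>(parentErrors);
        all.addAll(offspringErrors);
        Collections.sort(all);                       // ascending error
        return all.subList(0, parentErrors.size());  // best N survive
    }

    /** TS: for each consecutive pair, keep the lower error twice (a "twin"). */
    static List<Double> ts(List<Double> subpopErrors) {
        List<Double> selected = new ArrayList<>();
        for (int i = 0; i + 1 < subpopErrors.size(); i += 2) {
            double best = Math.min(subpopErrors.get(i), subpopErrors.get(i + 1));
            selected.add(best);
            selected.add(best);   // twin copy of the selected individual
        }
        return selected;
    }
}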
3.3. Scenario of “Champion Selection”
The DEC system can be compared to a game in which a champion is selected on the basis of
some criteria. At the starting moment, all players are present at the master, which acts as a
coordinator. The coordinator divides all players into different teams and assigns these teams to
different slaves. The number of players in a team is determined according to the number of
slaves in a game region, that is, a cluster. After reaching a team, some operations are performed
on that team separately and simultaneously in each slave. Each player is enriched after the
operations, and all players from each slave come back to the master. After coming back, all
players compete with each other and the best one is selected. If the selected one fulfils the
desired criteria, then it is the champion of the game; otherwise the game continues, following the
same process.
Figure 4. The model of DEC
A prototype model of DEC with six individuals over three slave computers is illustrated in
Figure 4. The figure shows the division of the main population into three subpopulations
containing two individuals each, according to the number of slave computers, where the master
processor is the coordinator that takes the decisions. Each slave accepts two of the six
individuals. Some evolutionary mechanisms are performed separately in each slave. After
completing the operations, the subpopulations are returned to the master, the individuals compete
among themselves and a champion comes out. If the champion does not meet the standard value,
this process is continued; otherwise this champion is the winner of the game.
3.4. Explanation of Selection Methods
This section presents the two selection methods that help to find the optimized solution of a
problem. The working mechanisms of these two methods are as follows.
3.4.1. BAS selection method
In this method, the best individuals are selected from among the parents and the mutated
offspring in each iteration. After all individuals come back from all slaves to the master, they
compete with each other, and the best half of the total individuals, including parents and
offspring, is kept; priorities are assigned according to the error rate. On the basis of the priority, a
champion is selected. If the error rate of the champion is less than or equal to the standard value,
it is the optimized solution of the problem; otherwise the process goes on.
Figure 5. Selection mechanism for BAS method
In Figure 5, the patterns a, b, c and d represent four parent individuals and the striped patterns
represent offspring. The errors of the parent individuals are e3, e2, e5 and e7, with corresponding
relaxation factors ω3, ω2, ω5 and ω7. Similarly, the errors of the offspring are e8, e1, e6 and e4,
with relaxation factors ω8, ω1, ω6 and ω4. Here the error index represents the value of the error.
Now the best half of the individuals is selected from among the parents and offspring according
to the error values; here two individuals are selected from the parents and two from the offspring.
The corresponding relaxation factors are also rearranged. The next generation is started with
these selected individuals and their respective relaxation factors.
The time variant adaptive (TVA) parameters are defined as

P_x = E_x × N(0, 0.25) × T_ω                                               (13)

where P_x denotes the adaptive (TVA) probability parameter of ω_x, and

P_y = E_y × N(0, 0.25) × T_ω                                               (14)

where P_y denotes the adaptive (TVA) probability parameter of ω_y, and T_ω = (1 − t/T)^γ.
Here λ and γ are exogenous parameters, used to increase or decrease the rate of change of
curvature with respect to the number of iterations; t and T denote the generation number and the
maximum number of generations respectively. Also, N(0, 0.25) is the Gaussian distribution with
mean 0 and standard deviation 0.25.
Now E_x and E_y denote the approximate initial boundaries of the variation of the TVA
parameters of ω_x and ω_y respectively. If ω* denotes the optimal relaxation factor, then

E_x = P_x|max = (ω_y − ω_x) / ( 2 (ω_x + ω_y) )                            (15)

so that ω_x^m = (0.5 + P_x|max)(ω_x + ω_y) ≈ ω_y, and

E_y = P_y|max = (ω* − ω_y) / ( 2 (ω_y − ω_L) )  or  (ω* − ω_y) / (ω_U − ω_y)        (16)

so that

ω* ≈ ω_y^m = ω_y + P_y|max (ω_U − ω_y) ,  ω_y > ω_x
           = ω_y + P_y|max (ω_L − ω_y) ,  ω_y < ω_x                        (17)
3.4.2. TS selection method
Each slave contains a subpopulation, and each subpopulation consists of at least two individuals.
After the adaptation operation, the better individual of each pair of consecutive individuals is
selected. These individuals are returned to the master and compete among themselves. After the
competition, a best-quality champion is selected according to its error value and checked against
the standard value. If the error value is less than or equal to the standard value, this is the
optimized solution; otherwise this best champion is cloned to fill the archive and the process
continues. The corresponding relaxation factors are also rearranged, which is required for the
adaptation operation of the next generation.

Figure 6. Selection mechanism for TS method

In Figure 6, the patterns a, b, c and d represent four parent individuals and the striped patterns
represent offspring. The errors of the offspring are e8, e1, e6 and e4, with corresponding
relaxation factors ω8, ω1, ω6 and ω4. There are two subpopulations on two slaves and each
contains two individuals. The error rates of the individuals in the 1st subpopulation are e1 and
e8, and in the 2nd subpopulation e6 and e4. Now, the better individual of each pair of
consecutive offspring is selected and a twin of it is made: the individual with error rate e1 is
chosen from the 1st subpopulation and the individual with error rate e6 from the 2nd
subpopulation. These individuals are returned to the master, where a best-quality champion is
selected according to the error rate. This champion is cloned to match the number of parents, and
the next generation is started with these cloned individuals. The corresponding relaxation factor
is also rearranged.
The time variant adaptive (TVA) parameters are defined as

P_x = gauss_x × P_max × T_ω                                                (18)

where P_x denotes the adaptive (TVA) probability parameter of ω_x, and

P_y = gauss_y × P_min × T_ω                                                (19)

where P_y denotes the adaptive (TVA) probability parameter of ω_y, and

T_ω = λ ln( 1 + 1/(t + λ) ) ,  λ > 10.

Approximately, P_max and P_min are two exogenous parameters that are assumed as
P_max = 0.125 and P_min = 0.0325. Here λ and γ are exogenous parameters, used to increase or
decrease the rate of change of curvature with respect to the number of iterations; t and T denote
the generation number and the maximum number of generations respectively. Also,
gauss_x(0, 0.25) is the Gaussian distribution with mean 0 and standard deviation 0.25. Now
P_max and P_min denote the approximate initial boundaries of the variation of the TVA
parameters of ω_x and ω_y respectively. If ω* denotes the optimal relaxation factor, then

ω* ≈ ω_y^m = ω_y + P_y|max (ω_U − ω_y) ,  ω_y > ω_x
            = ω_y + P_y|max (ω_L − ω_y) ,  ω_y < ω_x                       (20)
4. COMPUTATION TIME ANALYSIS OF SINGLE PROCESSOR AND DISTRIBUTED PROCESSORS
Timing is the main parameter compared between the single and distributed processors. For a
specific optimization problem, the DEC system uses the notations T_r, T_m, T_f, T_a and T_s
for the recombination time, mutation time, fitness evaluation time, adaptation time and selection
time. The total time on a single processor, indicated by T_Single, is given by

T_Single = T_r + T_m + T_f + T_a + T_s                                     (21)

In the case of the distributed system, the time is calculated at the master and at the slaves
separately. The total time is then calculated by combining the master and slave times.
4.1. Time in master processor
For a particular problem, the recombination operation is performed on the initial population and
the time T_r is calculated. Then the server distributes the individuals among all the slaves
connected to it. The number of subpopulations distributed to the slaves is p. In this case, the
marshalling and transmission times are considered. The marshalling time is the time to construct
a data stream from an object, and the transmission time is the time to place the data stream on the
channel. The marshalling time of the i-th individual is T_mi and the transmission time of the i-th
individual is T_transi. The marshalling and transmission time with p subpopulations is
p [ T_mi + T_transi ].
4.2. Time in slave processors
The mutation, fitness evaluation and adaptation times with p subpopulations are T_m, T_f and
T_a respectively. The unmarshalling time T_umi is considered here; it is the time to create an
object from a data stream. After the adaptation operation, each slave sends the mutated offspring
to the server when the BAS selection method is used, whereas in the TS method the slaves send
the individuals that were selected as best quality in that slave. In the BAS method the selection
operation is performed in the master processor, while in the TS method a partial selection
operation is completed in the slave processor; so, for the TS method, this selection time is
calculated in the slave processor and sent to the master. The processing time of each slave is not
equal, because of communication delays. In the experimental calculation, the maximum time is
considered in all cases.
4.3. Time in master processor for selection
The basic difference between the BAS and TS selection methods is that the selection operation is
performed in two different ways, so the selection time for the BAS and TS methods is calculated
in different manners. The selection time with 2N individuals is T_s: N individuals for the parents
and N individuals for the offspring. The maximum slave time is considered for calculating the
speedup. Let m be the number of slaves. In the BAS selection method, the total time for the
distributed processors is as follows:

T_Distributed(BAS) = T_r + p [ T_mi + T_transi ] + max_{1...m} ( T_m + T_f + T_a + p T_umi ) + T_s        (22)

where T_Distributed represents the total distributed time, and T_r, T_mi, T_transi, T_m, T_f,
T_a, T_umi and T_s represent the recombination, marshalling, transmission, mutation, fitness
evaluation, adaptation, unmarshalling and selection times.
The speedup of the distributed technique using the BAS method is:

Speedup = T_Single / T_Distributed(BAS)                                    (23)

In the TS selection method, the selection operation is performed on the mutated offspring in
each slave, where the better individual of each pair of consecutive individuals is selected, and the
corresponding time is obtained. Then the slaves send the selected offspring, with the selection
time calculated at the slave, to the master. The total selection time with p subpopulations is T_s.
Let m be the number of slaves. In the case of the TS method:

T_Distributed(TS) = T_r + p [ T_mi + T_transi ] + [ T_f + T_m + T_a + T_s ]        (24)

The speedup for the TS method is:

Speedup = T_Single / T_Distributed(TS)                                     (25)

Furthermore, the percentage of improvement is:

% = Speedup / Number of computers                                          (26)

The computation time on the distributed processors will be less than on a single processor, so
the speedup rises gradually with an increasing number of individuals in a population.
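As a small worked example of equations (23), (25) and (26), the Java sketch below reproduces the percentage-of-improvement computation using the speedup value reported in Section 5.1 for the BAS method with 5 slave computers.

// Minimal sketch of the speedup bookkeeping: Speedup = T_Single / T_Distributed
// (eqs. 23 and 25) and % = Speedup / number of computers (eq. 26). The numeric
// value 3.36 is the speedup reported in Section 5.1 for BAS with 5 slaves.
public class SpeedupSketch {
    public static void main(String[] args) {
        double speedup = 3.36;                          // BAS method, 5 slave computers
        int computers = 5;
        double percent = 100.0 * speedup / computers;   // eq. (26)
        System.out.printf("improvement = %.1f%%%n", percent);   // prints 67.2%
    }
}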
5. EXPERIMENTAL RESULTS
The experimental environment consists of 15 homogeneous computers and a 100 Mbits/sec
Ethernet switch. The system is implemented in Java on personal computers with an Intel
Pentium IV and 512 MB RAM. In our experiments, the individual values are generated
randomly; here, the random values are generated using a normal distribution over the range from
−15 to 15. In order to evaluate the effectiveness and efficiency of the proposed algorithm,
various problems are tested. One problem, tested with the different approaches, is shown in the
different graphs. The testing problem is:

Ax = b,  where a_ii = 20.0, a_ij ∈ (0, 1), b_i = 10 * i, and i = 1, 2, 3, ..., n, j = 1, 2, 3, ..., n.
The parameter is n = 100 and the problem was solved with an error rate in the range from
9 × 10^−9 to 1 × 10^−8 for both the BAS and TS methods. Different experiments were carried
out using this problem.
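For illustration, the following Java sketch builds this test problem; the use of java.util.Random with a fixed seed is an assumption made for the example, and the paper's generator for the individuals (a normal distribution over the range −15 to 15) is not shown.

import java.util.Random;

// Minimal sketch of the test problem of Section 5: a_ii = 20.0, a_ij drawn from
// (0,1) for i != j, b_i = 10 * i, with n = 100. Random with a fixed seed is an
// illustrative assumption.
public class TestProblem {
    // Builds the coefficient matrix: a_ii = 20.0 on the diagonal, a_ij in (0,1) elsewhere.
    static double[][] buildA(int n, Random rnd) {
        double[][] a = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                a[i][j] = (i == j) ? 20.0 : rnd.nextDouble();
        return a;
    }

    // Builds the right-hand side: b_i = 10 * i (1-based index, as in the paper).
    static double[] buildB(int n) {
        double[] b = new double[n];
        for (int i = 0; i < n; i++) b[i] = 10.0 * (i + 1);
        return b;
    }

    public static void main(String[] args) {
        int n = 100;                                   // problem size used in the experiments
        double[][] a = buildA(n, new Random(42));      // fixed seed: an illustrative assumption
        double[] b = buildB(n);
        System.out.println(a.length + " equations, b[0] = " + b[0]);   // 100 equations, b[0] = 10.0
    }
}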
Table 1 gives the results produced by the proposed distributed algorithm and by the
single-processor algorithm. Here, different problems are tested, comparing the time between the
single and distributed processors. Table 1 provides the results using the BAS method; similarly,
Table 2 summarizes the experimental results of various problems with the TS method. In these
two tables, the number of computers in a cluster acting as distributed processors is five. In all
cases, the optimum solution is achieved. It is possible to solve various benchmark problems
using the proposed distributed system.
It is very clear from Table 1 and Table 2 that the performance of the distributed processors is
better than that of the single processor. Although BAS and TS are both selection methods used
in the distributed system, the BAS method shows better performance than the TS method.
Table 1. Experimental Results of Time for Different Problems in Single and Distributed
Processors for BAS Method

Problem  Problem definition                                     Individual  Parameter  Time in Single  Time in Distributed  Error
Number                                                          Number      Number     Processor (s)   Processor (s)
p1       a_ii = 20.0; a_ij ∈ (0,1); b_i = 10*i                  40          170        4.02 × 10^−1    6.88 × 10^−2         9 × 10^−4
p2       a_ii = 20n; a_ij = j; b_i = ...                        40          120        3.68 × 10^−1    6.75 × 10^−2         9 × 10^−9
p3       a_ii = 2i^2; a_ij = j; b_i = ...                       40          120        3.38 × 10^−1    6.16 × 10^−2         9 × 10^−9
p4       a_ii ∈ (−100,100); a_ij ∈ (−10,10); b_i ∈ (−100,100)   20          100        1.18 × 10^−1    3.57 × 10^−2         9 × 10^−9
p5       a_ii ∈ (−70,70); a_ij ∈ (0,7); b_i ∈ (0,70)            20          100        1.33 × 10^−1    3.45 × 10^−2         9 × 10^−9
p6       a_ii = 70; a_ij ∈ (−10,10); b_i ∈ (−70,70)             40          100        1.85 × 10^0     2.13 × 10^−2         9 × 10^−4
Table 2. Experimental Results of Time for Different Problems in Single and Distributed
Processors for TS Method

Problem  Problem definition                                     Individual  Parameter  Time in Single  Time in Distributed  Error
Number                                                          Number      Number     Processor (s)   Processor (s)
p1       a_ii = 20.0; a_ij ∈ (0,1); b_i = 10*i                  20          100        1.21 × 10^−1    9.93 × 10^−2         1 × 10^−8
p2       a_ii = 20n; a_ij = j; b_i = ...                        20          100        1.28 × 10^−1    1.05 × 10^−1         1 × 10^−8
p3       a_ii = 2i^2; a_ij = j; b_i = ...                       20          100        1.48 × 10^−1    1.22 × 10^−1         1 × 10^−8
p4       a_ii ∈ (−100,100); a_ij ∈ (−10,10); b_i ∈ (−100,100)   20          100        1.46 × 10^−1    1.34 × 10^−1         1 × 10^−5
p5       a_ii ∈ (−70,70); a_ij ∈ (0,7); b_i ∈ (0,70)            20          50         1.06 × 10^−1    9.59 × 10^−2         1 × 10^−4
p6       a_ii = 70; a_ij ∈ (−10,10); b_i ∈ (−70,70)             20          10         1.12 × 10^−1    9.94 × 10^−2         1 × 10^−4
5.1. Speedup comparison between BAS and TS method
To compare the speedup, the BAS and TS methods follow the system standard with 40
individuals for 5 and 10 computers in a cluster, and 30 individuals for 15 computers, with 100
parameters in each case.
Figure 7. Speed up measurement according to number of computers in a cluster
In Figure 7, the speedup is calculated using eqn (23) for the BAS method and eqn (25) for the TS
method. The speedup is 3.36, 4.57 and 6.72 for the BAS method and 2.109, 1.999 and 2.421 for
the TS method when the number of slave computers is 5, 10 and 15 respectively. The percentage
of improvement, calculated using eqn (26), is 67.2%, 45.7% and 44.8% for the BAS method, and
21.09%, 19.99% and 24.21% for the TS method. It can easily be seen that the BAS method
provides better performance than the TS method.
Figure 8. Time (in seconds) measurement according to computer number in a cluster
Figure 8 shows the time required to perform the genetic operations in each slave. Here the
number of slave computers in a cluster is 15, the number of individuals is 30 and the number of
parameters is 100. There is no load balancing system, so different slave computers need different
times.
Intuitively, workload balancing for a distributed system can be difficult because the working
environment in a network is often complex and uncertain. From Figure 8, we can see that
computer number 1 needs the highest computation time among all slaves. For calculating the
total distributed time, this maximum value is considered in both methods.
5.2. Comparing time with dimension
Figure 9 and Figure 10 show the time required, as a function of the number of parameters, to
solve the problems using the BAS and TS methods, and compare the performance of the single
and distributed processors. The system runs with 5 and 20 slave computers in a cluster, and the
numbers of parameters are 50, 60, 70, 80, 90 and 100. There is some fluctuation of the time with
an increasing number of parameters because of the random generation. All parameters are the
same for the two methods. The distributed system needs less time than the single system in both
cases, but the BAS method is a better selection mechanism than the TS method.
Figure 9. Time measurement according to the number of parameters in BAS method
Figure 10. Time measurement according to the number of parameters in TS method
5.3. Comparing time with individual
The time is compared between the single and distributed processors according to the number of
individuals, using the BAS and TS methods; this is shown in Figure 11 and Figure 12. The
number of slave computers in a cluster is 5 and the number of parameters is 100 in all cases,
while the number of individuals varies over the values 10, 20, 30, 40 and 50. The time increases
with an increasing number of individuals, but the single system always needs more time than the
distributed system.
Figure 11. Measurement of time with number of individuals in BAS method
Figure 12. Measurement of time with number of individuals in TS method
5.4. Comparing error with generation
Figure 13 and Figure 14 visualize the error as a function of the number of generations for the BAS and TS methods. In both cases, the efficiency of single and distributed processing is compared. Both experiments use 5 slave computers in a cluster, but the other parameters differ: the BAS method uses 100 parameters and 40 individuals, whereas the TS method uses 90 parameters and 20 individuals. The figures show that in some cases the distributed system needs fewer generations than the single system to reach convergence.
Figure 13. Error measurement according to generation in BAS method
Figure 14. Error measurement according to generation in TS method
In all cases, the performance of the distributed processor is better than that of the single processor. It is also noticeable that the BAS selection method performs considerably better than the TS method, even though the TS method completes a partial selection in each slave processor. A subtle point is that, because individuals are filtered first in each slave, better-quality individuals may be discarded by a slave. The BAS method, on the other hand, chooses the best individuals from the parents and all offspring returned by all slaves, so the BAS method reaches the optimal result faster than the TS method. The distributed selection operation of the TS method, which the BAS method does not have, apparently does not translate into better performance. In some cases, however, the TS method can provide better performance if the selection order happens to be right for each slave.
6. CONCLUSIONS
It is easy to solve linear equations with a small number of parameters on a single processor, but a high computational cost is required for a large number of parameters. This cost can be drastically reduced by using a distributed system. This paper introduced a new distributed algorithm for solving large systems of equations with many parameters. It also introduced two new selection methods, called BAS and TS, for selecting offspring. In these methods, the best individuals are selected, the computational load is distributed to each slave, and the mutation, fitness evaluation and adaptation operations are performed on the distributed load. Because a distributed technique is used, the computation time is reduced with both selection methods, but the BAS method provides better performance than the TS method. For both selection methods, the computational time is analyzed and the speedup of the new distributed computing system is calculated.
Authors
Moslema Jahan achieved the B.Sc. Engg. degree (with honours) in Computer Science and Engineering (CSE) from Khulna University of Engineering & Technology (KUET), Khulna, Bangladesh. Currently she is serving as a Lecturer in the Department of Computer Science & Engineering at Dhaka University of Engineering & Technology (DUET), Gazipur. Her research interests are in Parallel and Distributed Computing, Evolutionary Computation, Networking and Image Processing. Moslema Jahan is now a member of the Engineers Institution, Bangladesh (IEB).
M. M. A. Hashem received the Bachelor’s degree in Electrical & Electronic
Engineering from Khulna University of Engineering & Technology (KUET),
Bangladesh in 1988. He acquired his Masters Degree in Computer Science from
Asian Institute of Technology (AIT), Bangkok, Thailand in 1993 and PhD degree
in Artificial Intelligence Systems from Saga University, Japan in 1999. He is a Professor in the Department of Computer Science and Engineering, Khulna University of Engineering & Technology (KUET), Bangladesh. His research interests include Soft Computing, Intelligent Networking, Wireless Networking and Distributed Evolutionary Computing. He has published more than 50 refereed articles in international journals and conferences. He
is a life fellow of IEB and a member of IEEE. He is a coauthor of a book titled “Evolutionary
Computations: New Algorithms and their Applications to Evolutionary Robots,” Series: Studies in
Fuzziness and Soft Computing, Vol. 147, Springer-Verlag, Berlin/New York, ISBN: 3-540-20901-8,
(2004). He has served as an Organizing Chair, IEEE 2008 11th International Conference on Computer
and Information Technology (ICCIT 2008) and Workshops, held during 24-27 December, 2008 at KUET.
Currently, he is working as a Technical Support Team Consultant for Bangladesh Research and Education
Network (BdREN) in the Higher Education Quality Enhancement Project (HEQEP) of University Grants
Commission (UGC) of Bangladesh.
Gazi Abdullah Shahriar achieved the B.Sc. Engg. Degree in Computer Science and
Engineering (CSE) from Khulna University of Engineering & Technology (KUET),
Khulna, Bangladesh. Now he is working at Secure Link Services BD Ltd as a software engineer. His current research interests are in Evolutionary Computation, Distributed Computing and Bioinformatics.
| 9 |
A Polynomial Time Algorithm for Finding Area-Universal
Rectangular Layouts
arXiv:1302.3672v7 [cs.CG] 15 Sep 2016
Jiun-Jie Wang
State University of New York at Buffalo, Buffalo, NY 14260, USA.
Email: jiunjiew@buffalo.edu
Abstract
A rectangular layout L is a rectangle partitioned into disjoint smaller rectangles so that no four
smaller rectangles meet at the same point. Rectangular layouts were originally used as floorplans
in VLSI design to represent VLSI chip layouts. More recently, they are used in graph drawing as
rectangular cartograms. In these applications, an area a(r) is assigned to each rectangle r, and
the actual area of r in L is required to be a(r). Moreover, some applications require that we use
combinatorially equivalent rectangular layouts to represent multiple area assignment functions. L
is called area-universal if any area assignment to its rectangles can be realized by a layout that is
combinatorially equivalent to L.
A basic question in this area is to determine whether a given plane graph G has an area-universal rectangular layout. A fixed-parameter-tractable algorithm for solving this problem was obtained in [4]. Their algorithm takes O(2^{O(K^2)} n^{O(1)}) time (where K is the maximum number of degree-4 vertices in any minimal separation component), which is exponential in the general case. It is an
open problem to find a true polynomial time algorithm for solving this problem. In this paper, we
describe such a polynomial time algorithm. Our algorithm is based on new studies of properties of
area-universal layouts. The polynomial run time is achieved by exploring their connections to the
regular edge labeling construction.
1 Introduction
A rectangular layout L is a partition of a rectangle R into a set R(L) = {r1 , . . . , rn } of disjoint smaller
rectangles by vertical and horizontal line segments so that no four smaller rectangles meet at the same
point. An area assignment function of a rectangular layout L is a function a : R(L) → R+ . We say L is
a rectangular cartogram for a if the area of each ri ∈ R(L) equals a(ri ). We also say L realizes the
area assignment function a.
Rectangular cartograms were introduced in [14] to display certain numerical quantities associated
with geographic regions. Each rectangle ri represents a geographic region. Two regions are geographically
adjacent if and only if their corresponding rectangles share a common boundary in L. The areas of the
rectangles represent the numeric values being displayed by the cartogram.
In some applications, several sets of numerical data must be displayed as cartograms of the same set
of geographic regions. For example, three figures in [14] are the cartograms of land area, population, and
wealth within the United States. In such cases, we wish to use cartograms whose underlying rectangular
layouts are combinatorially equivalent (to be defined later). Fig 1 (1) and (2) show two combinatorially
equivalent layouts with different area assignments. The following notion was introduced in [4].
Definition 2. A rectangular layout L is area-universal if any area assignment function a of L can be
realized by a rectangular layout that is combinatorially equivalent to L.
A natural question is: which layouts are area-universal? A nice characterization of area-universal
rectangular layouts was discovered in [4]:
Figure 1: Examples of rectangular layout: (1) and (2) are two combinatorially equivalent layouts with
different area assignments. Both are area-universal layouts. (3) A layout that is not area-universal.
Theorem 3. [4] A rectangular layout L is area-universal if and only if every maximal line segment in
L is a side of at least one rectangle in L. (A maximal line segment is a line segment in L that cannot
be extended without crossing other line segments in L.)
In Fig 1, the layouts (1) and (2) are area-universal, but the layout (3) is not. (The maximal vertical
line segment s is not a side of any rectangle.)
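Theorem 3 is easy to test directly on a concrete layout. The Python sketch below is only an illustration under assumptions made for this note (rectangles as axis-aligned boxes, maximal segments given by their endpoints in the same canonical order as the rectangle sides); it is not part of [4].

```python
def sides(rect):
    # The four sides of an axis-aligned rectangle (x1, y1, x2, y2), x1 < x2 and y1 < y2,
    # each side given by its endpoints in increasing coordinate order.
    x1, y1, x2, y2 = rect
    return [((x1, y1), (x2, y1)),   # bottom
            ((x1, y2), (x2, y2)),   # top
            ((x1, y1), (x1, y2)),   # left
            ((x2, y1), (x2, y2))]   # right

def is_area_universal(rects, maximal_segments):
    # Theorem 3: L is area-universal iff every maximal line segment of L
    # is a full side of at least one rectangle of L.
    all_sides = {s for r in rects for s in sides(r)}
    return all(seg in all_sides for seg in maximal_segments)

# Tiny example: a rectangle split into two halves by one vertical segment.
rects = [(0, 0, 1, 2), (1, 0, 2, 2)]
segments = [((1, 0), (1, 2))]       # the single maximal interior segment
print(is_area_universal(rects, segments))   # True
```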
For a plane graph G, we say a rectangular layout L represents G if the following hold: (1) The set of
smaller rectangles of L one-to-one corresponds to the set of vertices of G; and (2) two vertices u and v
are adjacent in G if and only if their corresponding rectangles in L share a common boundary. In other
words, if L represents G, then G is the dual graph of small rectangles in L.
Area-universal rectangular layout representations of graphs are useful in other fields [13]. In VLSI
design, for example [17], the rectangles in L represent circuit components, and the common boundary
between rectangles in L model the adjacency requirements between components. In early VLSI design
stage, the chip areas of circuit components are not known yet. Thus, at this stage, only the relative
positions of the components are considered. At later design stages, the areas of the components (namely,
the rectangles in L) are specified. An area-universal layout L enables the realization of the area assignments specified at later design stages. Thus, the ability of finding an area-universal layout at the early
design stage will greatly simplify the design process at later stages. The applications of rectangular
layouts and cartograms in building design and in tree-map visualization can be found in [2, 1]. Heuristic
algorithms for computing the coordinates of a rectangular layout that realizes a given area assignment
function were presented in [16, 11].
A plane graph G may have many rectangular layouts. Some of them may be area-universal, while
the others are not. Not every plane graph has an area-universal layout. In [15], Rinsma described an
outerplanar graph G and an area assignment to its vertices such that no rectangular layout realizes the
area assignment. Thus it is important to determine if G has an area-universal layout or not. Based
on Theorem 3, Eppstein et al. [4] described an algorithm that finds an area-universal layout for G if one exists. Their algorithm takes O(2^{O(K^2)} n^{O(1)}) time, where K is the maximum number of degree-4 vertices in any minimal separation component. For a fixed K, the algorithm runs in polynomial time. However, their algorithm takes exponential time in the general case.
In this paper, we describe the first polynomial-time algorithm for solving this problem. Our algorithm
is based on studies of properties of area-universal layouts and their connection to the regular edge labeling
construction. The paper is organized as follows. In §2, we introduce basic definitions and preliminary
results. §3 outlines a Face-Addition algorithm with exponential time that determines if G has an area-universal rectangular layout. §4 introduces the concepts of forbidden pairs, G-pairs and M-triples that
are extensively used in our algorithm. In §5, we describe how to convert the Face-Addition algorithm
with exponential time to an algorithm with polynomial time.
2
vN
vN
c
b g
vW
h vE
e
a f d
vS
vN
c
b
g
vW
c
b
h
e
a
f
b
g
vE vW
h
e
a
d
vS
(1) Lext
vN
c
(2) Gext
f
d
g
vE vW
h
e
a
f
vS
vS
(3) G1
(4) G2
vE
d
Figure 4: Examples of rectangular layout and REL. (1) Rectangular layout Lext ; (2) The graph
corresponding to Lext with an REL R = {T1 , T2 }; (3) the graph G1 of R; (4) the graph G2 of R.
2 Preliminaries
In this section, we give definitions and important preliminary results. Definitions not mentioned here
are standard. A graph G = (V, E) is called planar if it can be drawn on the plane with no edge crossings.
Such a drawing is called a plane embedding of G. A plane graph is a planar graph with a fixed plane
embedding. A plane embedding of G divides the plane into a number of connected regions. Each region
is called a face. The unbounded region is called the exterior face. The other regions are called interior
faces. The vertices and edges on the exterior face are called exterior vertices and edges. Other vertices
and edges are called interior vertices and edges. We use cw and ccw as the abbreviation of clockwise and
counterclockwise, respectively.
For a simple path P = {v1 , v2 , · · · , vp } of G, the length of P is the number of edges in P . P is called
chord-free if for any two vertices vi , vj with |i − j| > 1, the edge (vi , vj ) ∉ E. A triangle of a plane graph
G is a cycle C with three edges. C divides the plane into its interior and exterior regions. A separating
triangle is a triangle in G such that there are vertices in both the interior and the exterior of C.
When discussing the rectangular layout L of a plane graph G, we can simplify the problem as follows.
Let a, b, c, d be the four designated exterior vertices of G that correspond to the four rectangles in L
located at the southwest, northwest, northeast and southeast corners, respectively. Let the extended
graph Gext be the graph obtained from G as follows:
1. Add four vertices vW , vN , vE , vS and four edges (vW , vN ), (vN , vE ), (vE , vS ), (vS , vW ) into Gext .
2. Connect vW to every vertex of G on the exterior face between a and b in cw order. Connect vN
to every vertex of G on the exterior face between b and c in cw order. Connect vE to every vertex
of G on the exterior face between c and d in cw order. Connect vS to every vertex of G on the
exterior face between d and a in cw order.
See Figs 4 (1) and (2) for an example. It is well known [12] that G has a rectangular layout L if
and only if Gext has a rectangular layout Lext , where the rectangles corresponding to vW , vN , vE , vS are
located at the west, north, east and south boundary of Lext , respectively. Not every plane graph has
rectangular layouts. The following theorem characterizes the plane graphs with rectangular layouts.
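The construction of Gext described above is mechanical; the following sketch shows one possible implementation with a plain adjacency-set representation. The names of the four poles and the assumption that the exterior cycle is supplied in cw order starting at a are simplifications made for this illustration only.

```python
def extended_graph(adj, exterior_cw, a, b, c, d):
    """Build Gext from G.

    adj: dict mapping each vertex of G to a set of neighbours.
    exterior_cw: exterior vertices of G in clockwise order, starting at a.
    a, b, c, d: the four designated corner vertices (SW, NW, NE, SE).
    """
    ext = {v: set(nbrs) for v, nbrs in adj.items()}
    for pole in ('vW', 'vN', 'vE', 'vS'):      # four new exterior vertices
        ext[pole] = set()

    def connect(pole, start, stop):
        # Connect pole to every exterior vertex from start to stop (inclusive), cw order.
        i = exterior_cw.index(start)
        while True:
            v = exterior_cw[i % len(exterior_cw)]
            ext[pole].add(v); ext[v].add(pole)
            if v == stop:
                break
            i += 1

    connect('vW', a, b); connect('vN', b, c); connect('vE', c, d); connect('vS', d, a)
    # The four exterior edges of Gext.
    for u, v in [('vW', 'vN'), ('vN', 'vE'), ('vE', 'vS'), ('vS', 'vW')]:
        ext[u].add(v); ext[v].add(u)
    return ext
```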
Theorem 5. [12] A plane graph G has a rectangular layout L with four rectangles on its boundary if
and only if:
1. Every interior face of G is a triangle and the exterior face of G is a quadrangle; and
2. G has no separating triangles.
3
A plane graph that satisfies the conditions in Theorem 5 is called a proper triangular plane graph.
From now on we only consider such graphs.
Our algorithm relies heavily on the concept of the regular edge labeling (REL) introduced in [9].
RELs have also been studied by Fusy [7, 8], who refers to them as transversal structures. RELs are closely
related to several other edge coloring structures of planar graphs that can be used to describe straight
line embeddings of orthogonal polyhedra [5, 6].
Definition 6. Let G be a proper triangular plane graph. A regular edge labeling REL R = {T1 , T2 } of
G is a partition of the interior edges of G into two subsets T1 , T2 of directed edges such that:
• For each interior vertex v, the edges incident to v appear in ccw order around v as follows: a set
of edges in T1 leaving v; a set of edges in T2 leaving v; a set of edges in T1 entering v; a set of
edges in T2 entering v. (Each of the four sets contains at least one edge.)
• Let vN , vW , vS , vE be the four exterior vertices in ccw order. All interior edges incident to vN are
in T1 and entering vN . All interior edges incident to vW are in T2 and entering vW . All interior
edges incident to vS are in T1 and leaving vS . All interior edges incident to vE are in T2 and
leaving vE .
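The local condition of Definition 6 can be checked vertex by vertex. In the sketch below (the tag encoding of incident edges is an assumption of this illustration), each incident edge of an interior vertex is described, in ccw order, by one of the tags '1out', '2out', '1in', '2in', and the cyclic tag sequence must decompose into exactly four non-empty blocks in this order.

```python
def valid_interior_vertex(ccw_tags):
    """Check the REL condition at one interior vertex.

    ccw_tags: labels of the incident edges in ccw order around the vertex,
    each one of '1out', '2out', '1in', '2in' (Ti and direction w.r.t. the vertex).
    """
    pattern = ['1out', '2out', '1in', '2in']
    n = len(ccw_tags)
    for shift in range(n):                      # try every rotation of the cyclic order
        rotated = ccw_tags[shift:] + ccw_tags[:shift]
        blocks, i = [], 0
        while i < n:                            # compress consecutive equal tags into blocks
            j = i
            while j < n and rotated[j] == rotated[i]:
                j += 1
            blocks.append(rotated[i]); i = j
        if blocks == pattern:                   # four non-empty blocks in the required order
            return True
    return False

# Example: a degree-6 interior vertex with blocks of sizes 2, 1, 2, 1.
print(valid_interior_vertex(['1out', '1out', '2out', '1in', '1in', '2in']))  # True
```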
Fig 4 (2) shows an example of REL. (The green solid lines are edges in T1 . The red dashed lines
are edges in T2 .) It is well known that every proper triangular plane graph G has a REL, which can
be found in linear time [9, 10]. Moreover, from a REL of G, we can construct a rectangular layout L
of G in linear time [9, 10]. Conversely, if we have a rectangular layout L for G, we can easily obtain a
REL R of G as follows. For each interior edge e = (u, v) in G, we label and direct e according to the
following rules. Let ru and rv be the rectangle in L corresponding to u and v respectively.
• If ru is located below rv in L, the edge e is in T1 and directed from u to v.
• If ru is located to the right of rv in L, the edge e is in T2 and directed from u to v.
The REL R obtained as above is called the REL derived from L. (See Fig 4 (1) and (2)).
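The two labeling rules above translate directly into code. In the following simplified sketch (the coordinate representation and the adjacency test are assumptions of this illustration, and the exterior edges are not treated specially), rectangles are axis-aligned boxes and two rectangles are adjacent when they share a boundary segment of positive length.

```python
def derive_rel(rects):
    """Derive the edge labeling from a rectangular layout.

    rects: dict mapping vertex name -> (x1, y1, x2, y2) with x1 < x2, y1 < y2.
    Returns (T1, T2): sets of directed edges (u, v).
    """
    def overlap(a1, a2, b1, b2):
        return min(a2, b2) - max(a1, b1) > 0        # positive-length 1D overlap

    T1, T2 = set(), set()
    names = list(rects)
    for u in names:
        for v in names:
            if u == v:
                continue
            ux1, uy1, ux2, uy2 = rects[u]
            vx1, vy1, vx2, vy2 = rects[v]
            # r_u directly below r_v, sharing a horizontal segment -> edge u -> v in T1.
            if uy2 == vy1 and overlap(ux1, ux2, vx1, vx2):
                T1.add((u, v))
            # r_u directly to the right of r_v, sharing a vertical segment -> edge u -> v in T2.
            if ux1 == vx2 and overlap(uy1, uy2, vy1, vy2):
                T2.add((u, v))
    return T1, T2
```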
Definition 7. Let L1 and L2 be two rectangular layouts of a proper triangular plane graph G. We say
L1 and L2 are combinatorially equivalent if the RELs of G derived from L1 and from L2 are identical.
Thus, the RELs of G one-to-one correspond to the combinatorially equivalent rectangular layouts
of G. We can obtain two directed subgraphs G1 and G2 of G from an REL R = {T1 , T2 } as follows.
• The vertex set of G1 is V . The edge set of G1 consists of the edges in T1 with direction in T1 , and
the four exterior edges directed as: vS → vW , vS → vE , vW → vN , vE → vN .
• The vertex set of G2 is V . The edge set of G2 consists of the edges in T2 with direction in T2 , and
the four exterior edges directed as: vS → vW , vN → vW , vE → vS , vE → vN .
Fig 4 (3) and (4) show the graph G1 and G2 for the REL shown in Fig 4 (2). For each face f1 in
G1 , the boundary of f1 consists of two directed paths. They are called the two sides of f1 . Each side of
f1 contains at least two edges. Similar properties hold for the faces in G2 [7, 8, 9, 10].
Definition 8. A REL R = {T1 , T2 } of G is called slant if for every face f in either G1 or G2 , at least
one side of f contains exactly two directed edges.
Theorem 3 characterizes the area-universal layouts in terms of maximal line segments in L. The
following lemma characterizes area-universal layouts in term of the REL derived from L.
Figure 10: (1) a fan F(a, b, c) has the back boundary (a, b, c) and the front boundary (a, d, e, f, c); (2) a
mirror fan M(a, f, e) has the back boundary (a, b, c, d, e) and the front boundary (a, f, e).
Lemma 9. A rectangular layout L is area-universal if and only if the REL R derived from L is slant.
Proof: Note that each face in G1 (G2 , respectively) corresponds to a maximal vertical (horizontal,
respectively) line segment in L. (In the graph G1 in Fig 4 (3), the face f1 with the vertices f, e, g, c, h
corresponds to the vertical line segment that is on the left side of the rectangle h in Fig 4 (1)).
Assume L is area-universal. Consider a face f in G1 . Let lf be the maximal vertical line segment
in L corresponding to f . Since L is area-universal, lf is a side of a rectangle r in L. Without loss of
generality, assume r is to the left of lf . Then the left side of the face f consists of exactly two edges.
Thus G1 satisfies the slant property. Similarly, we can show G2 also satisfies the slant property.
Conversely, assume R is a slant REL. The above argument can be reversed to show that L is
area-universal.
The REL shown in Fig 4 (2) is not slant because the slant property fails for one G2 face. So the
corresponding layout shown in Fig 4 (1) is not area-universal. By Lemma 9, the problem of finding an
area-universal layout for G is the same as the problem of finding a slant REL for G. From now on, we
consider the latter problem and G always denotes a proper triangular plane graph.
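Given the faces of G1 and G2 as pairs of directed side paths (a representation chosen here only for illustration), checking the slant property of Definition 8 is a one-line test per face:

```python
def is_slant(faces_G1, faces_G2):
    # Definition 8 / Lemma 9: the REL is slant iff every face of G1 and of G2
    # has a side consisting of exactly two directed edges.
    def face_ok(face):
        side_a, side_b = face          # the two sides of the face, as edge lists
        return len(side_a) == 2 or len(side_b) == 2
    return all(face_ok(f) for f in faces_G1) and all(face_ok(f) for f in faces_G2)

# Hypothetical face data: each face is (one side, other side).
faces_G1 = [([('a', 'b'), ('b', 'c')], [('a', 'd'), ('d', 'e'), ('e', 'c')])]
faces_G2 = [([('x', 'y'), ('y', 'z'), ('z', 'w')], [('x', 'u'), ('u', 'w')])]
print(is_slant(faces_G1, faces_G2))    # True
```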
3 Face-Addition Algorithm with Exponential Time
In this section, we outline a Face-Addition procedure that generates a slant REL R = {T1 , T2 } of
G through a sequence of steps. The procedure starts from the directed path consisting of two edges
vS → vE → vN . Each step maintains a partial slant REL of G. During a step, a face f of G1 is added
to the current graph, resulting in a larger partial slant REL. When f is added, its right side is already
in the current graph. The edges on the left side of f are placed in T1 and directed upward. The edges of
G in the interior of f are placed in T2 and directed to the left. The process ends when the left boundary
vS → vW → vN is reached. With this informal description in mind, we first introduce a few definitions.
Then we will formally describe the Face-Addition algorithm (which takes exponential time).
Consider a face f of G1 added during the above procedure. Because we want to generate a slant
REL R, at least one side of f must be a path of length 2. This motivates the following definition. Figs
10 (1) and (2) show examples of a fan and a mirror fan, respectively.
Definition 11. Let vl , vm , vh be three vertices of G such that vl and vh are two neighbors of vm and
(vl , vh ) ∉ E. Let Pcw be the path consisting of the neighbors {vl , v1 , . . . , vp , vh } of vm in cw order between
vl and vh . Let Pccw be the path consisting of the neighbors {vl , u1 , . . . , uq , vh } of vm in ccw order between
vl and vh . Note that since G has no separating triangles, both Pcw and Pccw are chord-free.
1. The directed and labeled subgraph of G induced by the vertices vl , vm , vh , v1 , . . . , vp is called the fan
at {vl , vm , vh } and denoted by F(vl , vm , vh ), or simply g.
• The front boundary of g, denoted by α(g), consists of the edges in Pcw directed from vl to vh
in cw order. The edges in α(g) are colored green.
• The back boundary of g, denoted by β(g), consists of two directed edges vl → vm and vm → vh .
The edges in β(g) are colored green.
• The inner edges of g, denoted by γ(g), are the edges between vm and the vertices v ≠ vl , vh
that are on the path Pcw . The inner edges are colored red and directed away from vm .
2. The directed and labeled subgraph of G induced by the vertices vl , vm , vh , u1 , . . . , uq is called the
mirror fan at {vl , vm , vh } and denoted by M(vl , vm , vh ), or simply g.
• The front boundary of g, denoted by α(g), consists of two directed edges vl → vm and vm → vh .
The edges in α(g) are colored green.
• The back boundary of g, denoted by β(g), consists of the edges in Pccw directed from vl to vh
in ccw order. The edges in β(g) are colored green.
• The inner edges of g, denoted by γ(g), are the edges between vm and the vertices v ≠ vl , vh
that are on the path Pccw . The inner edges are colored red and directed into vm .
Both F(vl , vm , vh ) and M(vl , vm , vh ) are called a gadget at vl , vm , vh . We use g(vl , vm , vh ) to denote
either of them. The vertices other than vl and vh are called the internal vertices of the gadget. If a
gadget has only one inner edge, it can be called either a fan or a mirror fan. For consistency, we call it
a fan. We use g0 = F(vS , vE , vN ) to denote the initial fan, and gT = M(vS , vW , vN ) to denote the final
mirror fan. The following observation is clear:
Observation 12. For a slant REL, each face f of G1 is a gadget of G.
The REL shown in Fig 4 (2) is generated by adding the gadgets: F(vS , vE , vN ), F(vS , d, h),
F(f, h, c), M(e, b, vN ), F(vS , f, e), M(vS , vW , vN ). The following lemma is needed later.
Lemma 13. The total number of gadgets in G is at most O(n2 ).
Proof: Let deg(v) denote the degree of the vertex v in G. For each v, there are at most 2 · deg(v) · (deg(v) − 3) gadgets with v as its middle element. Thus the total number of gadgets of G is at most Σv∈V 2 · deg(v) · (deg(v) − 3) = O(n2 ).
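As a quick sanity check of this bound, the per-vertex estimate from the proof can be accumulated directly (the degrees below are hypothetical):

```python
def gadget_upper_bound(degrees):
    # Sum of 2 * deg(v) * (deg(v) - 3) over all vertices, as in the proof of Lemma 13.
    return sum(2 * d * (d - 3) for d in degrees.values())

print(gadget_upper_bound({'a': 5, 'b': 6, 'c': 4, 'd': 7}))  # 20 + 36 + 8 + 56 = 120
```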
Definition 14. A cut C of G is a directed path from vS to vN that is the left boundary of the subgraph
of G generated during the Face-Addition procedure. In particular, C0 = vS → vE → vN denotes the
initial cut and CT = vS → vW → vN denotes the final cut.
Let C be a cut of G. For any two vertices v1 , v2 of C, C(v1 , v2 ) denotes the subpath of C from v1 to
v2 . The two paths C and C0 enclose a region on the plane. Let G|C denote the subgraph of G induced
by the vertices in this region (including its boundary).
Consider a cut C generated by Face-Addition procedure and a gadget g = g(vl , vm , vh ). In order for
Face-Addition procedure to add g to C, the following conditions must be satisfied:
A1: no internal vertices of α(g) are in C; and
A2: the back boundary β(g) is contained in C; and
A3: g is valid for C (the meaning of valid will be defined later).
If g satisfies the conditions A1, A2 and A3, Face-Addition procedure can add g to the current graph
G|C by stitching β(g) with the corresponding vertices on C. (Intuitively we are adding a face of G1 .)
Let G|C ⊗ g denote the new subgraph obtained by adding g to G|C . The new cut of G|C ⊗ g, denoted
by C ⊗ g, is the concatenation of three subpaths C(vS , vl ), α(g), C(vh , vN ).
The conditions A1 and A2 ensure that C ⊗ g is a cut. Any gadget g satisfying A1 and A2 can be
added during a step while still maintaining the slant property for G1 . However, adding such a g may
destroy the slant property for G2 faces. The condition A3 that g is valid for C is to ensure the slant
property for G2 faces. (The REL shown in Fig 4 (2) is not slant. This is because the gadget F(f, h, c)
is not valid, as we will explain later.) This condition will be discussed in §4.
After each iteration of Face-Addition procedure, the edges of the current cut C are always in T1 and
directed from vS to vN . All G1 faces f1 in G|C are complete (i.e. both sides of f1 are in G|C ). Some
G2 faces in G|C are complete. Some other G2 faces f2 in G|C are open. (i.e. the two sides of f2 are not
completely in G|C .)
Definition 15. Any subgraph G|C generated during the execution of Face-Addition procedure is called a
partial slant REL of G, which satisfies the following conditions:
1. Every complete G1 and G2 face in G|C satisfies the slant REL property.
2. For every open G2 face f in G|C , at least one side of f has exactly one edge.
The intuitive meaning of a partial slant REL G|C is that it is potentially possible to grow a complete
slant REL of G from G|C . The left boundary of a partial slant REL R is called the cut associated with
R and denoted by C(R).
Definition 16.
1. PSR(G) denotes the set of all partial slant RELs of G that can be generated by
Face-Addition procedure.
2. G̃ = {g | g is a gadget in a R ∈ PSR(G)}.
Observe that every slant REL R of G is in PSR(G). This is because R is generated by adding a
sequence of gadgets g1 , . . . gT = M(vS , vW , vN ) to the initial gadget g0 = F(vS , vE , vN ). So if we choose
this particular gi during the ith step, we will get R at the end. Thus G has a slant REL if and only if
gT ∈ G̃. Note that Face-Addition procedure works only if we know the correct gadget addition sequence.
Of course, we do not know such a sequence. The Face-Addition algorithm, described in Algorithm 1,
generates all members in PSR(G).
Algorithm 1: Face-Addition algorithm with Exponential Time
1.1 Initialize G̃ = {g0 }, and PSR(G) = {G|α(g0 ) };
1.2 repeat
1.3     Find a gadget g of G and an R ∈ PSR(G) such that the conditions A1, A2 and A3 are satisfied for g and C = C(R);
1.4     Add g into G̃, and add the partial slant REL R ⊗ g into PSR(G);
1.5 until no such g and R can be found;
1.6 G has a slant REL if and only if the final gadget gT ∈ G̃;
Because |PSR(G)| can be exponentially large, Algorithm 1 takes exponential time.
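For concreteness, the exponential procedure can be phrased as the following Python-style sketch. The predicates and the gadget/REL representations are placeholders supplied by the caller; this is not an implementation of the paper's data structures.

```python
def face_addition_exponential(all_gadgets, initial_R, g0, gT,
                              satisfies_A1_A2_A3, extend):
    """Sketch of Algorithm 1.  The caller supplies:
      satisfies_A1_A2_A3(g, R) -> bool : conditions A1, A2, A3 for gadget g and partial REL R,
      extend(R, g)             -> R'   : the partial slant REL obtained by adding g to R,
    together with hashable objects representing gadgets and partial slant RELs."""
    gadgets_seen = {g0}          # the set G~ of gadgets added so far
    psr = {initial_R}            # PSR(G)
    changed = True
    while changed:               # repeat ... until no gadget can be added anywhere
        changed = False
        for R in list(psr):
            for g in all_gadgets:
                if satisfies_A1_A2_A3(g, R):
                    new_R = extend(R, g)
                    gadgets_seen.add(g)
                    if new_R not in psr:
                        psr.add(new_R)
                        changed = True
    return gT in gadgets_seen    # G has a slant REL iff the final mirror fan was reached
```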
4 Forbidden Pairs, G-Pairs, M-Triples, Chains and Backbones
In this section, we describe the conditions for adding a gadget to a partial slant REL R ∈ PSR(G),
while still keeping the slant REL property for G2 faces. (In other words, the condition A3.)
4.1 Forbidden Pairs
Consider a R ∈ PSR(G) and its associated cut C = C(R). Let e be an edge of C. We use open-face(e)
to denote the open G2 face in G|C with e as its open left boundary. The type of open-face(e) specifies
the lengths of the lower side Pl and the upper side Pu of open-face(e):
• Type (1,1): length(Pl ) = 1 and length(Pu ) = 1.
• Type (1,2): length(Pl ) = 1 and length(Pu ) ≥ 2.
• Type (2,1): length(Pl ) ≥ 2 and length(Pu ) = 1.
• Type (2,2): length(Pl ) ≥ 2 and length(Pu ) ≥ 2.
Note that the type of every open G2 face in a partial slant REL cannot be (2, 2). Based on the
properties of REL, we have the following (see Fig 18):
Observation 17. Let R ∈ PSR(G) and e be an edge on C(R).
• If e is the last edge of α(g) of a fan or a mirror fan g, the type of open-face(e) is (2,1).
• If e is a middle edge of α(g) of a fan g, the type of open-face(e) is (1,1).
• If e is the first edge of α(g) of a fan or a mirror fan g, the type of open-face(e) is (1,2).
Figure 18: The types of open G2 faces: (1) Faces defined by edges on the front boundary of a fan; (2)
Faces defined by edges on the front boundary of a mirror fan.
Definition 19. A pair (g, g ′ ) of two gadgets of G is called a forbidden pair if either (1) the first edge of
β(g) is the last edge of α(g ′ ); or (2) the last edge of β(g) is the first edge of α(g ′ ).
Lemma 20. If a partial REL R contains a forbidden pair (g, g′ ), then R is not slant.
Proof: Case 1: Suppose the first edge e1 of β(g) is the last edge of α(g ′ ) (see Fig 21 (1)). Let e2 be
the first edge of α(g). The type of open-face(e1 ) is (2, 1) (regardless of whether g′ is a fan or a mirror
fan). Note that open-face(e2 ) extends open-face(e1 ). The length of the upper side of open-face(e1 ) is
increased by 1. Thus the type of open-face(e2 ) is (2, 2) and the slant property for G2 face fails.
Case 2: Suppose the last edge e1 of β(g) is the first edge of α(g ′ ) (see Fig 21 (2)). Let e2 be the last
edge of α(g). The type of open-face(e1 ) is (1, 2) (regardless of whether g′ is a fan or a mirror fan). Note
that open-face(e2 ) extends open-face(e1 ). The length of the lower side of open-face(e1 ) is increased by
1. Thus the type of open-face(e2 ) is (2, 2) and the slant property for G2 face fails.
In the REL R shown in Fig 4 (2), (F(f, h, c), F(vS , d, h)) is a forbidden pair. So R is not a slant
REL.
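Definition 19 amounts to comparing two boundary edges, as the small sketch below shows (gadgets are represented here only by the directed edge lists of their boundaries, an assumption of this illustration).

```python
def is_forbidden_pair(beta_g, alpha_g_prime):
    """Definition 19: (g, g') is a forbidden pair if the first edge of beta(g) is the last
    edge of alpha(g'), or the last edge of beta(g) is the first edge of alpha(g')."""
    return beta_g[0] == alpha_g_prime[-1] or beta_g[-1] == alpha_g_prime[0]

# Hypothetical boundaries: the first edge of beta(g) coincides with the last edge of alpha(g').
beta_g   = [('u', 'v'), ('v', 'w')]
alpha_gp = [('x', 'y'), ('y', 'u'), ('u', 'v')]
print(is_forbidden_pair(beta_g, alpha_gp))   # True
```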
Figure 21: The proof of Lemma 20: (1) g is a fan; (2) g is a mirror fan.
4.2 The Condition A3
The following lemma specifies a necessary and sufficient condition for adding a fan into G̃, and a sufficient
condition for adding a mirror fan into G̃.
Lemma 22. Let R ∈ PSR(G) and C = C(R) be its associated cut. Let g L be a gadget and L = β(g L ).
Suppose that the conditions A1 and A2 are satisfied for gL and C.
1. A fan gL can be added to R (i.e. g L satisfies the condition A3) if and only if there exists a gadget
g R ∈ R such that β(g L ) ⊆ α(g R ).
2. A mirror fan gL can be added to R (i.e. g L satisfies the condition A3) if there exists a gadget
g R ∈ R such that β(g L ) ⊆ α(g R ).
Figure 23: (1) and (2) open faces defined by edges on the front boundary of a fan g L ; (3) and (4) open
faces defined by edges on the front boundary of a mirror fan g O .
Proof: If part of (1): Suppose there exists a gadget gR ∈ R such that β(g L ) ⊆ α(gR ). (Figs 23 (1) and
(2) show two examples. In Fig 23 (1), gR is a fan. In Fig 23 (2), gR is a mirror fan). Let e1 , . . . , ek be
the edges in α(g L ). Let e′ , e′′ be the two edges in β(g L ). Let C ′ = C ⊗ g L be the new cut after adding
gL . For each 2 ≤ i ≤ k − 1, the type of open-face(ei ) is (1, 1).
• open-face(e1 ) extends open-face(e′ ), and add 1 to the length of the upper side of open-face(e′ ).
• open-face(ek ) extends open-face(e′′ ), and add 1 to the length of the lower side of open-face(e′′ ).
Regardless of where e′ , e′′ are located on α(gR ), and regardless of whether g R is a fan (see Fig 23
(1)) or a mirror fan (see Fig 23 (2)), the type of open-face(e1 ) is (1, 2); and the type of open-face(ek ) is
(2, 1). Thus R ⊗ g L ∈ PSR(G).
Only if part of (1): Suppose that there exists no gadget gR ∈ R such that β(gL ) ⊆ α(gR ). Let e′ , e′′
be the two edges of β(g L ). e′ must be on the front boundary of some gadget g′ in R. e′′ must be on
the front boundary of some gadget g′′ in R. Clearly g ′ 6= g′′ . (If g′ = g′′ , we would have β(g L ) ⊆ α(g′ )).
Then either (g L , g′ ) or (g L , g ′′ ) must be a forbidden pair. By Lemma 20, gL cannot be added to R.
(2) Let gL be a mirror fan. Suppose there exists a gadget gR ∈ R such that β(gO ) ⊆ α(gR ) (see Fig
23 (3)). Similar to the proof of the if part of (1), we can show R ⊗ g L ∈ PSR(G).
By Lemma 22, the only way to add a fan gL to R is by the existence of a gadget gR ∈ R such that β(gL ) ⊆ α(gR ). For a mirror fan g, there is another condition for adding g to R which we discuss next.
Let v1 = vS , v2 , . . . , vt−1 , vN be the vertices of C = C(R) from lower to higher order. Let e1 and et
be the first and the last edge of C. Imagine we walk along C from vS to vN . On the right side of C, we
pass through a sequence of gadgets in R whose front boundary (either a vertex or an edge) touches C.
Let support(R) = (g1 , g2 , . . . , gk−1 , gk ), where e1 ∈ α(g1 ) and et ∈ α(gk ), denote this gadget sequence.
Note that some gadgets in support(R) may appear multiple times in the sequence. (See Fig 26 (1) for
an example.)
Consider a mirror fan g O to be added to R. Note that L = β(g O ) is a subsequence of C. Let a and
b be the lowest and the highest vertex of L. Let el be the first edge and eh be the last edge of L. When
walking along L from a to b, we pass through a subsequence of the gadgets in support(R) on the right
of L. Let support(L, R) = (g B = gp , gp+1 , . . . , gq−1 , gq = gU ) denote this gadget subsequence, where:
• g B is the gadget such that el ∈ α(gB ).
• g U is the gadget such that eh ∈ α(gU ).
In Fig 26 (1), if we add a mirror fan g3 with L = β(g3 ) = (a, b, f, h, k, d), then support(L, R) =
(g1 , g0 , g2 ).
Lemma 24. Let R ∈ PSR(G) and C = C(R) be its associated cut. Let g O be a mirror fan and
L = β(g O ). Suppose that the conditions A1 and A2 are satisfied for g O and C. Let support(L, R) =
(gB , gp+1 , · · · , gq−1 , gU ). Then gO can be added to R (i.e. gO satisfies the condition A3) if and only if
neither (gO , gB ) nor (g O , gU ) is a forbidden pair.
Proof: First suppose that gO can be added to R to form a larger partial slant REL. Then, by Lemma
20, neither (gO , gB ) nor (gO , g U ) is a forbidden pair. Conversely, suppose that neither (g O , gB ) nor
(gO , g U ) is a forbidden pair. Let e1 , . . . , ek be the edges of L. The type of open-face(e1 ) is either (1, 1)
or (1, 2). The type of open-face(ek ) is either (1, 1) or (2, 1). (Fig 23 (4) shows an example.) Let e′ , e′′
be the two edges in α(gO ). After adding gO to R, the types of open-face(e′ ) and open-face(e′′ ) becomes
(1, 2) and (2, 1), respectively. They still keep the slant property for G2 faces. Moreover, for each edge
ei (2 ≤ i ≤ k − 1), open-face(ei ) becomes a valid complete G2 face after adding g O to R. Hence R ⊗ gO
is a partial slant REL of G.
4.3 Connections, Chains and Backbones
Given an R ∈ PSR(G) and a gadget g, it is straightforward to check if the conditions in Lemmas 22 and
24 are satisfied. However, as described before, maintaining the set PSR(G) requires exponential time.
So we must find a way to check the conditions in Lemmas 22 and 24 without explicit representation of
R.
Figure 26: (1) R ∈ PSR(G) is obtained by adding gadgets g1 , g2 , g3 , g4 , g5 , g6 , g7 , g8 , in this order, to g0 . C(R) = (vS , a, c, d, e, j, i, vN ); support(R) = (g0 , g1 , g3 , g2 , g8 , g7 , g5 , g4 , g0 ) and (g0 ≺C g1 ≺C g3 ≺C g2 ≺C g8 ≺C g7 ≺C g5 ≺C g4 ≺C g0 ). The pair (g0 , g1 ) belongs to the G-pair Λ1 = (g1 , g0 ), the triple (g1 , g3 , g2 ) belongs to the M-triple Λ2 = (g1 , g3 , g2 ), the triple (g2 , g8 , g7 ) belongs to the M-triple Λ3 = (g2 , g8 , g7 ), the pair (g7 , g5 ) belongs to the M-triple Λ4 = (g6 , g7 , g5 ), the pair (g5 , g4 ) belongs to the G-pair Λ5 = (g5 , g4 ) and the pair (g4 , g0 ) belongs to the G-pair Λ6 = (g4 , g0 ). (2) The G-pair Λ = (g1 , g0 ) is a fractional connection with two pockets: O1 is bounded by C(c, f ) and α(Λ)(c, f ) and O2 is bounded by C(h, i) and α(Λ)(h, i).
Consider two R, R′ ∈ PSR(G) such that R ≠ R′ but support(R) = support(R′ ). Clearly this implies
C(R) = C(R′ ). By Lemmas 22 and 24, a gadget g can be added to R if and only if g can be added to R′ .
Thus, whether g can be added to an R ∈ PSR(G) is completely determined by the structure of gadgets
in support(R). There may be exponentially many R′ ∈ PSR(G) with support(R′ ) = support(R).
Instead of keeping information of all these R′ , we only need to keep the information of the structure of
support(R). This is the main idea for converting Algorithm 1 to a polynomial time algorithm. In order
to describe the structure of support(R), we need the following terms and notations.
Definition 25. Let R ∈ PSR(G) and g be a gadget with L = β(g).
• If support(L, R) contains only one gadget g R , and the conditions A1, A2 and A3 are satisfied, then
(g, g R ) is called a G-pair. We use (gL , gR ) to denote a G-pair.
• If support(L, R) contains at least two gadgets, and the conditions A1, A2 and A3 are satisfied,
then (g B , g, gU ) is called a M-triple. We use (gB , gO , gU ) to denote a M-triple.
• A G-pair (gL , gR ) or a M-triple (gB , gO , gU ) is called a connection and denoted by Λ.
• For a connection Λ = (gL , gR ), where gL = g(vlL , vmL , vhL ) and gR = g(vlR , vmR , vhR ), the front boundary of Λ, denoted by α(gL , gR ) or α(Λ), is the concatenation of the paths α(gR )(vlR , vlL ), α(gL ), α(gR )(vhL , vhR ).
• For a connection Λ = (gB , gO , gU ), where gB = g(vlB , vmB , vhB ), gO = g(vlO , vmO , vhO ) and gU = g(vlU , vmU , vhU ), the front boundary of Λ, denoted by α(gB , gO , gU ) or α(Λ), is the concatenation of the paths α(gB )(vlB , vlO ), α(gO ), α(gU )(vhO , vhU ).
been constructed. Unfortunately, this is not true. In order to form R, the gadgets in support(R) =
(g1 , g2 , . . . , gk−1 , gk ) must have been added to G̃ in the following way: When walking along C(R) from vS
to vN , the gadgets in support(R) form a sequence (Λ1 , . . . , Λp ) of connections such that each consecutive
pair (gi , gi+1 ) or triple (gi−1 , gi , gi+1 ) of gadgets belong to a Λj (1 ≤ j ≤ p); and each consecutive pair
Λi , Λi+1 share a common gadget in support(R). (See Fig 26 (1) for an illustration).
Note that when the pair (gi−1 , gi ) and the pair (gi , gi+1 ) belong to the same connection Λj , it means
gi−1 and gi+1 are the same gadget and (gi , gi+1 ) = (gi , gi−1 ) is a G-pair. In this case, we keep only one Λj
in the sequence Λ1 . . . , Λp . As seen in Fig 26 (1), in addition to these connections Λj (1 ≤ j ≤ p), some
gadget pairs (or triples) that are not consecutive in support(R) may also form additional connections.
(In Fig 26 (1), the gadgets g0 and g2 are not consecutive in support(R). But they form a G-pair (g2 , g0 )).
Let Con(R) denote the set of connections formed by the gadgets in support(R). (By this definition,
each Λ ∈ Con(R) has at least two gadgets in support(R)). It is the structure of Con(R) that determines
if a new gadget g can be added to R or not. In general, the connections in Con(R) cannot be described
as a simple linear structure. To describe it precisely, we need the following definitions.
Consider a connection Λ ∈ Con(R). If α(Λ)∩C is a contiguous subpath of C, Λ is called a contiguous
connection. If not, Λ is called a fractional connection. (In Fig 26 (1), the G-pair (g1 , g0 ) is a fractional
connection. Because the cut C ∩ α(g1 , g0 ) are (a, b, c) and (f, h) and (i, k), they are not a contiguous
subpath of C.) Consider a fractional connection Λ. Let u and v be the lowest and the highest vertices
of C ∩ α(Λ) respectively. When walking along C from u to v, we encounter α(Λ) multiple times.
The subpath C(u, v) can be divided into a number of subpaths that are alternatively on α(Λ), not on
α(Λ), . . ., on α(Λ). There exist at least two vertices a, b in α(Λ) such that C(a, b) ∩ α(Λ)(a, b) = {a, b}.
For each such pair of vertices a, b, the interior region bounded by the subpaths C(a, b) and α(Λ)(a, b)
is called a pocket, denoted by O = (C(a, b), Λ), of Con(R). Fig 26 (2) shows a fractional connection Λ
(the G-pair (g1 , g0 )) with two pockets O1 and O2 .
A connection Λ ∈ Con(R) is called maximal if it is not contained in any pocket of Con(R). A
maximal connection can be either contiguous or fractional. A non-maximal connection Λ′ ∈ Con(R)
is either completely contained in some pocket O formed by a subpath of C and a maximal fractional
connection Λ (namely all gadgets of Λ′ are contained in O); or partially contained in O (namely some
gadget of Λ′ is contained in O and some gadget of Λ′ is shared with Λ). (In Fig 26 (2), the G-pair (g3 , g1 )
and the M-triple (g1 , g2 , g0 ) are partially contained in the pocket O1 . The M-triple (g4 , g5 , g2 ) and the
G-pair (g4 , g3 ) are completely contained in O1 ). Note that a pocket may contain other smaller pockets.
In general, the pockets of Con(R) are nested in a forest-like structure.
The way to deal with fractional connections is very similar to contiguous connections. Hence in the
following paragraphs, we will assume there are no fractional connections.
Definition 27. Let two gadgets {g, g ′ } belong to a connection Λ. We say g precedes g ′ on C and write
g ≺C g′ if the following conditions hold: (1) ((α(g) ∩ C) ∪ (α(g′ ) ∩ C)) is contiguous on C; (2) When
walking along C, we encounter the gadget g before g′ .
Depending on the types of connections and their positions on a cut C, there are five cases for the
relation ≺C . (They are shown in Fig 28.)
Definition 29. Given a partial slant REL R with its associated cut C, a sequence of gadgets (g1 , g2 , · · · , gk )
in support(R) is called a chain of C and denoted by chain(C) if the following conditions hold:
1. (g1 ≺C g2 ≺C · · · ≺C gk ) and
2. for each 1 ≤ i ≤ k − 1, either (gi , gi+1 ) or (gi , gi+1 , gi+2 ) belongs to a connection Λ ∈ Con(R).
In Fig 26 (1), (g0 ≺C g1 ≺C g3 ≺C g2 ≺C g8 ≺C g7 ≺C g5 ≺C g4 ≺C g0 ) is a chain of C where we
have the G-pair (g1 , g0 ), the M-triple (g1 , g3 , g2 ), the M-triple (g2 , g8 , g7 ), the M-triple (g6 , g7 , g5 ), the
G-pair (g5 , g4 ) and the G-pair (g4 , g0 ).
Because the way a partial slant REL R is constructed, the following property is clear.
Figure 28: (1) Case 1: (gL , gR ) is a G-pair and gL ≺C gR ; (2) Case 2: (gL , gR ) is a G-pair and gR ≺C gL ; (3) Case 3: (gB , gO , gU ) is a M-triple and gB ≺C gO ≺C gU ; (4) Case 4: (gB , gO , gU ) is a M-triple and gO ≺C gU ; (5) Case 5: (gB , gO , gU ) is a M-triple and gB ≺C gO .
Property 30. Given a partial slant REL R with its associated cut C, the support(R) = (g1 , g2 , · · · , gk )
is a chain of C.
Given a partial slant REL R with its associated cut C, if we can add a gadget g to R, then it implies
that back boundary L = β(g) of g is a part of C. Let support(L, R) be a subsequence of support(R)
consisting of gadgets in support(R) that touch L. We can define an order ≺L which is similar to ≺C .
Definition 31. Given a mirror fan gO with L = β(g O ), let two gadgets {g, g ′ } belong to a connection Λ.
We say g precedes g′ on L and write g ≺L g′ if the following conditions hold: (1) (α(g) ∩ L) ∪ (α(g ′ ) ∩ L)
is contiguous on L; (2) When walking along L, we encounter the gadgets g before g′ .
Definition 32. Let g O be a mirror fan with L = β(g O ). A backbone(L, R) consists of a sequence of
gadgets (g1 = gB , g2 , · · · , gk = gU ) in support(L, R) such that
1. (g1 ≺L g2 ≺L · · · ≺L gk ),
2. for each 1 ≤ i ≤ k − 1, either (gi , gi+1 ) or (gi , gi+1 , gi+2 ) belongs to a connection Λ ∈ Con(R) and
3. Neither (gO , gB ) nor (g O , g U ) is a forbidden pair.
In Fig 26 (1), consider the mirror fan g3 with L = β(g3 ). We have: support(L) = (g1 , g0 , g2 ) where
Λ1 is the G-pair (g1 , g0 ), Λ2 is the G-pair (g2 , g0 ) and g1 ≺L g0 ≺L g2 .
Based on the above discussion, we can restate Lemma 24 as follows:
Lemma 33. Let R ∈ PSR(G) and C = C(R). Let g O be a mirror fan with L = β(gO ). Suppose
that the conditions A1 and A2 are satisfied for gO and C. Let support(L, R) = (g B , gp+1 , · · · , gq−1 , gU ).
Then (gB , gO , gU ) forms a M-triple (i.e. gO satisfies the condition A3) if and only if there exists
a backbone(L, R) consisting of gadgets in support(L, R) and connections in Con(R). Note that each
gadget and connection in backbone(L, R) belong to the same partial slant REL R.
5 Face-Addition Algorithm with Polynomial Time
We will present our polynomial time Face-Addition algorithm in this section. In §5.1, we will describe
the algorithm to find a superset of chains. In §5.2, we will give more details of key procedures in §5.1. In
§5.3, we will present an example that Algorithm 2 may combine two subchains of two different partial
RELs into a chain which only satisfies the order ≺C in Property 30 (there exist gadgets coming from different chains). In §5.4, we will describe a backtracking algorithm to check whether a chain
in the superset of chains constructed by Algorithm 2 corresponds to a slant REL or not. Also, we will
give runtime analysis of the backtracking algorithm.
5.1 Polynomial Time Algorithm
The polynomial time Face-Addition algorithm is described in Algorithm 2.
Algorithm 2: Face-Addition Algorithm with Polynomial Time
Input: A proper triangular plane graph G
2.1 Set Ṽ = {g0 = F(vS , vE , vN )} and V̂ = ∅;
2.2 repeat
2.3 Find a gadget g such that:
    either: there exist v-G-pairs (g, gR ) ∉ V̂ with gR ∈ Ṽ (v-G-pairs are defined later);
        add g into Ṽ (if it is not already in Ṽ); add all such v-G-pairs (g, gR ) into V̂;
    or: g is a mirror fan and there exist v-M-triples (gB , g, gU ) ∉ V̂ with gB , gU ∈ Ṽ (v-M-triples are defined later);
        add g into Ṽ (if it is not already in Ṽ); add all such v-M-triples (gB , g, gU ) into V̂;
2.4 until no such gadget g can be found;
2.5 if gT = M(vS , vW , vN ) is not in Ṽ then
2.6     G has no slant REL;
2.7 else
2.8     Backtrack each v-chain of gT (v-backbone of gT ) in Algorithm 4 to check whether it corresponds to a slant REL of G;
2.9 end
Algorithm 2 emulates the operations of Algorithm 1 without explicitly maintaining the set PSR(G).
Instead, it keeps two sets: (1) a set Ṽ of gadgets of G which contains the gadgets in the set G̃ defined in §3,
and (2) a set V̂ of connections of G which contains the connections in the set {Con(R)|R ∈ PSR(G)}
defined in §4.3. In §4.3, many concepts (cut, chain, backbone, etc.) were defined with reference to an R ∈ PSR(G). We now need counterparts of these concepts that do not refer to a specific R. For a concept x defined previously, we will use virtual x, or simply v-x, for the counterpart of x (for example, v-cut for virtual cut, v-chain for virtual chain, v-backbone for virtual backbone). A v-G-pair (v-M-triple,
respectively) is similar to a G-pair (M-triple, respectively) but without referring to a specific R ∈
PSR(G). Whenever Algorithm 1 adds a gadget g to G̃ through a G-pair (or a M-triple, respectively),
Algorithm 2 adds g into Ṽ and add a corresponding v-G-pair (or v-M-triple, respectively) into V̂.
Initially, V̂ is empty and Ṽ contains only the initial fan g0 = F(vS , vE , vN ). In each step, the algorithm
finds either new v-G-pairs (g, g R ) with gR ∈ Ṽ; or new v-M-triples (gB , g, g U ) with gB , g U ∈ Ṽ. In either
case, it adds g into Ṽ. But instead of using a R ∈ PSR(G), Algorithm 2 relies on the information stored
in Ṽ and V̂ to find v-G-pairs and v-M-triples.
Fix a step in Algorithm 2 and consider the sets Ṽ and V̂ after this step. Any simple path C in G from
vS to vN is called a v-cut of G. A gadget pair (g, gR ) is called a v-G-pair if gR ∈ Ṽ and β(g) ⊆ α(gR ).
For a v-cut C, define:
Ṽ(C) = {g ∈ Ṽ | α(g) intersects C}
V̂(C) = {Λ ∈ V̂ | the frontiers of at least two gadgets of Λ intersect C}
Let e1 and et be the first and the last edge of C. A subset of gadgets D ⊆ Ṽ(C) is called a v-support
of C if the following conditions hold:
• The gadgets in D can be arranged into a sequence (g1 , g2 , . . . , gk ) such that e1 ∈ α(g1 ), et ∈ α(gk )
and, when walking along C from vS toward vN , we encounter these gadgets in this order.
• Any two (or three) consecutive gadgets (gi , gi+1 ) (or (gi−1 , gi , gi+1 )) belong to a connection in V̂.
If a set S of connections formed by the gadgets in a v-support of C satisfies the structure property
described in Definition 29, S is called a v-chain of C. Clearly, any chain is also a v-chain.
Let g be a mirror fan with L = β(g). Let a and b be the lowest and the highest vertex of L, and el
and eh the first and the last edge of L, respectively. Define:
Ṽ(L) = {g ∈ Ṽ | α(g) intersects L}
V̂(L) = {Λ ∈ V̂ | the frontiers of at least two gadgets of Λ intersect L}
A subset of gadgets D ⊆ Ṽ(L) is called a v-support of L if the following conditions hold:
• The gadgets in D can be arranged into a sequence (g B = gp , g2 , . . . , gq = g U ) such that el ∈ α(gB ),
eh ∈ α(gU ) and, when walking along L from a toward b, we encounter these gadgets in this order.
• Any two (or three) consecutive gadgets (gi , gi+1 ) (or (gi−1 , gi , gi+1 )) belong to a connection in V̂.
If a set S of connections formed by the gadgets in a v-support of L satisfies only the order property ≺L and the third property described in Definition 32, S is called a v-backbone of L. If there is a v-backbone(L), we call (gB , g, gU ) a v-M-triple. Both v-G-pairs and v-M-triples are called v-connections.
First, we bound the number of loop iterations in Algorithm 2. By Lemma 13, the number of gadgets
in G is at most N = O(n2 ). So the number of v-G-pairs is at most O(N 2 ) and the number of v-M-triples
is at most O(N 3 ). Hence V̂ contains at most O(n6 ) elements. Since each iteration adds at least either
a v-G-pair or a v-M-triple into V̂, the number of iterations is bounded by O(n6 ).
We need to describe how to perform the operations in the loop body, which is clearly dominated by
finding v-G-pairs and finding v-M-triples. Given two gadgets g, gR and the sets Ṽ and V̂, it is easy
to check if (g, g R ) is a v-G-pair (i.e. gR ∈ Ṽ and β(g) ⊆ α(gR )) in polynomial time. However, finding
v-M-triples (gB , g, g U ) is much more difficult. In §5.2, we show this can be done, in polynomial time, by
finding a v-backbone(β(g)) consisting of connections in V̂. This will establish the polynomial run time
of the repeat loop of Algorithm 2.
Lemma 34. Let S be the set of all v-backbones of gT . (Because L = β(gT ) is a v-cut, each v-backbone
of L is actually a v-chain of G.) For each R ∈ PSR(G) with its associated cut C = C(R), there exists
a v-chain S ∈ S (which is a v-backbone of L) generated by Algorithm 1 such that S = chain(C).
Proof: For each mirror fan g with L = β(g), if g is in some partial slant REL, then its backbone follows
the order ≺L . So we have G̃ ⊆ Ṽ and {Con(R)|R ∈ PSR(G)} ⊆ V̂. Since we use the two supersets Ṽ
and V̂ of G̃ and Ĝ to find v-backbones of gT (chains of G), we immediately have:
{chain(C)|R is a slant REL of G with its associated cut C = C(R)} ⊆ S.
In this subsection, we have described Algorithm 2 which constructs (1) the set S of v-chains such
that chain(C) of each partial REL R ∈ PSR(G) with its associated cut C = C(R) is included in S,
(2) the set V̂ of v-connections containing each connection Λ ∈ {Con(R)|R ∈ PSR(G)}, (3) the set Ṽ of
gadgets containing each gadget g ∈ G̃.
Lemma 34 states that any chain is a v-chain. But the reverse is not necessarily true. In §5.3, we
provide an example that a v-chain is not equal to the chain of the associated cut of any partial slant
REL. Thus we need to check whether a v-chain constructed by Algorithm 2 is really a chain of a slant
REL R of G. In §5.4, we describe a backtracking algorithm to detect all such v-chains.
5.2
Algorithm for Finding v-Backbones and v-M-triples
Consider a gadget triple (g B , g O , gU ) with the back boundary L = β(gO ). Let a and b be the lowest and the
highest vertex of L, e1 and e2 the first and the last edge of L. In this subsection, we show how to check
whether (gB , g O , gU ) is a v-M-triple or not in polynomial time. By the definition of v-M-triples, this is
equivalent to finding v-backbone(L)s by using the v-connections in V̂.
Let Ṽ(L) be the set of gadgets in Ṽ that can be in any v-backbone(L). From the conditions described
in Definition 32, Ṽ(L) contains the gadgets g ∈ Ṽ that satisfy the following conditions:
• The front boundary of g intersects L and g belongs to some v-connection Λ ∈ V̂.
• the front boundary of gB contains e1 ; and (g O , gB ) is not a forbidden pair.
• the front boundary of gU contains e2 ; and (g O , gU ) is not a forbidden pair.
To determine which gadgets in Ṽ(L) can form a v-backbone of L, we construct a directed acyclic
graph HL = (VL , EL ) as follows:
Definition 35. Given a triple (g B , gO , gU ) with L = β(g O ), the backbone graph HL of gO is defined as
follows:
• VL = {(l, g)|l = L ∩ α(g) and g ∈ Ṽ(L)} and
• EL = {(l1 , g1 ) → (l2 , g2 )|
{g1 , g2 } belongs to a v-connection Λ ∈ V̂(L) and l1 ∪ l2 is contiguous on L }
A source (sink, respectively) vertex has no incoming (outgoing, respectively) edges in HL . The
intuitive meaning of a directed path P ∈ HL=β(gO ) from the source to the sink is that P corresponds
to a v-backbone of L = β(g O ) and for each vertex (l, g) ∈ P , g corresponds to a gadget in a v-backbone
and l is equal to the intersection L ∩ α(g) of L and the front boundary of g. Moreover, for different
v-M-triples of gO , Λ1 = (gB1 , gO , gU1 ), Λ2 = (gB2 , gO , gU2 ) ∈ V̂(L), we have vertices vO = (L ∩ α(gO ), gO )
and v′O = (L ∩ α(gO ), gO ) in HL to represent gO such that Λ1 and Λ2 represent different subpaths
in HL : one is (L ∩ α(gB1 ), gB1 ) → vO = (L ∩ α(gO ), gO ) → (L ∩ α(gU1 ), gU1 ) and the other one is
(L ∩ α(gB2 ), gB2 ) → v′O = (L ∩ α(gO ), gO ) → (L ∩ α(gU2 ), gU2 ), where the vertex vO represents the mirror
fan in Λ1 and v′O represents the mirror fan in Λ2 .
Lemma 36. HL is acyclic and can be constructed in O(|V̂|2 ) time.
Proof: Consider a g ∈ Ṽ. Knowing L, we can easily determine if g is in Ṽ(L) in constant time. So we
can identify the set VL in O(|V̂|) time. For two vertices (l, g) and (l′ , g′ ) in VL , the edge (l, g) → (l′ , g′ )
exists if and only if the following two conditions are satisfied: (1) g and g ′ belong to some v-connection
in V̂(L); (2) l ∪ l′ is contiguous on L; and (3) when walking along L upwards, we encounter the gadget g
before g′ . These conditions can be easily checked in constant time. So the set EL can be determined
in O(|VL |2 ) = O(|V̂|2 ) time. Thus HL can be constructed in O(|V̂|2 ) time.
The edge directions of HL are defined by the relation ≺L . Since ≺L is acyclic, HL is acyclic.
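As a concrete reading of this construction, the sketch below builds HL as a vertex list and an edge list. The helpers front_on_L (the positions of L covered by α(g)) and in_some_v_connection are assumed interfaces of the sketch, not the paper's code.

```python
def build_backbone_graph(candidates, front_on_L, in_some_v_connection):
    """Sketch of Definition 35: the backbone DAG H_L over pairs (l, g)."""
    # a vertex is (l, g), where l is the (sorted) tuple of positions of L met by alpha(g)
    vertices = [(tuple(front_on_L(g)), g) for g in candidates if front_on_L(g)]
    edges = []
    for (l1, g1) in vertices:
        for (l2, g2) in vertices:
            if (l1, g1) == (l2, g2):
                continue
            # l1 ∪ l2 contiguous on L, and g1 is met before g2 when walking along L upwards
            if l2[0] == l1[-1] + 1 and in_some_v_connection(g1, g2):
                edges.append(((l1, g1), (l2, g2)))
    return vertices, edges  # acyclic, since every edge moves forward along L
```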
Lemma 37. Given a triple (gB , gO , g U ) with L = β(gO ), let HL be the graph defined in Definition 35.
1. Each directed path from the source to the sink in HL corresponds to the v-M-triple (gB , gO , gU ).
2. The v-M-triple (gB , gO , gU ) corresponds to a set of directed paths from g B to gU in HL .
Proof: Statement 1. Consider any directed path (lB , gB ) → · · · → (lU , gU ) from the source (lB , gB ) to
the sink (lU , g U ) in HL . Since each directed edge (l, g) → (l′ , g′ ) in HL follows the order ≺L on L, each
directed path from (lB , gB ) to (lU , gU ) is a v-backbone of gO and (g B , gO , g U ) is a v-M-triple.
Statement 2: Consider a v-M-triple (gB , g O , gU ). This means that there exists a v-backbone gB ≺L g2 ≺L · · · ≺L gk−1 ≺L gU on L. Because each g ≺L g′ on L is a directed edge (l, g) → (l′ , g′ ) in HL , we
have that (lB , g B ) → (l2 , g2 ) → · · · → (lk−1 , gk−1 ) → (lU , gU ) is a directed path from (lB , gB ) to (lU , gU )
in HL .
Note that there may exist multiple paths in HL from (lB , gB ) to (lU , g U ). All these paths correspond
to the same v-M-triple (g B , g O , gU ). The intuitive meaning of this fact is as follows. When we add g O
via the v-M-triple (gB , g O , gU ), even though the gadgets g B and g U are fixed, the v-connections and
gadgets in the v-backbone(L)s between g B and gU may be different. But as long as they form a valid
v-backbone(L), we can add g O .
The following Algorithm 3 finds v-M-triples (gB , gO , gU ) by finding v-backbone(L)s.
Algorithm 3: Find v-M-triples
3.1
3.2
3.3
Input: A triple (gB , gO , g U ) with L = β(gO ) and the set V̂ of v-connections
From the connections in V̂, identify the set V̂(L);
Construct the directed graph HL as in Definition 35;
By using Lemma 37, return whether (gB , gO , gU ) is a v-M-triple or not;
Theorem 38. Given a gadget triple (gB , gO , gU ), Algorithm 3 can successfully test whether (gB , g O , gU )
is a v-M-triple in polynomial time.
Proof: The correctness of the algorithm follows from Lemma 37. By Lemma 36, the steps 1 and 2 can
be done in polynomial time.
Step 3: Since HL is acyclic, we can use breadth-first search to find whether (lU , gU ) is reachable from (lB , gB ). Then (gB , g, g U ) is a v-M-triple if and only if (lU , gU ) is reachable from (lB , gB ). This step is
carried out by calling breadth-first search which takes polynomial time. So the total time for this step
is polynomial.
Note that the total number of source to sink paths in HL can be exponential. However, we only need
to find one path from (lB , gB ) to (lU , gU ).
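The reachability test of Step 3 is ordinary breadth-first search on the edge list of HL; the sketch below is illustrative only and reuses the (l, g)-pair vertices from the construction sketched above.

```python
from collections import deque

def reachable(edges, source, sink):
    """Sketch of Theorem 38, Step 3: is the sink (l^U, gU) reachable from (l^B, gB) in H_L?"""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    seen, queue = {source}, deque([source])
    while queue:
        u = queue.popleft()
        if u == sink:
            return True   # one source-to-sink path suffices, even if exponentially many exist
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return False
```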
5.3
An Example that A v-Chain Does Not Have A Slant REL
In this subsection, we present an example to show why a v-chain defined in the last subsection does
not necessarily have a corresponding partial slant REL. Imagine that we have two v-chains C and C ′ .
Suppose that C can be partitioned into C = (C1 , C2 , C3 ) and C ′ can be partitioned into C ′ = (C1′ , C2′ , C3′ )
such that C2 = C2′ , then we may have another two v-chains (C1 , C2 = C2′ , C3′ ) and (C1′ , C2 = C2′ , C3 ).
However, neither of these two v-chains may correspond to any partial slant REL. Fig 39 (3) shows a v-chain
(g1 , g2 , g3 , g4 , g5 , g6 ) which cannot have a corresponding partial slant REL, where (g1 , g2 , g3 ) from R1 and
(g3 , g4 , g5 , g6 ) from R2 share a common gadget g3 .
Figure 39: (1) is a partial slant REL R1 which consists of gadgets {g0 , gA , gF , g1 , g2 , g3 }; (2) is a partial
slant REL R2 which consists of gadgets {g0 , g3 , g4 , g5 , g6 , gB , gC , gD , gE }; (3) (g1 , g2 , g3 , g4 , g5 , g6 ) is a
v-chain where (g1 , g2 , g3 ) is a subchain of R1 and (g3 , g4 , g5 , g6 ) is a subchain of R2 . However, the v-chain
is not coming from the same partial slant REL. {g1 , g2 , g3 } can be added into R1 only when {g0 , gA }
have been added into R1 . The order of added gadgets in R2 is: (g0 , gB , g6 , g5 , gC , gD , gE , g3 , g4 ). But,
gA and each gadget of {gB , gC , gD , gE } can not coexist in the same REL because some faces of gA and
each gadget of {gB , gC , gD , gE } overlap.
5.4
An Algorithm to Find Conflicting Gadgets via Backtracking
In the last subsection, we know that each v-chain of gT only contains partial information of a complete
slant REL R. In this subsection, we use a recursive constructive definition to define a hierarchal v-chain
which represents sufficient information of a complete REL R and can be represented by a DAG as
follows:
Definition 40. Given the final mirror fan gT with C = β(gT ), a hierarchal v-chain J = (V (J ), E(J ))
of C is a DAG recursively defined as follows:
1. The root J (r) ∈ V (J ) is a sequence of pairs ((C1 , g1 ), (C2 , g2 ), · · · , (Ck , gk )) where
(a) (g1 , g2 , · · · , gk ) is a v-chain (g1 ≺C g2 ≺C · · · ≺C gk ) of C and
(b) each Ci = C ∩ α(gi ), 1 ≤ i ≤ k, is a portion of the front boundary α(gi ) of gi .
2. While (α(g0 ) ⊈ C)
(a) select a gadget g such that
i. α(g) ⊆ C and
ii. there exist a sequence of pairs ((l1 , g) ∈ S1 , (l2 , g) ∈ S2 , · · · , (lh , g) ∈ Sh ) where each
Si , 1 ≤ i ≤ h, is a vertex of J and l1 ∪ l2 ∪ · · · ∪ lh = α(g),
(b) create a vertex S consisting of a sequence of pairs ((l1 , g1 ), (l2 , g2 ), · · · , (lh , gh )) where
i. (g1 ≺L g2 ≺L · · · ≺L gh ) is a v-backbone of L = β(g) and
ii. each li = L ∩ α(gi ), 1 ≤ i ≤ h, is a portion of the front boundary α(g) of g,
(c) add S into V (J ) and for each Si , 1 ≤ i ≤ h, add an arc Si → S into E(J ). And,
(d) change C to C(vS , a) ∪ β(g) ∪ C(b, vN ) where a and b are the first and the last vertices of
β(g), respectively.
Intuitively a hierarchal v-chain J is a hierarchal decomposition of a complete slant REL R and
the root J (r) of J represents a chain(C) of R’s associated cut C = C(R). In the following definition,
a hierarchial structure H consists of a set of DAGs (backbone graphs) and H can implicitly store all
possible hierarchal v-chains J .
Definition 41. H = (V (H), E(H)) is a DAG where
1. for each vertex v ∈ V (H), v represents a DAG H(v) = (V (H(v)), E(H(v))) over V (H(v)) where
(a) every vertex w ∈ V (H(v)) is a pair (l, g) and l is a portion of the front boundary α(g) of g,
(b) for each arc e = (ue → u′e ) in E(H(v)), e associates with a DAG H(v ′ ), v ′ ∈ V (H) (the
associated DAG of e is denoted by H(e)), the source of H(e) is the starting vertex ue of e and
the sink of H(e) is the ending vertex u′e of e.
2. an ordered pair (u, v) belongs to E(H) if there exists an arc e in E(H(u)) such that e’s associated
DAG H(e) is equal to H(v).
We call an ordered pair vertices (u, v) ∈ E(H) a super arc of H. Also, for each arc e ∈ E(H(u)), let He
be the maximal subgraph of H which can be reached from H(e) via super arcs.
From now on, (1) when we mention a DAG H(e) from an arc, it means that the arc e is in the DAG
represented by a vertex in V (H), (2) when we mention a DAG H(v) represented by a vertex v, it means
that the vertex v is a vertex in V (H), and (3) we use the term H(eC ) to represent the DAG in the root
of the hierarchal structure H.
Given a fan g and an arc e = (l1 , g1 ) → (l2 , g2 ) ∈ E(H(v)), (1) e is a complete arc on g if g = g1 = g2 ,
(2) e is a left partial arc on g if g 6= g1 and g = g2 , (3) e is a right partial arc on g if g = g1 and g 6= g2 ,
and (4) e is minimal if l1 ∪ l2 is a contiguous path. Algorithm 4 emulates the growing process of all
hierarchal v-chains J as follows:
1. add the root r into H and let H(r) be the backbone graph HC=β(gT ) of gT . Now, V (H) = {r} and
the root J (r) of each hierarchal v-chain J is a directed path P ∈ H(r), and vice versa.
2. iteratively select a gadget g (to be defined in Definition 44) such that
if g is a mirror fan and has a path P = (· · · , (lB , g B ), (l = α(g), g), (lU , gU ), · · · ) ∈ H(v), (1) add
a vertex v ′ into V (H), (2) let H(v ′ ) be the backbone graph HL′ =β(g) of g, (3) change P to
(· · · , (lB , gB ), (lU , gU ), · · · ) ∈ H(v), (4) add a super arc from v to v ′ in E(H) and (5) let H(e′ )
be H(v ′ ) where e′ = (lB , gB ) → (lU , gU ). The backbone graph of g is embedded into H(e′ ).
See Fig 42 as an example.
Otherwise, g is a fan. For each maximal path P = (v1 , v2 , · · · , vk ) ∈ H(v), v ∈ V (H) where the
gadget gi of each vi = (li , gi ) is equal to g, (1) merge P , (2) add an arc e′ between v1 and vk
and (3) set H(e′ ) = (β(g) ∩ α(g R ), g R ) (the backbone graph of g) where (g, g R ) is a v-G-pair
in V̂. The backbone graph of g is embedded into H(e′ ). Figs 43 (1) and (2) show an example
of P before merging P and Figs 43 (3) and (4) show an example of P after merging P .
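The nested structure just described can be held in two small record types; the sketch below is only one possible data-structure reading (the field names are ours, not the paper's), with the mirror-fan step shown as a function.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Arc:
    tail: tuple                      # starting vertex (l, g)
    head: tuple                      # ending vertex (l, g)
    sub: Optional["NodeDAG"] = None  # H(e): the DAG reached through the super arc, if any

@dataclass
class NodeDAG:
    vertices: list = field(default_factory=list)  # pairs (l, g)
    arcs: list = field(default_factory=list)      # Arc objects

def expand_mirror_fan(h_nodes, e1, e2, backbone_of_g):
    """Sketch of the mirror-fan step: the subpath through (alpha(g), g) is replaced
    by one arc e' whose sub-DAG H(e') is the backbone graph of g."""
    h_nodes.append(backbone_of_g)    # the backbone graph becomes a new vertex of H
    return Arc(tail=e1.tail, head=e2.head, sub=backbone_of_g)
```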
The next definition defines a removable gadget g which can be selected in Algorithm 4 so that the
backbone graph HL=β(g) can be added into H. Intuitively, a removable gadget g means that all gadgets g′ which
form a connection (g′ , g) in V̂ have already been selected and removed by Algorithm 4.
Definition 44. In Algorithm 4, we say a gadget g is removable from a DAG H(e) of H if there exists
a vertex (l, g) ∈ H(e) and we cannot find a vertex (l′ , g′ ) from another DAG H(e′ ) in V (H) such that g
and g′ belong to some connection Λ ∈ V̂. Note that the vertex (l′ , g′ ) can also be selected from H(e).
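In code, removability is a scan over the stored v-connections; the sketch assumes each connection is stored as a tuple of its gadgets and that appears_in(g, dag) reports whether some vertex (l, g) is still present — both are assumptions of the sketch, which follows the intuition stated after the definition.

```python
def is_removable(g, dag, all_dags, v_hat, appears_in):
    """Sketch of Definition 44: g is removable from H(e) if some vertex (l, g) lies in H(e)
    and no partner g' of g in a connection of V_hat is still present in any DAG of H."""
    if not appears_in(g, dag):
        return False
    for connection in v_hat:
        if g in connection:
            for g_prime in connection:
                if g_prime != g and any(appears_in(g_prime, d) for d in all_dags):
                    return False   # a partner of g still waits, so g is not removable yet
    return True
```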
Now we give the definition of a conflicting hierarchal v-chain J which cannot form a slant REL R.
An example for a conflicting hierarchal v-chain J has been shown in Fig 39.
Definition 45. A hierarchal v-chain J is conflicting on a gadget g if there exist pairs (l, g) ∈ S and
(l′ , g′ ) ∈ S ′ where S and S ′ are two vertices in V (J ) such that
Figure 42: (1) and (2) show a M-triple (gB , g, g U ) and suppose that HC has a directed path P = (· · · ,
e1 , e2 , e3 , e4 , · · · ); (3) and (4) show that after removing g, we add a new arc e′ into HeC and P becomes
(· · · , e1 , e′ , e4 , · · · ) ∈ HC . And, (e′1 , e′2 , e′3 , e′4 , e′5 , e′6 ) is a directed path in H(e′ ) where (e′1 , e′2 , e′3 , e′4 ,
e′5 , e′6 ) is a v-backbone of L = β(g).
1. if g 6= g′ , g and g′ overlap at least one face.
2. Otherwise (g = g′ ), l and l′ overlap at least two vertices.
Note that S might be equal to S ′ . Moreover, we say the vertex (l′ , g′ ) is conflicting to (l, g) on g if (l, g)
and (l′ , g ′ ) satisfy one of the above two conditions. On the other hand, we say (l′ , g′ ) is compatible
to (l, g) on g if (l′ , g′ ) is not conflicting to (l, g) on g. And, for a hierarchal v-chain J , we say J is
compatible on g if J is not conflicting on g.
Next we can start to define that H is compatible on a gadget g as follows:
Definition 46. Given a hierarchal structure H = (V (H), E(H)) and a gadget g ∈ Ṽ, we say H is
compatible on g if
1. there exists a directed path P ∈ H(eC ) such that for each vertex (l, g) ∈ P , each vertex (l′ , g ′ ) ∈ P
other than (l, g) is compatible to (l, g) on g. And,
2. for each arc e ∈ P , the hierarchal substructure He of H is also compatible on g.
We say (1) a directed path P ∈ H(eC ) is compatible on g if P satisfies the conditions 1 and 2. And, (2)
a directed path P ∈ H(eC ) is conflicting on g if P violates the condition 1 or the condition 2. Moreover,
an arc e ∈ P is compatible on g if He is compatible on g. On the other hand, e is conflicting on g if
He is not compatible on g.
From the above definition of a compatible hierarchal structure H, we immediately have a recursive
procedure to check whether there exists a compatible path P on g in H(eC ) as follows: for each directed
path P ∈ H(eC ), recursively check, for each arc e ∈ P , whether the DAG H(e) ∈ He (H(e) is the root’s
associated DAG in He ) has a compatible directed path on g or not. Then P is conflicting on g if and
only if P becomes disconnected after removing all conflicting arcs e on g from H(eC ). This is stated in
Property 54.
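The check can be phrased directly over the nested DAGs, using the Arc/NodeDAG records sketched earlier; arcs_conflicting_on is an assumed predicate for the base-level test of Definition 45. This is only an illustration of the recursion, written to match the statement of Property 54.

```python
def has_compatible_path(dag, g, arcs_conflicting_on, source, sink):
    """Keep only arcs compatible on g (recursing into sub-DAGs), then test connectivity:
    a path is conflicting on g iff it becomes disconnected (Property 54)."""
    kept = []
    for arc in dag.arcs:
        if arc in arcs_conflicting_on(dag, g):
            continue                               # conflicting at this level (Definition 45)
        if arc.sub is not None and not has_compatible_path(
                arc.sub, g, arcs_conflicting_on, arc.tail, arc.head):
            continue                               # no compatible path inside H(e)
        kept.append((arc.tail, arc.head))
    adj = {}
    for u, v in kept:
        adj.setdefault(u, []).append(v)
    stack, seen = [source], {source}
    while stack:
        u = stack.pop()
        if u == sink:
            return True
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return False
```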
Figure 43: (1) and (2) are an example of H, complete arcs and partial arcs; (3) and (4) are an example
to explain how H changes its structure after removing a fan g; (1) and (2): (· · · , e1 , e2 , e3 , e4 , · · · ) is a
directed path P ∈ H(eC ) where e1 is a left partial arc on (g, gR ), {e2 , e3 } are complete arcs on (g, gR ) and
e4 is a right partial arc on (g, gR ). Also, (e11 , e12 , e13 , e14 , e15 ) is a directed path in H(e1 ), (e21 , e22 ) is a directed
path in H(e2 ), (e31 , e32 ) is a directed path in H(e3 ) and (e41 , e42 , e43 ) is a directed path in H(e4 ); (3) and
(4): after removing the fan g, change the vertices v2 = (l2 , g) and v4 = (l4 , g) to v2 = (β(g) ∩ α(gR ), gR )
and v4 = (β(g) ∩ α(gR ), gR ), respectively where β(g) ∩ α(gR ) is the intersection of the back boundary
β(g) and the front boundary α(gR ), and (g, g R ) is a G-pair in Ṽ. Also, the arcs {e2 , e3 } are replaced by
the arc e′ and H(e′ ) is the path (v2 , (β(g) ∩ α(gR ), g R ), v4 ).
Briefly speaking, Algorithm 4 iteratively removes the root J (r) of a conflicting hierarchal-v-chain J
from H(eC ). Also, we utilize Algorithms 5 and 6 to adjust the structure of H. Moreover, after recursively
adjusting H (it means that via adjusting H(eC ) ∈ H, we also adjust the structure of He , e ∈ H(eC )),
we have the following fact: for each H(e) ∈ H, if there does not exist a directed path between v1 and
v2 in H(e) before removing g from H(e), then v1 remains disconnected from v2 in H(e) after removing g
from H(e). At the end of Algorithm 4, we can conclude that each connected path P ∈ H(eC ) has a
corresponding compatible hierarchal v-chain J . The intuitive meaning of a path P ∈ H(eC ) keeping its
connectivity after removing g is that P can add g into its corresponding hierarchal v-chain J .
How to efficiently check whether a directed path P ∈ H(eC ) is compatible on g or not? We can
recursively check whether there exists a compatible directed path P ′ ∈ H(e) on g for each complete and
partial arcs e ∈ H(eC ). In Observations 48 and 51, we describe the recursive formulas to check complete
arcs and partial arcs whether they are compatible on g or not. After we check all complete arcs and
partial arcs, we keep all compatible arcs on g in H(eC ) and check whether there exists a directed path
from source to sink in H(eC ). (The root J (r) of a compatible hierarchal v-chain J .) The recursive
procedures to check directed paths, complete arcs and partial arcs on g in H(eC ) are described in Lemmas
47, 50 and 53, respectively.
The main task for EXPAND operation in Algorithm 5 is to add the backbone graph of a mirror fan
into H. See Fig 42 as an example for EXPAND operation.
In the following lemma, we describe the recursive structure of a compatible directed path P ∈ H(eC ).
A simple way to explain Lemma 47 is that to recursively check a compatible directed path P in H(eC )
is equal to, for each arc e ∈ P , recursively check whether there exists a compatible directed path P ′ in
H(e). In general, each directed path P ∈ H(eC ) can be decomposed into five parts: (1) the subpath
from source which doesn’t have any partial arc and complete arc on g, (2) the subpath which only has
a left partial arc on g, (3) the subpath which only has complete arcs on g, (4) the subpath which only
has a right partial arc on g and (5) the subpath to sink which doesn’t have any partial arc and complete
arc on g. Because each arc e in a compatible directed path P must be compatible on g , it implies that
Algorithm 4: Find Conflicting Gadgets via Backtracking Algorithm
4.1
4.2
4.3
4.4
4.5
4.6
4.7
4.8
4.9
4.10
4.11
4.12
4.13
4.14
4.15
4.16
4.17
4.18
4.19
4.20
Input: Sets Ṽ and V̂
Add the backbone graph HC=β(gT ) of gT into H. V (H) = {H(eC ) = HC };
while the initial fan g0 is not removable from H(eC ) do
Find a removable gadget g from H(eC );
if g is a mirror fan then
for each M-triple (g B , g, gU ) ∈ V̂ do
EXPAND g in H by Algorithm 5;
end
else if g is a fan then
for each G-pair (g, g R ) do
Recursively check whether each complete arc and partial arc in H(eC ) are compatible
on g or not by Algorithm 6;
end
Recursively delete all complete arcs on g from H(eC );
end
Remove g from Ṽ and all connections (g, gR ) and (gB , g, gU ) from V̂;
end
if there exists a path P = (e1 , e2 , · · · , ek ) ∈ H(eC ) where each ei , 1 ≤ i ≤ k, is a complete arc on
g0 then
G has an area-universal rectangular layout;
else
G does not have any area-universal rectangular layout;
end
H(e) must have at least one directed path from source to sink which is compatible on g. We describe
their recursive structures of complete arcs and partial arcs on g in Observations 48, 49, 51 and 52. From
the above discussion, we immediately have Lemma 47.
Lemma 47. Given a fan g, suppose there is a directed path P = (e1 , e2 , · · · , ez , ep , ec1 , ec2 , · · · , eckc , eq ,
e′1 , e′2 , · · · , e′z′ ) ∈ H(eC ) where the arcs ep and eq are the left and right partial arcs on g, respectively,
and each arc eci , 1 ≤ i ≤ kc , is a complete arc on g. Then, P is the root J (r) of a compatible hierarchal
v-chain J on g if and only if
• for each complete arc eci ∈ P, 1 ≤ i ≤ kc , there is a compatible directed path Pic ∈ H(eci ) on g (see
Observations 48 and 49 for more details of a complete arc),
• for the left partial arc ep ∈ P , there is a compatible directed path P p ∈ H(ep ) on g (see Observations
51 and 52 for more details of a left partial arc) and
• for the right partial arc eq ∈ P , there is a compatible directed path P q ∈ H(eq ) on g (see Observations 51 and 52 for similar details of a right partial arc).
Given a complete arc e = (l1 , g) → (l2 , g) ∈ H(eC ) on g, each arc e′ ∈ P is also a complete arc on
g. And, we know that if we want to guarantee that a complete arc e is compatible, we must recursively
check whether H(e) can have a directed path which only consists of compatible complete arcs on g.
Obviously, to recursively check a compatible complete arc on g is a recursive procedure implemented by
dynamic programming technique. Also, the base case for the recursive procedure is that a complete arc
on g whose two end vertices (l1 , g) and (l2 , g) have that l1 ∪ l2 is contiguous on the front boundary of g.
It means that (l1 , g) → (l2 , g) is compatible on g. See Fig 43 as an example of a complete arc.
Algorithm 5: EXPAND a mirror fan in H
5.1
5.2
5.3
5.4
5.5
5.6
5.7
Input: The hierarchal structure H with the root H(eC ) and an M-triple (gB , g, gU ) ∈ V̂
Add the backbone graph HL=β(g) into H as a vertex v ∈ V (H);
for each DAG H(e) ∈ V (H) such that there is a subpath (l1 , gB ) →e1 (α(g), g) →e2 (l2 , gU ) in H(e),
where l1 and l2 are portions of α(gB ) and α(gU ), respectively do
Replace the subpath (l1 , gB ) →e1 (α(g), g) →e2 (l2 , gU ) by a single arc (l1 , gB ) →e′ (l2 , gU ) in H(e);
Set H(e′ ) = H(v) and add a super arc from H(e) to H(v) in H;
Add an arc from the starting vertex of e to the source vertex of H(v) and an arc from the sink
vertex of H(v) to the ending vertex of e in H(e);
Remove the vertex (α(g), g) from H(e);
end
From the above discussion, we can describe recursive structures of a compatible complete arc on g
in Observations 48 and 49:
Observation 48. Given a fan g, a complete arc e is compatible on g if and only if there exists a directed
path P = (ec1 , ec2 , · · · , eckc ) ∈ H(e) where
• the source vertex of P is the starting vertex of e,
• the sink vertex of P is the ending vertex of e and
• each eci , 1 ≤ i ≤ kc , is a compatible complete arc on g.
Observation 49. Given a fan g, a minimal complete arc e = (l1 , g) → (l2 , g) is compatible on g if and
only if l1 ∪ l2 is contiguous on the front boundary α(g) of g (the last vertex of l1 overlaps the first vertex
of l2 ).
Based on Observations 48 and 49, we can check a complete arc e ∈ H(eC ) on g whether it is
compatible on g or not via Lemma 50.
Lemma 50. Given a fan g, we can recursively check whether each complete arc e ∈ H(eC ) on g satisfies
structure described in Observations 48 and 49 as follows:
1. recursively check whether each arc e′ ∈ H(e) satisfies the structures in Observations 48 and 49,
2. keep all arcs passing the above tests in H(e), and
3. check whether H(e) has a directed path from source to sink. If yes, keep e in H(eC ). Otherwise,
delete e from H(eC ).
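Lemma 50 memoizes naturally on arcs; the sketch below illustrates the dynamic programming, with complete_arcs_on(dag, g) and the base-case test is_minimal_compatible (Observation 49) supplied as assumed helpers of the sketch.

```python
def compatible_complete_arc(arc, g, complete_arcs_on, is_minimal_compatible, memo=None):
    """Sketch of Lemma 50: a complete arc e on g is compatible iff H(e) has a
    source-to-sink path consisting only of (recursively) compatible complete arcs on g."""
    memo = {} if memo is None else memo
    key = id(arc)
    if key in memo:
        return memo[key]
    if arc.sub is None:                            # base case: a minimal complete arc
        memo[key] = is_minimal_compatible(arc, g)
        return memo[key]
    kept = [(e.tail, e.head) for e in complete_arcs_on(arc.sub, g)
            if compatible_complete_arc(e, g, complete_arcs_on, is_minimal_compatible, memo)]
    adj = {}
    for u, v in kept:
        adj.setdefault(u, []).append(v)
    stack, seen, ok = [arc.tail], {arc.tail}, False
    while stack:
        u = stack.pop()
        if u == arc.head:
            ok = True
            break
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                stack.append(w)
    memo[key] = ok
    return ok
```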
For a left partial arc e = (l1 , g1 ) → (l2 , g2 ) ∈ H(eC ) on g, because g2 is equal to g, each directed
path P in H(e) can be partitioned into (1) the subpath that consists of neither complete arcs nor partial
arcs on g, (2) the left partial arc on g and (3) the subpath that only consists of complete arcs on g.
See Fig 43 as an example of a left partial arc. Similarly, to check a compatible left partial arc on g is
a recursive procedure which can be implemented by dynamic programming technique. Also, the base
case for the recursive procedure is a left partial arc (l1 , g1 ) → (l2 , g2 = g) on g which has (1) (g2 = g, g1 )
is a v-connection in V̂ and (2) l1 ∪ l2 is contiguous on the front boundary α(g2 , g1 ) of the connection
(g2 , g1 ). It means that (l1 , g1 ) → (l2 , g2 = g) is compatible on g. See Fig 43 for examples of a left partial
arc and a minimal left partial arc. From the above discussion, we can describe recursive structures of a
compatible left partial arc on g in Observations 51 and 52:
Observation 51. Given a fan g and a left partial arc e on g, a left partial arc e is compatible on g if
and only if there exists a directed path P = (e1 , e2 , · · · , ez , ep , ec1 , ec2 , · · · , eckc ) in H(e) where
• the source vertex of P is the starting vertex of e,
• the sink vertex of P is the ending vertex of e,
• for each 1 ≤ i ≤ z, ei = (li , gi ) → (li+1 , gi+1 ) is an arc where g 6= gi and g 6= gi+1 ,
• the arc ep is a compatible left partial arc on g, and
• each arc eci , 1 ≤ i ≤ kc , is a compatible complete arc on g.
Observation 52. Given a fan g, a minimal left partial arc e = (l1 , g1 ) → (l2 , g2 = g) on g is a compatible
left partial arc on g if and only if (1) (g2 = g, g1 ) is a connection in V̂ and (2) l1 ∪ l2 is contiguous on
the front boundary α(g2 , g1 ) of the connection (g2 , g1 ) (the last vertex of l1 overlaps the first vertex of
l2 ).
Based on Observations 51 and 52, we can recursively check whether a partial arc e ∈ H(eC ) is
compatible on g or not via Lemma 53.
Lemma 53. Given a fan g, we can recursively check whether a partial arc e ∈ H(eC ) on g satisfies the
structures in Observations 51 and 52 as follows:
1. recursively check whether each partial arc e′ ∈ H(e) on g satisfies the structures in Observations
51 and 52,
2. recursively check whether each complete arc e′ ∈ H(e) on g satisfies the structures in Observations
48 and 49,
3. keep all arcs passing the above tests in H(e), and
4. check whether H(e) has a directed path from source to sink. If yes, keep e in H(eC ). Otherwise,
delete e from H(eC ).
There are three main tasks of MERGE operation in Algorithm 6. The first one is to recursively check
each complete arc on a removable gadget g ∈ Ṽ. The second one is to recursively check each partial arc
on g. The final one is to remove g and maintain the connectivity for each compatible directed path in
H(eC ). What we do in the final for loop is to reconnect a new arc between a left partial arc eL and a
right partial arc eR if and only if there exists a compatible directed path from eL to eR . See Fig 43 as an example for Algorithm 6.
Note that the connectivity between vL and vR is based on the arcs which are compatible on g in
Algorithm 6.
There are two important properties for the correctness of Algorithm 4. The first one states that
we can eliminate each conflicting directed path P (hierarchal v-chain J ) on g via removing g from
H(eC ). The second one states that the number of directed paths (hierarchal v-chains) decreases during
Algorithm 4 executes.
Property 54. For each DAG H(e) ∈ V (H), a directed path P ∈ H(e) is conflicting on g if and only if
P becomes disconnected after removing g from H(e).
Property 55. For each DAG H(e) ∈ V (H), if any two vertices u, v ∈ H(e) are disconnected, then u
and v remain disconnected after removing g from H(e).
Algorithm 6: MERGE H via Dynamic Programming
6.1
6.2
6.3
6.4
6.5
6.6
6.7
6.8
6.9
6.10
Input: The hierarchal structure H with the root H(eC ) and a G-pair (g, gR ) ∈ V̂
for each complete arc ec ∈ H(eC ) on g do
Recursively check ec whether ec is compatible on g or not (this recursive check follows Lemma
50);
end
for each partial arc ep ∈ H(eC ) on g do
Recursively check ep whether ep is compatible on g or not (this recursive check follows Lemma
53);
end
for each pair of left partial arc eL = va → vL = (lL , g) and right partial arc
eR = vR = (lR , g) → vb in H(eC ) such that vL and vR remain connected in H(eC ) do
Change vL = (lL , g) and vR = (lR , g) to vL = (β(g) ∩ α(gR ), gR ) and vR = (β(g) ∩ α(gR ), gR ),
respectively;
Add an arc e′ = (vL → vR ) into the graph H(eC ) and let H(e′ ) be the directed path
(vL → (β(g) ∩ α(g R ), gR ) → vR );
end
From the above two properties, we can see that if we can recursively guarantee that for each
compatible arc e ∈ H(eC ) on g, H(e) has Properties 54 and 55, then each compatible directed path
P ∈ H(eC ) on g is also a compatible hierarchal v-chain on g. And, in the final "for" loop of Algorithm
6, it connects a new arc between vL and vR if and only if there is a compatible directed path on g from
vL to vR . Hence it guarantees that no pair of vertices (vL , vR ) becomes connected if vL and vR are
disconnected before removing conflicting arcs on g.
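The reconnection step can be summarised by the following sketch, which bridges a left partial arc to a right partial arc only when they are still connected through arcs that survived the compatibility checks; reachable is the BFS helper sketched in §5.2 and the label β(g) ∩ α(gR) is passed in precomputed (both are assumptions of the sketch).

```python
def merge_partial_arcs(left_arcs, right_arcs, surviving_edges, boundary_label, gR, reachable):
    """Sketch of the final loop of Algorithm 6 (MERGE)."""
    new_arcs = []
    for eL in left_arcs:                   # eL = v_a -> v_L = (l_L, g)
        for eR in right_arcs:              # eR = v_R = (l_R, g) -> v_b
            if reachable(surviving_edges, eL.head, eR.tail):
                # relabel both endpoints by (beta(g) ∩ alpha(gR), gR) and bridge them
                vL = (boundary_label, gR)
                vR = (boundary_label, gR)
                new_arcs.append((eL.tail, vL, vR, eR.head))
    return new_arcs
```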
Theorem 56. Algorithm 4 can successfully check whether there exists a directed path P ∈ H(eC ) such
that P has a corresponding slant REL.
Proof: The correctness of Algorithm 4 is based on Properties 54 and 55. Clearly, from Property 54,
each conflicting directed path P ∈ H(eC ) (the root J (r) of each hierarchal v-chain J ) on g becomes
disconnected after removing a removable gadget g and from Property 55, it remains disconnected in
the following steps. Hence, in the final step of Algorithm 4, each connected directed path P ∈ H(eC )
is proven to correspond to the root J (r) of a compatible hierarchal v-chain on g for every
gadget g ∈ Ṽ. Also, if a directed path P ∈ H(eC ), corresponding to the root J (r) of a hierarchal v-chain
J , keeps its connectivity after recursively removing a gadget g from H(eC ), then this hierarchal v-chain
J can add the gadget g into its corresponding slant REL. Hence we can have that each connected
directed path in H(eC ) has a corresponding slant REL R.
Theorem 56 has proven that we can backtrack all directed paths P ∈ H(eC ) to know whether P
represents the chain of a partial slant REL R.
Theorem 57. The time complexity of Algorithm 4 is polynomial bound.
Proof: Let K be the number of iterations in Algorithm 4 and Ni , 1 ≤ i ≤ K, be the size of H in the
i-th iteration. The time analysis of backtracking is based on three parts: (1) K is polynomial bound,
(2) each Ni , 1 ≤ i ≤ K, is polynomial bound and (3) time complexity T (Ni ) in each i-th iteration is
polynomial bound.
Obviously, the number K of total iterations is bounded by the number of connections V̂. By Lemma
13, the number of gadgets in G is at most N = O(n2 ) and the number of connections in G is at most
O(N 3 ) = O(n6 ). Hence K polynomially grows with respect to the number of G’s vertices n.
25
For each i-th iteration, we either execute EXPAND or MERGE operations to adjust H's structure.
When Algorithm 4 executes EXPAND operation on a removable gadget g with the back boundary
L = β(g), we add g’s backbone graph HL into H where HL ’s size (the number of vertices in HL ) is
polynomial bound. Since the number K of total iterations is polynomial bound, the total number of vertices added
into H in all EXPAND operations is bounded by the summation of all backbone graphs' sizes. Hence,
the summation of all backbone graphs' sizes is polynomial bound.
When Algorithm 4 executes MERGE operation on a removable gadget g with a G-pair (g, gR ), we
replace each maximal compatible directed path P on g in H by an arc e′ = (gu , lu ) → (gv , lv ) between
the two end vertices of P where each arc in P is a complete arc on g, and add H(e′ ) into H where
H(e′ ) consists of the newly-added vertex (gR , l). Note that the added vertex (gR , l) only connects to
the two end vertices (gu , lu ) and (gv , lv ) of e′ , and cannot be connected to other vertices in H in the following
iterations. Also, (g R , l) is removed from H when removing the removable gadget gR from H. Hence the
total number of newly-added arcs e′ is bounded by the summation of all backbone graphs' sizes, which is polynomial
bound. Also, the total size of all added H(e′ ) is polynomial bound.
Because the number of vertices added into H in each iteration is polynomial bound and the number
K of iterations is polynomial bound, the maximum size of H over all iterations of Algorithm 4 is polynomial bound.
Hence each number Ni , 1 ≤ i ≤ K, of vertices of H in the i-th iteration is polynomial bound.
Now we analyze the time complexity of each iteration. When Algorithm 4 executes EXPAND
operation on a removable gadget g with the back boundary L = β(g), we take polynomial time to
add the g’s backbone graph HL into H because HL ’s size is polynomial bound.
When Algorithm 4 executes MERGE operation on a removable gadget g, the tasks of MERGE
operation consist of (1) recursively checking each complete arc on g in H by Lemma 50, (2) recursively
checking each partial arc on g in H by Lemma 53 and (3) checking, for each DAG H(e), whether there
exists a directed path from the source to the sink in H(e) after removing all conflicting complete and
partial arcs from H(e).
The total work of a MERGE operation can be simply described as follows: check, for each DAG H(e) ∈
V (H), whether H(e) has at least one connected directed path from the source to the sink in H(e). And,
it can be done by executing a breadth-first search in H(e) since H(e) is a DAG. Hence the complexity
of the total work of a MERGE operation is polynomial bound as poly(max1≤i≤K Ni ). Note that the
order to check each DAG H(e) ∈ V (H) is a bottom-up traversal in H as follows: a DAG H(e) ∈ V (H)
is ready to check if and only if every DAG H(e′ ), e′ ∈ E(H(e)), has been checked.
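That bottom-up order is a topological order of H with respect to super arcs; below is a sketch using Kahn's algorithm, assuming a children(dag) helper that returns the DAGs reachable through one super arc (an assumption of the sketch).

```python
from collections import deque

def bottom_up_order(all_dags, children):
    """Sketch: list the DAGs so that each H(e) appears only after every DAG below it."""
    indegree = {id(d): 0 for d in all_dags}
    parents = {id(d): [] for d in all_dags}
    by_id = {id(d): d for d in all_dags}
    for d in all_dags:
        for c in children(d):
            indegree[id(d)] += 1               # d must wait for its child c
            parents[id(c)].append(id(d))
    ready = deque(i for i, deg in indegree.items() if deg == 0)   # innermost DAGs first
    order = []
    while ready:
        i = ready.popleft()
        order.append(by_id[i])
        for p in parents[i]:
            indegree[p] -= 1
            if indegree[p] == 0:
                ready.append(p)
    return order
```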
Because each iteration takes polynomial time as T (Ni ) ≤ poly(max1≤i≤K Ni ), the total time complexity of all K iterations in Algorithm 4 is also polynomial bound.
Theorem 57 has proven that the time complexity of Algorithm 4 is polynomial bound.
A NORMAL GENERATING SET FOR THE TORELLI GROUP OF A
COMPACT NON-ORIENTABLE SURFACE
Abstract. For a compact surface S, let I(S) denote the Torelli group of S. For a
compact orientable surface Σ, I(Σ) is generated by BSCC maps and BP maps (see [10]
and [11]). For a non-orientable closed surface N , I(N ) is generated by BSCC maps and
BP maps (see [5]). In this paper, we give an explicit normal generating set for I(Ngb ),
where Ngb is a genus-g compact non-orientable surface with b boundary components for
g ≥ 4 and b ≥ 1.
1. Introduction
For g ≥ 1 and b ≥ 0, let Ngb denote a genus-g compact connected non-orientable surface
with b boundary components, and let Ng = Ng0 . In this paper, we regard Ngb as a surface
obtained by attaching g Möbius bands to a sphere with g + b boundary components, as
shown in Figure 1. We call each of these Möbius bands attached to this sphere a cross cap.
The mapping class group M(Ngb ) of Ngb is the group consisting of isotopy classes of all
diffeomorphisms over Ngb which fix each point of the boundary. The Torelli group I(Ngb )
of Ngb is the subgroup of M(Ngb ) consisting of elements acting trivially on the integral
first homology group H1 (Ngb ; Z) of Ngb . The Torelli group of a compact orientable surface
is generated by BSCC maps and BP maps (see [10] and [11]). In particular, Johnson
[6] showed that the Torelli group of an orientable closed surface is finitely generated by
BP maps. Hirose and the author [5] showed that I(Ng ) is generated by BSCC maps and
BP maps for g ≥ 4. In this paper, we give an explicit normal generating set for I(Ngb )
consisting of BSCC maps and BP maps, for g ≥ 4 and b ≥ 1.
RYOMA KOBAYASHI
Figure 1. A genus-g compact connected non-orientable surface with b
boundary components.
Let N be a compact connected non-orientable surface. For a simple closed curve c on
N, we call c an A-circle (resp. an M-circle) if its regular neighborhood is an annulus
(resp. a Möbius band), as shown in Figure 2. For an A-circle c, we can define the mapping
class tc , called the Dehn twist about c, and the direction of the twist is indicated by a
(a) A-circles.
(b) M-circles.
Figure 2.
Figure 3. The Dehn twist tc about c.
small arrow written beside c as shown in Figure 3. We can notice that an A-circle (resp.
an M-circle) passes through cross caps even times (resp. odd times).
Let α, β, β ′ , γ, δi , ρi , σij and σ̄ij be simple closed curves on Ngb as shown in Figure 4.
The main result of this paper is as follows.
Theorem 1.1. Let g ≥ 5 and b ≥ 0. In M(Ngb ), I(Ngb ) is normally generated by $t_\alpha$, $t_\beta t_{\beta'}^{-1}$, $t_{\delta_i}$, $t_{\rho_i}$, $t_{\sigma_{ij}}$ and $t_{\bar\sigma_{ij}}$ for 1 ≤ i, j ≤ b − 1 with i < j. In M(N4b ), I(N4b ) is normally generated by $t_\alpha$, $t_\beta t_{\beta'}^{-1}$, $t_{\delta_i}$, $t_{\rho_i}$, $t_{\sigma_{ij}}$, $t_{\bar\sigma_{ij}}$ and $t_\gamma$ for 1 ≤ i, j ≤ b − 1 with i < j.
(a) Loops α, β, β ′ , δi and σ̄ij .
(b) Loops γ, ρi and σij .
Figure 4.
In this paper, for f, g ∈ M(Ngb ), the composition gf means that we first apply f and
then g.
2. Basics on mapping class groups for non-orientable surfaces
2.1. On mapping class groups for non-orientable surfaces.
Mapping class groups for orientable surfaces are generated by only Dehn twists. Lickorish showed that M(Ng ) is generated by Dehn twists and Y -homeomorphisms and that
the subgroup of M(Ng ) generated by all Dehn twists is an index 2 subgroup of M(Ng )
(see [7, 8]). Hence M(Ng ) is not generated by only Dehn twists. On the other hand,
since a Y -homeomorphism acts trivially on H1 (Ng ; Z/2Z), M(Ng ) is not generated by
only Y -homeomorphisms.
Chillingworth [2] found a finite generating set for M(Ng ). M(N1 ) and M(N11 ) are
trivial (see [3]). Finite presentations for M(N2 ), M(N21 ), M(N3 ) and M(N4 ) are obtained
by [7], [12], [1] and [14], respectively. Paris-Szepietowski [9] obtained a finite presentation
of M(Ngb ) for b = 0, 1 and g + b > 3.
Let N be a non-orientable surface, and let a and m be an oriented A-circle and an
M-circle on N respectively such that a and m mutually intersect transversely at only one
point. We now define a Y -homeomorphism Ym,a . Let K be a regular neighborhood of a ∪ m
in N, and let M be a regular neighborhood of m in the interior of K. We can see that
K is homeomorphic to the Klein bottle with one boundary component. Ym,a is defined as
the isotopy class of a diffeomorphism over N which is described by pushing M once along
a and which fixes each point of the boundary and the exterior of K (see Figure 5).
Figure 5. The Y -homeomorphism Ym,a .
2.2. On Torelli groups for non-orientable surfaces.
Let c be an A-circle on a non-orientable surface N such that N \ c is not connected.
We call tc a bounding simple closed curve map, for short a BSCC map. For example, in
Theorem 1.1, tα , tγ , tδi , tρi and tσij are BSCC maps. Let c1 and c2 be A-circles on N such
that N \ci is connected, N \(c1 ∪c2 ) is not connected and one of its connected components
is an orientable surface with two boundary components. We call $t_{c_1} t_{c_2}^{-1}$ a bounding pair map, for short a BP map. For example, in Theorem 1.1, $t_\beta t_{\beta'}^{-1}$ is a BP map.
Hirose and the author [5] obtained the following theorem.
Theorem 2.1 ([5]). For g ≥ 5, I(Ng ) is normally generated by $t_\alpha$ and $t_\beta t_{\beta'}^{-1}$ in M(Ng ). I(N4 ) is normally generated by $t_\alpha$, $t_\beta t_{\beta'}^{-1}$ and $t_\gamma$ in M(N4 ).
Theorem 1.1 is a natural extension of Theorem 2.1.
We can check that all BSCC maps and BP maps are in I(Ngb ). In addition, by Theorem 1.1, we have that I(Ngb ) is generated by BSCC maps and BP maps. We do not know
whether or not I(Ngb ) can be finitely generated.
2.3. Capping, Pushing and Forgetful homomorphisms.
Let N be a compact non-orientable surface. Take a point ∗ in the interior of N. Let
M(N, ∗) denote the group consisting of isotopy classes of all diffeomorphisms over N
which fix ∗ and each point of the boundary, and let I(N, ∗) denote the subgroup of
M(N, ∗) consisting of elements acting trivially on H1 (N; Z).
Take a point ∗ in the interior of Ngb−1 . We regard Ngb as a subsurface of Ngb−1
not containing ∗. The natural embedding Ngb ֒→ Ngb−1 induces the homomorphism
Cgb : M(Ngb ) → M(Ngb−1 , ∗), called the capping homomorphism. We have the following
lemma.
Lemma 2.2 (cf. [4, 13]). ker Cgb is generated by tδb .
Since tδb is in I(Ngb ) we obtain the following.
Corollary 2.3. ker Cgb |I(Ngb ) is generated by tδb .
We remark that Cgb and Cgb |I(Ngb ) are not surjective.
The pushing homomorphism Pgb−1 : π1 (Ngb−1 , ∗) → M(Ngb−1 , ∗) is defined as follows.
For x ∈ π1 (Ngb−1 , ∗) take a representative oriented loop x̃ based at ∗. Pgb−1 (x) is described
by pushing ∗ once along x̃ and fixes each point of exterior of neighborhood of x̃. Note
that Pgb−1 is an anti-homomorphism, that is, for x, y ∈ π1 (Ngb−1 , ∗) we have Pgb−1 (xy) =
Pgb−1 (y)Pgb−1 (x). The forgetful homomorphism Fgb−1 : M(Ngb−1 , ∗) → M(Ngb−1 ) is defined
naturally. Note that Fgb−1 is surjective.
We have the natural exact sequence
$$\pi_1(N_g^{b-1}, *) \xrightarrow{\ P_g^{b-1}\ } \mathcal{M}(N_g^{b-1}, *) \xrightarrow{\ F_g^{b-1}\ } \mathcal{M}(N_g^{b-1}) \longrightarrow 1.$$
Since ImPgb−1 is in I(Ngb−1 , ∗) and Fgb−1 (I(Ngb−1 , ∗)) is equal to I(Ngb−1 ), we have the
exact sequence
π1 (Ngb−1 , ∗) −→ I(Ngb−1 , ∗) −→ I(Ngb−1 ) −→ 1.
3. A normal generating set for Cgb (I(Ngb )) in Cgb (M(Ngb ))
By the capping homomorphism Cgb : M(Ngb ) → M(Ngb−1 , ∗) we have that a normal
generating set for I(Ngb ) in M(Ngb ) consists of tδb and lifts by Cgb of normal generators of
Cgb (I(Ngb )) in Cgb (M(Ngb )). Thus in this section we consider a normal generating set for
Cgb (I(Ngb )) in Cgb (M(Ngb )).
Let αi and βj be oriented loops on Ngb−1 based at ∗ as shown in Figure 6, and let xi
and yj be the elements of π1 (Ngb−1 , ∗) corresponding to αi and βj respectively. Note that
π1 (Ngb−1 , ∗) is generated by xi and yj for 1 ≤ i ≤ g and 1 ≤ j ≤ b − 1.
Figure 6. The oriented loops αi and βj on Ngb−1 .
Let p : π1 (Ngb−1 , ∗) → π1 (Ng , ∗) denote the natural surjection defined as p(xi ) = xi and
p(yj ) = 1, for b ≥ 1. For x ∈ π1 (Ngb−1 , ∗) we can denote $p(x) = x_{i_1}^{\varepsilon_1} x_{i_2}^{\varepsilon_2} \cdots x_{i_t}^{\varepsilon_t}$, where t is the word length of p(x) and $\varepsilon_k = \pm 1$. We define
$$O_i(x) = \sharp\{\, i_{2k-1} \mid i_{2k-1} = i \,\}, \qquad E_i(x) = \sharp\{\, i_{2k} \mid i_{2k} = i \,\},$$
$$\Gamma_g^{b-1} = \{\, x \in \pi_1(N_g^{b-1}, *) \mid O_i(x) = E_i(x),\ 1 \le i \le g \,\}.$$
For example, for $x = x_1 y_2 x_2 x_3^{-1} y_5 y_1^{-2} x_1^{-1} x_2 y_4 x_3^{-1} \in \pi_1(N_g^{b-1}, *)$, since $p(x) = x_1 x_2 x_3^{-1} x_1^{-1} x_2 x_3^{-1}$, we have that Oi (x) and Ei (x) are equal to 1 (resp. 0) for i = 1, 2, 3 (resp. i ≥ 4), and hence x is in Γgb−1 .
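Since Oi and Ei only count how often the index i occurs at odd and at even positions of p(x), membership in Γgb−1 is a purely combinatorial check. The Python sketch below illustrates it; the demo word is our own illustration, not the example of the text.

```python
from collections import Counter

def in_gamma(p_word, g):
    """p_word: the word p(x) as a list of (index, exponent) with exponent = +1 or -1.
    Returns True iff O_i(x) = E_i(x) for every 1 <= i <= g."""
    if len(p_word) % 2 == 1:        # an odd-length word cannot balance odd and even positions
        return False
    odd = Counter(i for pos, (i, _) in enumerate(p_word) if pos % 2 == 0)   # 1st, 3rd, ... letters
    even = Counter(i for pos, (i, _) in enumerate(p_word) if pos % 2 == 1)  # 2nd, 4th, ... letters
    return all(odd[i] == even[i] for i in range(1, g + 1))

# each of x1, x2, x3 occurs once at an odd and once at an even position, so the check passes
demo = [(1, 1), (2, 1), (3, -1), (1, -1), (2, 1), (3, -1)]
assert in_gamma(demo, g=5)
```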
In this section we prove the following three propositions.
Proposition 3.1. Im Fgb−1 |Cgb (I(Ngb )) is equal to I(Ngb−1 ).
Proposition 3.2. $\ker F_g^{b-1}|_{C_g^b(\mathcal{I}(N_g^b))}$ is equal to $P_g^{b-1}(\Gamma_g^{b-1})$.
Proposition 3.3. $P_g^{b-1}(\Gamma_g^{b-1})$ is the normal closure of $P_g^{b-1}(x_g^2)$, $P_g^{b-1}(y_j)$ and $P_g^{b-1}(x_g y_j x_g^{-1})$ for 1 ≤ j ≤ b − 1 in $C_g^b(\mathcal{M}(N_g^b))$.
By Proposition 3.1 and Proposition 3.2, we have the exact sequence
$$\Gamma_g^{b-1} \longrightarrow C_g^b(\mathcal{I}(N_g^b)) \longrightarrow \mathcal{I}(N_g^{b-1}) \longrightarrow 1.$$
Hence $C_g^b(\mathcal{I}(N_g^b))$ is the normal closure of $P_g^{b-1}(\Gamma_g^{b-1})$ and lifts by $F_g^{b-1}|_{C_g^b(\mathcal{I}(N_g^b))}$ of normal generators of $\mathcal{I}(N_g^{b-1})$. In addition, by Proposition 3.3 we obtain the following.
Corollary 3.4. In $C_g^b(\mathcal{M}(N_g^b))$, $C_g^b(\mathcal{I}(N_g^b))$ is normally generated by $P_g^{b-1}(x_g^2)$, $P_g^{b-1}(y_j)$, $P_g^{b-1}(x_g y_j x_g^{-1})$ and lifts by $F_g^{b-1}|_{C_g^b(\mathcal{I}(N_g^b))}$ of normal generators of $\mathcal{I}(N_g^{b-1})$, for 1 ≤ j ≤ b − 1.
3.1. Proof of Proposition 3.1.
To prove Proposition 3.1, it suffices to show that for any $\varphi \in \mathcal{I}(N_g^{b-1})$ there is $\tilde\varphi \in \mathcal{I}(N_g^b)$ such that $(F_g^{b-1} \circ C_g^b)(\tilde\varphi) = \varphi$.
Let γi and δj be oriented loops on Ngb as shown in Figure 7, and let ci and dj be the elements of H1 (Ngb ; Z) corresponding to γi and δj respectively. As a Z-module, H1 (Ngb ; Z) has a presentation
$$H_1(N_g^b; \mathbb{Z}) = \langle c_1, \ldots, c_g, d_1, \ldots, d_b \mid 2(c_1 + \cdots + c_g) + (d_1 + \cdots + d_b) = 0 \rangle,$$
and is isomorphic to $\mathbb{Z}^{g+b-1}$.
Figure 7. The oriented loops γi and δj on Ngb .
For f ∈ M(Ngb ) we denote by f∗ the automorphism over H1 (Ngb ; Z) induced by f . Since
f fixes each point of the boundary of Ngb , we have that f∗ (dj ) = dj for 1 ≤ j ≤ b.
For any ϕ ∈ I(Ngb−1 ) there exists ψ ∈ M(Ngb ) such that (Fgb−1 ◦ Cgb )(ψ) = ϕ. For
1 ≤ i ≤ g there is an integer ni such that $\psi_*(c_i) = c_i + n_i d_b$. Let $\gamma_{ij}$ and $\tilde\gamma_{ij}$ be simple closed curves on Ngb as shown in Figure 8, and let $\tau_{ij} = t_{\gamma_{ij}} t_{\tilde\gamma_{ij}}$, for 1 ≤ i < j ≤ g. τij is a mapping class which is described by pushing the b-th boundary component once along between $\gamma_{ij}$ and $\tilde\gamma_{ij}$. We can check
$$(\tau_{ij})_*(c_t) = \begin{cases} c_i - d_b & (t = i),\\ c_j + d_b & (t = j),\\ c_t & (t \ne i, j). \end{cases}$$
Figure 8. The loops $\gamma_{ij}$ and $\tilde\gamma_{ij}$.
Let $\tau = \tau_{g-1\,g}^{n_{g-1}} \cdots \tau_{2g}^{n_2} \tau_{1g}^{n_1}$ and $\tilde\varphi = \tau\psi$. Since $(F_g^{b-1} \circ C_g^b)(\tau) = 1$ we have $(F_g^{b-1} \circ C_g^b)(\tilde\varphi) = \varphi$. For 1 ≤ i ≤ g − 1 we calculate
$$\tilde\varphi_*(c_i) = \tau_*(\psi_*(c_i)) = \tau_*(c_i + n_i d_b) = \tau_*(c_i) + n_i \tau_*(d_b) = (\tau_{ig})_*^{n_i}(c_i) + n_i d_b = (c_i - n_i d_b) + n_i d_b = c_i.$$
In addition, we calculate
$$\tilde\varphi_*(c_g) = \tau_*(\psi_*(c_g)) = \tau_*(c_g + n_g d_b) = \tau_*(c_g) + n_g \tau_*(d_b) = \Big(c_g + \sum_{k=1}^{g-1} n_k d_b\Big) + n_g d_b = c_g + \sum_{k=1}^{g} n_k d_b.$$
Let $c = 2(c_1 + \cdots + c_g) + (d_1 + \cdots + d_b)\ (= 0)$. We see
$$\tilde\varphi_*(c) = 2(\tilde\varphi_*(c_1) + \cdots + \tilde\varphi_*(c_g)) + (\tilde\varphi_*(d_1) + \cdots + \tilde\varphi_*(d_b)) = 2\Big(c_1 + \cdots + c_{g-1} + c_g + \sum_{k=1}^{g} n_k d_b\Big) + (d_1 + \cdots + d_b) = c + 2\sum_{k=1}^{g} n_k d_b = 2\sum_{k=1}^{g} n_k d_b.$$
Since $\tilde\varphi_*(c) = 0$ we have $\sum_{k=1}^{g} n_k = 0$, and hence $\tilde\varphi_*(c_g) = c_g$. Hence we have that $\tilde\varphi_*$ is the identity. Therefore we conclude that $\tilde\varphi$ is in $\mathcal{I}(N_g^b)$ with $(F_g^{b-1} \circ C_g^b)(\tilde\varphi) = \varphi$. Thus we complete the proof of Proposition 3.1.
3.2. Proof of Proposition 3.2.
Note that Pgb−1 (x) can be lifted by Cgb if and only if the word length of p(x) ∈ π1 (Ng , ∗)
is even. Let π1+ (Ngb−1 , ∗) denote the subgroup of π1 (Ngb−1 , ∗) consisting of x such that the
word length of p(x) is even. For x ∈ π1+ (Ngb−1 , ∗), let x∗ denote the automorphism over
H1 (Ngb , Z) induced by the natural lift by Cgb of Pgb−1 (x). For example, the natural lifts of
$P_g^{b-1}(x_g^2)$ and $P_g^{b-1}(y_j)$ are $t_{\rho_b}$ and $t_{\sigma_{jb}} t_{\delta_j}^{-1}$, respectively.
We can see that $(x_i^2)_* = 1$ and $(y_j)_* = 1$. Hence we have that $(x_i x_j)_*$, $(x_i x_j^{-1})_*$, $(x_i^{-1} x_j^{-1})_*$, $(x_i^{-1} x_j)_*$ and $(x_j x_i)_*$ are mutually equal. Let $x_{ij} = (x_i x_j)_*$ for 1 ≤ i, j ≤ g. Note that
$$x_{ij} = \begin{cases} (\gamma_{ij})_* & (i < j),\\ (\gamma_{ji}^{-1})_* & (j < i). \end{cases}$$
Lemma 3.5. For 1 ≤ i, j, k, l ≤ g, we have xij xkl = xkl xij .
Proof. If (i, j, k, l) = (i, i, i, i), (i, i, k, i), (i, i, i, l), (i, i, k, k), (i, j, i, j), (i, j, j, i) and
(i, i, k, l), it is clear that xij xkl = xkl xij . Hence we check the cases where (i, j, k, l) =
(i, j, i, l), (i, j, k, j), (i, j, k, i) and (i, j, k, l). For any 1 ≤ i, j ≤ g since xij acts trivially on
d1 , d2 , . . . , db , we check the actions on c1 , c2 , . . . , cg .
For any mutually different indices 1 ≤ i, j, l ≤ g we see
$$x_{ij} x_{il}(c_t) = \begin{cases} x_{ij}(c_i - d_b) = c_i - 2d_b & (t = i),\\ x_{ij}(c_j) = c_j + d_b & (t = j),\\ x_{ij}(c_l + d_b) = c_l + d_b & (t = l),\\ x_{ij}(c_t) = c_t & (t \ne i, j, l), \end{cases} \qquad x_{il} x_{ij}(c_t) = \begin{cases} x_{il}(c_i - d_b) = c_i - 2d_b & (t = i),\\ x_{il}(c_j + d_b) = c_j + d_b & (t = j),\\ x_{il}(c_l) = c_l + d_b & (t = l),\\ x_{il}(c_t) = c_t & (t \ne i, j, l). \end{cases}$$
Hence we have $x_{ij} x_{il} = x_{il} x_{ij}$.
For any mutually different indices 1 ≤ i, j, k ≤ g we see
$$x_{ij} x_{kj}(c_t) = \begin{cases} x_{ij}(c_i) = c_i - d_b & (t = i),\\ x_{ij}(c_j + d_b) = c_j + 2d_b & (t = j),\\ x_{ij}(c_k - d_b) = c_k - d_b & (t = k),\\ x_{ij}(c_t) = c_t & (t \ne i, j, k), \end{cases} \qquad x_{kj} x_{ij}(c_t) = \begin{cases} x_{kj}(c_i - d_b) = c_i - d_b & (t = i),\\ x_{kj}(c_j + d_b) = c_j + 2d_b & (t = j),\\ x_{kj}(c_k) = c_k - d_b & (t = k),\\ x_{kj}(c_t) = c_t & (t \ne i, j, k). \end{cases}$$
Hence we have $x_{ij} x_{kj} = x_{kj} x_{ij}$.
For any mutually different indices 1 ≤ i, j, k ≤ g we see
$$x_{ij} x_{ki}(c_t) = \begin{cases} x_{ij}(c_i + d_b) = c_i & (t = i),\\ x_{ij}(c_j) = c_j + d_b & (t = j),\\ x_{ij}(c_k - d_b) = c_k - d_b & (t = k),\\ x_{ij}(c_t) = c_t & (t \ne i, j, k), \end{cases} \qquad x_{ki} x_{ij}(c_t) = \begin{cases} x_{ki}(c_i - d_b) = c_i & (t = i),\\ x_{ki}(c_j + d_b) = c_j + d_b & (t = j),\\ x_{ki}(c_k) = c_k - d_b & (t = k),\\ x_{ki}(c_t) = c_t & (t \ne i, j, k). \end{cases}$$
Hence we have $x_{ij} x_{ki} = x_{ki} x_{ij}$.
For any mutually different indices 1 ≤ i, j, k, l ≤ g we see
$$x_{ij} x_{kl}(c_t) = \begin{cases} x_{ij}(c_i) = c_i - d_b & (t = i),\\ x_{ij}(c_j) = c_j + d_b & (t = j),\\ x_{ij}(c_k - d_b) = c_k - d_b & (t = k),\\ x_{ij}(c_l + d_b) = c_l + d_b & (t = l),\\ x_{ij}(c_t) = c_t & (t \ne i, j, k, l), \end{cases} \qquad x_{kl} x_{ij}(c_t) = \begin{cases} x_{kl}(c_i - d_b) = c_i - d_b & (t = i),\\ x_{kl}(c_j + d_b) = c_j + d_b & (t = j),\\ x_{kl}(c_k) = c_k - d_b & (t = k),\\ x_{kl}(c_l) = c_l + d_b & (t = l),\\ x_{kl}(c_t) = c_t & (t \ne i, j, k, l). \end{cases}$$
Hence we have $x_{ij} x_{kl} = x_{kl} x_{ij}$.
Thus we obtain the claim.
For $x \in \pi_1^+(N_g^{b-1}, *)$ denote $p(x) = x_{i_1}^{\varepsilon_1} x_{i_2}^{\varepsilon_2} \cdots x_{i_{2l}}^{\varepsilon_{2l}}$. Since $(y_j)_* = 1$, we have $x_* = x_{i_1 i_2} x_{i_3 i_4} \cdots x_{i_{2l-1} i_{2l}}$. For 1 ≤ i ≤ g, let $s = \sharp\{k \mid i_{2k-1} = i_{2k} = i\}$, $t = O_i(x) - s$ and $u = E_i(x) - s$. Since $x_{ii} = 1$ and $x_{kl}(c_i) = c_i$, by Lemma 3.5 we have
$$x_*(c_i) = x_{k(u)i} \cdots x_{k(1)i} \cdot x_{ij(t)} \cdots x_{ij(1)} \cdot x_{ii}^s(c_i) = x_{k(u)i} \cdots x_{k(1)i} \cdot x_{ij(t)} \cdots x_{ij(1)}(c_i) = x_{k(u)i} \cdots x_{k(1)i}(c_i - t d_b) = (c_i + u d_b) - t d_b = c_i + (u - t) d_b,$$
using the suitable indices j(1), . . . , j(t) and k(1), . . . , k(u). Therefore we have that $x \in \Gamma_g^{b-1}$ if and only if $x_* = 1$.
Note that $\ker F_g^{b-1}|_{C_g^b(\mathcal{I}(N_g^b))}$ is equal to the intersection of $\ker F_g^{b-1}$ and $C_g^b(\mathcal{I}(N_g^b))$. For any $x \in \Gamma_g^{b-1}$, since $x_* = 1$, we have that $P_g^{b-1}(x)$ is in $C_g^b(\mathcal{I}(N_g^b))$. In addition, since $P_g^{b-1}(x)$ is in $\ker F_g^{b-1}$, we have that $P_g^{b-1}(x)$ is in $\ker F_g^{b-1}|_{C_g^b(\mathcal{I}(N_g^b))}$. Hence we conclude $P_g^{b-1}(\Gamma_g^{b-1}) \subset \ker F_g^{b-1}|_{C_g^b(\mathcal{I}(N_g^b))}$.
For any $\varphi \in \ker F_g^{b-1}|_{C_g^b(\mathcal{I}(N_g^b))}$, since $\varphi$ is in $\ker F_g^{b-1}$ there exists $x \in \pi_1(N_g^{b-1}, *)$ such that $\varphi = P_g^{b-1}(x)$. In addition, since $\varphi$ is in $C_g^b(\mathcal{I}(N_g^b))$ we have $x_* = 1$, and hence $x \in \Gamma_g^{b-1}$. Hence $\varphi$ is in $P_g^{b-1}(\Gamma_g^{b-1})$. Therefore we conclude $\ker F_g^{b-1}|_{C_g^b(\mathcal{I}(N_g^b))} \subset P_g^{b-1}(\Gamma_g^{b-1})$.
Thus we complete the proof of Proposition 3.2.
3.3. Proof of Proposition 3.3.
We have the exact sequence
$$\pi_1^+(N_g^{b-1}, *) \longrightarrow C_g^b(\mathcal{M}(N_g^b)) \longrightarrow \mathcal{M}(N_g^{b-1}) \longrightarrow 1.$$
Hence a normal generator of $P_g^{b-1}(\Gamma_g^{b-1})$ in $C_g^b(\mathcal{M}(N_g^b))$ is the image of a normal generator of $\Gamma_g^{b-1}$ in $\pi_1^+(N_g^{b-1}, *)$. Therefore we consider the normal generators of $\Gamma_g^{b-1}$ in $\pi_1^+(N_g^{b-1}, *)$.
Let πgb−1 denote the finitely presented group with the generators xi and yj , and with the
relators x2i , yj and [xi(1) xi(2) , xi(3) xi(4) ], where 1 ≤ i, i(1), i(2), i(3), i(4) ≤ g, 1 ≤ j ≤ b − 1
and [x, y] means xyx−1 y −1. There is the natural surjection ψ : π1 (Ngb−1 , ∗) → πgb−1 . We
first show the following lemma.
Lemma 3.6. $\Gamma_g^{b-1}$ is equal to $\ker\psi$.
Proof. It is clear that $\Gamma_g^{b-1} \supset \ker\psi$. We show $\Gamma_g^{b-1} \subset \ker\psi$. Namely, it suffices to show that $x \equiv 1$ modulo $x_i^2$, $y_j$ and $[x_{i(1)} x_{i(2)}, x_{i(3)} x_{i(4)}]$ for any $x \in \Gamma_g^{b-1}$.
For any $x \in \Gamma_g^{b-1}$ we can denote $x \equiv x_{i_1} x_{i_2} \cdots x_{i_{2l}}$ modulo $x_i^2$ and $y_j$. Since $O_{i_1}(x) = E_{i_1}(x)$, there exists $1 \le t \le l$ such that $i_1 = i_{2t}$. Hence, modulo $x_i^2$, $y_j$ and $[x_{i(1)} x_{i(2)}, x_{i(3)} x_{i(4)}]$, we calculate
$$\begin{aligned}
x &\equiv x_{i_1} x_{i_2} \cdots x_{i_{2t}} \cdot x_{i_{2t+1}} \cdots x_{i_{2l}}\\
&= [x_{i_1} x_{i_2}, x_{i_3} x_{i_4}] x_{i_3} x_{i_4} x_{i_1} x_{i_2} \cdot x_{i_5} x_{i_6} \cdots x_{i_{2t}} \cdot x_{i_{2t+1}} \cdots x_{i_{2l}}\\
&\equiv x_{i_3} x_{i_4} x_{i_1} x_{i_2} \cdot x_{i_5} x_{i_6} \cdots x_{i_{2t}} \cdot x_{i_{2t+1}} \cdots x_{i_{2l}}\\
&= x_{i_3} x_{i_4} [x_{i_1} x_{i_2}, x_{i_5} x_{i_6}] x_{i_5} x_{i_6} x_{i_1} x_{i_2} \cdot x_{i_7} x_{i_8} \cdots x_{i_{2t}} \cdot x_{i_{2t+1}} \cdots x_{i_{2l}}\\
&\equiv x_{i_3} x_{i_4} x_{i_5} x_{i_6} x_{i_1} x_{i_2} \cdot x_{i_7} x_{i_8} \cdots x_{i_{2t}} \cdot x_{i_{2t+1}} \cdots x_{i_{2l}}\\
&\ \ \vdots\\
&= x_{i_3} \cdots x_{i_{2t-2}} [x_{i_1} x_{i_2}, x_{i_{2t-1}} x_{i_{2t}}] x_{i_{2t-1}} x_{i_{2t}} x_{i_1} x_{i_2} \cdot x_{i_{2t+1}} \cdots x_{i_{2l}}\\
&\equiv x_{i_3} \cdots x_{i_{2t-1}} \cdot x_{i_{2t}} x_{i_1} \cdot x_{i_2} \cdot x_{i_{2t+1}} \cdots x_{i_{2l}}\\
&\equiv x_{i_3} \cdots x_{i_{2t-1}} \cdot x_{i_2} \cdot x_{i_{2t+1}} \cdots x_{i_{2l}}.
\end{aligned}$$
Let $x' = x_{i_3} \cdots x_{i_{2t-1}} \cdot x_{i_2} \cdot x_{i_{2t+1}} \cdots x_{i_{2l}}$. It immediately follows that the word length of $x'$ is $2l - 2$ and $x'$ is in $\Gamma_g^{b-1}$. Since $O_{i_3}(x') = E_{i_3}(x')$, there exists $2 \le t' \le l$ such that $i_3 = i_{2t'}$. Then, similarly there exists $x''$ such that the word length of $x''$ is $2l - 4$, $x''$ is in $\Gamma_g^{b-1}$ and $x' \equiv x''$ modulo $x_i^2$, $y_j$ and $[x_{i(1)} x_{i(2)}, x_{i(3)} x_{i(4)}]$. Repeating the same operation,
we have that x ≡ x′ ≡ x′′ ≡ · · · ≡ 1 modulo x2i , yj and [xi(1) xi(2) , xi(3) xi(4) ]. Therefore we
have x ∈ ker ψ.
Thus we obtain the claim.
We next show the following lemma.
Lemma 3.7. $\pi_g^{b-1}$ has a presentation with
$$\pi_g^{b-1} = \langle x_1, \ldots, x_g, y_1, \ldots, y_{b-1} \mid x_1^2, \ldots, x_g^2,\ y_1, \ldots, y_{b-1},\ (x_i x_j x_k)^2,\ 1 \le i < j < k \le g \rangle.$$
Proof. It suffices to show that (x_i x_j x_k)^2 is a product of conjugates of some [x_{i(1)} x_{i(2)}, x_{i(3)} x_{i(4)}] and that [x_{i(1)} x_{i(2)}, x_{i(3)} x_{i(4)}] is a product of conjugates of some (x_i x_j x_k)^2, modulo x_1^2, x_2^2, . . . , x_g^2.
For any 1 ≤ i < j < k ≤ g, modulo x_1^2, x_2^2, . . . , x_g^2 we see

(x_i x_j x_k)^2 ≡ (x_i x_j x_k) x_j x_j (x_i x_j x_k)
              = x_i x_j · x_k x_j · x_j x_i · x_j x_k
              ≡ x_i x_j · x_k x_j · (x_i x_j)^{-1} (x_k x_j)^{-1}
              = [x_i x_j, x_k x_j].

If (i(1), i(2), i(3), i(4)) = (i, i, i, i), (i, i, i, j), (i, i, j, i), (i, j, i, i), (j, i, i, i), (i, i, j, j), (i, j, i, j), (i, j, j, i), (i, i, j, k) and (i, j, k, k), we have immediately that [x_{i(1)} x_{i(2)}, x_{i(3)} x_{i(4)}] ≡ 1 modulo x_1^2, x_2^2, . . . , x_g^2. Hence we check the other cases.
For mutually different indices 1 ≤ i, j, k, l ≤ g, modulo x_1^2, x_2^2, . . . , x_g^2 we see

[x_i x_j, x_i x_k] ≡ x_i x_j x_i x_k x_j x_i x_k x_i
                  ≡ x_i (x_j x_i x_k)^2 x_i^{-1},
[x_i x_j, x_k x_j] ≡ x_i x_j x_k x_j x_j x_i x_j x_k
                  ≡ (x_i x_j x_k)^2,
[x_i x_j, x_k x_i] ≡ x_i x_j x_k x_i x_j x_i x_i x_k
                  ≡ (x_i x_j x_k)^2,
[x_i x_j, x_j x_k] ≡ x_i x_j x_j x_k x_j x_i x_k x_j
                  ≡ (x_i x_k x_j)^2,
[x_i x_j, x_k x_l] ≡ x_i x_j x_k x_l x_j x_i x_l x_k
                  ≡ (x_i x_j x_k)^2 x_k x_j x_i · x_i x_j x_l (x_l x_j x_i)^2 x_l x_k
                  ≡ (x_i x_j x_k)^2 x_k x_l (x_l x_j x_i)^2 (x_k x_l)^{-1}.

For the relator (x_i x_j x_k)^2, applying conjugations and taking their inverses, it suffices to consider the case i < j < k. Thus we obtain the claim.
By Lemma 3.6 and Lemma 3.7 we have the short exact sequence

1 −→ Γ_g^{b-1} −→ π_1(N_g^{b-1}, ∗) −→ π_g^{b-1} −→ 1.

Let (π_g^{b-1})^+ denote the quotient group of π_1^+(N_g^{b-1}, ∗) by Γ_g^{b-1}. Then we have the short exact sequence

1 −→ Γ_g^{b-1} −→ π_1^+(N_g^{b-1}, ∗) −→ (π_g^{b-1})^+ −→ 1.

From presentations of π_1^+(N_g^{b-1}, ∗) and (π_g^{b-1})^+, we obtain the normal generators of Γ_g^{b-1} in π_1^+(N_g^{b-1}, ∗).
Lemma 3.8. π_1^+(N_g^{b-1}, ∗) is the free group freely generated by x_i x_g^{-1}, x_g x_j, y_k and x_g y_k x_g^{-1} for 1 ≤ i ≤ g − 1, 1 ≤ j ≤ g and 1 ≤ k ≤ b − 1.
Proof. We use the Reidemeister–Schreier method. π_1^+(N_g^{b-1}, ∗) is an index 2 subgroup of π_1(N_g^{b-1}, ∗). Let U = {1, x_g}. Remark that U is a Schreier transversal for π_1^+(N_g^{b-1}, ∗) in π_1(N_g^{b-1}, ∗). Let X = {x_1, . . . , x_g, y_1, . . . , y_{b-1}}. For u ∈ U and x ∈ X, the representative \overline{ux} equals 1 if (u, x) = (1, y_j) or (x_g, x_i) (resp. \overline{ux} = x_g if (u, x) = (1, x_i) or (x_g, y_j)). A generating set of π_1^+(N_g^{b-1}, ∗) is defined as B = {ux(\overline{ux})^{-1} | u ∈ U, x ∈ X, ux ∉ U}. We see 1x_i(\overline{1x_i})^{-1} = x_i x_g^{-1}, x_g x_j(\overline{x_g x_j})^{-1} = x_g x_j, 1y_k(\overline{1y_k})^{-1} = y_k and x_g y_k(\overline{x_g y_k})^{-1} = x_g y_k x_g^{-1}, for 1 ≤ i ≤ g − 1, 1 ≤ j ≤ g and 1 ≤ k ≤ b − 1. Since 1x_g = x_g ∈ U, the element 1x_g(\overline{1x_g})^{-1} = x_g x_g^{-1} = 1 is not in B. Thus we obtain the claim.
Lemma 3.9. (π_g^{b-1})^+ has a presentation with the generators x_i x_g^{-1}, x_g x_j, y_k and x_g y_k x_g^{-1} for 1 ≤ i ≤ g − 1, 1 ≤ j ≤ g and 1 ≤ k ≤ b − 1, and the following relators:
(1) x_i x_g^{-1} · x_g x_i, x_g^2 for 1 ≤ i ≤ g − 1,
(2) y_j for 1 ≤ j ≤ b − 1,
(3) x_i x_g^{-1} · x_g x_j · x_k x_g^{-1} · x_g x_i · x_j x_g^{-1} · x_g x_k for 1 ≤ i < j < k ≤ g,
(4) x_g x_i · x_i x_g^{-1} for 1 ≤ i ≤ g − 1,
(5) x_g y_j x_g^{-1} for 1 ≤ j ≤ b − 1,
(6) x_g x_i · x_j x_g^{-1} · x_g x_k · x_i x_g^{-1} · x_g x_j · x_k x_g^{-1} for 1 ≤ i < j < k ≤ g.
Proof. We apply the Reidemeister–Schreier method to the presentation of π_g^{b-1} in Lemma 3.7. By an argument similar to Lemma 3.8, (π_g^{b-1})^+ is generated by x_i x_g^{-1}, x_g x_j, y_k and x_g y_k x_g^{-1} for 1 ≤ i ≤ g − 1, 1 ≤ j ≤ g and 1 ≤ k ≤ b − 1. Let R be the set of the relators of π_g^{b-1} in Lemma 3.7. A set of the relators of (π_g^{b-1})^+ is defined as S = {uru^{-1} | u ∈ U, r ∈ R}, where U = {1, x_g}. We see

x_i^2 = x_i x_g^{-1} · x_g x_i,
y_j = y_j,
(x_i x_j x_k)^2 = x_i x_g^{-1} · x_g x_j · x_k x_g^{-1} · x_g x_i · x_j x_g^{-1} · x_g x_k,
x_g x_i^2 x_g^{-1} = x_g x_i · x_i x_g^{-1},
x_g y_j x_g^{-1} = x_g y_j x_g^{-1},
x_g (x_i x_j x_k)^2 x_g^{-1} = x_g x_i · x_j x_g^{-1} · x_g x_k · x_i x_g^{-1} · x_g x_j · x_k x_g^{-1}.

Thus we obtain the claim.
By Lemma 3.8 and Lemma 3.9, Γ_g^{b-1} is normally generated in π_1^+(N_g^{b-1}, ∗) by the relators of (π_g^{b-1})^+ of Lemma 3.9. Hence P_g^{b-1}(Γ_g^{b-1}) is normally generated by P_g^{b-1}(x_i^2), P_g^{b-1}(y_j), P_g^{b-1}((x_i x_j x_k)^2), P_g^{b-1}(x_g x_i^2 x_g^{-1}), P_g^{b-1}(x_g y_j x_g^{-1}) and P_g^{b-1}(x_g (x_i x_j x_k)^2 x_g^{-1}).
Representative loops of x_i^2, x_g x_i^2 x_g^{-1}, (x_i x_j x_k)^2 and x_g (x_i x_j x_k)^2 x_g^{-1} bound a Möbius band (see Figure 9). Therefore P_g^{b-1}(x_i^2), P_g^{b-1}(x_g x_i^2 x_g^{-1}), P_g^{b-1}((x_i x_j x_k)^2) and P_g^{b-1}(x_g (x_i x_j x_k)^2 x_g^{-1}) are conjugate to P_g^{b-1}(x_g^2). Thus we complete the proof of Proposition 3.3.
Figure 9. (a) Representative loops of x_1^2, x_2^2, . . . , x_g^2 bound a Möbius band. Similarly a representative loop of x_g x_i^2 x_g^{-1} bounds a Möbius band for 1 ≤ i ≤ g. (b) A representative loop of (x_1 x_2 x_3)^2 bounds a Möbius band. Similarly representative loops of (x_i x_j x_k)^2 and x_g (x_i x_j x_k)^2 x_g^{-1} bound a Möbius band for 1 ≤ i < j < k ≤ g.
4. Proof of Theorem 1.1
We have the following short exact sequence:

1 −→ ker C_g^b|_{I(N_g^b)} −→ I(N_g^b) −→ C_g^b(I(N_g^b)) −→ 1.

Hence I(N_g^b) is the normal closure of ker C_g^b|_{I(N_g^b)} and of lifts by C_g^b of normal generators of C_g^b(I(N_g^b)). By Corollary 2.3 and Corollary 3.4, we have that I(N_g^b) is normally generated by t_{δ_b}, the lift t_{ρ_b} by C_g^b of P_g^{b-1}(x_g^2), the lift t_{σ_{jb}} t_{δ_j}^{-1} by C_g^b of P_g^{b-1}(y_j), the lift t_{σ̄_{jb}} t_{δ_j}^{-1} by C_g^b of P_g^{b-1}(x_g y_j x_g^{-1}), and lifts by (F_g^{b-1} ∘ C_g^b)|_{I(N_g^b)} of normal generators of I(N_g^{b-1}). We prove Theorem 1.1 by induction on the number b of the boundary components of N_g^b. We take the natural lift by (F_g^{b-1} ∘ C_g^b)|_{I(N_g^b)} of a normal generator of I(N_g^{b-1}).
At first, I(N_g) is normally generated by t_α, t_β t_{β′}^{-1} and t_γ (see [5]). Hence we have that I(N_g^1) is normally generated by t_α, t_β t_{β′}^{-1}, t_γ and t_{δ_1}, t_{ρ_1}. Similarly we have that I(N_g^2) is normally generated by t_α, t_β t_{β′}^{-1}, t_γ, t_{δ_1}, t_{ρ_1} and t_{δ_2}, t_{ρ_2}, t_{σ_{12}}, t_{σ̄_{12}}. For b ≥ 3, suppose that I(N_g^{b-1}) is normally generated by t_α, t_β t_{β′}^{-1}, t_γ, t_{δ_i}, t_{ρ_i}, t_{σ_{ij}} and t_{σ̄_{ij}} for 1 ≤ i, j ≤ b − 1 with i < j. Then we have that I(N_g^b) is normally generated by t_α, t_β t_{β′}^{-1}, t_γ, t_{δ_i}, t_{ρ_i}, t_{σ_{ij}}, t_{σ̄_{ij}} and t_{δ_b}, t_{ρ_b}, t_{σ_{kb}}, t_{σ̄_{kb}} for 1 ≤ i, j, k ≤ b − 1 with i < j. Hence we obtain a normal generating set for I(N_g^b). In particular, since for g ≥ 5 the group I(N_g) is normally generated by t_α and t_β t_{β′}^{-1} (see [5]), we do not need t_γ as a normal generator of I(N_g^b) for g ≥ 5. Finally, proving the following lemma, we finish the proof of Theorem 1.1.
Lemma 4.1. t_{δ_b}, t_{ρ_b}, t_{σ_{kb}} and t_{σ̄_{kb}} are not needed as normal generators of I(N_g^b).
Proof. Let a_{ij}, b_{jk} and c_{kl} be simple closed curves on N_g^b as shown in Figure 10, for 1 ≤ i, j ≤ g and 1 ≤ k, l ≤ b − 1. Let d_m be a diffeomorphism defined by pushing the m-th boundary component once along an arrow as shown in Figure 11. Remark that the isotopy class of d_m is not in M(N_g^b), since d_m does not fix the boundary. t_{a_{ij}} and t_{d_m(a_{ij})} are conjugate to t_α. t_{b_{jk}} and t_{d_m(b_{jk})} are conjugate to either t_{ρ_k} or t_{ρ_k}^{-1}. t_{c_{kl}} and t_{d_m(c_{kl})} are conjugate to t_{σ_{kl}}, with m ≠ k, l. t_{d_k(c_{kl})} and t_{d_l(c_{kl})} are conjugate to t_{σ̄_{kl}}^{-1} and t_{σ̄_{kl}}, respectively. Hence it suffices to show that t_{δ_b}, t_{ρ_b}, t_{σ_{kb}} and t_{σ̄_{kb}} are products of these Dehn twists.
Figure 10. The loops a_{ij}, b_{jk} and c_{kl}.
Figure 11. The diffeomorphism d_m.
For simplicity, we denote t_{a_{ij}} = a_{i,j}, t_{b_{jk}} = b_{j,k}, t_{c_{kl}} = c_{k,l}, t_{d_m(a_{ij})} = a_{i,j;m}, t_{d_m(b_{jk})} = b_{j,k;m} and t_{d_m(c_{kl})} = c_{k,l;m}. We remark that a_{i,j}, b_{j,k} and c_{k,l} are described as shown in Figure 12.
Figure 12. The Dehn twists a_{i,j}, b_{j,k} and c_{k,l}.
t_{δ_b}, t_{ρ_b} and t_{σ_{kb}} are explicitly described by products of Dehn twists as follows:

t_{δ_b} = (t_{δ_1} · · · t_{δ_{b-1}})^{-g-b+3} (a_{1,2} · · · a_{1,g} · b_{1,1} · · · b_{1,b-1}) · · · (a_{g-1,g} · b_{g-1,1} · · · b_{g-1,b-1}) (b_{g,1} · · · b_{g,b-1}) (c_{1,2} · · · c_{1,b-1}) · · · (c_{b-3,b-2} c_{b-3,b-1}) (c_{b-2,b-1}),

t_{ρ_b} = (t_{δ_1} · · · t_{δ_{b-1}})^{-g-b+4} (a_{1,2} · · · a_{1,g-1} · b_{1,1} · · · b_{1,b-1}) · · · (a_{g-2,g-1} · b_{g-2,1} · · · b_{g-2,b-1}) (b_{g-1,1} · · · b_{g-1,b-1}) (c_{1,2} · · · c_{1,b-1}) · · · (c_{b-3,b-2} c_{b-3,b-1}) (c_{b-2,b-1}),

t_{σ_{kb}} = (t_{δ_1} · · · t_{δ_{k-1}} · t_{δ_{k+1}} · · · t_{δ_{b-1}})^{-g-b+4} (a_{1,2} · · · a_{1,g} · b_{1,1} · · · b_{1,k-1} · b_{1,k+1} · · · b_{1,b-1}) · · · (a_{g-1,g} · b_{g-1,1} · · · b_{g-1,k-1} · b_{g-1,k+1} · · · b_{g-1,b-1}) (b_{g,1} · · · b_{g,k-1} · b_{g,k+1} · · · b_{g,b-1}) (c_{1,2} · · · c_{1,k-1} · c_{1,k+1} · · · c_{1,b-1}) · · · (c_{k-2,k-1} · c_{k-2,k+1} · · · c_{k-2,b-1}) (c_{k-1,k+1} · · · c_{k-1,b-1}) (c_{k+1,k+2} · · · c_{k+1,b-1}) · · · (c_{b-3,b-2} c_{b-3,b-1}) (c_{b-2,b-1}).

In addition, since t_{σ̄_{kb}} = t_{d_k(σ_{kb})}^{-1}, t_{σ̄_{kb}} is a product of a_{i,j;k}, b_{i,j;k}, c_{i,j;k} and t_{δ_i}.
Thus we obtain the claim.
Acknowledgement
The author would like to express his thanks to Genki Omori for his valuable suggestions
and useful comments.
References
[1] J.S. Birman, D.R.J. Chillingworth, On the homeotopy group of a non-orientable surface, Math. Proc.
Camb. Phil. Soc. 71 (1972), 437–448. Erratum: Math. Proc. Camb. Phil. Soc. 136 (2004), 441–441.
[2] D.R.J. Chillingworth, A finite set of generators for the homeotopy group of a non-orientable surface,
Math. Proc. Camb. Phil. Soc. 65 (1969), 409–430.
[3] D.B.A. Epstein, Curves on 2-manifolds and isotopies, Acta Math. 115 (1966), 83–107.
[4] B. Farb, D. Margalit, A primer on mapping class groups, Princeton Mathematical Series, 49.
[5] S. Hirose, R. Kobayashi, A normal generating set for the Torelli group of a non-orientable closed
surface, arXiv:1412.2222 [math.GT], 2015.
14
R. KOBAYASHI
[6] D. Johnson, Homeomorphisms of a surface which act trivially on homology, Proc. Amer. Math. Soc.
75 (1979), 119–125.
[7] W.B.R. Lickorish, Homeomorphisms of non-orientable two-manifolds, Math. Proc. Camb. Phil. Soc.
59 (1963), 307–317.
[8] W.B.R. Lickorish, On the homeomorphisms of a non-orientable surface, Math. Proc. Camb. Phil. Soc.
61 (1965), 61–64.
[9] L. Paris, B. Szepietowski, A presentation for the mapping class group of a nonorientable surface,
arXiv:1308.5856v1 [math.GT], 2013.
[10] J. Powell, Two theorems on the mapping class group of a surface, Proc. Amer. Math. Soc. 68 (1978),
no. 3, 347–350.
[11] A. Putman, Cutting and pasting in the Torelli group, Geom. Topol. 11 (2007), 829–865.
[12] M. Stukow, Dehn twists on nonorientable surfaces, Fund. Math. 189 (2006), no. 2, 117–147.
[13] M. Stukow, Commensurability of geometric subgroups of mapping class groups, Geom. Dedicata 143
(2009), 117–142.
[14] B. Szepietowski, A presentation for the mapping class group of the closed non-orientable surface of
genus 4, J. Pure Appl. Algebra 213 (2009), no. 11, 2001–2016.
Department of General Education,
Ishikawa National College of Technology,
Tsubata, Ishikawa, 929-0392, Japan
E-mail address: kobayashi ryoma@ishikawa-nct.ac.jp
Tensor network method for reversible classical computation
Zhi-Cheng Yang,1 Stefanos Kourtis,1 Claudio Chamon,1 Eduardo R. Mucciolo,2 and Andrei E. Ruckenstein1
arXiv:1708.08932v2 [cond-mat.stat-mech] 5 Feb 2018
1 Physics Department, Boston University, Boston, Massachusetts 02215, USA
2 Department of Physics, University of Central Florida, Orlando, Florida 32816, USA
(Dated: February 6, 2018)
We develop a tensor network technique that can solve universal reversible classical computational
problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017)]. By
encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs/outputs at the boundary can be represented as the full contraction of a
tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this
contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via
repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a
given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated
iterations of these two steps gradually collapse the tensor network and ultimately yield the exact
tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration
would take astronomically long times.
I. INTRODUCTION
Physics-inspired approaches have led to efficient algorithms for tackling typical instances of hard computational problems, shedding new light on our understanding of the complexity of such problems [1, 2]. The conceptual framework of these approaches is based on the
realization that the solutions of certain computational
problems are encoded in ground states of appropriate
statistical mechanics models. However, the existence of
either a thermodynamic phase transition into a glassy
phase or a first-order quantum phase transition represent
obstructions to reaching the ground state, often even for
easy problems [3–7]. Recently, Ref. [8] introduced a new
class of problems by mapping a generic reversible classical computation onto a two-dimensional vertex model
with appropriate boundary conditions. The statistical
mechanics model resulting from this mapping displays no
bulk thermodynamic phase transitions and the bulk thermodynamics is independent of the classical computation
represented by the model. Taken together, these features
remove an obvious obstacle to reaching the ground state
of a large class of computational problems and imply that
the time-to-solution and the complexity of the problem
are determined by the dynamics of the relaxation of the
corresponding system to its ground state. However, when
thermal annealing is employed, the resulting dynamics is
found to be extremely slow, and even easy computational
problems cannot be efficiently solved. Since any classical
computation implemented as a reversible circuit can be
formulated in this fashion, finding an algorithm that can
solve the resulting vertex models efficiently would have
far-reaching repercussions.
In this paper, we introduce a tensor network approach
that can treat vertex models encoding computational
problems. Tensor networks are a powerful tool in the
study of classical and quantum many-body systems in
two and higher spatial dimensions, and are also used as
compressed representations of large-scale structured data
in “big-data” analytics [9–11]. Here we are interested in
taking the trace of tensor networks [12–14], to count the
number of solutions of a computational problem. As opposed to thermal annealing, which serially visits individual configurations, tensor network schemes sum over all
configurations simultaneously. As a result, tensor-based
approaches lead to a form of virtual parallelization [15],
which, under certain circumstances, speeds up the computation of the trace. Most of the physics-driven applications have focused on tensor network renormalization
group (TNRG) algorithms that coarse grain the network
while optimally removing short-range entanglement [16–
29]. There are, however, two aspects of our physics-motivated work that are qualitatively different from that
of TNRG approaches. First, vertex models of computational gates are intrinsically not translationally invariant.
Second, the trace over the tensor network, which counts
the number of solutions of the computational circuit (the
analogue of the zero temperature partition function of
statistical mechanics models) must be computed exactly,
within machine precision. (Approximations of the tensors lead to approximate counting, which in certain problems is no easier than exact counting [30].) Both features
are naturally treated by the methods proposed in this paper.
In our tensor network approach, the truth table of each
vertex constraint corresponding to a computational gate
is encoded in a tensor, such that the local compatibility between neighboring bits (or spins) is automatically
guaranteed upon contracting the shared bond between
two tensors. Summing over all possible unfixed boundary vertex states and contracting the entire tensor network give the partition function, which counts the total
number of solutions compatible with the boundary conditions, a problem belonging to the class #P. Finding a
solution can then be accomplished by fixing one boundary vertex at a time, with the total number of trials linear
in the number of input bits [15].
Our tensor network method, which we refer to as the
2
iterative compression-decimation (ICD) algorithm, can
be regarded as a set of local moves defining a novel dynamical path to the ground state of generalized vertex
models on a square lattice. These moves can be shown
to decrease or leave unchanged the bond dimensions of
the tensors involved, thus achieving optimal compression
(i.e., minimal bond dimension) of the tensor network on
a lattice of fixed size. The algorithm’s first step is to
propagate local vertex constraints across the system via
repeated contraction-decomposition sweeps over all lattice bonds. These back and forth sweeps are the higher
dimensional tensor-network analog of those employed in
the one-dimensional finite-system density matrix renormalization group (DMRG) method [31]. For problems
with non-trivial boundary conditions, such as those encountered in computation, these sweeps also propagate
the boundary constraints into the bulk, thus progressively building the connection between opposite (i.e., input/output) boundaries. In the next step, the algorithm
decreases the size of the lattice by coarse-graining the
tensor network via suitable contractions. Repeated iterations of these two steps allow us to reach larger and
larger system sizes while keeping the tensor dimensions
under control, such that ultimately the full tensor trace
can be taken.
The computational cost of ICD hinges upon the maximum bond dimension of the tensors during the coarse-graining procedure. We identify the hardness of a given
counting problem by studying the scaling of the maximum bond dimension as a function of the system size,
the concentration of nontrivial constraints imposed by
TOFFOLI gates, and the ratio of unfixed boundary vertices. We further present both the average and typical
maximum bond dimension distributions over random instances of computations. While we cannot distinguish
between polynomial and exponential scaling for the hardest regime of high TOFFOLI concentration, there exist
certain regimes of the problem where the bond dimension grows relatively slowly with system size. Therefore,
within this regime, we are able to count the exact number
of solutions within a large search space that is intractable
via direct enumerations.
The rest of the paper is organized as follows. We first
briefly introduce the tensor network representation of
generic vertex models on a square lattice in Sec. II. Section III describes the ICD algorithm for coarse-graining
and efficiently contracting the tensor network. In Section IV we apply the ICD method to reversible classical
computational problems as encoded in the vertex model
of computation introduced in Ref. [8], and discuss a relation between the number of solutions of the computational problem and the maximum bond dimension of the
tensor network from an entanglement perspective. Section V presents the implementation of the ICD and the
accompanying numerical scaling results for random computational networks defined by the concentration of Toffoli gates placed randomly at vertices of a tilted planar
square lattice. Finally, we close with Sec. VI, where we
outline future applications of our ICD algorithm to both
computational and physics problems.
II. TENSOR NETWORK FOR VERTEX MODELS
We start by introducing the tensor network representation for a generic vertex model. In our formulation,
discrete degrees of freedom reside on the edges of a regular lattice and they are coupled locally to their neighboring degrees of freedom. Couplings between degrees of
freedom are denoted by vertices. The couplings at each
vertex n = 1, . . . , Nsites , where Nsites denotes the total
number of vertices, are encoded into a tensor T [n] whose
rank will depend on the connectivity of the lattice. Fixing the state at all edges incident to a vertex collapses
the corresponding tensor to a scalar. For concreteness, let
us consider the square lattice as an example, as shown
in Fig. 1; generalizations to other types of lattices are
straightforward. Each tensor T [n] is therefore a rank-4
tensor T [n]ijkl , where i, j, k, l denote bond indices.
Figure 1. (Color online) Vertex model on a square lattice. A local tensor T[n]_{ijkl} is defined on each lattice site.
The tensor representation is quite general. If, for example, one associates a Boltzmann weight with each combination of bond index values, one can encode statistical
mechanics problems into the tensor network [17, 18, 22–
25]. Alternatively, by assigning boolean 1s to “compatible” combinations of bond index values and boolean 0s
to “incompatible” ones, such that the tensor represents a
vertex constraint or a truth table, one can either study
statistical mechanical vertex models at zero temperature,
or implement computational circuits with the tensor network (Sec. IV). Finally, one could even embed the weights
of a discretized path integral for a 1+1D quantum problem in a two-dimensional network. For finite systems
with boundaries, the boundary tensors will have a different rank from the bulk tensors.
We define the tensor trace of the network as
Z = tTr ∏_n T[n]_{ijkl},    (1)
where n runs over all lattice sites and tTr denotes full contractions of all bond indices. This trace may correspond
to the partition function for the 2D classical system, or
the number of possible solutions of a computation, or the
imaginary-time path integral for a 1D quantum system,
etc. In general, a brute-force evaluation of the full tensor
trace multiplies the dimensions of the tensors, thereby
requiring a number of operations exponential in system
sizes. It is therefore expedient for any strategy of evaluating the trace to keep the dimensions of tensors under
control at intermediate steps, so that the tensor trace
can be ultimately taken. Ideally, one would like a protocol that uses all available information — such as boundary conditions, compatibility constraints, energy costs or
Boltzmann weights, depending on the particular problem at hand — to compress the tensor network as much
as possible, while maintaining all the essential information therein. In Sec. III, we propose an efficient iterative
scheme that achieves this goal. In particular, as we detail
in Sec. IV, our algorithm provides a simple way to deal
with finite systems without translational invariance, and
subject to various types of boundary conditions.
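As a concrete illustration of Eq. (1), the sketch below evaluates the tensor trace of a tiny 2 × 2 periodic lattice by brute force with NumPy. It is only an illustration: the index order (up, right, down, left), the bond dimension and the random tensor entries are assumptions made for this example, not part of the method above.

    # Minimal sketch: brute-force tensor trace of a 2x2 periodic square lattice.
    # Assumed index order per tensor: (up, right, down, left); entries are random.
    import numpy as np

    D = 2                                              # assumed bond dimension
    rng = np.random.default_rng(0)
    T = [rng.random((D, D, D, D)) for _ in range(4)]   # one tensor per site

    # Every bond label appears exactly twice, once on each of the two tensors
    # sharing that bond, so contracting them all yields the scalar Z of Eq. (1).
    Z = np.einsum('abcd,edfb,cgah,fheg->', T[0], T[1], T[2], T[3])
    print(Z)

For larger lattices this brute-force contraction is exponentially expensive, which is precisely the motivation for the compression-decimation strategy of the next section.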
III. COMPRESSION-DECIMATION ALGORITHM
In this section, we describe the compression-decimation algorithm that facilitates the exact contraction of tensor networks. The algorithm consists of two
steps. First, we perform sweeps on the lattice via a singular value decomposition (SVD) of pairs of tensors in
order to eliminate short-range entanglement and propagate information from the boundary to the bulk, hence
removing the redundancies in the bond dimensions. Due
to its nature, we call this step compression. Next, we
contract pairs of rows and columns of the lattice such
that the system size is reduced. This step is referred to
as decimation. The two steps are then repeated until the
size and bond dimensions of the tensor network become
small enough to allow an exact full contraction of the
network.
Locally, the sweeps remove redundancies due to either
short-range entanglement or incompatibility in the local
tensors, and compress the information into tensors with
smaller bond dimensions. Globally, the sweeps propagate information about the boundary conditions to the
bulk, thus imposing global constraints on the local bulk
tensors. Moreover, since the sweeping is performed back
and forth across the entire lattice, it does not differentiate between whether or not translational invariance is
present. Therefore, our scheme may be thought of as
a higher dimensional analog of the finite-system DMRG
algorithm that applies to generic vertex models on finite
lattices.
A. Compression
In this step, we visit sequentially each bond in the lattice and contract the corresponding indices of the two
tensors sharing this bond. We then perform an SVD on
the contracted bond and truncate the singular value spectrum keeping only those greater than a certain threshold δ. After that, the tensors are reconstructed with a
smaller bond dimension. We define each forward plus
backward traversal of all the bonds in the network as
one sweep. The specific choice of the threshold δ depends on the desired precision, as well as the problem
we are dealing with. For example, in formulating TNRG
algorithms, δ can be chosen to be some small but finite
number. On the other hand, for computational problems
such as counting, δ is chosen to be zero within machine
precision.
Let us take two tensors with the shared bond labeled by i, T[1]_{a1 a2 a3 i} and T[2]_{b1 i b2 b3}, as shown in Fig. 2a, where we denote the dimension of bond i as d_i. We would like to reduce d_i via an SVD. In principle, this can be achieved by directly contracting T[1] and T[2] along dimension i into a matrix M_{A,B} = T[1]_{A,i} T[2]_{i,B}, where we have grouped the other three indices of each tensor into superindices A ≡ (a1 a2 a3) and B ≡ (b1 b2 b3), and then performing an SVD. However, to avoid decomposing the matrix M_{A,B} with potentially large bond dimensions, we first do an SVD on each individual tensor (Fig. 2b):

T[1]_{A,i} = U[1]_{A,r} Λ[1]_r V[1]^T_{r,i},    (2a)
T[2]_{i,B} = U[2]_{i,r′} Λ[2]_{r′} V[2]^T_{r′,B}.    (2b)

Notice that the contraction of T[1] and T[2] can then be written as

T[1] T[2] = U[1]_{A,r} Λ[1]_r V[1]^T_{r,i} U[2]_{i,r′} Λ[2]_{r′} V[2]^T_{r′,B}.    (2c)

This implies that we can instead perform an SVD on the part shown within brackets in Eq. (2c): M̃_{r,r′} = Λ[1]_r V[1]^T_{r,i} U[2]_{i,r′} Λ[2]_{r′}, which has much smaller dimensions since d_r ≤ min(d_A, d_i) and d_{r′} ≤ min(d_B, d_i). Now we perform an SVD on the matrix M̃_{r,r′} to obtain (Fig. 2c)

M̃_{r,r′} = U_{r,s} Λ_s V^T_{s,r′}.    (2d)

(At each SVD step described above, we discard singular values that are smaller than δ.) Therefore, after the above steps, the bond dimension d_s ≤ min(d_r, d_{r′}) ≤ min(d_i, d_A, d_B). Finally, we construct new tensors as

T̃[1]_{a1 a2 a3 s} ≡ Ũ_{(a1 a2 a3),s} (Λ_s)^{1/2},    (2e)
T̃[2]_{s b1 b2 b3} ≡ (Λ_s)^{1/2} Ṽ^T_{s,(b1 b2 b3)},    (2f)
where the dimension of the shared bond is reduced
(Fig. 2d,e). Starting from one boundary, we visit sequentially each bond i ∈ 1, . . . , Nbonds , where Nbonds is
the total number of bonds in the lattice, and perform the
steps outlined above, until we reach the opposite boundary. Then we repeat the procedure in the opposite direction, until we reach the original boundary. The sweeping
can be repeated Nsweeps times, or until convergence of all
bond dimensions.
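A minimal sketch of this bond-compression step, Eqs. (2a)-(2f), is given below. It is not the authors' implementation; the index layouts T1[a1,a2,a3,i] and T2[b1,i,b2,b3] and the numerical tolerance used to emulate the exact-counting choice δ = 0 are assumptions made for illustration.

    import numpy as np

    def compress_bond(T1, T2, delta=0.0):
        # Reduce the bond shared by T1[a1,a2,a3,i] and T2[b1,i,b2,b3], Eqs. (2a)-(2f).
        dA = T1.shape[:3]
        dB = (T2.shape[0],) + T2.shape[2:]
        # (2a), (2b): SVD of each tensor, with the shared bond i as one matrix index
        U1, s1, V1h = np.linalg.svd(T1.reshape(-1, T1.shape[3]), full_matrices=False)
        U2, s2, V2h = np.linalg.svd(np.moveaxis(T2, 1, 0).reshape(T2.shape[1], -1),
                                    full_matrices=False)
        # (2c), (2d): SVD of the small core matrix M~ = Lambda1 V1^T U2 Lambda2
        Mtil = (s1[:, None] * V1h) @ (U2 * s2[None, :])
        U, s, Vh = np.linalg.svd(Mtil, full_matrices=False)
        keep = s > max(delta, 1e-14 * s.max())          # delta = 0: keep nonzero values only
        U, s, Vh = U[:, keep], s[keep], Vh[keep, :]
        # (2e), (2f): rebuild the two tensors with the reduced shared bond
        T1new = (U1 @ (U * np.sqrt(s))).reshape(dA + (s.size,))
        T2new = np.moveaxis((np.sqrt(s)[:, None] * (Vh @ V2h)).reshape((s.size,) + dB), 0, 1)
        return T1new, T2new

    T1, T2 = np.random.rand(3, 2, 3, 5), np.random.rand(2, 5, 3, 3)
    T1n, T2n = compress_bond(T1, T2)
    assert np.allclose(np.tensordot(T1n, T2n, axes=([3], [1])),
                       np.tensordot(T1, T2, axes=([3], [1])))

With δ = 0 the contraction of the pair is preserved to machine precision while the shared bond dimension never exceeds min(d_i, d_A, d_B).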
Figure 3. (Color online) (a) A column contraction involving pairs of tensors along the x direction. (b) A row contraction involving pairs of tensors along the y direction. The new tensors resulting from the contractions are denoted by pink dots, and the new bonds are denoted by orange lines.
Figure 2. (Color online) The contraction-decomposition step in the sweeping. (a) Two tensors T[1] and T[2] sharing a bond i. (b) Perform SVDs on the individual tensors respectively. (c) Perform an SVD on the shaded part. (d) Split the resultant matrices into two pieces. (e) Construct new tensors T̃[1] and T̃[2].
B. Decimation
The second step of the algorithm is to contract pairs
of rows and columns of the tensor network, so as to yield
a lattice with a smaller number of sites [22, 24]. As we
show in Fig. 3, this step consists of a column contraction
(Fig. 3a), followed by a row contraction (Fig. 3b). In the column contraction, we contract pairs of tensors along the x direction, and obtain a new tensor (see also Fig. 2a):
T_{(a1 b1) a2 (a3 b2) b3} = Σ_i T[1]_{a1 a2 a3 i} T[2]_{b1 i b2 b3}.    (3)
We then perform a row contraction similarly, during
which pairs of tensors are contracted along the y direction.
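A minimal sketch of the column contraction of Eq. (3) follows; the grouping of the outer index pairs into single fat indices via reshape is an implementation assumption made for this illustration.

    import numpy as np

    def contract_column_pair(T1, T2):
        # out[a1, b1, a2, a3, b2, b3] = sum_i T1[a1, a2, a3, i] * T2[b1, i, b2, b3]
        out = np.einsum('acdi,bief->abcdef', T1, T2)
        a1, a2, a3, _ = T1.shape
        b1, _, b2, b3 = T2.shape
        # group (a1, b1) and (a3, b2) into the fat indices of Eq. (3)
        return out.reshape(a1 * b1, a2, a3 * b2, b3)

    rng = np.random.default_rng(1)
    T1, T2 = rng.random((2, 2, 2, 3)), rng.random((2, 3, 2, 2))
    print(contract_column_pair(T1, T2).shape)          # -> (4, 2, 4, 2)

The growth of the perpendicular bond dimensions visible in the reshape is exactly what the subsequent sweeps are meant to compress.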
During this step, the dimensions of the bonds perpendicular to the current direction of contractions are multiplied and hence will inevitably grow. Therefore, after all
columns/rows are contracted, we sweep back and forth
again to reduce the bond dimensions. A simplified version of the compression-decimation scheme is presented
as pseudocode in Algorithm 1.
Algorithm 1 Iterative Compression-Decimation
Input: tensor network on a square lattice {T[n] | n ∈ 1, . . . , Nsites}; Nsweeps ≥ 1; δ ≥ 0 (SVD truncation parameter).
Output: Z, as defined in Eq. (1)
1:  repeat
2:      for i = 1, . . . , Nsweeps do (compression)
3:          for b = 1, . . . , Nbonds do (forward sweep)
4:              Contract, SVD, and update tensors as in Eq. (2)
5:          end for
6:          for b = Nbonds, . . . , 1 do (backward sweep)
7:              Carry out backward sweep similarly
8:          end for
9:      end for
10:     Perform column contractions by Eq. (3) (decimation)
11:     Perform row contractions similarly (decimation)
12: until network is decimated to a single site
13: Carry out tensor trace Eq. (1)
IV. THE GENERAL TOFFOLI-BASED VERTEX MODEL
In this section, we provide an example of a hard computational problem where our scheme can be applied to
find solutions in cases that are otherwise intractable. The
models we study here follow from the vertex model representation of reversible classical computations introduced
in Ref. [8]. We remark that this general vertex model
can address generic satisfiability problems, a statement
that follows from a series of results already documented
in the literature:
1. The circuit satisfiability (CSAT) problem is NP-complete [33, 34];
2. The CSAT problem can be formulated in terms of
reversible circuits [35];
3. Any reversible circuit can be constructed using only
TOFFOLI gates [35];
A few remarks are in order. First, the lattice structure
lends us more flexibility with the coarse-graining step
since one does not have to contract every pair of rows
and columns. For example, in cases of systems without
translational invariance, representing either disordered
statistical mechanics models or models encoding computational circuits, the bond dimensions are in general
not distributed uniformly across the entire lattice. One
could then perform the contractions selectively on rows
and columns containing mostly tensors with small bond
dimensions while leaving the rest for the next coarsegraining step. In practice, one could set an appropriate threshold in the algorithm depending on the specific
problems. Second, the procedure described here is closely
related to the TNRG algorithms where the key is to optimally remove short-range entanglement at each RG step.
For example, Ref. [25] proposes a loop optimization approach for TNRG. An important step in that method is to
filter out short-range entanglement within a plaquette via
a QR decomposition, which we believe should be equivalent to our SVD-based sweeping. Moreover, as shall
be shown in Sec. V C, the sweeps take into account the
local environment around each tensor. The loop structure of short-range entanglement is eliminated (at least
partially) when we visit each bond around the loop and
sweep across the whole system. Whether or not more
elaborate schemes [20–29, 32] for taking into account the
tensor environment can improve the performance of the
sweeps in the ICD scheme will not concern us in this
work: we will see that even the simple sweep protocol
described above is sufficient for the solution of complex
generic computational problems. Third, our procedure
is more apt for systems without translational invariance,
e.g., spin glasses. Finally, the computational cost scales
as O(χ5 ) for the SVD steps, and O(χ7 ) for the tensor
contraction steps, where χ is the maximum bond dimension of the tensors. Hence the computational cost of our
compression-decimation algorithm scales as O(χ7 ).
4. Any reversible circuit constructed out of TOFFOLI
gates can be mapped onto our vertex model representation, with the addition of an appropriate number of identity and swap gates [8].
Hence, our vertex model can encode other satisfiability problems such as 3-SAT, which can be mapped into
CSAT. (Indeed, it is possible to program 3-SAT with n
variables and m clauses into a vertex model using a lattice of size n × 2m [36].)
The vertex model is defined on a square lattice of finite
size with periodic boundary conditions in the transverse
direction, thus placing the model on a cylinder. Depending on the specific computation, different types of boundary conditions are imposed in the longitudinal direction.
In addition, this model does not display translational invariance since different gates of the computational circuit
are implemented by different vertices. This model can encode general computational problems, including any of
the hard instances, and serves as an excellent candidate
to benchmark the performance of our scheme.
We start by giving a self-contained review of the general vertex model encoding reversible classical computations introduced in Ref. [8] and construct its tensor network representation. This is based on the fact that any
Boolean function can be implemented using a reversible
circuit constructed out of TOFFOLI gates, which are reversible three-bit logic gates taking the inputs (a, b, c)
to (a, b, ab ⊕ c). To facilitate the coupling of far-away
bits while maintaining the locality of TOFFOLI gates,
we use two-bit SWAP gates to swap neighboring bits,
(a, b) → (b, a), until pairs of distant bits are adjacent to
one another. Bits that do not need to be moved are simply copied forward using two-bit Identity (ID) gates. To
obtain a plane-covering tiling and thus a square-lattice
representation of the circuit, we combine the SWAP
and ID gates into the three-bit gates: ID-ID, ID-SWAP,
SWAP-ID, SWAP-SWAP, and represent each of them as
well as the TOFFOLI gate as a vertex with three inputs
and three outputs. The five types of vertices are shown
in Fig. 4, with the input and output bits explicitly drawn
on the links.
Figure 4. (Color online) Five types of vertices used for the vertex model representation of reversible classical computations. The input and output bits are denoted by blue squares on the links associated with a given vertex.
Alternatively, one can think of bits as spin 1/2 particles
located on the bonds between vertices, whereas each vertex imposes local constraints between “input” and “output” spins, such that only 2^3 = 8 out of the 2^6 = 64
total configurations are allowed. For all five types of vertices, one can write local one- and two-spin interaction
terms, such that the allowed configurations are given by
the ground-state manifold of the Hamiltonian comprised
of all these terms [8]. The allowed configurations are then
separated from the excited states by a gap set by the energy scale of the couplings. In the large-couplings limit,
interactions can be equivalently thought of as constraints
and one therefore needs only to consider the subspace
where local vertex constraints are always satisfied.
Using the five types of vertices introduced above, one
can map an arbitrary classical computational circuit onto
a vertex model on a tilted square lattice, as shown in
Fig. 5.
Bits at the left and right boundaries store the input
and output respectively, and the horizontal direction corresponds to the computational “time” direction. The
boundary condition along the transverse direction is chosen to be periodic. Spin degrees of freedom representing
input and output bits associated with each vertex are
placed on the links. This model can be shown to display
no thermodynamic phase transition irrespective of the
circuit realizations via a straightforward transfer matrix
calculation [8].
When either only the input or only the output boundary bits are fully determined a priori, the physical system
functions as a regular circuit: the solution can be ob-
Figure 5. (Color online) Vertex model on a tilted square lattice encoding a generic classical computation. The left and
right boundaries stores the input and output states, and periodic boundary condition is taken along the transverse direction.
tained by passing the boundary state through the next
column of gates, obtaining the output, then passing this
output on to the next column of gates, repeating the procedure until the other boundary is reached. This mode of
solution, which we shall call direct computation, is trivial
and its computational cost scales linearly with the area
of the system.
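As a schematic illustration of direct computation (not the authors' code; the column and gate list format and the helper names below are hypothetical), one column of reversible gates is applied at a time to a fully specified bit string:

    # Sketch: push a fully fixed boundary bit string through the circuit,
    # one column of reversible gates at a time (hypothetical layout format).
    def toffoli(a, b, c):
        return a, b, c ^ (a & b)

    def run_columns(bits, columns):
        for column in columns:                     # the computational 'time' direction
            for gate, pos in column:
                if gate == 'TOFFOLI':
                    i, j, k = pos
                    bits[i], bits[j], bits[k] = toffoli(bits[i], bits[j], bits[k])
                elif gate == 'SWAP':
                    i, j = pos
                    bits[i], bits[j] = bits[j], bits[i]
                # ID gates simply copy bits forward and need no action
        return bits

    print(run_columns([1, 1, 0, 1], [[('TOFFOLI', (0, 1, 2))], [('SWAP', (2, 3))]]))
    # -> [1, 1, 1, 1]

Each column costs a constant amount of work per bit, which is the linear-in-area scaling mentioned above.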
On the other hand, by fixing only a subset of the left
and right boundaries, a class of nontrivial problems can
be encoded in the vertex model. For example, one can
cast the integer factorization problem on a reversible multiplication circuit precisely in this way [8, 37]. In these
cases, the boundary state cannot be straightforwardly
propagated from the boundaries throughout the entire
bulk, as the input or output of one or more gates is at
most only partly fixed, and therefore direct computation
unavoidably halts. Without any protocol of communication between the two partially fixed boundaries, one is
left with trial-and-error enumeration of all boundary configurations, whose number grows exponentially with the
number of unfixed bits at the boundaries. Even though
it is sometimes possible to exploit special (nonuniversal)
features of specific subsets of problems in order to devise efficient strategies of solution (e.g., factorization with
sieve algorithms), general schemes that perform favorably in solving the typical instances in the encompassing
class are important, both for highlighting the underlying
universal patterns and as launchpads towards customized
solvers for particular subsets of problems. The algorithm
introduced in this work is of the latter general kind.
A. Tensor network representation
We shall now construct a tensor network representation of the vertex model, such that the full contraction
of the tensors yields the total number of solutions satisfying the boundary conditions. In the statistical mechanics language, this is the partition function of the vertex
model at zero temperature, which essentially counts the
ground state degeneracy.
Bulk tensors. We define a rank-4 tensor associated
with each vertex in the bulk, Tijkl , as shown in Fig. 6a.
The tensor components are initialized to satisfy the truth
table of the vertex constraint, meaning that Tijkl = 1 if
(ij) → (kl) satisfies the vertex constraint, and Tijkl = 0
otherwise. Here the indices should be understood as integers labeling the spin (bit) states on each bond. Notice
that the indices i, l correspond to double bonds on the
lattice while j, k correspond to single bonds. Therefore,
the original bond dimensions of the indices (i, j, k, l) are
(4, 2, 2, 4).
For concreteness, let us give an example of encoding
the truth table of the TOFFOLI gate into the tensor
Tijkl . First, recall that the gate function of TOFFOLI
is (a, b, c) → (a, b, d = c ⊕ ab). Comparing Fig. 6a with
Fig. 4, we identify on the input side, i ≡ (ab) = 2^1 b + 2^0 a, j = c; on the output side, k = a, l ≡ (bd) = 2^1 d + 2^0 b.
In Table I, we explicitly list the truth table of the TOFFOLI gate and its corresponding non-zero tensor components. All unspecified tensor components are set to
zero. Tensors encoding the other four types of vertex
constraints can be obtained in a similar fashion.
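The encoding just described can be written down directly. The sketch below (an illustration, not the paper's code) fills the rank-4 TOFFOLI tensor using the stated convention i = 2b + a, j = c, k = a, l = 2d + b:

    import numpy as np

    # Build the TOFFOLI vertex tensor T[i, j, k, l] of dimensions (4, 2, 2, 4):
    # T = 1 exactly on the 8 rows of the gate's truth table, 0 elsewhere.
    def toffoli_tensor():
        T = np.zeros((4, 2, 2, 4))
        for a in (0, 1):
            for b in (0, 1):
                for c in (0, 1):
                    d = c ^ (a & b)              # output bit d = c XOR ab
                    i, j = 2 * b + a, c          # input-side indices
                    k, l = a, 2 * d + b          # output-side indices
                    T[i, j, k, l] = 1.0
        return T

    T = toffoli_tensor()
    assert T.sum() == 8 and T[0, 1, 0, 2] == 1   # e.g. the component T_0102 of Table I

Tensors for the ID/SWAP combinations can be filled in the same way from their truth tables.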
Figure 6. (Color online) Definition of (a) bulk and (b) boundary tensors.

input (a b c)    output (a b d)    tensor component T_{ijkl} ≡ T_{(ab) c a (bd)}
0 0 0            0 0 0             T_{0000} = 1
0 0 1            0 0 1             T_{0102} = 1
0 1 0            0 1 0             T_{2001} = 1
0 1 1            0 1 1             T_{2103} = 1
1 0 0            1 0 0             T_{1010} = 1
1 0 1            1 0 1             T_{1112} = 1
1 1 0            1 1 1             T_{3013} = 1
1 1 1            1 1 0             T_{3111} = 1
Table I. Truth table and the corresponding tensor components for the TOFFOLI gate. On the input side, i ≡ (ab) = 2^1 b + 2^0 a, j = c; on the output side, k = a, l ≡ (bd) = 2^1 d + 2^0 b. All unspecified components are zero.
Boundary tensors. The vertices at the boundary have only two bonds. Hence we define a rank-2 tensor T_{ij} at the boundary, where the indices i, j have the same meaning as the bulk tensors (Fig. 6b). Here we draw a distinction between boundary tensors whose vertex states are fixed and those that are not. For fixed boundary vertices, T_{ij} = 1 only for one component corresponding to the fixed state, whereas for unfixed ones, T_{ij} = 1 for all i, j.
Under the above definitions of local tensors, local compatibility between spins shared by two vertices is automatically guaranteed when contracting the corresponding two tensors. Moreover, the unfixed boundary tensors already encode the information of all possible vertex states in a compact way, fulfilling a form of classical virtual parallelization [15]. Therefore, the full contraction of the tensor network — if it can be performed — will give the total number of solutions subject to a certain boundary condition.

B. Entanglement and number of solutions
Before moving on to the concrete application of the algorithm, let us try to gain some insights into the bond dimensions of local tensors needed to encode the information of the total number of solutions from an entanglement point of view [38]. Let us denote the collection of free vertex states at the input and output boundaries by {q_in} and {q_out}. We construct a weight W({q_in, q_out}) which equals 1 if the state {q_in, q_out} is a solution, and 0 otherwise. The partition function is then given by

Z = Σ_{​{q_in, q_out}} W({q_in, q_out}),    (4)

which equals the total number of solutions. Now we construct a quantum state as follows:

|ψ⟩ = Σ_{​{q_in, q_out}} W({q_in, q_out}) |{q_in, q_out}⟩
    = Σ_{​{q_in, q_out}} tTr (T[1]_{q_{in,1}} T[2]_{q_{in,2}} · · · T[i] T[i+1] · · · T[N]_{q_{out,L∂}}) |{q_in}, {q_out}⟩,    (5)

where N is the total number of vertices, L∂ is the number of unfixed vertices on each boundary, and tTr denotes tracing over all internal indices of the tensors.
Let us imagine taking a cut perpendicular to the periodic direction and dividing the system into two subsystems. The entanglement between the left and right subsystems is determined by the singular value spectrum of the matrix W({q_in}, {q_out}) reshaped from the weight
W ({qin , qout }). W is a matrix whose entries are either
1 or 0, and there can be at most one entry in each row
and column that equals 1 due to the reversible nature of
the circuit. Thus the rank of the matrix W is Z, and
the zeroth-order Rényi entropy S (0) = lnZ. The entanglement entropy of the quantum state (5) is hence upper
bounded by S (1) ≤ S (0) = lnZ. Therefore, at least when
there is only a small (nonextensive) number of solutions,
the amount of entanglement is low and the information
can be encoded in tensors with small bond dimensions.
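A quick numerical check of this rank argument (using an arbitrarily chosen partial permutation matrix as a stand-in for W, not data from the paper):

    import numpy as np

    # A 0/1 matrix with at most one 1 per row and per column, as enforced by
    # reversibility: its nonzero singular values are all 1, so rank(W) = Z and
    # the zeroth Renyi entropy of the cut is S0 = ln Z.
    W = np.zeros((8, 8))
    for row, col in [(0, 3), (2, 5), (7, 1)]:    # Z = 3 hypothetical solutions
        W[row, col] = 1.0

    s = np.linalg.svd(W, compute_uv=False)
    Z = int(round(float(np.sum(s > 0.5))))
    print(Z, np.log(Z))                          # -> 3 1.0986...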
It may seem from the above argument that in the opposite limit of a large (extensive) number of solutions, the
bond dimensions would necessarily be large. However,
this is not true in general. Consider the open boundary
condition under which every locally compatible configuration is a solution. In this extreme limit, the quantum
state (5) is an equal amplitude superposition of all configurations, i.e., a product state. Such a state can be
represented with tensors of bond dimension one in the
‘x-basis’. One thus expects that in cases of many solutions, the state should also be close to a product state
with low entanglement, and hence can be represented
with tensors of small bond dimensions. The above arguments indicate that, if there is a highly entangled regime
where the bond dimensions required to represent the solution are large, then it must necessarily be for systems
with an intermediate number of solutions. In Sec. V C, we
show numerically that, even in the intermediate regime
where the solutions of an arbitrary vertex model are more
than just a few, it is possible to obtain an efficient and
compressed tensor network representation of the allowedconfiguration manifold.
Having argued that, for the case of problems with a
small number of solutions, solutions of vertex models
with partially fixed boundaries can be encoded into tensor networks with small bond dimensions, we set out to
find this tensor-network representation. The pertinent
motivating question is: given that there is a representation that can compress the full information of all solutions with relatively small bond dimensions, how can we
find it efficiently?
V. APPLICATION OF THE ICD TO THE RANDOM VERTEX MODEL
In this section we apply ICD to N random instances of
vertex lattices of fixed concentration of TOFFOLI gates,
and fixed and equal concentrations of all four other types
of gates shown in Fig. 4, with random input states for
each instance. By evaluating the full tensor trace for
each of these N instances and for various lattice sizes,
we obtain information about the average scaling of performing the underlying classical computations by means
of the ICD method. Moreover, we study the full distribution of the maximum bond dimension χ over random
realizations and find that the typical behavior is generally different than the average, due to the presence of
heavy tails in the bond-dimension distribution. Finally,
we establish numerically that the scaling of the actual
running time τ with the maximum bond dimension is
always better than the worst-case estimate τ ∼ O(χ7 ).
A. Local moves
Since the vertex model is defined on a tilted square
lattice, we first need to turn it into a lattice as shown in
Fig. 1 in order to apply our algorithm in Sec. III. This can
be done by performing local moves on the tilted lattice,
which we explain below.
Figure 7. (Color online) Illustration of the local moves which
turn the original lattice into a square lattice rotated by 45◦ .
In (a), sites belonging to sublattice A and B are shown in
blue and green dots, respectively. From (b) to (c), four sites
belonging to a diamond are contracted into one.
The tilted square lattice Fig. 7a is bipartite, with two
sublattices A and B. Local tensor decompositions and
contractions for tensors on each sublattice can rearrange
the lattice into an “untilted” one, rotated by 45◦ with
respect to the original lattice. We start by splitting each
tensor on the original vertex lattice into two along either
horizontal or vertical direction, depending on which sublattice the corresponding site belongs to. Let us take a
bulk tensor T_{ijkl} on the original lattice. If the site belongs to sublattice A, we decompose the tensor horizontally into two rank-3 tensors, T_{ijkl} = Σ_q A_{ijq} B_{klq}; if the site belongs to sublattice B, we instead decompose the tensor vertically, T_{ijkl} = Σ_q Ã_{ikq} B̃_{jlq}, as shown in Fig. 8. Such a decomposition can be achieved via an SVD on the original tensors, T_{(ij),(kl)} = U_{(ij),q} Λ_q V^T_{q,(kl)}, to yield A_{ijq} = U_{(ij),q} (Λ_q)^{1/2} and B_{klq} = (Λ_q)^{1/2} V^T_{q,(kl)}. We visit each site and split the tensors in this way. This turns the
tensor network into the structure shown in Fig. 7b. We
then further contract four tensors in a diamond into one
and finally arrive at a new square lattice rotated by 45◦
with respect to the original one (Fig. 7c). With these
local moves, which have to be carried out only once, we
cast the problem into the form discussed in Sec. II.
Figure 8. (Color online) Local moves that decompose each
tensor on the original lattice into two along either horizontal
or vertical direction, depending on whether the site belongs
to sublattice A (a) or B (b).
However, instead of doing an SVD on the original tensor, here we can use the fact that the tensors encode the
truth tables of reversible gates and use an alternative
method. Define a new set of tensors with an auxiliary index q = 0, 1, . . . , 7 labeling the vertex state, T̃^q_{ijkl}. Now the component of this rank-(4,1) tensor is one if and only if q is the same as the input state labeled by (i, j). Then, the desired decomposition can be achieved as follows:

A_{ijq} = Σ_{kl} T̃^q_{ijkl},    B_{klq} = Σ_{ij} T̃^q_{ijkl},
Ã_{ikq} = Σ_{jl} T̃^q_{ijkl},    B̃_{jlq} = Σ_{ik} T̃^q_{ijkl}.    (6)
One can easily check that the contraction of the A and B
tensors gives back the original tensor T , and hence this
achieves the splitting shown in Fig. 8. The remaining
steps of the algorithm are carried out exactly in the same
way as before. By construction, the bonds between the
resulting bulk tensors all have dimension 8.
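A sketch of this truth-table splitting is given below. It assumes the reading T̃^q_{ijkl} = T_{ijkl} whenever q equals the label of the input pair (i, j) and zero otherwise, and that each input has a unique output (reversibility); both are consistent with the text but are stated here as assumptions.

    import numpy as np

    def split_vertex_tensor(T):
        # Split a bulk truth-table tensor T[i,j,k,l] of shape (4,2,2,4) as in Eq. (6).
        di, dj, dk, dl = T.shape
        q_of_input = np.arange(di * dj).reshape(di, dj)   # q = 0..7 labels the input (i, j)
        Ttil = np.zeros((di * dj,) + T.shape)
        for i in range(di):
            for j in range(dj):
                Ttil[q_of_input[i, j], i, j] = T[i, j]    # T~^q is T restricted to its input
        A = Ttil.sum(axis=(3, 4)).transpose(1, 2, 0)      # A[i,j,q] = sum_{k,l} T~^q_{ijkl}
        B = Ttil.sum(axis=(1, 2)).transpose(1, 2, 0)      # B[k,l,q] = sum_{i,j} T~^q_{ijkl}
        return A, B

    # Contracting A and B over the auxiliary index q reproduces the original tensor,
    # e.g. for the TOFFOLI tensor built from its truth table:
    # np.allclose(np.einsum('ijq,klq->ijkl', A, B), T) evaluates to True.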
B. Control of bond dimensions
We can now apply the compression-decimation algorithm to count the number of solutions for a given boundary condition. As we discuss in Sec. III A, a truncation
threshold δ needs to be specified in the sweeping step of
the algorithm. Since we are performing an exact counting, no approximation in the truncation of the bond dimensions is made during the coarse-graining procedure,
i.e., we choose δ = 0 within machine precision. This is a
key methodological difference of the ICD to TNRG methods, which approximate physical observables to within a
certain accuracy by enforcing a finite δ.
As mentioned above, from a statistical mechanics
point-of-view, what we are computing is the zerotemperature partition function of the vertex model,
which yields the ground state degeneracy. In the bulk,
all locally compatible configurations are equally possible
until they receive information from the boundary conditions. Therefore, the coarse-graining step effectively
brings the boundaries close to one another, and the
sweeping step propagates information from the boundary to the bulk and knocks out states encoded in local
tensors that are incompatible with the global boundary
conditions.
The reason why the growth of bond dimensions remains controlled is that longer-range compatibility constraints over increasingly larger areas are enforced upon
the coarse-grained tensors. These constraints are propagated to neighboring coarse-grained tensors upon sweeping, thus further reducing bond dimensions and compressing the tensor-network representation. For the trivial cases of either fixing all gates on one boundary or
leaving them all free (open boundary condition), we
have checked that the tensors converge to bond dimension one (scalars) after one sweep, without the need of
coarse-graining. The tensor contraction is then simply reduced to multiplications of scalars, which can be trivially
computed and indeed gives the correct counting. This
demonstrates that the sweeping is responsible for propagating information from the boundary, and that the case
of fully fixing one boundary is thus equivalent to direct
computation, as described in Sec. IV.
In cases of mixed boundary conditions, the sweeping
on the original lattice scale will generally not be sufficient
to propagate information across the whole system or establish full communications between the two boundaries.
Thus one would expect that while the bond dimensions
close to the boundary may be small, those deep in the
bulk may be large. We therefore perform the contractions selectively on rows and columns containing mostly
tensors with small bond dimensions while leaving the rest
for the next coarse-graining step, as described in Sec. III.
C. Numerical results
The computational cost of the ICD algorithm is determined by the maximum bond dimension encountered
during the coarse-graining and sweeping procedures. In
this section, we study the scaling of the maximum bond
dimension as function of the set of parameters defining an
instance of the problem: the number of vertices in each
column L, the total number of columns (circuit depth)
W , the concentration of TOFFOLI gates c, and the number of unfixed boundary vertices L∂ . For a given set of
parameters, we consider random tensor networks corresponding to typical instances of computational problems.
# sweeps
Figure 9. (Color online) The average bond dimension of the
entire lattice as a function of the number of sweeps in the
compression-decimation steps. The bumps where the average
bond dimension increases slightly correspond to the points
where we coarse-grain the lattice via column and row contractions. Inset: zoom-in plot from the 20th sweeping step.
By looking at the scaling of the bond dimensions, we gain
some understanding of how the hardness of the problems
depends on various parameters, which may serve as a
guidance for designing and analyzing computational circuits for practical problems.
Before looking into the scaling of the maximum bond
dimensions, we first show the average bond dimension
for the entire lattice as a function of the number of
sweeps in the compression-decimation steps. As seen
from Fig. 9, the average bond dimension indeed decreases
as the sweeping is performed. The bumps in the plot
correspond to the points where we coarse-grain the lattice via column and row contractions. At a given length
scale, the average bond dimension converges after a few
sweeps. As we increase the length scales, the average
bond dimension may first increase, but will eventually
drop again as we perform sweeps at the new length scale.
This demonstrates that the sweeping is able to impose
global constraints at the boundary into the bulk, hence
keeping the bond dimensions of bulk tensors under control.
We expect the maximum bond dimension to follow
the scaling function χ = G(L∂ /L, c, L, W/L). Below we
study the growth of maximum bond dimensions as a function of each system parameter numerically. First, we
consider the scaling of χ as the ratio of unfixed boundary vertices L∂ /L is varied, with the other parameters
fixed. As shown in Fig. 10a, the bond dimensions are
small for both small and large L∂ /L. This is in agreement with our discussions in Sec. IV B, where we argued
that in both regimes the states are close to product states
and there should exist a representation in which the bond
dimensions are small (the ‘z-basis’ and ‘x-basis’). For intermediate values of L∂ /L, the bond dimensions grow,
indicating the existence of a hard regime where either
there is no such a representation of small bond dimensions to fully encode the solutions, or it is very hard to
0.4
Figure 10. (Color online) Scaling of the average maximum
bond dimension χ with (a) the ratio of unfixed boundary vertices L∂ /L, and (b) TOFFOLI concentration c, versus the
scaling of the typical maximum bond dimension ehlnχi . The
remaining parameters are fixed in each plot. The data are obtained by averaging over 2000 realizations of random tensor
networks.
find such a representation via tensor optimization algorithms.
Fixing L∂ /L = 0.36, which corresponds to the hard
regime in Fig. 10a, we plot the scaling of χ as a function of the TOFFOLI concentration c. The TOFFOLI
gates impose nontrivial vertex constraints, which involve
a nonlinear relationship between the input and output
bits. In fact, in the absence of TOFFOLI gates, the vertex model can be expressed as 3L decoupled Ising chains
whose dynamics are simple [8]. In the ICD algorithm, the
maximum bond dimension indeed grows with increasing
TOFFOLI concentrations, as depicted in Fig. 10b.
Now let us look at the scaling of χ as a function of the
input size L. Again, we fix L∂ /L = 0.36 to stay in the
hard regime. Figure 11a shows that the maximum bond
dimension increases with increasing input size, even when
the aspect ratio W/L of the circuit is fixed. Because of
the limited range of L we were able to analyze, we cannot draw any conclusion regarding the functional form
of this scaling, which would determine the complexity of
our algorithm. However, we can demonstrate that our
algorithm is able to solve the problem in regimes that
are still intractable using a naive enumeration of solutions. For this purpose, we move away from the hardest regime and choose L∂ /L = 0.5. As can be seen in
11
6.1
Figure 11. (Color online) Scaling of the average maximum bond dimension ⟨χ⟩ with L (a,b) and W/L (c,d), versus the scaling of the typical maximum bond dimension e^{⟨ln χ⟩}. The data are obtained by averaging over 500 to 2000 realizations of random tensor networks. In (a), the last point is averaged over 7 realizations and the error bar is not shown. The blue dotted line is a guide to the eye, and corresponds to a quadratic fitting. In all cases, the typical values stay below the average values, due to the presence of heavy tails in the distribution of χ.
Fig. 11b, we are able to reach much larger values of L
in this regime, and the bond dimensions, although still
growing, increase at a much slower pace. In fact, we were
able to reach L = 96 with an average maximum bond dimension hχi = 78.25 (data not shown). Since half of
the input vertices are unknown, a direct trial-and-error
enumeration would take 8^48 ≈ 10^43 iterations to perform
an exact counting, which is prohibitive even with parallelization. We have thus shown that there is a subset of
nontrivial problems that can easily be solved by the ICD
method, for which (a) direct enumeration is impossible
to scale up and (b) efficient custom algorithms are not
known.
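The size of that search space is easy to verify with a quick Python check; the 8 possibilities per unknown input vertex follow from the 8^48 count above, while the 10^9 trials per second throughput is an arbitrary assumption used only for scale:

from math import log10

unknown_inputs = 48                    # half of the L = 96 input vertices are unspecified
trials = 8 ** unknown_inputs           # brute-force trial-and-error enumeration
print(f"~10^{log10(trials):.0f} trials")      # ~10^43

seconds = trials / 1e9                 # assuming 10^9 trials per second
print(f"~10^{log10(seconds):.0f} seconds")    # ~10^34 s, far beyond any feasible runtime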
Finally, we show the scaling with the aspect ratio
W/L. As we previously discussed, the key to the reduction of the bond dimensions of bulk tensors is the
global constraint imposed at the boundary. The coarse-graining step brings the boundaries close together while
the sweeping step helps propagate information. Therefore, one should expect the problem to become harder
as the circuit depth W increases for a fixed L, since it
takes more iterations of coarse-graining for the connection between the boundaries to be built, all the while
the bond dimensions of the bulk tensors barely decrease.
In Fig. 11c we show the scaling of χ as a function of
W/L in the hard regime; χ indeed grows upon increasing W/L, as expected. For computational problems of
practical interest, the vertex model representation often
has the feature of low TOFFOLI concentration but large
aspect ratio, e.g., the multiplication circuit [8, 37]. In
Fig. 11d we show data for cases with this feature by lowering the TOFFOLI concentration to c = 5% and keeping
L∂ /L = 0.5. We find that the bond dimensions grow in
a similar fashion as in Fig. 11c, although a larger range
of values of W/L now becomes accessible.
The above results demonstrate the average scaling
behavior of the ICD algorithm over random instances
of computations. It is informative to compare this to
the typical behavior, revealed by analyzing the full distribution of the maximum bond dimensions; see, e.g.,
Fig. 11(a). In Fig. 12, we present the probability distribution of the maximum bond dimension. A vertical cut at
each L corresponds to the probability distribution over all
instances for that L. For larger L we observe secondary
peaks at larger χ, which gradually take up more weight,
thus shifting both average and typical maximum bond
dimension to higher values. Moreover, despite the fact
that the highest weight is always encountered at small
bond dimensions, a finite subset of hard instances generate much larger bond dimensions, leading to heavy tails
in the distributions. Average values are sensitive to such
tails, and hence do not faithfully represent the typical
instances. In Figs. 10 and 11, we also plot the values of e^⟨ln χ⟩ as an estimate of the typical behavior, in contrast to ⟨χ⟩. Indeed, the typical values stay below the average values in all cases studied.
Figure 13. (Color online) Scatter plot of the (logarithm of)
actual running time τ in units of seconds versus the (logarithm of) maximum bond dimension χ for 4600 instances.
The calculations were performed using a Python implementation of the ICD algorithm, using NumPy / LAPACK for all
linear algebra operations, on 2.0 GHz Intel Xeon Processors
E7-4809 v3.
Figure 12. (Color online) The full distribution of the maximum bond dimension over random instances. In (a), the color
plot along the vertical direction shows the probability distribution at each L. The orange (diamond) points show the
values of e^⟨ln χ⟩, giving an estimate of the typical instances in
contrast to the average instances depicted in purple (triangle)
points. In (b), we take the slice of L = 39 in (a) and plot the
histogram of the distribution.
We point out that the presence of heavy tails in the distribution is ubiquitous in random satisfiability problems, and such instances could in principle be tackled with different strategies [39–42].
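To make the distinction between ⟨χ⟩ and e^⟨ln χ⟩ concrete, the following minimal Python sketch computes both for a synthetic heavy-tailed sample of maximum bond dimensions; the sample itself is an assumption made only for illustration, not the data of Fig. 12:

import numpy as np

rng = np.random.default_rng(0)
chi = np.concatenate([
    rng.integers(8, 16, size=1900),    # bulk of easy instances
    rng.integers(60, 120, size=100),   # rare hard instances forming the heavy tail
]).astype(float)

average = chi.mean()                   # <chi>: pulled upward by the tail
typical = np.exp(np.log(chi).mean())   # e^<ln chi>: closer to the bulk of instances
print(average, typical)                # the typical value stays below the average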
The efficiency of the ICD algorithm is controlled by the
maximum bond dimension encountered in each instance,
and in particular, the complexity of the algorithm is upper bounded by O(χ^7) as discussed in Sec. III. Nevertheless, it is still useful to see whether the actual running time saturates this bound. In Fig. 13 we show the
scatter plot for the actual running time τ versus χ for
4600 random instances. We see a clear clustering of the
data points and a positive correlation between these two
quantities. The fact that there is a spreading of τ for
each χ can be understood by taking into account the
nonuniform spatial distributions of the bond dimensions
across the system. Unlike the TNRG algorithms, where
the bond dimensions of all tensors and all tensor legs are
frequently chosen to be uniform, bond dimensions of different tensors and of different legs of the same tensor are
typically highly nonuniform in the ICD method. Therefore, running times for instances with the same maximum
χ also depend on the number of bonds with dimension
χ. The distribution of χ throughout the system is thus
an important factor. Moreover, we find that the scaling
of the running time with the maximum bond dimension
τ ∼ χ^α has a power α < 7, which shows that the actual
performance of the algorithm is generally better than the
worst-case scenario estimate.
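The exponent α can be estimated by an ordinary least-squares fit in log-log space; in the following sketch the (χ, τ) pairs are placeholders standing in for the measured per-instance data of Fig. 13:

import numpy as np

chi = np.array([8.0, 12.0, 20.0, 35.0, 60.0, 100.0])
tau = np.array([0.4, 1.1, 4.0, 21.0, 110.0, 700.0])   # running times in seconds (assumed)

alpha, log_prefactor = np.polyfit(np.log(chi), np.log(tau), 1)
print(f"fitted exponent alpha = {alpha:.2f}")          # expected to come out below 7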
VI.
SUMMARY AND OUTLOOK
We presented a method for contracting tensor networks that is well suited for the solution of statistical
physics vertex models of universal classical computation.
In these models, the tensor trace represents the number of solutions. Individual solutions can be efficiently
extracted from the tensor network when the number of
solutions is small. More generally, the method applies
to any system, classical or quantum, whose quantity of
interest is a tensor trace in an arbitrary lattice.
Our scheme consists of iteratively compressing tensors
through a contraction-decomposition operation that reduces their bond dimensions, followed by decimation,
which increases bond dimensions but reduces the network
size. By repeated application of this two-step process of compression followed by decimation, one can gradually collapse rather large tensor networks.
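The following Python fragment is a minimal runnable sketch of that compress-then-decimate loop on a one-dimensional chain of matrices; it is only a toy stand-in for the two-dimensional vertex-model networks treated by the actual ICD algorithm, and all sizes are assumptions:

import numpy as np

def compress(m, max_chi):
    # Compression: truncate a bond by keeping at most max_chi singular values.
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    k = min(max_chi, len(s))
    return (u[:, :k] * s[:k]) @ vt[:k, :]

def collapse(chain, max_chi):
    # Alternate compression sweeps and decimation until one tensor remains.
    while len(chain) > 1:
        chain = [compress(m, max_chi) for m in chain]                       # reduce bond dims
        chain = [chain[i] @ chain[i + 1] for i in range(0, len(chain), 2)]  # halve the network
    return chain[0]

# Toy usage: a chain of 8 random 4x4 matrices whose trace plays the role of the tensor trace.
rng = np.random.default_rng(1)
chain = [rng.random((4, 4)) for _ in range(8)]
print(np.trace(collapse(chain, max_chi=2)))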
In the context of computation, the method allowed us
to study relatively large classical reversible circuits represented by two dimensional vertex models. By contrast
with thermal annealing, direct computation from a fully
specified input boundary through the use of tensor networks occurs in a time linear in the depth of the circuit.
For complex problems with partially fixed input/output
boundaries tensor networks enable us to count solutions
in problems where enumeration would otherwise take of order 8^50 operations.
We close with an outlook of future directions motivated
by this work.
First, focusing on the method per se, the performance
of our ICD algorithm could still be further improved.
There are enhancements that are simply operational in
nature, such as parallelization of the sweeping step of
the algorithm, which can be accomplished by dividing
the tensors into separate non-overlapping sets.
Second, at a more fundamental level, as we point out at
the end of Sec. V C, a better understanding of the mechanism by which short-range entanglement is removed
within the ICD method would require a systematic study
of the evolution of the spatial distribution of bond dimensions. The goal would be to design more controlled
bond dimension truncation schemes that involve the effect of the environment of local tensors, as proposed in
Refs. [20, 22, 24, 25, 32]. More generally, we expect that
our method can be applied to both classical and quantum
many-body systems in two and higher dimensions.
Third, in our study of computation-motivated problems, we focused on random tensor networks corresponding to random computational circuits. However, the ICD
methodology should be used to address problems of practical interest, a research direction that is currently being
explored. The results on the scaling of the bond dimensions presented above should inform the design and
analysis of tractable computational circuits, such as circuits with W ∼ L and a moderate number of TOFFOLI
gates. Multiplication circuits based on partial sums, for
instance, are very dense in TOFFOLI gates, and hence
are not good a priori candidates for tensor network formulations of related problems, such as factoring. However, different multiplication algorithms whose associated
vertex models are less dense in TOFFOLI gates, and
other computational problems could be amenable to our
approach. Identifying classes of computational problems
of practical interest that can be tackled with tensor network methods remains an open problem at the interface
between physics and computer science.
Finally, from a statistical mechanics point-of-view, one
may speculate that the ICD algorithm could allow us to
study the glass phase of disordered spin systems for which
classical Monte Carlo dynamics breaks down due to loss
of ergodicity.
[1] M. Mézard, G. Parisi, and R. Zecchina, “Analytic and
algorithmic solution of random satisfiability problems,”
Science 297, 812–815 (2002).
[2] M. Mezard and A. Montanari, Information, physics, and
computation (Oxford University Press, 2009).
[3] F. Ricci-Tersenghi, “Being glassy without being hard to
solve,” Science 330, 1639–1640 (2010).
[4] T. Jörg, F. Krzakala, G. Semerjian, and F. Zamponi,
“First-order transitions and the performance of quantum algorithms in random optimization problems,” Phys.
Rev. Lett. 104, 207206 (2010).
[5] A. P. Young, S. Knysh, and V. N. Smelyanskiy, “Firstorder phase transition in the quantum adiabatic algorithm,” Phys. Rev. Lett. 104, 020502 (2010).
[6] I. Hen and A.P. Young, “Exponential complexity of the
quantum adiabatic algorithm for certain satisfiability
problems,” Phys. Rev. E 84, 061152 (2011).
[7] E. Farhi, D. Gosset, I. Hen, A. W. Sandvik, P. Shor, A. P.
Young, and F. Zamponi, “Performance of the quantum
adiabatic algorithm on random instances of two optimization problems on regular hypergraphs,” Phys. Rev. A 86,
052334 (2012).
[8] C. Chamon, E. R. Mucciolo, A. E. Ruckenstein, and Z.C. Yang, “Quantum vertex model for reversible classical
computing,” Nat. Commun. 8 (2017).
[9] A. Cichocki, “Era of big data processing: A new approach
via tensor networks and tensor decompositions,” arXiv
preprint arXiv:1403.2048 (2014).
[10] N. Vervliet, O. Debals, L. Sorber, and L. De Lathauwer,
“Breaking the curse of dimensionality using decompositions of incomplete tensors: Tensor-based scientific com-
puting in big data analysis,” IEEE Signal Processing
Magazine 31, 71–79 (2014).
[11] A. Cichocki, “Tensor networks for big data analytics and large-scale optimization problems,” arXiv preprint arXiv:1407.3124 (2014).
[12] J. Biamonte, V. Bergholm, and M. Lanzagorta, “Tensor network methods for invariant theory,” Journal of Physics A: Mathematical and Theoretical 46, 475301 (2013).
[13] J. D. Biamonte, J. Morton, and J. Turner, “Tensor network contractions for #sat,” Journal of Statistical Physics 160, 1389–1404 (2015).
[14] J. Biamonte and V. Bergholm, “Tensor networks in a nutshell,” arXiv preprint arXiv:1708.00006 (2017).
[15] C. Chamon and E. R. Mucciolo, “Virtual parallel computing and a search algorithm using matrix product states,” Phys. Rev. Lett. 109, 030503 (2012).
[16] F. Verstraete and J. I. Cirac, “Renormalization algorithms for quantum many-body systems in two and higher dimensions,” arXiv preprint cond-mat/0407066 (2004).
[17] M. Levin and C. P. Nave, “Tensor renormalization group approach to two-dimensional classical lattice models,” Phys. Rev. Lett. 99, 120601 (2007).
[18] Z.-C. Gu, M. Levin, and X.-G. Wen, “Tensor-entanglement renormalization group approach as a unified method for symmetry breaking and topological phase transitions,” Phys. Rev. B 78, 205116 (2008).
[19] H. C. Jiang, Z. Y. Weng, and T. Xiang, “Accurate Determination of Tensor Network State of Quantum Lattice Models in Two Dimensions,” Phys. Rev. Lett. 101, 090603 (2008).
ACKNOWLEDGMENTS
We thank Justin Reyes, Oskar Pfeffer, and Lei Zhang
for many useful discussions. The computations were carried out at Boston University’s Shared Computing Cluster. We acknowledge the Condensed Matter Theory Visitors Program at Boston University for support. Z.-C. Y.
and C. C. are supported by DOE Grant No. DE-FG0206ER46316. E. R. M. is supported by NSF Grant No.
CCF-1525943.
[20] Z.-C. Gu and X.-G. Wen, “Tensor-entanglement-filtering
renormalization approach and symmetry-protected topological order,” Phys. Rev. B 80, 155131 (2009).
[21] G. Evenbly and G. Vidal, “Algorithms for entanglement
renormalization,” Phys. Rev. B 79, 144108 (2009).
[22] Z. Y. Xie, J. Chen, M. P. Qin, J. W. Zhu, L. P. Yang, and
T. Xiang, “Coarse-graining renormalization by higherorder singular value decomposition,” Phys. Rev. B 86,
045139 (2012).
[23] G. Evenbly and G. Vidal, “Tensor network renormalization,” Phys. Rev. Lett. 115, 180405 (2015).
[24] H.-H. Zhao, Z.-Y. Xie, T. Xiang, and M. Imada, “Tensor
network algorithm by coarse-graining tensor renormalization on finite periodic lattices,” Phys. Rev. B 93, 125115
(2016).
[25] S. Yang, Z.-C. Gu, and X.-G. Wen, “Loop optimization
for tensor network renormalization,” Phys. Rev. Lett.
118, 110504 (2017).
[26] M. Bal, M. Mariën, J. Haegeman, and F. Verstraete,
“Renormalization group flows of hamiltonians using tensor networks,” Phys. Rev. Lett. 118, 250602 (2017).
[27] H. J. Liao, Z. Y. Xie, J. Chen, Z. Y. Liu, H. D. Xie, R. Z.
Huang, B. Normand, and T. Xiang, “Gapless spin-liquid
ground state in the s = 1/2 kagome antiferromagnet,”
Phys. Rev. Lett. 118, 137202 (2017).
[28] G. Evenbly, “Algorithms for tensor network renormalization,” Phys. Rev. B 95, 045117 (2017).
[29] A. M. Goldsborough and G. Evenbly, “Entanglement
renormalization for disordered systems,” arXiv preprint
arXiv:1708.07652 (2017).
[30] U. Feige, S. Goldwasser, L. Lovasz, S. Safra, and
M. Szegedy, “Approximating clique is almost npcomplete,” in Proceedings 32nd Annual Symposium of
Foundations of Computer Science (1991) pp. 2–12.
[31] U. Schollwöck, “The density-matrix renormalization
group,” Rev. Mod. Phys. 77, 259–315 (2005).
[32] G. Evenbly, “Algorithms for tensor network renormalization,” Phys. Rev. B 95, 045117 (2017).
[33] S. A. Cook, “The complexity of theorem-proving procedures,” in Proceedings of the third annual ACM symposium on Theory of computing (ACM, 1971) pp. 151–158.
[34] L. A. Levin, “Universal sequential search problems,”
Problemy Peredachi Informatsii 9, 115–116 (1973).
[35] M. A. Nielsen and I. L. Chuang, “Quantum computation
and quantum information,” (2004).
[36] In preparation.
[37] V. Vedral, A. Barenco, and A. Ekert, “Quantum networks for elementary arithmetic operations,” Phys. Rev.
A 54, 147–153 (1996).
[38] C. Chamon and E. R. Mucciolo, “Rényi entropies as a
measure of the complexity of counting problems,” Journal of Statistical Mechanics: Theory and Experiment
2013, P04008 (2013).
[39] Elizabeth Crosson, Edward Farhi, Cedric Yen-Yu Lin,
Han-Hsuan Lin, and Peter Shor, “Different strategies
for optimization using the quantum adiabatic algorithm,”
arXiv preprint arXiv:1401.7320 (2014).
[40] Damian S. Steiger, Troels F. Rønnow, and Matthias
Troyer, “Heavy tails in the distribution of time to solution for classical and quantum annealing,” Phys. Rev.
Lett. 115, 230501 (2015).
[41] Dave Wecker, Matthew B. Hastings, and Matthias
Troyer, “Training a quantum optimizer,” Phys. Rev. A
94, 022309 (2016).
[42] Zhi-Cheng Yang, Armin Rahmani, Alireza Shabani,
Hartmut Neven, and Claudio Chamon, “Optimizing
variational quantum algorithms using pontryagin’s minimum principle,” Phys. Rev. X 7, 021027 (2017).
On completeness of logical relations for monadic types ⋆
Sławomir Lasota1 ⋆⋆, David Nowak2, and Yu Zhang3 ⋆⋆⋆

1 Institute of Informatics, Warsaw University, Warszawa, Poland
2 Research Center for Information Security, National Institute of Advanced Industrial Science and Technology, Tokyo, Japan
3 Project Everest, INRIA Sophia-Antipolis, France

arXiv:cs/0612106v1 [cs.LO] 21 Dec 2006
Abstract. Software security can be ensured by specifying and verifying security properties of software using formal methods with strong theoretical bases.
In particular, programs can be modeled in the framework of lambda-calculi, and
interesting properties can be expressed formally by contextual equivalence (a.k.a.
observational equivalence). Furthermore, imperative features, which exist in most
real-life software, can be nicely expressed in the so-called computational lambda-calculus. Contextual equivalence is difficult to prove directly, but we can often use
logical relations as a tool to establish it in lambda-calculi. We have already defined logical relations for the computational lambda-calculus in previous work.
We devote this paper to the study of their completeness w.r.t. contextual equivalence in the computational lambda-calculus.
1 Introduction
Contextual equivalence. Two programs are contextually equivalent (a.k.a. observationally equivalent) if they have the same observable behavior, i.e. an outsider cannot
distinguish them. Interesting properties of programs can be expressed using the notion
of contextual equivalence. For example, to prove that a program does not leak a secret,
such as the secret key used by an ATM to communicate with the bank, it is sufficient to
prove that if we change the secret, the observable behavior will not change [18,3,19]:
whatever experiment a customer makes with the ATM, he or she cannot guess information about the secret key by observing the reaction of the ATM. Another example is
to specify functional properties by contextual equivalence. For example, if sorted is a
function which checks that a list is sorted and sort is a function which sorts a list, then,
for all list l, you want the expression sorted(sort(l)) to be contextually equivalent to the
expression true. Finally, in the context of parameterized verification, contextual equivalence allows the verification for all instantiations of the parameter to be reduced to the
⋆
Partially supported by the RNTL project Prouvé, the ACI Sécurité Informatique Rossignol,
the ACI jeunes chercheurs “Sécurité informatique, protocoles cryptographiques et détection
d’intrusions”, and the ACI Cryptologie “PSI-Robuste”.
⋆⋆
Partially supported by the Polish K BN grant No. 4 T11C 042 25 and by the European Community Research Training Network Games. This work was performed in part during the author’s
stay at LSV.
⋆⋆⋆
This work was mainly done when the author was a PhD student under an MENRT grant on
ACI Cryptologie funding, École Doctorale Sciences Pratiques (Cachan).
verification for a finite number of instantiations (See e.g. [6] where logical relations are
one of the essential ingredients).
Logical relations. While contextual equivalence is difficult to prove directly because
of the universal quantification over contexts, logical relations [15,8] are powerful tools
that allow us to deduce contextual equivalence in typed λ-calculi. With the aid of the
so-called Basic Lemma, one can easily prove that logical relations are sound w.r.t. contextual equivalence. However, completeness of logical relations is much more difficult
to achieve: usually we can only show the completeness of logical relations for types up
to first order.
On the other hand, the computational λ-calculus [10] has proved useful to define
various notions of computations on top of the λ-calculus: partial computations, exceptions, state transformers, continuations and non-determinism in particular. Moggi’s
insight is based on categorical semantics: while categorical models of the standard λ-calculus are cartesian closed categories (CCCs), the computational λ-calculus requires
CCCs with a strong monad. Logical relations for monadic types, which are particularly
introduced in Moggi’s language, can be derived by the construction defined in [2] where
soundness of logical relations is guaranteed.
However, monadic types introduce new difficulties. In particular, contextual equivalence becomes subtler due to the different semantics of different monads: equivalent
programs in one monad are not necessarily equivalent in another! This accordingly
makes completeness of logical relations more difficult to achieve in the computational
λ-calculus. In particular the usual proofs of completeness up to first order do not go
through.
Contributions. We propose in this paper a notion of contextual equivalence for the
computational λ-calculus. Logical relations for this language are defined according to
the general derivation in [2]. We then explore the completeness and we prove that for the
partial computation monad, the exception monad and the state transformer monad, logical relations are still complete up to first-order types. In the case of the non-determinism
monad, we need to restrict ourselves to a subset of first-order types. As a corollary, we
prove that strong bisimulation is complete w.r.t. contextual equivalence in a λ-calculus
with monadic non-determinism.
Not like previous work on using logical relations to study contextual equivalence
in models with computational effects [16,13,11], most of which focus on computations
with local states, our work in this paper is based on a more general framework for
describing computations, namely the computational λ-calculus. In particular, very different forms of computations like continuations and non-determinism are studied, not
just those for local states.
Plan. The rest of this paper is structured as follows: we devote Section 2 to preliminaries, by introducing basic knowledge of logical relations in a simple version of typed
λ-calculus; then from Section 3 on, we move to the computational λ-calculus and we
rest on a set-theoretical model. In particular, Section 3.4 sketches out the proof scheme
of completeness of logical relations for monadic types and shows the difficulty of getting a general proof; we then switch to case studies and we explore, in Section 4, the
completeness in the computational λ-calculus for a list of common monads: partial
computations, exceptions, state transformers, continuations and the non-determinism;
the last section consists of a discussion on related work and perspectives.
2 Logical relations for the simply typed λ-calculus
2.1 The simply typed λ-calculus λ→
Let λ→ be a simple version of typed λ-calculus:
Types:
Terms:
τ, τ ′ , ... ::= b | τ → τ ′
t, t′ , ... ::= x | c | λx · t | tt′
where b ranges over a set of base types (booleans, integers, etc.), c over a set of constants
and x over a set of variables. We write t[u/x] for the result of substituting the term u for free
occurrences of the variable x in the term t. Typing judgments are of the form Γ ⊢ t : τ
where Γ is a typing context, i.e. a finite mapping from variables to types. We say that
x : τ is in Γ whenever Γ (x) = τ . We write Γ, x : τ for the typing context which agrees
with Γ except that it maps x to τ . Typing rules are as standard. We consider the set
theoretical semantics of λ→ . The semantics of any type τ is given by a set Jτ K. Those
sets are such that Jτ → τ ′ K is the set of all functions from Jτ K to Jτ ′ K, for all types τ
and τ ′ . A Γ -environment ρ is a map such that, for every x : τ in Γ , ρ(x) is an element
of Jτ K. We write ρ[x := a] for the environment which agrees with ρ except that it maps
x to a. We write [x := a] for the environment just mapping x to a. Let t be a term such
that Γ ⊢ t : τ is derivable. The denotation of t, w.r.t. a Γ -environment ρ, is given as
usual by an element JtKρ of Jτ K. We write JtK instead of JtKρ when ρ is irrelevant, e.g.,
when t is a closed term. When given a value a ∈ Jτ K, we say that it is definable if and
only if there exists a closed term t such that ⊢ t : τ is derivable and a = JtK.
Let Obs be a subset of base types, called observation types, such as booleans,
integers, etc. A context C is a term such that x : τ ⊢ C : o is derivable, where o
is an observation type. We spell the standard notion of contextual equivalence in a
denotational setting: two elements a1 and a2 of Jτ K, are contextually equivalent (written
as a1 ≈τ a2 ), if and only if for any context C such that x : τ ⊢ C : o (o ∈ Obs) is
derivable, JCK[x := a1 ] = JCK[x := a2 ]. We say that two closed terms t1 and t2 of
the same type τ are contextually equivalent whenever Jt1 K ≈τ Jt2 K. Without making
confusion, we shall use the same notation ≈τ to denote the contextual equivalence
between terms. We also define a relation ∼τ : for every pair of values a1 , a2 ∈ Jτ K,
a1 ∼τ a2 if and only if a1 , a2 are definable and a1 ≈τ a2 .
2.2 Logical relations
Essentially, a (binary) logical relation [8] is a family (Rτ )τ type of relations, one for
each type τ , on Jτ K such that related functions map related arguments to related results. More formally, it is a family (Rτ )τ type of relations such that for every f1 , f2 ∈
Jτ → τ ′ K,
f1 Rτ →τ ′ f2
⇐⇒
∀a1 , a2 ∈ Jτ K . a1 Rτ a2 =⇒ f1 (a1 ) Rτ ′ f2 (a2 )
There is no constraint on relations at base types. In λ→ , once the relations at base types
are fixed, the above condition forces (Rτ )τ type to be uniquely determined by induction
on types. We might have other complex types, e.g., products in variations of λ→ , and
in general, relations of these complex types should be also uniquely determined by
relations of their type components. For instance, pairs are related when their elements
are pairwise related. A unary logical relation is also called a logical predicate.
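As an illustration of this definition (a sketch only, with finite base-type interpretations and functions encoded as Python dictionaries, an assumption made here for readability), the condition at an arrow type can be checked exhaustively:

def related_at_arrow(f1, f2, R_arg, R_res):
    # f1 and f2 are related at tau -> tau' iff they map R_arg-related
    # arguments to R_res-related results.
    return all((f1[a1], f2[a2]) in R_res for (a1, a2) in R_arg)

# Toy base type with two elements and the identity relation on it.
R_bool = {(True, True), (False, False)}
neg = {True: False, False: True}
ident = {True: True, False: False}
print(related_at_arrow(neg, neg, R_bool, R_bool))    # True: neg is related to itself
print(related_at_arrow(neg, ident, R_bool, R_bool))  # False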
A so-called Basic Lemma comes along with logical relations since Plotkin’s work
[15]. It states that if Γ ⊢ t : τ is derivable, ρ1 , ρ2 are two related Γ -environments, and
every constant is related to itself, then JtKρ1 Rτ JtKρ2 . Here two Γ -environments ρ1 ,
ρ2 are related by the logical relation, if and only if ρ1 (x) Rτ ρ2 (x) for every x : τ
in Γ . Basic Lemma is crucial for proving various properties using logical relations [8].
In the case of establishing contextual equivalence, it implies that, for every context C
such that x : τ ⊢ C : o is derivable (o ∈ Obs), JCK[x := a1 ] Ro JCK[x := a2 ] for
every pair of related values a1 , a2 in Jτ K. If Ro is the equality, then JCK[x := a1 ] =
JCK[x := a2 ], i.e., a1 ≈τ a2 . Briefly, for every logical relation (Rτ )τ type such that
Ro is the equality for every observation type o, logically related values are necessarily
contextually equivalent, i.e., Rτ ⊆ ≈τ for any type τ .
Completeness states the inverse: a logical relation (Rτ )τ type is complete if every
contextually equivalent values are related by this logical relation, i.e., ≈τ ⊆ Rτ for
every type τ . Completeness for logical relations is hard to achieve, even in a simple
version of λ-calculus like λ→ . Usually we are only able to prove completeness for
types up to first order (the order of types is defined inductively: ord(b) = 0 for any base
type b; ord(τ → τ ′ ) = max(ord(τ ) + 1, ord(τ ′ )) for function types). The following
proposition states the completeness of logical relations in λ→ , for types up to first order:
Proposition 1. There exists a logical relation (Rτ )τ type for λ→ , with partial equality
on observation types, such that if ⊢ t1 : τ and ⊢ t2 : τ are derivable, for any type τ up
to first order, t1 ≈τ t2 =⇒ Jt1 K Rτ Jt2 K.
Proof. Let (Rτ )τ type be the logical relation induced by Rb = ∼b at every base type b
and we show that it is complete for types up to first order.
The proof is by induction over τ . Case τ = b is obvious. Let τ = b → τ ′ . Take two
terms t1 , t2 of type b → τ ′ such that Jt1 K and Jt2 K are related by ≈b→τ ′ . Let f1 = Jt1 K
and f2 = Jt2 K. Assume that a1 , a2 ∈ JbK are related by Rb , therefore a1 ∼b a2 since
Rb = ∼b . Clearly, a1 and a2 are thus definable, say by terms u1 and u2 , respectively.
Then, for any context C such that x : τ ′ ⊢ C : o (o ∈ Obs) is derivable,
JCK[x := f1 (a1 )]
= JC[xu1 /x]K[x := f1 ]    (since a1 = Ju1 K)
= JC[xu1 /x]K[x := f2 ]    (since f1 ≈b→τ ′ f2 )
= JCK[x := f2 (a1 )]
= JC[t2 x/x]K[x := a1 ]    (since f2 = Jt2 K)
= JC[t2 x/x]K[x := a2 ]    (since a1 ≈b a2 )
= JCK[x := f2 (a2 )].
Hence f1 (a1 ) ≈τ ′ f2 (a2 ). Moreover, f1 (a1 ) and f2 (a2 ) are therefore definable by t1 u1
and t2 u2 respectively. By induction hypothesis, f1 (a1 ) Rτ ′ f2 (a2 ). Because a1 and a2
are arbitrary, we conclude that f1 Rb→τ ′ f2 .
⊓
⊔
Note that an equivalent way to state completeness of logical relations is to say that
there exists a logical relation (Rτ )τ type which is partial equality on observation types
and such that, for all first-order types τ , ∼τ ⊆ Rτ .
3 Logical relations for the computational λ-calculus
3.1 The computational λ-calculus λComp
From the section on, our discussion is based on another language — Moggi’s computational λ-calculus. Moggi defines this language so that one can express various forms
of side effects (exceptions, non-determinism, etc.) in this general framework [10]. The
computational λ-calculus, denoted by λComp , extends λ→ :
Types:
Terms:
τ, τ ′ , ... ::= b | τ → τ ′ | Tτ
t, t′ , ... ::= x | c | λx · t | tt′ | val(t) | let x ⇐ t in t′
An extra unary type constructor T is introduced in the computational λ-calculus: intuitively, a type Tτ is the type of computations of type τ . We call Tτ a monadic type in
the sequel. The two extra constructs val(t) and let x ⇐ t in t′ represent respectively
the trivial computation and the sequential computation, with the typing rules:
Γ ⊢t:τ
Γ ⊢ val(t) : Tτ
Γ ⊢ t : Tτ Γ, x : τ ⊢ t′ : Tτ ′
Γ ⊢ let x ⇐ t in t′ : Tτ ′
Note that the let construct here should not be confused with that in PCF: in λComp ,
we bind the result of the term t to the variable x, but they are not of the same type — t
must be a computation.
Moggi also builds a categorical model for the computational λ-calculus, using the
notion of monads [10]. Whereas categorical models of simply typed λ-calculi such as
λ→ are usually cartesian closed categories (CCCs), a model for λComp requires additionally a strong monad (T, η, µ, t) be defined over the CCC. Consequently, a monadic
type is interpreted using the monad T : JTτ K = T Jτ K, and each term in λComp has a
unique interpretation as a morphism in a CCC with the strong monad [10]. Semantics
of the two additional constructs can be given in full generality in a categorical setting
[10]: the denotations of the val construct and the let construct are defined by the following composites, respectively:
JΓ ⊢ val(t) : Tτ K :   JΓ K −−JΓ ⊢ t : τ K−→ Jτ K −−ηJτ K−→ T Jτ K,

JΓ ⊢ let x ⇐ t1 in t2 : Tτ ′ K :   JΓ K −−⟨idJΓ K , JΓ ⊢ t1 : Tτ K⟩−→ JΓ K × T Jτ K −−tJΓ K,Jτ K−→ T (JΓ K × Jτ K) −−T JΓ, x : τ ⊢ t2 : Tτ ′ K−→ T T Jτ ′ K −−µJτ ′ K−→ T Jτ ′ K.
In particular, the interpretation of terms in the computational λ-calculus must satisfy
the following equations:
Jlet x ⇐ val(t1 ) in t2 Kρ = Jt2 [t1 /x]Kρ,   (1)
Jlet x2 ⇐ (let x1 ⇐ t1 in t2 ) in t3 Kρ = Jlet x1 ⇐ t1 in let x2 ⇐ t2 in t3 Kρ,   (2)
Jlet x ⇐ t in val(x)Kρ = JtKρ.   (3)
We shall focus on Moggi’s monads defined over the category Set of sets and functions. Figure 1 lists the definitions of some concrete monads: partial computations, exceptions, state transformers, continuations and non-determinism. We shall write λPESCN Comp to refer to λComp where the monad is restricted to be one of these five monads.
Partial computation:
  JTτ K = Jτ K ∪ {⊥}
  Jval(t)Kρ = JtKρ
  Jlet x ⇐ t1 in t2 Kρ = Jt2 Kρ[x := Jt1 Kρ] if Jt1 Kρ ≠ ⊥, and ⊥ if Jt1 Kρ = ⊥

Exception:
  JTτ K = Jτ K ∪ E
  Jval(t)Kρ = JtKρ
  Jlet x ⇐ t1 in t2 Kρ = Jt2 Kρ[x := Jt1 Kρ] if Jt1 Kρ ∉ E, and Jt1 Kρ if Jt1 Kρ ∈ E

State transformer:
  JTτ K = (Jτ K × St)^St
  Jval(t)Kρ = λ s · (JtKρ, s)
  Jlet x ⇐ t1 in t2 Kρ = λ s · (Jt2 Kρ[x := a1 ]) s1 , where a1 = π1 ((Jt1 Kρ)s) and s1 = π2 ((Jt1 Kρ)s)

Continuation:
  JTτ K = R^(R^Jτ K)
  Jval(t)Kρ = λ k : Jτ K → R · k(JtKρ)
  Jlet x ⇐ t1 in t2 Kρ = λ k : Jτ2 K → R · (Jt1 Kρ) k′ , where k′ is the function λ v : Jτ1 K · (Jt2 Kρ[x := v]) k

Non-determinism:
  JTτ K = Pfin (Jτ K)
  Jval(t)Kρ = {JtKρ}
  Jlet x ⇐ t1 in t2 Kρ = ⋃_{a ∈ Jt1 Kρ} Jt2 Kρ[x := a]
Fig. 1. Concrete monads defined in Set
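As a reading aid, two of these monads can be transcribed with val and let written as unit/bind operations; this is an illustrative Python sketch (tagged pairs for exceptions, frozensets for finite non-determinism are assumptions), not the formal set-theoretical semantics:

# Exception monad: a computation is either ('ok', v) or ('exn', e).
def exn_val(v):
    return ('ok', v)

def exn_let(c, f):
    return f(c[1]) if c[0] == 'ok' else c          # an exception is propagated unchanged

# Non-determinism monad: a computation is a finite set of possible values.
def nd_val(v):
    return frozenset({v})

def nd_let(c, f):
    return frozenset().union(*(f(a) for a in c))   # union over all results of c

print(exn_let(exn_val(2), lambda v: exn_val(v + 1)))        # ('ok', 3)
print(nd_let(frozenset({1, 2}), lambda a: nd_val(a + 1)))   # frozenset({2, 3})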
The computational λ-calculus is strongly normalizing [1]. The reduction rules in
λComp are called βc-reduction rules in [1], which, apart from standard β-reduction in
the λ-calculus, contains especially the following two rules for computations:
let x ⇐ val(t1 ) in t2 →βc t2 [t1 /x],
(4)
let x2 ⇐ (let x1 ⇐ t1 in t2 ) in t →βc let x1 ⇐ t1 in (let x2 ⇐ t2 in t).(5)
With respect to the βc rules, every term can be reduced to a term in the βc-normal form.
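As a small illustration of rules (4) and (5), a single top-level rewrite step can be written as follows; this is a sketch with a naive term encoding, and capture-free substitution is assumed:

# Terms: ('var', x), ('val', t), ('let', x, t1, t2); substitution is naive.
def subst(t, x, u):
    if t == ('var', x):
        return u
    if t[0] == 'val':
        return ('val', subst(t[1], x, u))
    if t[0] == 'let':
        body = t[3] if t[1] == x else subst(t[3], x, u)
        return ('let', t[1], subst(t[2], x, u), body)
    return t

def step(t):
    if t[0] == 'let':
        _, x, t1, t2 = t
        if t1[0] == 'val':                              # rule (4)
            return subst(t2, x, t1[1])
        if t1[0] == 'let':                              # rule (5)
            _, y, u1, u2 = t1
            return ('let', y, u1, ('let', x, u2, t2))
    return t

t4 = ('let', 'x', ('val', ('var', 'y')), ('val', ('var', 'x')))
t5 = ('let', 'x', ('let', 'y', ('var', 'z'), ('val', ('var', 'y'))), ('val', ('var', 'x')))
print(step(t4))   # ('val', ('var', 'y'))
print(step(t5))   # ('let', 'y', ('var', 'z'), ('let', 'x', ('val', ('var', 'y')), ('val', ('var', 'x'))))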
Considering also the following η-equality rule for monadic types [1]:
let x ⇐ t in t′ [val(x)/x′ ] =η t′ [t/x′ ],
(6)
we can write every term of a monadic type in the following βc-normal η-long form
let x1 ⇐ d1 u11 · · · u1k1 in · · · let xn ⇐ dn un1 · · · unkn in val(u),
where n = 0, 1, 2, . . ., every di (1 ≤ i ≤ n) is either a constant or a variable, u and
uij (1 ≤ i ≤ n, 1 ≤ j ≤ kj ) are all βc-normal terms or βc-normal-η-long terms (of
monadic types). In fact, the rules (4-6) just identify the equations (1-3) respectively.
Lemma 1. For every term t of type Tτ in λComp , there exists a βc-normal-η-long
term t′ such that Jt′ Kρ = JtKρ, for every valid interpretation J_Kρ (i.e., interpretations
satisfying the equations (1-3)).
Proof. Because the computational λ-calculus is strongly normalizing, we consider the
βc-normal form of term t and prove it by the structural induction on t.
– If t is either a variable, a constant or an application, according to the equation (3):
JtKρ = Jlet x ⇐ t in val(x)Kρ.
In particular, if t is an application t1 t2 , then t1 must be either a variable or a constant since t is βc-normal. Therefore, the term let x ⇐ t in val(x) is in the βc-normal-η-long form.
– If t is a trivial computation val(t′ ), by induction there is a βc-normal-η-long term
t′′ such that Jt′ Kρ = Jt′′ Kρ, for every valid ρ, then Jval(t′ )Kρ = Jval(t′′ )Kρ as
well.
– If t is a sequential computation let x ⇐ t1 in t2 , since it is βc-normal, t1 should
not be any val or let term — t1 must be of the form du1 · · · un (n = 0, 1, 2, . . .)
with d either a variable or a constant. By induction, there is a βc-normal-η-long
term t′2 such that Jt′2 Kρ = Jt2 Kρ for every valid ρ; then JtKρ = Jlet x ⇐ t1 in t′2 Kρ
and the latter is in the βc-normal-η-long form.
⊓
⊔
3.2 Contextual equivalence for λComp
As argued in [3], the standard notion of contextual equivalence does not fit in the setting
of the computational λ-calculus. In order to define contextual equivalence for λComp ,
we have to consider contexts C of type To (o is an observation type), not of type o.
Indeed, contexts should be allowed to do some computations: if they were of type o,
they could only return values. In particular, a context C such that x : Tτ ⊢ C : o is
derivable, meant to observe computations of type τ , cannot observe anything, because
the typing rule for the let construct only allows us to use computations to build other
computations, never values. Taking this into account, we get the following definition:
Definition 1 (Contextual equivalence for λComp ). In λComp , two values a1 , a2 ∈ Jτ K
are contextually equivalent, written as a1 ≈τ a2 , if and only if, for all observable types
o ∈ Obs and contexts C such that x : τ ⊢ C : To is derivable, JCK[x := a1 ] =
JCK[x := a2 ]. Two closed terms t1 and t2 of type τ are contextually equivalent if and
only if Jt1 K ≈τ Jt2 K. We use the same notation
≈τ to denote the contextual equivalence for terms.
3.3 Logical relations for λComp
A uniform framework for defining logical relations relies on the categorical notion of
subscones [9], and a natural extension of logical relations able to deal with monadic
types was introduced in [2]. The construction consists in lifting the CCC structure and
the strong monad from the categorical model to the subscone. We reformulate this construction in the category Set. The subscone is the category whose objects are binary
relations (A, B, R ⊆ A × B) where A and B are sets; and a morphism between
two objects (A, B, R ⊆ A × B) and (A′ , B ′ , R′ ⊆ A′ × B ′ ) is a pair of functions
(f : A → A′ , g : B → B ′ ) preserving relations, i.e. a R b ⇒ f (a) R′ g(b).
The lifting of the CCC structure gives rise to the standard logical relations given in
Section 2.2 and the lifting of the strong monad will give rise to relations for monadic
types. We write T̃ for the lifting of the strong monad T . Given a relation R ⊆ A × B
and two computations a ∈ T A and b ∈ T B, (a, b) ∈ T̃ (R) if and only if there exists
a computation c ∈ T (R) (i.e. c computes pairs in R) such that a = T π1 (c) and b =
T π2 (c). The standard definition of logical relation for the simply typed λ-calculus is
then extended with:
(c1 , c2 ) ∈ RTτ
⇐⇒
(c1 , c2 ) ∈ T̃ (Rτ ).
(7)
This construction guarantees that Basic Lemma always holds provided that every constant is related to itself [2]. A list of instantiations of the above definition in concrete
monads is also given in [2]. Figure 2 cites the relations for those monads defined in
Figure 1.
Partial computation: c1 RTτ c2 ⇔ c1 Rτ c2 or c1 = c2 = ⊥
Exception: c1 RTτ c2 ⇔ c1 Rτ c2 or c1 = c2 ∈ E, where E is the set of exceptions
State transformer: c1 RTτ c2 ⇔ ∀s ∈ St . π1 (c1 s) Rτ π1 (c2 s) & π2 (c1 s) = π2 (c2 s), where St is the set of states
Continuation: c1 RTτ c2 ⇔ c1 (k1 ) = c2 (k2 ) for every k1 , k2 such that ∀a1 , a2 . a1 Rτ a2 =⇒ k1 (a1 ) = k2 (a2 )
Non-determinism: c1 RTτ c2 ⇔ (∀a1 ∈ c1 . ∃a2 ∈ c2 . a1 Rτ a2 ) & (∀a2 ∈ c2 . ∃a1 ∈ c1 . a1 Rτ a2 )
Fig. 2. Logical relations for concrete monads
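For instance, the non-determinism clause of Fig. 2 amounts to a two-way simulation between finite sets, which the following Python sketch checks directly (the set encoding is an assumption made for illustration):

def related_T_nondet(c1, c2, R):
    # Every value of c1 must be R-related to some value of c2, and conversely.
    fwd = all(any((a1, a2) in R for a2 in c2) for a1 in c1)
    bwd = all(any((a1, a2) in R for a1 in c1) for a2 in c2)
    return fwd and bwd

R_bool = {(True, True), (False, False)}
print(related_T_nondet({True}, {True, False}, R_bool))          # False
print(related_T_nondet({True, False}, {False, True}, R_bool))   # True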
We restrict our attention to logical relations (Rτ )τ type such that, for any observation type o ∈ Obs, RTo is a partial equality. Such relations are called observational in
the rest of the paper.
Note that we require partial identity on To, not on o. But if we assume that denotation of val(_), i.e., the unit operation η, is injective, then that RTo is a partial equality
implies that Ro is a partial equality as well. Indeed, let a1 Ro a2 , and by Basic Lemma,
Jval(x)K[x := a1 ] RTo Jval(x)K[x := a2 ], that is to say ηJoK (a1 ) = ηJoK (a2 ). By injectivity of η, a1 = a2 .
Theorem 1 (Soundness of logical relations in λComp ). If (Rτ )τ type is an observational logical relation, then Rτ ⊆ ≈τ for every type τ .
It is straightforward from the Basic Lemma.
3.4 Toward a proof on completeness of logical relations for λComp
Completeness of logical relations for λComp is much subtler than in λ→ due to the
introduction of monadic types. We were expecting to find a general proof following the
general construction defined in [2]. However, this turns out extremely difficult although
it might not be impossible with certain restrictions, on types for example. The difficulty
arises mainly from the different semantics for different forms of computations, which
actually do not ensure that equivalent programs in one monad are necessarily equivalent
in another. For instance, consider the following two programs in λComp :
let x ⇐ t1 in let y ⇐ t2 in val(x),
let y ⇐ t2 in let x ⇐ t1 in val(x),
where both t1 and t2 are closed term. We can conclude that they are equivalent in the
non-determinism monad — they return the same set of possible results of t1 , no matter
what results t2 produces, but this is not the case in, e.g., the exception monad when t1
and t2 throw different exceptions.
Being with such an obstacle, we shall switch our effort to case studies in Section 4
and we explore the completeness of logical relations for a list of common monads,
precisely, all the monads listed in Figure 1. But, let us sketch out here a general structure
for proving completeness of logical relations in λComp . In particular, our study is still
restricted to first-order types, which, in λComp , are defined by the following grammar:
τ 1 ::= b | Tτ 1 | b → τ 1 ,
where b ranges over the set of base types.
Similarly as in Proposition 1 in Section 2.2, we investigate completeness in a strong
sense: we aim at finding an observational logical relation (Rτ )τ type such that if ⊢
t1 : τ and ⊢ t2 : τ are derivable and t1 ≈τ t2 , for any type τ up to first order, then
Jt1 K Rτ Jt2 K. Or briefly, ∼τ ⊆ Rτ , where ∼τ is the relation defined in Section 2. As in
the proof of Proposition 1, the logical relation (Rτ )τ type will be induced by Rb = ∼b ,
for any base type b. Then how to prove the completeness for an arbitrary monad T ?
Note that we should also check that the logical relation (Rτ )τ type , induced by
Rb = ∼b , is observational, i.e., a partial equality on To, for any observable type o.
Consider any pair (a, b) ∈ RTo = T̃ (Ro ). By definition of the lifted monad T̃ , there
exists a computation c ∈ T Ro such that a = T π1 (c) and b = T π2 (c). But Ro = ∼o ⊆
idJoK , hence the two projections π1 , π2 : Ro → JoK are the same function, π1 = π2 , and
consequently a = T π1 (c) = T π2 (c) = b. This proves that RTo is a partial equality.
As usual, the proof of completeness would go by induction over τ , to show ∼τ ⊆
Rτ for each first-order type τ . Cases τ = b and τ = b → τ ′ go identically as in λ→ .
The only difficult case is τ = Tτ ′ , i.e., the induction step:
∼τ ⊆ Rτ =⇒ ∼Tτ ⊆ RTτ   (8)
We did not find any general way to show (8) for an arbitrary monad. Instead, in the next
section we prove it by cases, for all the monads in Figure 1 except the non-determinism
monad. The non-determinism monad is an exceptional case where we do not have completeness for all first-order types but a subset of them. This will be studied separately in
Section 4.3.
At the heart of the difficulty of showing (8), we find an issue of definability at
monadic types in the set-theoretical model. We write def τ for the subset of definable
elements in Jτ K, and we eventually show that the relation between def Tτ and def τ can
be shortly spelled-out:
def Tτ ⊆ T def τ   (9)
for all the monads we consider in this paper. This is a crucial argument for proving
completeness of logical relations for monadic types, but to show (9), we need different
proofs for different monads. This is detailed in Section 4.1.
4 Completeness of logical relations for monadic types
4.1 Definability in the set-theoretical model of λPESCN Comp
As we have seen in λ→ , definability is involved largely in the proof of completeness of
logical relations (for first-order types). This is also the case in λComp and it apparently
needs more concern due to the introduction of monadic types.
Although we did not find a general proof for (9), it does hold for all the concrete monads in λPESCN Comp . To state it formally, let us first define a predicate Pτ on elements
of Jτ K, by induction on types:
– Pb = def b , for every base type b;
– PTτ = T (def τ ∩ Pτ );
– Pτ →τ ′ = {f ∈ Jτ → τ ′ K | ∀a ∈ def τ , f (a) ∈ Pτ ′ }.
We say that a constant c (of type τ ) is logical if and only if τ is a base type or JcK ∈ Pτ .
We then require that λPESCN Comp contains only logical constants. Note that this restriction is valid because the predicates PTτ and Pτ →τ ′ depend only on definability at type τ . Some typical logical constants for monads in λPESCN Comp are as follows:
– Partial computation: a constant Ωτ of type Tτ , for every τ . Ωτ denotes the nontermination, so JΩτ K = ⊥.
– Exception: a constant raiseeτ of type Tτ for every type τ and every exception
e ∈ E. raiseeτ does nothing but raises the exception e, so Jraiseeτ K = e.
– State transformer: a constant updates of type Tunit for every state s ∈ St, where
unit is the base type which contains only a dummy value ∗. updates simply changes
the current state to s, so for any s′ ∈ St, Jupdates K(s′ ) = (∗, s).
– Continuation: a constant callkτ of type τ → T bool for every τ and every continuation k ∈ RJτ K . callkτ directly calls the continuation k; it behaves somewhat like a “goto” command, so for any a ∈ Jτ K and any continuation k ′ ∈ RJboolK , Jcallkτ (a)K(k ′ ) = k(a).
– Non-determinism: a constant +τ of type τ → τ → Tτ for every non-monadic type
τ . +τ takes two arguments and returns randomly one of them — it introduces the
non-determinism, so for any a1 , a2 ∈ Jτ K, J+τ K(a1 , a2 ) = {a1 , a2 }.
We assume in the rest of this paper that the above constants are present in λPESCN Comp .1
Note that Pτ being a predicate on elements of Jτ K is equivalent to saying that Pτ can be seen as a subset of Jτ K, but in the case of monadic types, PTτ (i.e., T (def τ ∩ Pτ )) is not necessarily a subset of JTτ K (i.e., T Jτ K). Fortunately, we prove that all the monads in λPESCN Comp preserve inclusions, which ensures that the predicate P is well-defined:

Proposition 2. All the monads in λPESCN Comp preserve inclusions: A ⊆ B ⇒ T A ⊆ T B.

Proof. We check it for every monad in λPESCN Comp :
– Partial computation: according to the monad definition, if A ⊆ B, then for every
c ∈ T A:
c ∈ T A ⇐⇒ c ∈ A or c = ⊥ =⇒ c ∈ B or c = ⊥ ⇐⇒ c ∈ T B.
– Exception: for every element c ∈ T A:
c ∈ T A ⇐⇒ c ∈ A or c ∈ E =⇒ c ∈ B or c ∈ E ⇐⇒ c ∈ T B.
– State transformer: for every a ∈ T A:
c ∈ T A ⇐⇒ ∀s ∈ St . π1 (cs) ∈ A =⇒ ∀s ∈ St . π1 (cs) ∈ B ⇐⇒ c ∈ T B.
A
– Continuation: this is a special case because apparently T A = RR is not a subset of
B
T B = RR , since they contain functions that are defined on different domains, but
we shall consider here the functions coinciding on the smaller set A as equivalent.
We say that two functions f1 and f2 defined on a domain B coincide on A (A ⊆ B),
written as f1 |A = f2 |A , if and only if for every x ∈ A, f1 (x) = f2 (x). Then for
every c ∈ T A:
∀k1 , k2 ∈ RB . k1 = k2 =⇒ k1 |A = k2 |A =⇒ c(k1 ) = c(k2 ),
so c is also function from RB to R, i.e., c ∈ T B.
– Non-determinism: for every c ∈ T A:
c ∈ T A ⇐⇒ ∀a ∈ c . a ∈ A =⇒ ∀a ∈ c . a ∈ B ⇐⇒ c ∈ T B.
⊓
⊔
Introducing such a constraint on constants is mainly for proving (9). Let us figure
out the proof. Take an arbitrary element c in def Tτ . By definition, there exists a closed
term t of type Tτ such that JtK = c. While it is not evident that c ∈ T def τ , we are expecting to show that JtK ∈ T def τ , by considering the βc-normal-η-long form of t, since
1
It is easy to check that each of these constants is related to itself, except callkτ for continuations. However, we still assume the presence of callkτ for the sake of proving completeness,
while we are not able to prove the soundness with it. Note that Theorem 1 and Theorem 2 still
hold, but they are not speaking of the same language.
λComp is strongly normalizing. Take the partial computation monad as an example,
where T def τ = def τ ∪ {⊥}. Consider the βc-normal-η-long form of t:
let x1 ⇐ d1 u11 · · · u1k1 in · · · let xn ⇐ dn un1 · · · unkn in val(u).
We shall make the induction on n. It is clear that JtK ∈ T def τ when n = 0. For
the induction step, we hope that the closed term d1 u11 · · · u1k1 (of type Tτ1 ) would
produce either ⊥ (the non-termination), or a definable result (of type τ1 ) so that we can
substitute x1 in the rest of the normal term with the result of d1 u11 · · · u1k1 and make
use of induction hypothesis. The constraint on constants helps here: to ensure that after
the substitution, the resulted term is still in the proper form so that the induction would
go through.
The following lemma shows that for every computation term t, JtK ∈ T def τ if t is
in a particular form, which is a more general form of βc-normal-η-long form.
Lemma 2. In λPESCN Comp , JtK ∈ T def τ , for every closed computation term t (of type Tτ )
of the following form:
t ≡ let x1 ⇐ t1 w11 · · · w1k1 in · · · let xn ⇐ tn wn1 · · · wnkn in val(w),
where n = 0, 1, 2, . . . and ti (1 ≤ i ≤ n) is either a variable or a closed term such that
P(Jti K) holds, and w, wij (1 ≤ i ≤ n, 1 ≤ j ≤ ki ) are valid λPESCN Comp terms.
Proof. We prove it by induction on n, for every monad:
– Partial computation (T def τ = def τ ∪ {⊥}): if n = 0, it is clear that JtK ∈ T def τ .
When n > 0, because P(Jt1 K) holds (t1 must be closed), Jt1 w11 · · · w1k1 K ∈
T (def τ1 ∩ Pτ1 ). If Jt1 w11 · · · w1k1 K = ⊥, then JtK = ⊥ ∈ T def τ ; otherwise,
assume that Jt′1 K = Jt1 w11 · · · w1k1 K where t′1 is a closed term of type τ1 (assuming
that t1 w11 · · · w1k1 is of type Tτ1 ). According to the definition of P, P(Jt′1 K) holds.
Let t′ be another closed term:
t′ ≡ let x2 ⇐ t′2 w′21 · · · w′2k2 in · · · let xn ⇐ t′n w′n1 · · · w′nkn in val(w[t′1 /x1 ]),
where t′i (2 ≤ i ≤ n) is either t′1 or ti , and w′ij ≡ wij [t′1 /x1 ] (2 ≤ i ≤ n, 1 ≤ j ≤ ki ). By induction, Jt′ K ∈ T def τ holds. Furthermore,
Jt′ K = Jlet x2 ⇐ t2 w21 · · · w2k2 in · · ·
let xn ⇐ tn wn1 · · · wnkn in val(w)K[x1 := Jt′1 K]
= Jlet x1 ⇐ t1 w11 · · · w1k1 in · · · let xn ⇐ tn wn1 · · · wnkn in val(w)K
= JtK,
hence JtK ∈ T def τ .
– Exception (T def τ = def τ ∪ E): if n = 0, clearly JtK ∈ T def τ . When n > 0,
because P(Jt1 K) holds, Jt1 w11 · · · w1k1 K ∈ T (def τ1 ∩ Pτ1 ). If Jt1 w11 · · · w1k1 K ∈
E, then JtK ∈ E ⊆ T def τ ; otherwise, exactly as in the case of partial computation,
build a term t′ . Similarly, we prove that JtK = Jt′ K ∈ T def τ by induction.
– State transformer (T def τ = (def τ × St)St ): when n = 0, for every s ∈ St,
π 1 (JtKs) = JwK ∈ def τ hence JtK ∈ T def τ . When n > 0, for every s ∈ St,
assume that Jts1 K = π1 (Jt1 w11 · · · w1k1 Ks) where ts1 is a closed term of type τ1
(assuming that t1 w11 · · · w1k1 is of type Tτ1 ). According to the definition of P,
P(Jts1 K) holds. Let ts be another closed term:
ts ≡ let x2 ⇐ ts2 ws21 · · · ws2k2 in · · · let xn ⇐ tsn wsn1 · · · wsnkn in val(w[ts1 /x1 ]),
where tsi (2 ≤ i ≤ n) is either ts1 or ti , and wsij ≡ wij [ts1 /x1 ] (2 ≤ i ≤ n, 1 ≤ j ≤ ki ). By induction, Jts K ∈ T def τ holds. Furthermore, for every s ∈ St,
JtKs = Jlet x1 ⇐ t1 w11 · · · w1k1 in · · · let xn ⇐ tn wn1 · · · wnkn in val(w)Ks
     = (Jlet x2 ⇐ t2 w21 · · · w2k2 in · · · let xn ⇐ tn wn1 · · · wnkn in val(w)K[x1 := Jts1 K])s′
     = Jts Ks′ ,
where s′ = π2 (Jt1 w11 · · · w1k1 Ks). Since Jts K ∈ T def τ for every s ∈ St, π1 (JtKs) =
π1 (Jts Ks′ ) ∈ def τ , hence JtK ∈ T def τ .
– Continuation (T def τ = R^(R^def τ) ): we say that an element c ∈ JTτ K = R^(R^Jτ K) is in T def τ if and only if for every pair of continuations k1 , k2 ∈ R^Jτ K ,
k1 |def τ = k2 |def τ =⇒ c(k1 ) = c(k2 ).
If n = 0, JtK = λ k.k(JwK) ∈ T def τ . When n > 0, according to the definition of
the continuation monad: JtK = λ k · Jt1 w11 · · · wnkn K(k ′ ), where
k ′ = λ a·(Jlet x2 ⇐ t2 w21 · · · w2k2 in · · · let xn ⇐ tn wn1 · · · wnkn in val(w)K[x1 := a])k.
For every continuations k1 , k2 ∈ RJτ K such that k1 |def τ = k2 |def τ let
ki′ = λ a·(Jlet x2 ⇐ t2 w21 · · · w2k2 in · · · let xn ⇐ tn wn1 · · · wnkn in val(w)K[x1 := a])ki ,
i = 1, 2. Because Jt1 w11 · · · w1k1 K ∈ T (Pτ1 ∩def τ1 ), if we can prove k1′ |Pτ1 ∩def τ1 =
k2′ |Pτ1 ∩def τ1 , which implies JtK(k1 ) = JtK(k2 ), we can conclude JtK ∈ T def τ . For
every a ∈ Pτ1 ∩def τ1 , let Jta1 K = a where ta1 is a closed term. Define another closed
term ta :
ta ≡ let x2 ⇐ ta2 wa21 · · · wa2k2 in · · · let xn ⇐ tan wan1 · · · wankn in val(w[ta1 /x1 ]),
where tai (2 ≤ i ≤ n) is either ta1 or ti , and waij ≡ wij [ta1 /x1 ] (2 ≤ i ≤ n, 1 ≤ j ≤ ki ). By induction, Jta K ∈ T def τ , so k1′ (a) = Jta Kk1 = Jta Kk2 = k2′ (a), i.e.,
k1′ |Pτ1 ∩def τ1 = k2′ |Pτ1 ∩def τ1 .
– Non-determinism (T def τ = Pfin (def τ )): when n = 0, JtK = {JwK} ∈ T def τ .
When n > 0, for every a ∈ Jt1 w11 · · · w1k1 K, assume that Jta1 K = a where ta1 is a
closed term of type τ1 (assuming that t1 w11 · · · w1k1 is of type Tτ1 ). According to
the definition of P, P(Jta1 K) holds. Let ta be another closed term:
ta ≡ let x2 ⇐ ta2 wa21 · · · wa2k2 in · · · let xn ⇐ tan wan1 · · · wankn in val(w[ta1 /x1 ]),
where tai (2 ≤ i ≤ n) is either ta1 or ti , and waij ≡ wij [ta1 /x1 ] (2 ≤ i ≤ n, 1 ≤ j ≤ ki ). By induction, Jta K ∈ T def τ holds. Furthermore,
JtK = Jlet x1 ⇐ t1 w11 · · · w1k1 in · · · let xn ⇐ tn wn1 · · · wnkn in val(w)K
    = ⋃_{a ∈ Jt1 w11 ···w1k1 K} Jlet x2 ⇐ t2 w21 · · · w2k2 in · · · let xn ⇐ tn wn1 · · · wnkn in val(w)K[x1 := a]
    = ⋃_{a ∈ Jt1 w11 ···w1k1 K} Jta K.
Because Jta K ∈ T def τ holds for every a ∈ Jt1 w11 · · · w1k1 K, JtK ∈ T def τ .
⊓
⊔
From the above lemma, we conclude immediately that for every closed βc-normal-η-long computation term t in λPESCN Comp with logical constants, JtK ∈ T def τ .

Proposition 3. def Tτ ⊆ T def τ holds in the set-theoretical model of λPESCN Comp with logical constants.
Proof. It follows from Lemma 2 by considering the βc-normal-η-long terms that define
elements in JTτ K since λComp is strongly normalizing.
⊓
⊔
4.2 Completeness of logical relations in λPESC Comp for first-order types

We prove (8) in this section for the partial computation monad, the exception monad, the state monad and the continuation monad. We write λPESC Comp for λComp where the monad is restricted to one of these four monads.
Proofs depend typically on the particular semantics of every form of computation,
but a common technique is used frequently: given two definable but non-related elements of JTτ K, one can find a context to distinguish the programs (of type Tτ ) that
define the two given elements, and such a context is usually built based on another
context that can distinguish programs of type τ .
Lemma 3. Let (Rτ )τ type be a logical relation in λPESC Comp with only logical constants. Then ∼τ ⊆ Rτ =⇒ ∼Tτ ⊆ RTτ holds for every type τ .

Proof. Take two arbitrary elements c1 , c2 ∈ JTτ K such that (c1 , c2 ) ∉ RTτ ; we prove that c1 ≁Tτ c2 for every monad in λPESC Comp :
– Partial computation: the fact (c1 , c2 ) 6∈ RTτ amounts to the following two cases:
• c1 , c2 ∈ Jτ K but (c1 , c2 ) 6∈ Rτ , then c1 6∼τ c2 . If one of these two values is
not definable at type τ , by Proposition 3, it is not definable at type Tτ either.
If both values are definable at type τ but they are not contextually equivalent,
then there is a context x : τ ⊢ C : To such that JCK[x := c1 ] 6= JCK[x := c2 ].
Thus, the context y : Tτ ⊢ let x ⇐ y in C : To can distinguish c1 and c2 (as
two values of type Tτ ).
• c1 ∈ Jτ K and c2 = ⊥ (or symmetrically, c1 = ⊥ and c2 ∈ Jτ K), then the
context let x ⇐ y in val(true) can be used to distinguish them.
c1 6∼Tτ c2 in both cases.
– Exception: the fact (c1 , c2 ) 6∈ RTτ amounts to three cases:
• c1 , c2 ∈ Jτ K but (c1 , c2 ) 6∈ Rτ , then c1 6∼τ c2 . Suppose both values are definable at type τ , otherwise by Proposition 3, they must not be definable at type
Tτ . Similar as in the case of partial computation we can build a context that
distinguishes c1 and c2 as values of type Tτ , from the context that distinguishes
c1 and c2 as values of type τ .
• c1 ∈ Jτ K, c2 ∈ E. Consider the following context:
y : Tτ ⊢ let x ⇐ y in val(true) : Tbool.
When y is substituted by c1 and c2 , the context evaluates to different values,
namely, a boolean and an exception.
• c1 , c2 ∈ E but c1 6= c2 . Try the same context as in the second case, which will
evaluate to two different exceptions that can be distinguished.
c1 6∼Tτ c2 in all the three cases.
– State transformer: because (c1 , c2 ) 6∈ RTτ , there exists some s0 ∈ St such that
• either (π1 (c1 s0 ), π1 (c2 s0 )) 6∈ Rτ . Then by induction π1 (c1 s0 ) 6∼τ π1 (c2 s0 ).
If π1 (ci s0 ) (i = 1, 2) is not definable, then by Proposition 3, ci is not definable
either. If both π1 (c1 s0 ) and π1 (c2 s0 ) are definable, but π1 (c1 s0 ) 6≈τ π1 (c2 s0 ),
then there is a context x : τ ⊢ C : To such that JCK[x := π1 (c1 s0 )] 6= JCK[x :=
π1 (c2 s0 )], i.e., for some state s′0 ∈ St ,
JCK[x := π1 (c1 s0 )](s′0 ) ≠ JCK[x := π1 (c2 s0 )](s′0 ).
Now we can use the following context:
y : Tτ ⊢ let x ⇐ y in let z ⇐ updates′0 in C : T o,
Let fi = Jlet x ⇐ y in let z ⇐ updates′0 in CK[y := ci ]; then for every s ∈ St,
fi (s) = Jlet z ⇐ updates′0 in CK[x := π1 (ci s)](π2 (ci s)) = JCK[x := π1 (ci s)](s′0 ), (i = 1, 2).
f1 ≠ f2 , because when applied to the state s0 , they will return two different
pairs, so the above context can distinguish the two values c1 and c2 ;
• or π2 (c1 s0 ) 6= π2 (c2 s0 ). we use the context
y : Tτ ⊢ let x ⇐ y in val(true) : Tbool,
then Jlet x ⇐ y in val(true)K[y := ci ] = λs.(true, π2 (ci s)) (i = 1, 2).
These two functions are not equal since they return different results when applied to the state s0 .
In both cases, c1 6∼Tτ c2 .
– Continuation: first say that two continuations k1 , k2 ∈ RJτ K are R-related, if and
only if for every a1 , a2 ∈ Jτ K, a1 Rτ a2 =⇒ k1 (a1 ) = k2 (a2 ). The fact (c1 , c2 ) 6∈
RTτ means that there are two R-related continuations k1 , k2 such that c1 (k1 ) 6=
c2 (k2 ). Because ∼τ ⊆ Rτ , for every definable value a ∈ def τ , clearly,
a ∼τ a =⇒ a Rτ a =⇒ k1 (a) = k2 (a),
so k1 and k2 coincide over def τ . Suppose that both c1 and c2 are definable, then
by Proposition 3, c1 (k1 ) = c1 (k2 ) and c2 (k1 ) = c2 (k2 ), hence c1 (k1 ) 6= c2 (k1 ).
Consider the context
y : Tτ ⊢ let x ⇐ y in callk1τ (x) : Tbool.
For every k ∈ RJboolK ,
Jlet x ⇐ y in callk1τ (x)K[y := ci ](k) = ci (λ a · (Jcallk1τ (x)K[x := a])k) = ci (λ a · k1 (a)) = ci (k1 ), (i = 1, 2).
Since c1 (k1 ) ≠ c2 (k1 ), this context distinguishes the two computations, hence
c1 6∼Tτ c2 .
⊓
⊔
Theorem 2. In λPESC Comp , if all constants are logical and in particular, if the following constants are present
– updates for the state transformer monad;
– callkτ for the continuation monad,
then logical relations are complete up to first-order types, in the strong sense that there
exists an observational logical relation (Rτ )τ type such that for any closed terms t1 , t2
of any type τ 1 up to first order, if t1 ≈τ 1 t2 , then Jt1 K Rτ 1 Jt2 K.
Proof. Take the logical relation (Rτ )τ type induced by Rb =∼b , for any base type b.
We prove by induction on types that ∼τ ⊆ Rτ for any first-order type τ . In particular,
the induction step ∼τ ⊆ Rτ =⇒ ∼Tτ ⊆ RTτ is shown by Lemma 3.
⊓⊔
4.3 Completeness of logical relations for the non-determinism monad
The non-determinism monad is an interesting case: the completeness of logical relations
for this monad does not hold for all first-order types! To state it, consider the following
two programs of a first-order type that break the completeness of logical relations:
⊢ val(λx.(true +bool false)) : T(bool → Tbool),
⊢ λx.val(true) +bool→Tbool λx.(true +bool false) : T(bool → Tbool).
Recall the logical constant +τ of type τ → τ → Tτ : J+τ K(a1 , a2 ) = {a1 , a2 } for
every a1 , a2 ∈ Jτ K. The two programs are contextually equivalent: what contexts can
do is to apply the functions to some arguments and observe the results. But no matter
how many times we apply these two functions, we always get the same set of possible
values ({true, false}), so there is no way to distinguish them with a context. Recall
the logical relation for non-determinism monad in Figure 2:
c1 RTτ c2 ⇔ (∀a1 ∈ c1 . ∃a2 ∈ c2 . a1 Rτ a2 ) & (∀a2 ∈ c2 . ∃a1 ∈ c1 . a1 Rτ a2 ).
Clearly the denotations of the above two programs are not related by that relation because the function Jλx.val(true)K from the second program is not related to the function in the first.
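To make the counterexample concrete, here is a minimal Python sketch (not part of the original development); it models the non-determinism monad by finite sets, and functions of type bool → Tbool by Python callables returning sets. The names prog1, prog2, related_T are illustrative choices. It checks that the two programs yield the same observable result sets when applied to boolean arguments, while the set-wise logical relation fails because the constant-true function in the second program has no match in the first.

# A small model of the non-determinism monad: a computation of type T(tau)
# is a finite set of values of type tau.

f_both = lambda x: frozenset({True, False})   # lambda x. true +bool false
f_true = lambda x: frozenset({True})          # lambda x. val(true)

# First program:  val(lambda x. true + false)                     ~ {f_both}
# Second program: (lambda x. val true) + (lambda x. true + false) ~ {f_true, f_both}
prog1 = [f_both]
prog2 = [f_true, f_both]

# Observational test: a context can only pick a function from the set,
# apply it to arguments, and observe the resulting set of booleans.
def observe(prog, args):
    return frozenset().union(*(f(a) for f in prog for a in args))

assert observe(prog1, [True, False]) == observe(prog2, [True, False]) == {True, False}

# Logical relation at type T(bool -> Tbool): every element of one set must be
# related to some element of the other; functions are related iff they send
# equal arguments to equal result sets.
def related_fun(f, g):
    return all(f(a) == g(a) for a in (True, False))

def related_T(c1, c2):
    return (all(any(related_fun(a1, a2) for a2 in c2) for a1 in c1) and
            all(any(related_fun(a1, a2) for a1 in c1) for a2 in c2))

print(related_T(prog1, prog2))  # False: f_true is not matched by anything in prog1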
However, if we assume that for every non-observable base type b, there is an equality
test constant testb : b → b → bool (clearly, P(testb ) holds), logical relations for the
non-determinism monad are then complete for a set of weak first-order types:
τw1 ::= b | Tb | b → τw1 .
Compared to all types up to first order, weak first-order types do not contain monadic
types of functions, so it immediately excludes the two programs in the above counterexample.
Theorem 3. Logical relations for the non-determinism monad are complete up to weak
first-order types, in the strong sense that there exists an observational logical relation
(Rτ )τ type such that for any closed terms t1 , t2 of a weak first-order type τw1 , if t1 ≈τw1
t2 , then Jt1 K Rτw1 Jt2 K.
Proof. Take the logical relation R induced by Rb =∼b , for any base type b. We prove
by induction on types that ∼_{τw1} ⊆ R_{τw1} for any weak first-order type τw1 .
Cases b and b → τw1 go identically as in standard typed lambda-calculi. For monadic
types Tb, suppose that (c1 , c2 ) 6∈ RTb , which means either there is a value in c1 such
that no value of c2 is related to it, or there is such a value in c2 . We assume that every
value in c1 and c2 is definable (otherwise it is obvious that c1 6∼Tb c2 because at least
one of them is not definable, according to Proposition 3). Suppose there is a value a ∈ c1
such that no value in c2 is related to it, and a can be defined by a closed term t of type
b. Then the following context can distinguish c1 and c2 :
x : Tb ⊢ let y ⇐ x in testb (y, t) : Tbool
since every value in c2 is not contextually equivalent to a, hence not equal to a.
Now let state and label be base types such that label is an observation type,
whereas state is not. Using non-determinism monad, we can define labeled transition
systems as elements of Jstate → label → TstateK, with states in JstateK and labels
in JlabelK, as functions mapping states a and labels l to the set of states b such that a →^l b. The logical relation at type state → label → Tstate is given by [2]:
(f1 , f2 ) ∈ Rstate→label→Tstate ⇐⇒
∀a1 , a2 , l1 , l2 · (a1 , a2 ) ∈ Rstate & (l1 , l2 ) ∈ Rlabel =⇒
(∀b1 ∈ f1 (a1 , l1 ) · ∃b2 ∈ f2 (a2 , l2 ) · (b1 , b2 ) ∈ Rstate )
& (∀b2 ∈ f2 (a2 , l2 ) · ∃b1 ∈ f1 (a1 , l1 ) · (b1 , b2 ) ∈ Rstate )
In case Rlabel is equality, f1 and f2 are logically related if and only if Rstate is a strong
bisimulation between the labeled transition systems f1 and f2 .
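As an added illustration (not from the paper), the following Python sketch encodes two small labeled transition systems as functions from (state, label) to sets of states and checks the displayed relatedness condition for a candidate relation R_state; when R_label is equality, this check is exactly the strong-bisimulation condition. The concrete transition systems and the names f1, f2, R_state are arbitrary choices.

# Two labeled transition systems over states {0, 1, 2} and {'a', 'b'}, labels {'l'}.
# f1: 0 -l-> 1, 0 -l-> 2;  f2: 'a' -l-> 'b'.
f1 = lambda s, l: {0: {1, 2}}.get(s, set()) if l == 'l' else set()
f2 = lambda s, l: {'a': {'b'}}.get(s, set()) if l == 'l' else set()

labels = {'l'}
R_state = {(0, 'a'), (1, 'b'), (2, 'b')}   # candidate relation

def logically_related(g1, g2, R):
    """Check the displayed condition, with R_label taken to be equality."""
    for (a1, a2) in R:
        for l in labels:
            succ1, succ2 = g1(a1, l), g2(a2, l)
            if not all(any((b1, b2) in R for b2 in succ2) for b1 in succ1):
                return False
            if not all(any((b1, b2) in R for b1 in succ1) for b2 in succ2):
                return False
    return True

print(logically_related(f1, f2, R_state))  # True: R_state is a strong bisimulation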
Sometimes we explicitly specify an initial state for certain labeled transition system.
In this case, the encoding of the labeled transition system in the nondeterminism monad
is a pair (q, f ) of Jstate × (state → label → Tstate)K, where q is the initial state and
f is the transition relation as defined above. Then (q1 , f1 ) and (q2 , f2 ) are logically
related if and only if they are strongly bisimilar, i.e., Rstate is a strong bisimulation
between the two labeled transition systems and q1 Rstate q2 .
Corollary 1 (Soundness of strong bisimulation). Let f1 and f2 be transition systems.
If there exists a strong bisimulation between f1 and f2 , then f1 and f2 are contextually
equivalent.
Proof. There exists a strong bisimulation between f1 and f2 , therefore f1 and f2 are
logically related. By Theorem 1, f1 and f2 are thus contextually equivalent.
In order to prove completeness, we need to assume that label has no junk, in the
sense that every value of JlabelK is definable.
Corollary 2 (Completeness of strong bisimulation). Let f1 and f2 be transition systems which are definable. If f1 and f2 are contextually equivalent and label has no
junk, then there exists a strong bisimulation between f1 and f2 .
Proof. Let R be the logical relation given by Theorem 3. f1 and f2 are definable and
contextually equivalent, so f1 Rstate→label→Tstate f2 . Moreover, because label has no
junk, Rlabel is equality. Rstate is thus a strong bisimulation between f1 and f2 .
⊓⊔
5 Conclusion
The work presented in this paper is a natural continuation of the authors’ previous
work [2,3]. In [2], we extend [9] and derive logical relations for monadic types which
are sound in the sense that the Basic Lemma still holds. In [3], we study contextual
equivalence in a specific version of the computational λ-calculus with cryptographic
primitives and we show that lax logical relations (the categorical generalization of logical relations [14]) derived using the same construction is complete. Then in this paper,
we explore the completeness of logical relations for the computational λ-calculus and
we show that they are complete at first-order types for a list of common monads: partial computations, exceptions, state transformers and continuations. In the case of continuations, completeness depends on a natural constant call, for which we
cannot show soundness.
Pitts and Stark have defined operationally based logical relations to characterize the
contextual equivalence in a language with local store [13]. This work can be traced back
to their early work on the nu-calculus [12] which can be translated in a special version of
the computational λ-calculus and be modeled using the dynamic name creation monad
[17]. Logical relations for this monad are derived in [19] using the construction from
[2]. It is also shown in [19] that such derived logical relations are equivalent to Pitts and
Stark’s operational logical relations up to second-order types.
An exceptional case of our completeness result is the non-determinism monad,
where logical relations are not complete for all first-order types, but a subset of them.
We effectively show this by providing a counter-example that breaks the completeness
at first-order types. This is indeed an interesting case. A more comprehensive study on
this monad can be found in [4], where Jeffrey defines a denotational model for the computational λ-calculus specialized in non-determinism and proves that this model is fully
abstract for may-testing. The relation between our notion of contextual equivalence and
the may-testing equivalence remains to be clarified.
Recently, Lindley and Stark introduce the syntactic ⊤⊤-lifting for the computational λ-calculus and prove the strong normalization [7]. Katsumata then instantiates
their liftings in Set [5]. The ⊤⊤-lifting of strong monads is an essentially different
approach from that in [2]. It would be interesting to establish a formal relationship between these two approaches, and to look for a general proof of completeness using the
⊤⊤-lifting.
References
1. P. N. Benton, G. M. Bierman, and V. C. V. de Paiva. Computational types from a logical
perspective. J. Functional Programming, 8(2):177–193, 1998.
2. J. Goubault-Larrecq, S. Lasota, and D. Nowak. Logical relations for monadic types. In
Proceedings of CSL’2002, volume 2471 of LNCS, pages 553–568. Springer, 2002.
3. J. Goubault-Larrecq, S. Lasota, D. Nowak, and Y. Zhang. Complete lax logical relations for
cryptographic lambda-calculi. In Proceedings of CSL’2004, volume 3210 of LNCS, pages
400–414. Springer, 2004.
4. A. Jeffrey. A fully abstract semantics for a higher-order functional language with nondeterministic computation. Theoretical Computer Science, 228(1-2):105–150, 1999.
5. S. Katsumata. A semantic formulation of ⊤⊤-lifting and logical predicates for computational metalanguage. In Proceedings of CSL’2005, volume 3634 of LNCS, pages 87–102.
Springer, 2005.
6. R. Lazić and D. Nowak. A unifying approach to data-independence. In Proceedings of
CONCUR’2000, volume 1877 of LNCS, pages 581–595. Springer, 2000.
7. S. Lindley and I. Stark. Reducibility and ⊤⊤-lifting for computation types. In Proceedings
of TLCA’2005, number 3461 in LNCS, pages 262–277. Springer, 2005.
8. J. C. Mitchell. Foundations of Programming Languages. MIT Press, 1996.
9. J. C. Mitchell and A. Scedrov. Notes on sconing and relators. In Proceedings of CSL’1992,
volume 702 of LNCS, pages 352–378. Springer, 1993.
10. E. Moggi. Notions of computation and monads. Information and Computation, 93(1):55–92,
1991.
11. P. W. O’Hearn and R. D. Tennent. Parametricity and local variables. J. ACM, 42(3):658–709,
1995.
12. A. Pitts and I. Stark. Observable properties of higher order functions that dynamically create
local names, or: What’s new? In Proceedings of MFCS’1993, number 711 in LNCS, pages
122–141. Springer, 1993.
13. A. Pitts and I. Stark. Operational reasoning for functions with local state. In Higher Order
Operational Techniques in Semantics, pages 227–273. Cambridge University Press, 1998.
14. G. Plotkin, J. Power, D. Sannella, and R. Tennent. Lax logical relations. In Proceedings of
ICALP’2000, volume 1853 of LNCS, pages 85–102. Springer, 2000.
15. G. D. Plotkin. Lambda-definability in the full type hierarchy. In To H. B. Curry: Essays
on Combinatory Logic, Lambda Calculus and Formalism, pages 363–373. Academic Press,
1980.
16. K. Sieber. Full abstraction for the second order subset of an algol-like language. Theoretical
Computer Science, 168(1):155–212, 1996.
17. I. Stark. Categorical models for local names. Lisp and Symbolic Computation, 9(1):77–107,
1996.
18. E. Sumii and B. C. Pierce. Logical relations for encryption. J. Computer Security, 11(4):521–
554, 2003.
19. Y. Zhang. Cryptographic logical relations. Ph. d. dissertation, ENS Cachan, France, 2005.
arXiv:1303.6011v1 [math.FA] 25 Mar 2013
THE INVERSE FUNCTION THEOREM AND THE RESOLUTION
OF THE JACOBIAN CONJECTURE IN FREE ANALYSIS
J. E. PASCOE
Abstract. We establish an invertibility criterion for free polynomials and free
functions evaluated on some tuples of matrices. We show that if the derivative is nonsingular on some domain closed with respect to direct sums and
similarity, the function must be invertible. Thus, as a corollary, we establish
the Jacobian conjecture in this context. Furthermore, our result holds for
commutative polynomials evaluated on tuples of commuting matrices.
1. Introduction
A free map is a function defined on some structured subset of tuples of matrices
that respects joint invariance. For the purposes of introduction, the standard example is a free polynomial being evaluated on tuples of matrices. We give a formal
definition in Section 2.
We consider the inverse function theorem for free maps.
Classically, the inverse function theorem states that given a map f , if Df (x)
is nonsingular for some x, then there is a neighborhood U of x such that f −1 is
well-defined on f (U ).
Our inverse function theorem is as follows.
Theorem 1.1 (Inverse function theorem). Let f be a free map. The following are
equivalent:
(1) Df (X) is a nonsingular map for every X.
(2) f is injective.
(3) f −1 exists and is a free map.
In the classical case, one may obtain a neighborhood of x where the derivative
is nonsingular. The geometry of free analysis is not exactly topological, due to noted
algebraic obstructions such as those observed in the work of D. S. Kaliuzhnyi-Verbovetskyi and V. Vinnikov [9], and Amitsur and Levitzki [2]. Thus, we assert
nonsingularity of the derivative on the entire domain in our theorem, and, in return,
we obtain a global result. We prove the inverse function theorem result in Section
3.
We also consider a famous conjecture of Ott-Heinrich Keller, the so-called Jacobian conjecture [10].
Question 1.2. Let P : CN → CN be a polynomial map. If the Jacobian DP (x) is
invertible for every x ∈ CN , is the map P itself invertible?
James Ax [3] and Alexander Grothendieck [5] independently showed that if a
polynomial map P : CN → CN is injective, then it must be surjective. Furthermore,
2010 Mathematics Subject Classification. Primary 46L52; Secondary 14A25, 47A56.
a proof in [12] shows that the inverse must be given by a polynomial via techniques
from Galois theory. We prove a free Ax-Grothendieck theorem as Theorem 4.4.
The following is an immediate corollary of our inverse function theorem combined
with the results in the preceding paragraphs.
Theorem 1.3 (Free Jacobian conjecture). Let M(C)N be the set of matrix N -tuples. Suppose P : M(C)N → M(C)N is a free polynomial map. The following
are equivalent:
(1) DP (X) is a nonsingular map for each X.
(2) P is injective.
(3) P is bijective.
(4) P −1 exists and P −1 |Mn (C)N agrees with a free polynomial.
We prove this result in Section 4.
We remark the last condition could be conjectured to be that P −1 is a bona fide
free polynomial. However, we caution that degree bounds in the Ax-Grothendieck
theorem are very large [4], and the theory of polynomial identity rings supplies many
low degree polynomial identities satisfied by all matrix tuples of a specific size [2].
This means that there are a plethora of maps that have polynomial formulas for
each specific size of matrix, but are not actually given by some polynomial.
Additionally, we immediately obtain a matrix version of the commutative Jacobian conjecture via another application of the inverse function theorem. In this
case, we do obtain true polynomials for the inverse map, because commutative
polynomials are determined by their values on the scalars.
Theorem 1.4 (Commuting matrix Jacobian conjecture). Let P : CN → CN . The
following are equivalent:
(1) DP (X) is a nonsingular map for each commuting matrix N -tuple X.
(2) P is injective.
(3) P is bijective.
(4) P −1 exists and P −1 is given by a polynomial.
We prove this result in Section 4.
We caution that the structure of free maps greatly simplifies their geometry. The
Jacobian conjecture in the classical context is notoriously difficult, but in the matricial context is shown here to be tractable. Indeed, free maps have generally been
observed to encode nonlocal data in many contexts which is in strong contrast to the
classical case. For another example, compare between Putinar’s Positivstellensatz
[11] to Helton’s noncommutative Positivstellensatz [6].
1.1. Some examples of domains of invertibility. We briefly give some examples of applications of our main result, the inverse function theorem for free maps.
1.1.1. Domains of invertibility for squaring. Take the function
f (X) = X 2 .
Suppose we want to find a domain where f is invertible. We obtain a derivative for
f given by the formula
Df (X)[H] = XH + HX.
Thus, by the inverse function theorem, we need the equation
XH + HX = 0
to have no nontrivial solutions for each X in our domain. This is a degenerate form
of the famous Sylvester equation, so this will be nonsingular if X has no eigenvalues
in common with −X. For a detailed account of the Sylvester equation, see Horn
and Johnson [8]. Thus, if we take a subset H ⊂ C such that H ∩ −H = ∅, and lift
to the set of matrices with spectrum in H, then f will be invertible there. In fact,
these are all possible maximal domains for such an inverse.
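The following numpy sketch is an added illustration, not part of the paper: it checks the criterion above numerically. The map H ↦ XH + HX corresponds to the Kronecker sum I ⊗ X + Xᵀ ⊗ I acting on vec(H), so it is nonsingular exactly when no two eigenvalues of X sum to zero, i.e. when the spectrum of X is disjoint from that of −X. The matrices and tolerances below are arbitrary choices.

import numpy as np

def sylvester_operator_singular(X, tol=1e-9):
    """Return True iff H -> XH + HX has a nontrivial kernel."""
    n = X.shape[0]
    L = np.kron(np.eye(n), X) + np.kron(X.T, np.eye(n))  # acts on vec(H)
    return np.min(np.abs(np.linalg.eigvals(L))) < tol

rng = np.random.default_rng(0)

# Spectrum in the right half plane: spec(X) and spec(-X) are disjoint.
X_good = np.diag([1.0, 2.0, 3.0]) + 0.1 * rng.standard_normal((3, 3))
print(sylvester_operator_singular(X_good))   # False: derivative of X^2 is nonsingular there

# Eigenvalues 1 and -1: spec(X) meets spec(-X), so XH + HX = 0 has solutions.
X_bad = np.diag([1.0, -1.0])
print(sylvester_operator_singular(X_bad))    # True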
1.1.2. The quadratic symmetrization map. Consider,
f (X, Y ) = (X + Y, X 2 + Y 2 ).
Taking the derivative,
Df (X, Y )[H, K] = (H + K, HX + XH + KY + Y K).
So we need to check the second coordinate of the derivative is nonzero when H =
−K. So, we want
H(X − Y ) + (X − Y )H = 0
to have no nontrivial solutions. By the same use of Sylvester's equation as in the
first example, this exactly says X − Y needs to have spectrum disjoint from Y − X.
1.1.3. A more exotic quadratic map. Now consider the function
f (X, Y ) = (X + X 2 + [X, Y ], Y + [X, Y ]).
Taking the derivative,
Df (X, Y )[H, K] = (H + HX + XH + [H, Y ] + [X, K], K + [H, Y ] + [X, K]).
Suppose this had a nontrivial solution at some (X, Y ) for (H, K). Either kHk ≥ kKk
or kKk ≥ kHk. In the case where kHk ≥ kKk, H 6= 0, and
kH + HX + XH + [H, Y ] + [X, K]k ≥ kHk(1 − 4kXk − 2kY k).
So, it must be that 1 − 4kXk − 2kY k ≤ 0. In the case where kKk ≥ kHk, K 6= 0,
and
kK + [H, Y ] + [X, K]k ≥ kKk(1 − 2kXk − 2kY k)
So, it must be that 1 − 2kXk − 2kY k ≤ 0.
Restricting f to the set of (X, Y ) such that 4kXk + 2kY k < 1 precludes Df
from being singular. However, this fails to be a free domain since it is not closed
with respect to direct sums. (See Section 2 for a formal definition of a free domain.)
However, if we restrict f to the set of (X, Y ) such that kXk < 1/8 and kY k < 1/4, we
do indeed obtain a free map, and thus by the inverse function theorem, the function
f will be invertible there.
2. Free analysis
Let Mn be the n × n matrices. We denote M^N = ∪_n M_n^N .
A free set D ⊂ MN is closed under direct sums and joint similarity. That is,
(1) A, B ∈ D ⇒ A ⊕ B ∈ D,
(2) A ∈ D ∩ M_n^N , S ∈ GLn ⇒ S −1 AS ∈ D.
Where
(A1 , A2 , . . . , AN ) ⊕ (B1 , B2 , . . . , BN ) = (A1 ⊕ B1 , A2 ⊕ B2 , . . . , AN ⊕ BN ),
and
S −1 (A1 , A2 , . . . , AN )S = (S −1 A1 S, S −1 A2 S, . . . , S −1 AN S).
A prototypical example of such a set is the zero set of some free polynomial map.
For example, the commuting tuples of matrices form a free set.
We define a free domain D ⊂ MN to be either a free set or, if working over a
local field, a set that is relatively open in its orbit under conjugation by invertible
matrices and is closed under direct sums. Any function on a free domain extends
to a function on a free set as in the envelope method described in [9].
A free map f : D → MN̂ obeys the following
(1) f (A ⊕ B) = f (A) ⊕ f (B),
(2) f (S −1 AS) = S −1 f (A)S,
(3) D is a free domain.
This definition of free sets and free maps is a direct generalization of the definition
given in Helton-Klep-McCullough [7]; in the language of Agler-McCarthy [1] this
generalizes functions on basic open sets in their free topology.
Essentially, these maps are an emulation of the classical functional calculus for
non-commuting tuples of operators.
We note that we do not specify a ground ring for the matrices in the general
inverse function theorem; it merely needs to have a multiplicative identity.
2.1. Derivatives in free analysis. Helton, Klep and McCullough differentiated
free maps in [7]. They obtained the formula
f([X, H; 0, X]) = [f(X), Df(X)[H]; 0, f(X)].    (2.1)
These types of formulas are pervasive throughout the free analysis literature. For
other references see Voiculescu [14] and the recently completed tome by D. S.
Kaliuzhnyi-Verbovetskyi and V. Vinnikov [9].
Formula 2.1 can be seen to be true for formal differentiation satisfying Leibniz
rule. Thus, we eschew any analytic means for obtaining the derivative, and instead
use the above as our definition. That is, our results formally hold over any field,
or indeed unital ring, and for sets that may have cusps or other exotic geometric
features. We formalize the above in the following proposition.
Proposition 2.2. Define
D(Xi )[Hj ] = { Hi , if i = j; 0, if i 6= j },
and require
D(f + g)[H] = D(f )[H] + D(g)[H],
and
D(f g)[H] = D(f )[H]g + f D(g)[H].
Equation 2.1 is satisfied.
Proof. We only need to prove this fact on monomials. This is obtained inductively
via the following algebra:
[Xi, Hi; 0, Xi] · m([X, H; 0, X]) = [Xi, Hi; 0, Xi] · [m(X), Dm(X)[H]; 0, m(X)]
= [Xi m(X), Xi Dm(X)[H] + Hi m(X); 0, Xi m(X)]
= [Xi m(X), D(Xi m(X))[H]; 0, Xi m(X)].
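To see formula (2.1) in action, here is a small numerical sketch added for illustration; the free polynomial f, the sizes and the random seed are arbitrary choices, not from the paper. It evaluates f on block upper-triangular matrices and reads off the directional derivative from the upper-right block, comparing it with a finite-difference approximation.

import numpy as np

def f(X1, X2):
    # An arbitrary free (noncommutative) polynomial in two matrix variables.
    return X1 @ X2 + X2 @ X1 @ X1

def block(X, H):
    # The 2x2 block matrix [X H; 0 X] from formula (2.1).
    n = X.shape[0]
    return np.block([[X, H], [np.zeros((n, n)), X]])

rng = np.random.default_rng(1)
n = 3
X1, X2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
H1, H2 = rng.standard_normal((n, n)), rng.standard_normal((n, n))

# Evaluate f on the tuple of block matrices and extract the upper-right block.
F = f(block(X1, H1), block(X2, H2))
Df_block = F[:n, n:]

# Finite-difference directional derivative for comparison.
eps = 1e-6
Df_fd = (f(X1 + eps * H1, X2 + eps * H2) - f(X1, X2)) / eps

print(np.allclose(F[:n, :n], f(X1, X2)))        # diagonal blocks equal f(X)
print(np.max(np.abs(Df_block - Df_fd)) < 1e-4)  # upper-right block is Df(X)[H]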
3. The inverse function theorem
We now prove the inverse function theorem.
Proof. (¬1 ⇒ ¬2) Suppose Df is singular at some X and some direction H 6= 0.
That is, Df (X)[H] = 0. So, applying 2.1 and the direct sum formula,
f([X, H; 0, X]) = [f(X), Df(X)[H]; 0, f(X)] = [f(X), 0; 0, f(X)] = f([X, 0; 0, X]).
This equality witnesses the noninjectivity of f.
(1 ⇒ 2) Suppose Df (X) is not singular at any X. Let X1 , X2 be two matrices
of the same size such that f (X1 ) = f (X2 ). Let
S = [1, 0, 0, 1; 0, 1, 0, 0; 0, 0, 1, 0; 0, 0, 0, 1].
Let
X = [X1, 0, 0, 0; 0, X2, 0, 0; 0, 0, X1, 0; 0, 0, 0, X2].
Note,
S −1 XS = [X1, 0, 0, X1 − X2; 0, X2, 0, 0; 0, 0, X1, 0; 0, 0, 0, X2],
and, since
f(X) = [f(X1), 0, 0, 0; 0, f(X2), 0, 0; 0, 0, f(X1), 0; 0, 0, 0, f(X2)]
because f preserves direct sums, we obtain the formula
S −1 f(X)S = [f(X1), 0, 0, f(X1) − f(X2); 0, f(X2), 0, 0; 0, 0, f(X1), 0; 0, 0, 0, f(X2)].
So, via the similarity relation for free maps,
f([X1, 0, 0, X1 − X2; 0, X2, 0, 0; 0, 0, X1, 0; 0, 0, 0, X2]) = [f(X1), 0, 0, f(X1) − f(X2); 0, f(X2), 0, 0; 0, 0, f(X1), 0; 0, 0, 0, f(X2)].
Thus, since we assumed f(X1) = f(X2),
f([X1, 0, 0, X1 − X2; 0, X2, 0, 0; 0, 0, X1, 0; 0, 0, 0, X2]) = [f(X1), 0, 0, 0; 0, f(X2), 0, 0; 0, 0, f(X1), 0; 0, 0, 0, f(X2)].
On the other hand, by 2.1, viewing the same 4 × 4 matrix as the 2 × 2 block matrix with diagonal blocks [X1, 0; 0, X2] and upper-right block [X1 − X2, 0; 0, 0],
f([[X1, 0; 0, X2], [X1 − X2, 0; 0, 0]; 0, [X1, 0; 0, X2]]) = [f([X1, 0; 0, X2]), Df([X1, 0; 0, X2])[[X1 − X2, 0; 0, 0]]; 0, f([X1, 0; 0, X2])].
So,
Df([X1, 0; 0, X2])[[X1 − X2, 0; 0, 0]] = [0, 0; 0, 0].
Thus, X1 − X2 = 0, since we assumed Df (X)[H] is nonsingular for all X in the
domain, or equivalently that Df (X)[H] = 0 implies H = 0. So, f is injective.
(2 ⇔ 3) We leave this to the reader. It is similar to a proof in Helton-Klep-McCullough [7].
4. Polynomial maps
In this section we recall some classical results from algebraic geometry which
we will use to prove the Jacobian conjecture for free polynomials and commuting
matrix polynomials, and subsequently give these proofs.
Ott-Heinrich Keller infamously suggested the following conjecture.
Question 4.1 (Jacobian conjecture). Let P : CN → CN be a polynomial map. If
DP (x) is nonsingular for every x, must P necessarily be invertible? Furthermore,
can the inverse be taken to be a polynomial?
James Ax and Alexander Grothendieck independently proved the following seemingly related result about polynomial maps.
Theorem 4.2 (Ax-Grothendieck theorem [3], [5]). Let P : CN → CN be a polynomial map. If P is injective, then P is surjective.
This reduces the Jacobian conjecture to showing the condition on the derivative implying global injectivity. Furthermore, this result has been refined to the
following.
Theorem 4.3 (Ax-Grothendieck theorem[12]). Let P : CN → CN be a polynomial
map. If P is injective, then P is surjective and P −1 is given by a polynomial.
This tacitly gives an equivalence between a polynomial map being invertible and
having a polynomial inverse.
This can be lifted to the free case.
Theorem 4.4 (Free Ax-Grothendieck theorem). Let P : M(C)N → M(C)N be a
free polynomial map. If P is injective, then P is surjective and P −1 |Mn (C)N is given
by a free polynomial.
Proof. For each size of matrix n, we view P as a tuple of dn2 commuting polynomials by replacing the free variables with indeterminate n by n matrices. Since
P |Mn (C)N is injective, P −1 |Mn (C)N is given by a tuple of dn2 commuting polynomials. Since the global P −1 is a free map by the inverse function theorem and
continuous free maps are analytic, they have a power series of free polynomials[9],
the restriction P −1 |Mn (C)N must agree with a free polynomial; the terms in the
power series for P −1 must eventually vanish on all of Mn (C)N .
We now prove the Jacobian conjecture for free polynomials, Theorem 1.3.
Proof of Theorem 1.3. (1 ⇔ 2) follows from the inverse function theorem. (2 ⇔
3 ⇔ 4) is the free Ax-Grothendieck theorem.
We now prove the Jacobian conjecture for commuting matrix polynomials,
Theorem 1.4.
Proof of Theorem 1.4. (1 ⇔ 2) follows from the inverse function theorem.
(2 ⇒ 4) The function P |M1 (C)N has a polynomial inverse P −1 by the Ax-Grothendieck theorem. Since P |M1 (C)N is equal to P as a polynomial, we indeed
obtain a global inverse. (The values on the scalars determine a commutative polynomial.)
(4 ⇒ 3 ⇒ 2) is trivial.
References
[1] J. Agler and J.E. McCarthy. On the approximation of holomorphic functions in several noncommuting variables by free polynomials. in progress.
[2] A. S. Amitsur and J. Levitzki. Minimal identities for algebras. Proc. Amer. Math. Soc.,
1:449–463, 1950.
[3] James Ax. The elementary theory of finite fields. Ann. of Math., 88(2):239–271, 1968.
[4] Hyman Bass, Edwin H. Connell, and David Wright. The jacobian conjecture: reduction of
degree and formal expansion of the inverse. Bull. Amer. Math. Soc., 7:287–330, 1982.
[5] Alexander Grothendieck. Eléments de géométrie algébrique IV. Publications Mathematiques
de l’IHES, 1966.
[6] Bill Helton. Positive noncommutative polynomials are sums of squares. Ann. of Math.,
159:675–694, 2002.
[7] J. William Helton, Igor Klep, and Scott McCullough. Proper analytic free maps. Journal of
Functional Analysis, 260(5):1476 – 1490, 2011.
[8] R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, Cambridge,
1985.
[9] D. S. Kaliuzhnyi-Verbovetskyi and V. Vinnikov. Foundations of Noncommutative Function
Theory. ArXiv e-prints, 2012.
[10] Ott-Heinrich Keller. Ganze cremona-transformationen. Monatshefte für Mathematik und
Physik, 47:299–306, 1939.
[11] M. Putinar. Positive polynomials on compact sets. Indiana Math. J., 42:969–984, 1993.
[12] Walter Rudin. Injective polynomial maps are automorphisms. American Mathematical
Monthly, 102(6):540–543, 1995.
[13] Dan-Virgil Voiculescu. Free analysis questions. I: Duality transform for the coalgebra of ∂X:B .
International Math. Res. Notices, 16:793–822, 2004.
[14] Dan-Virgil Voiculescu. Free analysis questions. II: The Grassmannian completion and the
series expansions at the origin. J. Reine Angew. Math., 645:155–236, 2010.
Hypercube LSH for approximate near neighbors
Thijs Laarhoven
arXiv:1702.05760v1 [] 19 Feb 2017
IBM Research
Rüschlikon, Switzerland
mail@thijs.com
Abstract. A celebrated technique for finding near neighbors for the angular distance involves using a set
of random hyperplanes to partition the space into hash regions [Charikar, STOC 2002]. Experiments later
showed that using a set of orthogonal hyperplanes, thereby partitioning the space into the Voronoi regions
induced by a hypercube, leads to even better results [Terasawa and Tanaka, WADS 2007]. However, no
theoretical explanation for this improvement was ever given, and it remained unclear how the resulting
hypercube hash method scales in high dimensions.
In this work, we provide explicit asymptotics for the collision probabilities when using hypercubes to
partition the space. For instance, two near-orthogonal vectors are expected to collide with probability
(1/π)^{d+o(d)} in dimension d, compared to (1/2)^d when using random hyperplanes. Vectors at angle π/3 collide
with probability (√3/π)^{d+o(d)} , compared to (2/3)^d for random hyperplanes, and near-parallel vectors collide
with similar asymptotic probabilities in both cases.
For c-approximate nearest neighbor searching, this translates to a decrease in the exponent ρ of localitysensitive hashing (LSH) methods of a factor up to log2 (π) ≈ 1.652 compared to hyperplane LSH. For c = 2,
we obtain ρ ≈ 0.302 + o(1) for hypercube LSH, improving upon the ρ ≈ 0.377 for hyperplane LSH. We
further describe how to use hypercube LSH in practice, and we consider an example application in the
area of lattice algorithms.
Keywords: (approximate) near neighbors, locality-sensitive hashing, large deviations, dimensionality reduction, lattice algorithms
1 Introduction
Finding (approximate) near neighbors. A key computational problem in various research areas, including machine learning, pattern recognition, data compression, coding theory, and cryptanalysis [SDI05, Bis06, Dub10, DHS00, MO15, Laa15], is finding near neighbors: given a data set D ⊂ Rd
of cardinality n, design a data structure and preprocess D in a way that, when given a query vector
q ∈ Rd , one can efficiently find a near point to q in D. Due to the “curse of dimensionality” [IM98] this
problem is known to be hard to solve exactly (in the worst case) in high dimensions d, so a common
relaxation of this problem is the (c, r)-approximate near neighbor problem ((c, r)-ANN): given that the
nearest neighbor lies at distance at most r from q, design an algorithm that finds an element p ∈ D
at distance at most c · r from q.
Locality-sensitive hashing (LSH) and filtering (LSF). A prominent class of algorithms for finding
near neighbors in high dimensions is formed by locality-sensitive hashing (LSH) [IM98] and localitysensitive filtering (LSF) [BDGL16]. These solutions are based on partitioning the space into regions,
in a way that nearby vectors have a higher probability of ending up in the same hash region than
distant vectors. By carefully tuning (i) the number of hash regions per hash table, and (ii) the number
of randomized hash tables, one can then guarantee that with high probability (a) nearby vectors will
collide in at least one of the hash tables, and (b) distant vectors will not collide in any of the hash
tables. For LSH, a simple lookup in all of q’s hash buckets then provides a fast way of finding near
neighbors to q, while for LSF the lookups are slightly more involved. For various metrics, LSH and
LSF currently provide the best performance in high dimensions [AR15, BDGL16, ALRW17, Chr17].
Near neighbors on the sphere. In this work we will focus on the near neighbor problem under the
angular distance, where two vectors x, y are considered nearby iff their common angle θ is small [Cha02,
STS+ 13, SSLM14, AIL+ 15]. This equivalently corresponds to near neighbor searching for the `2 -norm,
where the entire data set is assumed to lie on a sphere. A special
on the sphere,
√ case of (c, r)-ANN
√
often considered in the literature, is the random case r = 1c 2 and c · r = 2, in part due to a
reduction from near neighbor under the Euclidean metric for general data sets to (c, r)-ANN on the
sphere with these parameters [AR15].
2
1.1
Thijs Laarhoven
Related work
Upper bounds. Perhaps the most well-known and widely used solution for ANN for the angular distance
is Charikar’s hyperplane LSH [Cha02], where a set of random hyperplanes is used to partition the
space into regions. Due to its low computational complexity and the simple form of the collision
probabilities (with no hidden order terms in d), this method is easy to instantiate in practice and
commonly achieves the best performance out of all LSH methods when d is not too large. For large
d, both spherical cap LSH [AINR14, AR15] and cross-polytope LSH [TT07, ER08, AIL+ 15, KW17]
are known to perform better than hyperplane LSH. Experiments from [TT07, TT09] showed that
using orthogonal hyperplanes, partitioning the space into Voronoi regions induced by the vertices
of a hypercube, also leads to superior results compared to hyperplane LSH; however, no theoretical
guarantees for the resulting hypercube LSH method were given, and it remained unclear whether the
improvement persists in high dimensions.
Lower bounds. For the case of random data sets, lower bounds have also been found, matching the
performance of spherical cap and cross-polytope LSH for large c [MNP07, OWZ11, AIL+ 15]. These
lower bounds are commonly in a model where it is assumed that collision probabilities are “not too
small”, and in particular not exponentially small in d. Therefore it is not clear whether one can further
improve upon cross-polytope LSH when the number of hash regions is exponentially large, which would
for instance be the case for hypercube LSH. Together with the experimental results from [TT07,TT09],
this naturally begs the question: how efficient is hypercube LSH? Is it better than hyperplane LSH
and/or cross-polytope LSH? And how does hypercube LSH compare to other methods in practice?
1.2 Contributions
Hypercube LSH. By carefully analyzing the collision probabilities for hypercube LSH using results
from large deviations theory, we show that hypercube LSH is indeed different from, and superior to
hyperplane LSH for large d. The following main theorem states the asymptotic form of the collision
probabilities when using hypercube LSH, which are also visualized in Figure 1 in comparison with
hyperplane LSH.
Theorem 1 (Collision probabilities for hypercube LSH). Let X, Y ∼ N (0, 1)d , let θ ∈ [0, π]
denote the angle between X and Y , and let p(θ) denote the probability that X and Y are mapped to
the same hypercube hash region. For θ ∈ (0, arccos(2/π)) (respectively θ ∈ (arccos(2/π), π/3)), let β0 ∈ (1, ∞)
(resp. β1 ∈ (1, ∞)) be the unique solution to:
arccos(−1/β0 ) = (β0 − cos θ) √(β0² − 1) / (β0 (β0 cos θ − 1)),    arccos(1/β1 ) = (β1 + cos θ) √(β1² − 1) / (β1 (β1 cos θ + 1)).    (1)
Then, as d tends to infinity, p(θ) satisfies:
p(θ) =
  ((β0 − cos θ)² / (π β0 (β0 cos θ − 1) sin θ))^{d+o(d)},   if θ ∈ [0, arccos(2/π)];
  ((β1 + cos θ)² / (π β1 (β1 cos θ + 1) sin θ))^{d+o(d)},   if θ ∈ [arccos(2/π), π/3];
  ((1 + cos θ) / (π sin θ))^{d+o(d)},                        if θ ∈ [π/3, π/2);
  0,                                                         if θ ∈ [π/2, π].    (2)
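For a feel of the numbers, the following short sketch (added here, not part of the paper) evaluates the closed-form piece of Theorem 1 on [π/3, π/2), where p(θ)^{1/d} = (1 + cos θ)/(π sin θ), and compares it with the per-hyperplane collision probability 1 − θ/π of hyperplane LSH; the grid of angles is an arbitrary choice.

import numpy as np

def hypercube_rate(theta):
    # Third piece of Theorem 1, valid for theta in [pi/3, pi/2).
    return (1 + np.cos(theta)) / (np.pi * np.sin(theta))

def hyperplane_rate(theta):
    return 1 - theta / np.pi

for theta in np.linspace(np.pi / 3, np.pi / 2 - 1e-9, 5):
    print(round(theta, 3), round(hypercube_rate(theta), 3), round(hyperplane_rate(theta), 3))
# At theta = pi/3 the hypercube value is sqrt(3)/pi ~ 0.551 (vs 2/3 for hyperplanes),
# and it tends to 1/pi ~ 0.318 as theta -> pi/2 (vs 1/2 for hyperplanes).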
Denoting the query complexity of LSH methods by nρ+o(1) , the parameter ρ for hypercube LSH is
up to log2 (π) ≈ 1.65 times smaller than for hyperplane LSH. For large d, hypercube LSH is dominated
by cross-polytope LSH (unless c · r > √2), but as the convergence to the limit is rather slow, in
practice either method might be better, depending on the exact parameter setting. For the random
[Figure 1 shows p(θ)^{1/d} as a function of θ ∈ [0, π] for hyperplane LSH and hypercube LSH.]
Fig. 1. Asymptotics of collision probabilities for hypercube LSH, compared to hyperplane LSH. Here ν = π/(2√(π² − 4)),
and the dashed vertical lines correspond to boundary points of the piecewise parts of Theorem 1. The blue line indicates
hyperplane LSH with d random hyperplanes.
[Figure 2 shows the exponent ρ as a function of c for the three methods.]
Fig. 2. Asymptotics for the LSH exponent ρ when using hyperplane LSH, hypercube LSH, and cross-polytope LSH, for
(c, r)-ANN with c · r = √2. The curve for hyperplane LSH is exact for arbitrary d, while for the other two curves, order
terms vanishing as d → ∞ have been omitted.
setting, Figure 2 shows limiting values for ρ for hyperplane, hypercube and cross-polytope LSH. We
again remark that these are asymptotics for d → ∞, and may not accurately reflect the performance
of these methods for moderate d. We further briefly discuss how the hashing for hypercube LSH can
be made efficient.
Partial hypercube LSH. As the number of hash regions of a full-dimensional hypercube is often prohibitively large, we also consider partial hypercube LSH, where a d0 -dimensional hypercube is used
to partition a data set in dimension d. Building upon a result of Jiang [Jia06], we characterize when
hypercube and hyperplane LSH are asymptotically equivalent in terms of the relation between d0
and d, and we empirically illustrate the convergence towards either hyperplane or hypercube LSH for
larger d0 . An important open problem remains to identify how large the ratio d0 /d must be for the
asymptotics of partial hypercube LSH to be equivalent to those of full-dimensional hypercube LSH.
Application to lattice sieving. Finally, we consider a specific use case of different LSH methods, in
the context of lattice cryptanalysis. We show that the heuristic complexity of lattice sieving with
hypercube LSH is expected to be slightly better than when using hyperplane LSH, and we discuss
how experiments have previously indicated that in this application, hypercube LSH is superior to
other methods up to dimensions d ≈ 80.
2 Preliminaries
Notation. We denote probabilities with P(·) and expectations with E(·). Capital letters commonly
denote random variables, and boldface letters denote vectors. We informally write P(X = x) for
continuous X to denote the density of X at x. For probability distributions D, we write X ∼ D to
denote that X is distributed according to D. For sets S, with abuse of notation we further write X ∼ S
to denote X is drawn uniformly at random from S. We write N (µ, σ 2 ) for the normal distribution
with mean µ and variance σ 2 , and H(µ, σ 2 ) for the distribution of |X| when X ∼ N (µ, σ 2 ). For µ = 0
the latter corresponds to the half-normal distribution. We write X ∼ Dd to denote a d-dimensional
vector where each entry is independently distributed according to D. In what follows, kxk = √(∑i xi²)
denotes the Euclidean norm, and hx, yi = ∑i xi yi denotes the standard inner product. We denote the
angle between two vectors by φ(x, y) = arccoshx/kxk, y/kyki.
Lemma 1 (Distribution of angles between random vectors [BDGL16, Lemma 2]). Let
X, Y ∼ N (0, 1)d be two independent standard normal vectors. Then P(φ(X, Y ) = θ) = (sin θ)d+o(d) .
Locality-sensitive hashing. Locality-sensitive hash functions [IM98] are functions h mapping a ddimensional vector x to a low-dimensional sketch h(x), such that vectors which are nearby in Rd
are more likely to be mapped to the same sketch than distant vectors. For the angular distance1
φ(x, y), we quantify a set of hash functions H as follows (see [IM98]):
Definition 1. A hash family H is called (θ1 , θ2 , p1 , p2 )-sensitive if for x, y ∈ Rd we have:
– If φ(x, y) ≤ θ1 then Ph∼H (h(x) = h(y)) ≥ p1 ;
– If φ(x, y) ≥ θ2 then Ph∼H (h(x) = h(y)) ≤ p2 .
The existence of locality-sensitive hash families implies the existence of fast algorithms for (approximate) near neighbors, as the following lemma describes2 . For more details on the general principles
of LSH, we refer the reader to e.g. [IM98, And09].
Lemma 2 (Locality-sensitive hashing [IM98]). Suppose there exists a (θ1 , θ2 , p1 , p2 )-sensitive
family H. Let ρ = log(p1 )/ log(p2 ). Then w.h.p. we can either find an element p ∈ L at angle at most θ2 from
q, or conclude that no elements p ∈ L at angle at most θ1 from q exist, in time nρ+o(1) with space
and preprocessing costs n1+ρ+o(1) .
Hyperplane LSH. For the angular distance, Charikar [Cha02] introduced the hash family H = {ha :
a ∼ D} where D is any spherically symmetric distribution on Rd , and ha satisfies:
ha (x) = { +1, if ha, xi ≥ 0; −1, if ha, xi < 0 }.    (3)
The vector a can be interpreted as the normal vector of a random hyperplane, and the hash value
depends on which side of the hyperplane x lies on. For this hash function, the probability of a collision
is directly proportional to the angle between x and y:
Ph∼H (h(x) = h(y)) = 1 − φ(x, y)/π.    (4)
For any two angles θ1 < θ2 , the above family H is (θ1 , θ2 , 1 − θ1 /π, 1 − θ2 /π)-sensitive.
1 Formally speaking, the angular distance is only a similarity measure, and not a metric.
2 Various conditions and order terms (which are commonly n^{o(1)}) are omitted here for brevity.
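As a quick sanity check of (4), here is a small Monte Carlo sketch (added for illustration, not part of the paper; the dimension, number of trials and seed are arbitrary). It estimates the collision probability of a single random-hyperplane hash for two vectors at angle θ and compares it with 1 − θ/π.

import numpy as np

def hyperplane_collision_rate(theta, d=50, trials=100_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(d); x[0] = 1.0
    y = np.zeros(d); y[0] = np.cos(theta); y[1] = np.sin(theta)
    a = rng.standard_normal((trials, d))          # random hyperplane normals
    return np.mean(np.sign(a @ x) == np.sign(a @ y))

for theta in (np.pi / 6, np.pi / 3, np.pi / 2):
    print(round(hyperplane_collision_rate(theta), 3), round(1 - theta / np.pi, 3))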
Large deviations theory. Let {Z d }d∈N ⊂ Rk be a sequence of random vectors corresponding to an
empirical mean, i.e. Z d = (1/d) ∑_{i=1}^{d} U i with U i i.i.d. We define the logarithmic moment generating
function Λ of Z d as:
Λ(λ) = ln E_{U 1} [exp hλ, U 1 i] .    (5)
Define DΛ = {λ ∈ Rk : Λ(λ) < ∞}. The Fenchel-Legendre transform of Λ is defined as:
Λ∗ (z) = sup_{λ∈Rk} {hλ, zi − Λ(λ)} .    (6)
The following result describes that under certain conditions on {Z 0d }, the asymptotics of the probability
measure on a set F are related to the function Λ∗ .
Lemma 3 (Gärtner-Ellis theorem [DZ10, Theorem 2.3.6 and Corollary 6.1.6]). Let 0 be
contained in the interior of DΛ , and let Z d be an empirical mean. Then for arbitrary sets F ,
lim_{d→∞} (1/d) ln P(Z d ∈ F ) = − inf_{z∈F} Λ∗ (z).    (7)
The latter statement can be read as P(z ∈ F ) = exp(−d inf z∈F Λ∗ (z) + o(d)), and thus tells us
exactly how P(z ∈ F ) scales as d tends to infinity, up to order terms.
3 Hypercube LSH
In this section, we will analyze full-dimensional hypercube hashing, with hash family H = {hA : A ∈
SO(d)} where SO(d) ⊂ Rd×d denotes the rotation group, and hA satisfies:
hA (x) = (h1 (Ax), . . . , hd (Ax)),    hi (x) = { +1, if xi ≥ 0; −1, if xi < 0 }.    (8)
In other words, a hypercube hash function first applies a uniformly random rotation, and then maps
the resulting vector to the orthant it lies in. This equivalently corresponds to a concatenation of d
hyperplane hash functions, where all hyperplanes are orthogonal. Collision probabilities for prescribed
angles θ between x and y are denoted by:
p(θ) = P(hA (x) = hA (y) | φ(x, y) = θ).
(9)
Above, the randomness is over hA ∼ H, with x and y arbitrary vectors at angle θ (e.g. x = e1 and
y = e1 cos θ + e2 sin θ). Alternatively, the random rotation A inside hA may be omitted, and the
probability can be computed over X, Y drawn uniformly at random from a spherically symmetric
distribution, conditioned on their common angle being θ.
3.1 Outline of the proof of Theorem 1
Although Theorem 1 is a key result, due to space restrictions we have decided to defer the full proof
(approximately 5.5 pages) to the appendix. The approach of the proof can be summarized by the
following four steps:
– Rewrite the collision probabilities in terms of (normalized) half-normal vectors X, Y ;
– Introduce dummy variables x, y for the norms of these half-normal vectors, so that the probability
can be rewritten in terms of unnormalized half-normal vectors;
– Apply the Gärtner-Ellis theorem (Lemma 3) to the three-dimensional vector given by
Z = (1/d)(∑i Xi Yi , ∑i Xi² , ∑i Yi²) to compute the resulting probabilities for arbitrary x, y;
– Maximize the resulting expressions over x, y > 0 to get the final result.
The majority of the technical part of the proof lies in computing Λ∗ (z), which involves a somewhat
tedious optimization of a multivariate function through a case-by-case analysis.
A note on Gaussian approximations. From the (above outline of the) proof, and the observation
that the final optimization over x, y yields x = y = 1 as the optimum, one might wonder whether a
simpler analysis might be possible by assuming (half-)normal vectors are already normalized. Such a
computation however would only lead to an approximate solution, which is perhaps easiest to see by
computing collision probabilities for θ = 0. In the exact computation, where vectors are normalized,
hX, Y i = 1 implies X = Y . If however we do not take into account the norms of X and Y , and
do not condition on the norms being equal to 1, then hX, Y i = 1 could also mean that X, Y are
slightly longer than 1 and have a small, non-zero angle. In fact, such a computation would indeed
yield p(θ)1/d 6→ 0 as θ → 0.
3.2 Consequences of Theorem 1
From Theorem 1, we can draw several conclusions. Substituting values for θ, we can find asymptotics
for p(θ), such as p(π/3)^{1/d} = √3/π + o(1) and p(π/2)^{1/d} = 1/π + o(1). We observe that the limiting function
of Theorem 1 (without the order terms) is continuous everywhere except at θ = π/2. To understand
the boundary θ = arccos(2/π) of the piece-wise limit function, note that two (normalized) half-normal
vectors X, Y have expected inner product EhX, Y i = 2/π.
√
ρ for given angles θ1 and θ2 for large d. As an example, consider the random setting3 with c = 2,
corresponding to θ2 = π2 and θ1 = π3 . Substituting the collision probabilities from Theorem 1, we get
ρ → 1 − 21 logπ (3) ≈ 0.520 as d → ∞. To compare, if we had used random hyperplanes, we would have
gotten a limiting value ρ → log2 ( 32 ) ≈ 0.585. For the random case, Figure 2 compares limiting values ρ
using random and orthogonal hyperplanes, and using the asymptotically superior cross-polytope LSH.
Scaling at θ → 0 and asymptotics of ρ for large c. For θ close to 0, by Theorem 1 we are in the regime
defined by β0 . For cos θ = 1−ε with ε > 0 small, observe that β0 ≈ 1 satisfies
β0 > 1/ cos θ. Computing
√
2 2 3/2
a Taylor expansion around ε = 0, we eventually find β0 = 1 + ε + π ε + O(ε2 ). Substituting this
value β0 into p(θ) with cos θ = 1 − ε, we find:
√
p(θ) =
2√
1−
ε + O(ε)
π
!d+o(d)
.
(10)
To compare this with hyperplane LSH, recall that the collision
√ probability for d random hyperplanes
θ d
is equal to (1 − π ) . Since cos θ = 1 − ε translates to θ = 2ε(1 + O(ε)), the collision probabilities
√ √
for hyperplane hashing in this regime are also (1 − π2 ε + O(ε))d . In other words, for angles θ → 0,
the collision probabilities for hyperplane hashing and hypercube hashing are similar. This can also be
observed in Figure 1. Based on this result, we further deduce that in random settings with large c, for
hypercube LSH we have:
ln 1 −
ρ→
√
2
πc
+O
ln(1/π)
1
c2
√
2
=
+O
πc ln π
1
c2
0.393
≈
+O
c
1
c2
.
(11)
For hyperplane LSH, the numerator is the same, while the denominator is ln( 12 ) instead of ln( π1 ),
leading to values ρ which are a factor log2 π + o(1) ≈ 1.652 + o(1) larger. Both methods are inferior
to cross-polytope LSH for large d, as there ρ = O(1/c2 ) for large c [AIL+ 15].
3
√
√
Here we assume that c · r → ( 2)− ,√i.e. c · r approaches 2 from below. Alternatively, one might interpret this as
that if distant
points lie at distance 2 ± o(1), then we might expect approximately
half of them to lie at distance
√
√
less than 2, with query complexity O(n/2)ρ+o(1) = nρ+o(1) . If however c · r ≥ 2 then clearly ρ = 0, regardless of d
and c.
[Figure 3 shows empirical values of p(θ)^{1/d} as a function of θ for d ∈ {2, 4, 8, 16, 32}, together with the full hypercube LSH asymptotics.]
Fig. 3. Empirical collision probabilities for hypercube LSH for small d. The green curve denotes the exact collision
probabilities for d = 2 from Proposition 1.
3.3 Convergence to the limit
To get an idea how hypercube LSH compares to other methods when d is not too large, we start by
giving explicit collision probabilities for the first non-trivial case, namely d = 2.
Proposition 1 (Square LSH). For d = 2, p(θ) = 1 − 2θ/π for θ ≤ π/2, and p(θ) = 0 otherwise.
Proof. In two dimensions, two randomly rotated vectors X, Y at angle θ can be modeled as X = (cos ψ, sin ψ) and Y = (cos(ψ + θ), sin(ψ + θ)) for ψ ∼ [0, 2π). The conditions X, Y > 0 are then equivalent to ψ ∈ (0, π/2) ∩ (−θ, π/2 − θ), which for θ < π/2 occurs with probability (π/2 − θ)/(2π) over the randomness of ψ. As a collision can occur in any of the four quadrants, we finally multiply this probability by 4 to obtain the stated result.
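A quick Monte Carlo check of Proposition 1 (an added illustration; the number of trials and seed are arbitrary): sample the random offset ψ, hash both planar vectors to their quadrant, and compare the empirical collision rate with 1 − 2θ/π.

import numpy as np

def square_lsh_collision_rate(theta, trials=200_000, seed=0):
    rng = np.random.default_rng(seed)
    psi = rng.uniform(0, 2 * np.pi, trials)       # random rotation in 2D
    x = np.stack([np.cos(psi), np.sin(psi)], axis=1)
    y = np.stack([np.cos(psi + theta), np.sin(psi + theta)], axis=1)
    return np.mean(np.all(np.sign(x) == np.sign(y), axis=1))

for theta in (np.pi / 8, np.pi / 4, 3 * np.pi / 8):
    print(round(square_lsh_collision_rate(theta), 3), round(1 - 2 * theta / np.pi, 3))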
Figure 3 depicts p(θ)1/2 in green, along with hyperplane LSH (blue) and the asymptotics for
hypercube LSH (red). For larger d, computing p(θ) exactly becomes more complicated, and so instead
we performed experiments to empirically obtain estimates for p(θ) as d increases. These estimates are
also shown in Figure 3, and are based on 10^5 trials for each θ and d. Observe that as θ → π/2 and/or d
grows larger, p(θ) decreases and the empirical estimates become less reliable. Points are omitted for
cases where no successes occurred.
Based on these estimates and our intuition, we conjecture that (1) for θ ≈ 0, the scaling of p(θ)1/d
is similar for all d, and similar to the asymptotic behavior of Theorem 1; (2) the normalized collision
probabilities for θ ≈ π2 approach their limiting value from below; and (3) p(θ) is likely to be continuous
for arbitrary d, implying that for θ → π2 , the collision probabilities tend to 0 for each d. These together
suggest that values for ρ are actually smaller when d is small than when d is large, and the asymptotic
estimate from Figure 2 might be pessimistic in practice. For the random setting, this would suggest
that ρ ≈ 0 regardless of c, as p(θ) → 0 as θ → π2 for arbitrary d.
Comparison with hyperplane/cross-polytope LSH. Finally, [TT07, Figures 1 and 2] previously illustrated that among several LSH methods, the smallest values ρ (for their parameter sets) are obtained
with hypercube LSH with d = 16, achieving smaller values ρ than e.g. cross-polytope LSH with
d = 256. An explanation for this can be found in:
– The (conjectured) convergence of ρ to its limit from below, for hypercube LSH;
– The slow convergence of ρ to its limit (from above) for cross-polytope LSH4 .
This suggests that the actual values ρ for moderate dimensions d may well be smaller for hypercube
LSH (and hyperplane LSH) than for cross-polytope LSH. Based on the limiting cases d = 2 and
d → ∞, we further conjecture that compared to hyperplane LSH, hypercube LSH achieves smaller
values ρ for arbitrary d.
3.4 Fast hashing in practice
To further assess the practicality of hypercube LSH, recall that hashing is done as follows:
– Apply a uniformly random rotation A to x;
– Look at the signs of (Ax)i .
Theoretically, a uniformly random rotation will be rather expensive to compute, with A being a real,
dense matrix. As previously discussed in e.g. [Ach01], it may suffice to only consider a sparse subset
of all rotation matrices with a large enough amount of randomness, and as described in [AIL+ 15,
KW17] pseudo-random rotations may also be help speed up the computations in practice. As described
in [KW17], this can even be made provable, to obtain a reduced O(d log d) computational complexity
for applying a random rotation.
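A minimal sketch of the hashing step described above (illustration only, not from the paper): it uses a dense uniformly random rotation obtained from a QR decomposition rather than the faster pseudo-random rotations discussed above, and the dimension, noise level and seed are arbitrary choices.

import numpy as np

def random_rotation(d, rng):
    # QR decomposition of a Gaussian matrix, sign-corrected so that the
    # resulting orthogonal matrix is Haar (uniformly) distributed.
    Q, R = np.linalg.qr(rng.standard_normal((d, d)))
    return Q * np.sign(np.diag(R))

def hypercube_hash(A, x):
    # Rotate, then record the orthant (one sign bit per coordinate).
    return tuple(np.signbit(A @ x))

rng = np.random.default_rng(0)
d = 16
A = random_rotation(d, rng)

x = rng.standard_normal(d)
y = x + 0.05 * rng.standard_normal(d)   # a nearby vector
z = rng.standard_normal(d)              # an unrelated vector

print(hypercube_hash(A, x) == hypercube_hash(A, y))  # likely True (small angle)
print(hypercube_hash(A, x) == hypercube_hash(A, z))  # almost certainly False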
Finally, to compare this with cross-polytope LSH, note that cross-polytope LSH in dimension d
partitions the space in 2d regions, as opposed to 2d for hypercube hashing. To obtain a similar finegrained partition of the space with cross-polytopes, one would have to concatenate Θ(d/ log d) random
cross-polytope hashes, which corresponds to computing Θ(d/ log d) (pseudo-)random rotations, compared to only one rotation for hypercube LSH. We therefore expect hashing to be up to a factor
Θ(d/ log d) less costly.
4 Partial hypercube LSH
Since a high-dimensional hypercube partitions the space in a large number of regions, for various
applications one may only want to use hypercubes in a lower dimension d0 < d. In those cases, one would
first apply a random rotation to the data set, and then compute the hash based on the signs of the first
d0 coordinates of the rotated data set. This corresponds to the hash family H = {hA,d0 : A ∈ SO(d)},
with hA,d0 satisfying:
hA,d0 (x) = (h1 (Ax), . . . , hd0 (Ax)),
(
+1, if xi ≥ 0;
hi (x) =
−1, if xi < 0.
(12)
When “projecting” down onto the first d0 coordinates, observe that distances and angles are distorted:
the angle between the vectors formed by the first d0 coordinates of x and y may not be the same as
φ(x, y). The amount of distortion depends on the relation between d0 and d. Below, we will investigate
how the collision probabilities pd0 ,d (θ) for partial hypercube LSH scale with d0 and d, where pd0 ,d (θ) =
P(h(x) = h(y) | φ(x, y) = θ).
4.1 Convergence to hyperplane LSH
First, observe that for d0 = 1, partial hypercube LSH is equal to hyperplane LSH, i.e. p1,d (θ) = 1 − πθ .
For 1 < d0 d, we first observe that both (partial) hypercube LSH and hyperplane LSH can be
modeled by a projection onto d0 dimensions:
0
– Hyperplane LSH: x 7→ Ax with A ∼ N (0, 1)d ×d ;
0
– Hypercube LSH: x 7→ (A∗ )x with A ∼ N (0, 1)d ×d .
4
[AIL+ 15, Theorem 1] shows that the leading term in the asymptotics for ρ scales as Θ(ln d), with a first order term
scaling as O(ln ln d), i.e. a relative order term of the order O(ln ln d/ ln d).
Hypercube LSH for approximate near neighbors
9
Here A∗ denotes the matrix obtained from A after applying Gram-Schmidt orthogonalization to the
rows of A. In both cases, hashing is done after the projection by looking at the signs of the projected
vector. Therefore, the only difference lies in the projection, and one could ask: for which d0 , as a
function of d, are these projections equivalent? When is a set of random hyperplanes already (almost)
orthogonal?
This question was answered in [Jia06]: if d0 = o(d/ log d), then maxi,j |Ai,j −A∗i,j | → 0 in probability
as d → ∞ (implying A∗ = (1 + o(1))A), while for d0 = Ω(d/ log d) this maximum does not converge
to 0 in probability. In other words, for large d a set of d0 random hyperplanes in d dimensions is
(approximately) orthogonal iff d0 = o(d/ log d).
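The following sketch is an added illustration of this phenomenon, with arbitrarily chosen sizes and seed: it orthonormalizes the rows of a d′ × d Gaussian matrix (Gram-Schmidt via QR), rescales the Gaussian matrix by 1/√d so that its rows have roughly unit norm, and reports the maximum entrywise gap between the two.

import numpy as np

def max_entry_gap(d_proj, d, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((d_proj, d))
    # Gram-Schmidt on the rows of A (via QR of the transpose).
    Q, _ = np.linalg.qr(A.T)          # columns of Q: orthonormalized rows of A
    A_star = Q.T
    # Align signs so that A_star[i] points in the same direction as A[i].
    signs = np.sign(np.sum(A_star * A, axis=1))
    A_star = A_star * signs[:, None]
    return np.max(np.abs(A / np.sqrt(d) - A_star))

d = 4000
for d_proj in (5, 50, 500):
    print(d_proj, round(max_entry_gap(d_proj, d), 3))
# The gap stays small for d_proj much smaller than d / log d,
# and becomes noticeable once d_proj is a constant fraction of d / log d.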
Proposition 2 (Convergence to hyperplane LSH). Let pd0 ,d (θ) denote the collision probabilities
for partial hypercube LSH, and let d0 = o(d/ log d). Then pd0 ,d (θ)^{1/d0} → 1 − θ/π.
As d0 = Ω(d/ log d) random vectors in d dimensions are asymptotically not orthogonal, in that case
one might expect either convergence to full-dimensional hypercube LSH, or to something in between
hyperplane and hypercube LSH.
4.2 Convergence to hypercube LSH
To characterize when partial hypercube LSH is equivalent to full hypercube LSH, we first observe that
if d0 is large compared to ln n, then convergence to the hypercube LSH asymptotics follows from the
Johnson-Lindenstrauss lemma.
Proposition 3 (Sparse data sets). Let d0 = ω(ln n). Then the same asymptotics for the collision
probabilities as those of full-dimensional hypercube LSH apply.
Proof. Let θ ∈ (0, π2 ). By the Johnson-Lindenstrauss lemma [JL84], we can construct a projection
x 7→ Ax from d onto d0 dimensions, preserving all pairwise distances up to a factor 1 ± ε for ε =
Θ((ln n)/d0 ) = o(1). For fixed θ ∈ (0, π2 ), this implies the angle φ between Ax and Ay will be in the
interval θ ± o(1), and so the collision probability lies in the interval p(θ ± o(1)). For large d, this means
that the asymptotics of p(θ) are the same.
To analyze collision probabilities for partial hypercube LSH when neither of the previous two
propositions applies, note that through a series of transformations similar to those for full-dimensional
hypercube LSH, it is possible to eventually end up with the following probability to compute, where
d1 = d0 and d2 = d − d0 :
max_{x,y,u,v,φ} P( (1/d1) ∑_{i=1}^{d1} Xi Yi = xy cos φ,  (1/d1) ∑_{i=1}^{d1} Xi² = x²,  (1/d1) ∑_{i=1}^{d1} Yi² = y²,    (13)
(1/d2) ∑_{i=1}^{d2} Ui Vi = uv f(φ, θ),  (1/d2) ∑_{i=1}^{d2} Ui² = u²,  (1/d2) ∑_{i=1}^{d2} Vi² = v² ).    (14)
Here f is some function of φ and θ. The approach is comparable to how we ended up with a similar
probability to compute in the proof of Theorem 1, except that we split the summation indices I = [d]
into two sets I1 = {1, . . . , d0 } of size d1 and I2 = {d0 +1, . . . , d} of size d2 . We then substitute Ui = Xd0 +i
and Vi = Yd0 +i , and add dummy variables x, y, u, v for the norms of the four partial vectors, and a
dummy angle φ for the angle between the d1 -dimensional vectors, given the angle θ between the
d-dimensional vectors.
Although the vector Z formed by the six random variables in (14) is not an empirical mean
over a fixed number d of random vectors (the first three are over d1 terms, the last three over d2
terms), one may expect a similar large deviations result such as Lemma 3 to apply here. In that
case, the function Λ∗ (z) = Λ∗ (z1 , . . . , z6 ) would be a function of six variables, which we would like to
evaluate at (xy cos φ, x2 , y 2 , uvf (φ, θ), u2 , v 2 ). The function Λ∗ itself involves an optimization (finding
a supremum) over another six variables λ = (λ1 , . . . , λ6 ), so to compute collision probabilities for
[Figure 4 shows the fitted values of pd0 ,50 (θ)^{1/d} as a function of θ for d0 ∈ {2, 4, 8, 16, 32}, together with the hyperplane LSH and hypercube LSH curves.]
Fig. 4. Experimental values of pd0 ,50 (θ)^{1/d} , for different values d0 , compared with the asymptotics for hypercube LSH
(red) and hyperplane LSH (blue).
given d, d0 , θ exactly, using large deviations theory, one would have to compute an expression of the
following form:
min_{x,y,u,v,φ} { sup_{λ1 ,...,λ6} F_{d,d0 ,θ}(x, y, u, v, φ, λ1 , λ2 , λ3 , λ4 , λ5 , λ6 ) }.    (15)
As this is a very complex task, and the optimization will depend heavily on the parameters d, d0 , θ
defined by the problem setting, we leave this optimization as an open problem. We only mention that
intuitively, from the limiting cases of small and large d0 we expect that depending on how d0 scales
with d (or n), we obtain a curve somewhere in between the two curves depicted in Figure 1.
4.3
Empirical collision probabilities
To get an idea of how p_{d0,d}(θ) scales with d0 in practice, we empirically computed several values for fixed d = 50. For fixed θ we then applied a least-squares fit of the form e^{c_1 d + c_2} to the resulting data, and plotted e^{c_1} in Figure 4. These data points are again based on at least 10^5 experiments for each d0 and θ. We expect that as d0 increases, the collision probabilities slowly move from hyperplane hashing towards hypercube hashing; this can also be seen in the graph – for d0 = 2, the least-squares fit is almost equal to the curve for hyperplane LSH, while as d0 increases the curve slowly moves down towards the asymptotics for full hypercube LSH. Again, we stress that as d0 becomes larger, the empirical estimates become less reliable, and so we did not consider even larger values for d0.
Compared to full hypercube LSH and Figure 3, we observe that we now approach the limit from
above (although the fitted collision probabilities never seem to be smaller than those of hyperplane
LSH), and therefore the values ρ for partial hypercube LSH are likely to lie in between those of
hyperplane and (the asymptotics of) hypercube LSH.
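For reference, the following hedged sketch shows one way such empirical estimates and the exponential fit could be produced; it is illustrative code of my own (the paper's actual experimental setup may differ), using a Gaussian projection as the partial hypercube hash and fitting ln p ≈ c_1 d + c_2 over several dimensions with d0 scaling with d.

```python
# Hedged sketch: Monte Carlo estimates of p_{d0,d}(theta) and an e^{c1*d + c2} fit.
import numpy as np

def estimate_p(d, d0, theta, trials, rng):
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((d0, d))                     # fresh hash function per trial
        x = rng.standard_normal(d); x /= np.linalg.norm(x)
        z = rng.standard_normal(d); z -= (z @ x) * x; z /= np.linalg.norm(z)
        y = np.cos(theta) * x + np.sin(theta) * z            # unit vector at angle theta to x
        hits += np.array_equal(A @ x > 0, A @ y > 0)
    return hits / trials

rng = np.random.default_rng(0)
theta = np.pi / 3
ds = np.array([10, 20, 30, 40])
# trials must be large enough that no estimate is zero, otherwise the log-fit below fails
ps = np.array([estimate_p(d, max(2, d // 5), theta, 20000, rng) for d in ds])
c1, c2 = np.polyfit(ds, np.log(ps), 1)                       # ln p ~ c1*d + c2
print("fitted p(theta)^(1/d) ~", np.exp(c1))
```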
5
Application: Lattice sieving for the shortest vector problem
We finally consider an explicit application for hypercube LSH, namely lattice sieving algorithms for the shortest vector problem. Given a basis B = {b_1, . . . , b_d} ⊂ R^d of a lattice L(B) = {Σ_i λ_i b_i : λ_i ∈ Z},
the shortest vector problem (SVP) asks to find a shortest non-zero vector in this lattice. Various
different methods for solving SVP in high dimensions are known, and currently the algorithm with the
best heuristic time complexity in high dimensions is based on lattice sieving, combined with nearest
neighbor searching [BDGL16].
In short, lattice sieving works by generating a long list L of pairwise reduced lattice vectors, where x, y are reduced iff ‖x − y‖ ≥ min{‖x‖, ‖y‖}. The previous condition is equivalent to φ(x, y) ≥ π/3, and so the length of L can be bounded by the kissing constant in dimension d, which is conjectured to scale as (4/3)^{d/2+o(d)}. Therefore, if we have a list of size n = (4/3)^{d/2+o(d)}, any newly sampled lattice vector can be reduced against the list many times to obtain a very short lattice vector. The time complexity of this method is dominated by doing poly(d) · n reductions (searches for nearby vectors) with a list of size n. A linear search trivially leads to a heuristic complexity of n^{2+o(1)} = (4/3)^{d+o(d)} (with space n^{1+o(1)}), while nearest neighbor techniques can reduce the time complexity to n^{1+ρ+o(1)} for ρ < 1 (increasing the space to n^{1+ρ+o(1)}). For more details, see e.g. [NV08, Laa15, BDGL16].
Based on the collision probabilities for hypercube LSH, and assuming the asymptotics for partial
hypercube LSH (with d0 = O(d)) are similar to those of full-dimensional hypercube LSH, we obtain
the following result. An outline of the proof is given in the appendix.
Proposition 4 (Complexity of lattice sieving with hypercube LSH). Suppose the asymptotics for full hypercube LSH also hold for partial hypercube LSH with d0 ≈ 0.1335d. Then lattice sieving with hypercube LSH heuristically solves SVP in time and space 2^{0.3222d+o(d)}.
As expected, the conjectured asymptotic performance of (sieving with) hypercube LSH lies in
between those of hyperplane LSH and cross-polytope LSH.
– Linear search [NV08]: 2^{0.4150d+o(d)}.
– Hyperplane LSH [Laa15]: 2^{0.3366d+o(d)}.
– Hypercube LSH: 2^{0.3222d+o(d)}.
– Spherical cap LSH [LdW15]: 2^{0.2972d+o(d)}.
– Cross-polytope LSH [BL16]: 2^{0.2972d+o(d)}.
– Spherical LSF [BDGL16]: 2^{0.2925d+o(d)}.
In practice however, the picture is almost entirely reversed [SG15]. The lattice sieving method used to solve SVP in the highest dimension to date (d = 116) used a very optimized linear search [Kle14]. The furthest that any nearest neighbor-based sieve has been able to go to date is d = 107, using hypercube LSH [MLB15, MB16] (although phrased as hyperplane LSH, the implementations from [Laa15, MLB15, MB16] are using hypercube LSH). Experiments further indicated that spherical LSF only becomes competitive with hypercube LSH as d ≳ 80 [BDGL16, MLB17], while sieving with cross-polytope LSH turned out to be rather slow compared to other methods [BL16, Mar16]. Although it remains unclear which nearest neighbor method is the “most practical” in the application of lattice sieving, hypercube LSH is one of the main contenders.
Acknowledgments. The author is indebted to Ofer Zeitouni for his suggestion to use results from large
deviations theory, and for his many helpful comments regarding this application. The author further
thanks Brendan McKay and Carlo Beenakker for their comments. The author is supported by the
SNSF ERC Transfer Grant CRETP2-166734 FELICITY.
References
Ach01. Dimitris Achlioptas. Database-friendly random projections. In PODS, pages 274–281, 2001.
AIL+15. Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya Razenshteyn, and Ludwig Schmidt. Practical and optimal LSH for angular distance. In NIPS, pages 1225–1233, 2015.
AINR14. Alexandr Andoni, Piotr Indyk, Huy Lê Nguyên, and Ilya Razenshteyn. Beyond locality-sensitive hashing. In SODA, pages 1018–1028, 2014.
ALRW17. Alexandr Andoni, Thijs Laarhoven, Ilya Razenshteyn, and Erik Waingarten. Optimal hashing-based time-space trade-offs for approximate near neighbors. In SODA, pages 47–66, 2017.
And09. Alexandr Andoni. Nearest Neighbor Search: the Old, the New, and the Impossible. PhD thesis, Massachusetts Institute of Technology, 2009.
AR15. Alexandr Andoni and Ilya Razenshteyn. Optimal data-dependent hashing for approximate near neighbors. In STOC, pages 793–801, 2015.
AS72. Milton Abramowitz and Irene A. Stegun. Handbook of Mathematical Functions. Dover Publications, 1972.
BDGL16. Anja Becker, Léo Ducas, Nicolas Gama, and Thijs Laarhoven. New directions in nearest neighbor searching with applications to lattice sieving. In SODA, pages 10–24, 2016.
Bis06. Christopher M. Bishop. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag, 2006.
BL16. Anja Becker and Thijs Laarhoven. Efficient (ideal) lattice sieving using cross-polytope LSH. In AFRICACRYPT, pages 3–23, 2016.
Cha02. Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, pages 380–388, 2002.
Chr17. Tobias Christiani. A framework for similarity search with space-time tradeoffs using locality-sensitive filtering. In SODA, pages 31–46, 2017.
DHS00. Richard O. Duda, Peter E. Hart, and David G. Stork. Pattern Classification (2nd Edition). Wiley, 2000.
Dub10. Moshe Dubiner. Bucketing coding and information theory for the statistical high-dimensional nearest-neighbor problem. IEEE Transactions on Information Theory, 56(8):4166–4179, Aug 2010.
DZ10. Amir Dembo and Ofer Zeitouni. Large deviations techniques and applications (2nd edition). Springer, 2010.
ER08. Kave Eshghi and Shyamsundar Rajaram. Locality sensitive hash functions based on concomitant rank order statistics. In KDD, pages 221–229, 2008.
IM98. Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In STOC, pages 604–613, 1998.
Jia06. Tiefeng Jiang. How many entries of a typical orthogonal matrix can be approximated by independent normals? The Annals of Probability, 34(4):1497–1529, 2006.
JL84. William B. Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26(1):189–206, 1984.
Kle14. Thorsten Kleinjung. Private communication, 2014.
KW17. Christopher Kennedy and Rachel Ward. Fast cross-polytope locality-sensitive hashing. In ITCS, 2017.
Laa15. Thijs Laarhoven. Sieving for shortest vectors in lattices using angular locality-sensitive hashing. In CRYPTO, pages 3–22, 2015.
LdW15. Thijs Laarhoven and Benne de Weger. Faster sieving for shortest lattice vectors using spherical locality-sensitive hashing. In LATINCRYPT, pages 101–118, 2015.
Mar16. Artur Mariano. Private communication, 2016.
MB16. Artur Mariano and Christian Bischof. Enhancing the scalability and memory usage of HashSieve on multi-core CPUs. In PDP, pages 545–552, 2016.
MLB15. Artur Mariano, Thijs Laarhoven, and Christian Bischof. Parallel (probable) lock-free HashSieve: a practical sieving algorithm for the SVP. In ICPP, pages 590–599, 2015.
MLB17. Artur Mariano, Thijs Laarhoven, and Christian Bischof. A parallel variant of LDSieve for the SVP on lattices. In PDP, 2017.
MNP07. Rajeev Motwani, Assaf Naor, and Rina Panigrahy. Lower bounds on locality sensitive hashing. SIAM Journal on Discrete Mathematics, 21(4):930–935, 2007.
MO15. Alexander May and Ilya Ozerov. On computing nearest neighbors with applications to decoding of binary linear codes. In EUROCRYPT, pages 203–228, 2015.
NV08. Phong Q. Nguyên and Thomas Vidick. Sieve algorithms for the shortest vector problem are practical. Journal of Mathematical Cryptology, 2(2):181–207, 2008.
OWZ11. Ryan O'Donnell, Yi Wu, and Yuan Zhou. Optimal lower bounds for locality sensitive hashing (except when q is tiny). In ICS, pages 276–283, 2011.
SDI05. Gregory Shakhnarovich, Trevor Darrell, and Piotr Indyk. Nearest-Neighbor Methods in Learning and Vision: Theory and Practice. MIT Press, 2005.
SG15. Michael Schneider and Nicolas Gama. SVP challenge, 2015.
SSLM14. Ludwig Schmidt, Matthew Sharifi, and Ignacio Lopez-Moreno. Large-scale speaker identification. In ICASSP, pages 1650–1654, 2014.
STS+13. Narayanan Sundaram, Aizana Turmukhametova, Nadathur Satish, Todd Mostak, Piotr Indyk, Samuel Madden, and Pradeep Dubey. Streaming similarity search over one billion tweets using parallel locality-sensitive hashing. VLDB, 6(14):1930–1941, 2013.
TT07. Kengo Terasawa and Yuzuru Tanaka. Spherical LSH for approximate nearest neighbor search on unit hypersphere. In WADS, pages 27–38, 2007.
TT09. Kengo Terasawa and Yuzuru Tanaka. Approximate nearest neighbor search for a dataset of normalized vectors. IEICE Transactions on Information and Systems, 92(9):1609–1619, 2009.
A
Proof of Theorem 1
Theorem 1 will be proved through a series of lemmas, each making partial progress towards a final
solution. Reading only the claims made in the lemmas may give the reader an idea how the proof is
built up. Before starting the proof, we begin with a useful lemma regarding integrals of (exponentials
of) quadratic forms.
Lemma 4 (Integrating an exponential of a quadratic form in the positive quadrant). Let a, b, c ∈ R with a, c < 0 and D = b² − 4ac < 0. Then:
\[
\int_0^\infty \int_0^\infty \exp(ax^2 + bxy + cy^2)\, dx\, dy
= \frac{\pi + 2\arctan\frac{b}{\sqrt{-D}}}{2\sqrt{-D}}. \qquad (16)
\]
Proof. The proof below is based on substituting y = xs (and dy = x ds) before computing the integral over x. An integral over 1/(a + bs + cs²) then remains, which leads to the arctangent solution in case b² < 4ac.
\begin{align*}
I &= \int_{y=0}^\infty \int_0^\infty \exp(ax^2 + bxy + cy^2)\, dx\, dy && (17)\\
  &= \int_{s=0}^\infty \int_0^\infty x \exp\big((a + bs + cs^2)x^2\big)\, dx\, ds && (18)\\
  &= \int_0^\infty \left[\frac{\exp\big((a + bs + cs^2)x^2\big)}{2(a + bs + cs^2)}\right]_{x=0}^\infty ds && (19)\\
  &= \int_0^\infty \left(0 - \frac{1}{2(a + bs + cs^2)}\right) ds && (20)\\
  &= \frac{-1}{2}\int_0^\infty \frac{1}{a + bs + cs^2}\, ds. && (21)
\end{align*}
The last equality used the assumptions a, c < 0 and b² < 4ac so that a + bs + cs² < 0 for all s > 0. We then solve the last remaining integral (see e.g. [AS72, Equation (3.3.16)]) to obtain:
\begin{align*}
I &= \frac{-1}{2}\left[\frac{2}{\sqrt{4ac - b^2}}\arctan\frac{b + 2cs}{\sqrt{4ac - b^2}}\right]_{s=0}^\infty && (22)\\
  &= \frac{-1}{2\sqrt{4ac - b^2}}\left(-\pi - 2\arctan\frac{b}{\sqrt{4ac - b^2}}\right). && (23)
\end{align*}
Eliminating minus signs and substituting D = b² − 4ac, we obtain the stated result.
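As a quick sanity check of (16), one can compare the closed form against numerical integration. The snippet below is my own illustrative check (not part of the paper), using SciPy's dblquad over a truncated domain that is effectively infinite for these rapidly decaying integrands.

```python
# Hedged numerical check of Lemma 4 (illustrative, not from the paper).
import numpy as np
from scipy.integrate import dblquad

a, b, c = -1.0, 0.5, -2.0                         # satisfies a, c < 0 and D = b^2 - 4ac < 0
D = b**2 - 4*a*c
# integrate over [0, 20]^2, effectively the positive quadrant for these decaying integrands
numeric, _ = dblquad(lambda y, x: np.exp(a*x**2 + b*x*y + c*y**2), 0, 20, 0, 20)
closed_form = (np.pi + 2*np.arctan(b / np.sqrt(-D))) / (2*np.sqrt(-D))
print(numeric, closed_form)                       # the two values should agree closely
```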
Next, we begin by restating the collision probability between two vectors in terms of half-normal
vectors.
Lemma 5 (Towards three-dimensional large deviations). Let H denote the hypercube hash family in d dimensions, and as before, let p be defined as:
\[ p(\theta) = \mathbb{P}_{h \sim \mathcal{H}}\big(h(x) = h(y) \mid \phi(x, y) = \theta\big). \qquad (24) \]
Let X̂, Ŷ ∼ H(0,1)^d and let the sequence {Z^d}_{d∈N} ⊂ R³ be defined as:
\[ Z^d = \frac{1}{d}\left(\sum_{i=1}^d \hat{X}_i \hat{Y}_i,\; \sum_{i=1}^d \hat{X}_i^2,\; \sum_{i=1}^d \hat{Y}_i^2\right). \qquad (25) \]
Then:
\[ p(\theta) = \left(\frac{1}{2\sin\theta}\right)^{d+o(d)} \max_{x,y>0} \mathbb{P}\big(Z^d = (xy\cos\theta, x^2, y^2)\big). \qquad (26) \]
Proof. First, we write out the definition of the conditional probability in p, and use the fact that each of the 2^d hash regions (orthants) has the same probability mass. Here X, Y ∼ N(0,1)^d denote random Gaussian vectors, and subscripts denoting what probabilities are computed over are omitted when implicit.
\begin{align*}
p(\theta) &= \mathbb{P}_{h\sim\mathcal{H}}\big(h(x) = h(y) \mid \phi(x, y) = \theta\big) && (27)\\
&= 2^d \cdot \mathbb{P}_{X,Y\sim N(0,1)^d}\big(X > 0, Y > 0 \mid \phi(X, Y) = \theta\big) && (28)\\
&= \frac{2^d \cdot \mathbb{P}\big(X > 0, Y > 0, \phi(X, Y) = \theta\big)}{\mathbb{P}\big(\phi(X, Y) = \theta\big)}. && (29)
\end{align*}
By Lemma 1, the denominator is equal to (sin θ)^{d+o(d)}. The numerator of (29) can further be rewritten as a conditional probability on {X > 0, Y > 0}, multiplied with P(X > 0, Y > 0) = 2^{−2d}. To incorporate the conditionals X, Y > 0, we replace X, Y ∼ N(0,1)^d by half-normal vectors X̂, Ŷ ∼ H(0,1)^d, resulting in:
\[ p(\theta) = \frac{\mathbb{P}_{\hat{X},\hat{Y}\sim H(0,1)^d}\big(\phi(\hat{X},\hat{Y}) = \theta\big)}{(2\sin\theta)^{d+o(d)}} = \frac{q(\theta)}{(2\sin\theta)^{d+o(d)}}. \qquad (30) \]
To incorporate the normalization over the (half-normal) vectors X̂ and Ŷ, we introduce dummy variables x, y corresponding to the norms of X̂/√d and Ŷ/√d, and observe that as the probabilities are exponential in d, the integrals will be dominated by the maximum value of the integrand in the given range:
\begin{align*}
q(\theta) &= \int_0^\infty\int_0^\infty \mathbb{P}\big(\langle\hat{X},\hat{Y}\rangle = xyd\cos\theta,\; \|\hat{X}\|^2 = x^2 d,\; \|\hat{Y}\|^2 = y^2 d\big)\, dx\, dy && (31)\\
&= 2^{o(d)} \max_{x,y>0} \mathbb{P}\big(\langle\hat{X},\hat{Y}\rangle = xyd\cos\theta,\; \|\hat{X}\|^2 = x^2 d,\; \|\hat{Y}\|^2 = y^2 d\big). && (32)
\end{align*}
Substituting Z^d = (1/d)(⟨X̂, Ŷ⟩, ‖X̂‖², ‖Ŷ‖²), we obtain the claimed result.
Note that Z_1, Z_2, Z_3 are pairwise but not jointly independent. To compute the density of Z^d at (xy cos θ, x², y²) for d → ∞, we use the Gärtner–Ellis theorem stated in Lemma 3.
Lemma 6 (Applying the Gärtner–Ellis theorem to Z^d). Let {Z^d}_{d∈N} ⊂ R³ as in Lemma 5, and let Λ and Λ* as in Section 2. Then 0 lies in the interior of D_Λ, and therefore
\[ \mathbb{P}\big(Z^d = (xy\cos\theta, x^2, y^2)\big) = \exp\big(-\Lambda^*(xy\cos\theta, x^2, y^2)\, d + o(d)\big). \qquad (33) \]
Essentially, all that remains now is computing Λ∗ at the appropriate point z. To continue, we first
compute the logarithmic moment generating function Λ = Λd of Z d :
Lemma 7 (Computing Λ). Let Z^d as before, and let D = D(λ_1, λ_2, λ_3) = λ_1² − (1 − 2λ_2)(1 − 2λ_3). Then for λ ∈ D_Λ = {λ ∈ R³ : λ_2, λ_3 < 1/2, D < 0} we have:
\[ \Lambda(\lambda) = \ln\!\left(\pi + 2\arctan\frac{\lambda_1}{\sqrt{-D}}\right) - \ln\pi - \tfrac{1}{2}\ln(-D). \qquad (34) \]
Proof. By the definition of the LMGF, we have:
\[ \Lambda(\lambda) = \ln \mathbb{E}_{\hat{X}_1,\hat{Y}_1\sim H(0,1)}\Big[\exp\big(\lambda_1 \hat{X}_1\hat{Y}_1 + \lambda_2 \hat{X}_1^2 + \lambda_3 \hat{Y}_1^2\big)\Big]. \qquad (35) \]
We next compute the inner expectation over the random variables X̂_1, Ŷ_1, by writing out the double integral over the product of the argument with the densities of X̂_1 and Ŷ_1.
\begin{align*}
&\mathbb{E}_{X_1,Y_1} \exp\big(\lambda_1 X_1 Y_1 + \lambda_2 X_1^2 + \lambda_3 Y_1^2\big) && (36)\\
&= \int_0^\infty \sqrt{\tfrac{2}{\pi}}\exp\!\left(-\tfrac{x^2}{2}\right) \int_0^\infty \sqrt{\tfrac{2}{\pi}}\exp\!\left(-\tfrac{y^2}{2}\right)\exp\big(\lambda_1 xy + \lambda_2 x^2 + \lambda_3 y^2\big)\, dy\, dx && (37)\\
&= \frac{2}{\pi}\int_0^\infty\int_0^\infty \exp\big(\lambda_1 xy + (\lambda_2 - \tfrac12)x^2 + (\lambda_3 - \tfrac12)y^2\big)\, dx\, dy. && (38)
\end{align*}
Applying Lemma 4 with (a, b, c) = (λ_2 − 1/2, λ_1, λ_3 − 1/2) yields the claimed expression for Λ, as well as the bounds stated in D_Λ which are necessary for the expectation to be finite.
We now continue with computing the Fenchel-Legendre transform of Λ, which involves a rather
complicated maximization (supremum) over λ ∈ R3 . The following lemma makes a first step towards
computing this supremum.
Lemma 8 (Computing Λ*(z) – General form). Let z ∈ R³ such that z_2, z_3 > 0. Then the Fenchel–Legendre transform Λ* of Λ at z satisfies
\begin{align*}
\Lambda^*(z) = \ln\pi + \sup_{\lambda_1,\,\beta>1}\bigg\{ &\frac{z_2}{2} + \frac{z_3}{2} + \lambda_1 z_1 - |\lambda_1|\beta\sqrt{z_2 z_3} + \frac{\ln(\beta^2 - 1)}{2} + \ln|\lambda_1| && (39)\\
&- \ln\!\left(\pi + 2\arctan\frac{\lambda_1}{|\lambda_1|\sqrt{\beta^2 - 1}}\right)\bigg\}. && (40)
\end{align*}
Proof. First, we recall the definition of Λ* and substitute the previous expression for Λ:
\begin{align*}
\Lambda^*(z) &= \sup_{\lambda\in\mathbb{R}^3}\{\langle\lambda, z\rangle - \Lambda(\lambda)\} && (41)\\
&= \ln\pi + \sup_{\lambda\in\mathbb{R}^3}\left\{\langle\lambda, z\rangle + \ln\sqrt{-D} - \ln\!\left(\pi + 2\arctan\frac{\lambda_1}{\sqrt{-D}}\right)\right\}. && (42)
\end{align*}
Here as before D = λ_1² − (1 − 2λ_2)(1 − 2λ_3) < 0. Let the argument of the supremum above be denoted by f(z, λ). We make a change of variables by setting t_2 = 1 − 2λ_2 > 0 and t_3 = 1 − 2λ_3 > 0, so that D becomes D = λ_1² − t_2 t_3 < 0:
\[
f(z, \lambda_1, t_2, t_3) = \frac{z_2}{2} + \frac{z_3}{2} + \lambda_1 z_1 - \frac{t_2 z_2}{2} - \frac{t_3 z_3}{2} + \frac{1}{2}\ln(t_2 t_3 - \lambda_1^2) - \ln\!\left(\pi + 2\arctan\frac{\lambda_1}{\sqrt{t_2 t_3 - \lambda_1^2}}\right). \qquad (43)\text{–}(44)
\]
We continue by making a further change of variables u = t_2 t_3 > λ_1² so that t_2 = u/t_3. As a result the dependence of f on t_3 is only through the fourth and fifth terms above, from which one can easily deduce that the supremum over t_3 occurs at t_3 = √(u z_2/z_3). This also implies that t_2 = √(u z_3/z_2). Substituting these values for t_2, t_3, we obtain:
\[
f(z, \lambda_1, u) = \frac{z_2}{2} + \frac{z_3}{2} + \lambda_1 z_1 - \sqrt{u}\sqrt{z_2 z_3} + \frac{1}{2}\ln(u - \lambda_1^2) - \ln\!\left(\pi + 2\arctan\frac{\lambda_1}{\sqrt{u - \lambda_1^2}}\right).
\]
Finally, we use the substitution u = β²·λ_1². From D < 0 it follows that u/λ_1² = β² > 1. This substitution and some rewriting of f leads to the claimed result.
The previous simplifications were regardless of z_1, z_2, z_3, where the only assumption that was made during the optimization of t_3 was that z_2, z_3 > 0. In our application, we want to compute Λ* at z = (xy cos θ, x², y²) for certain x, y > 0 and θ ∈ (0, π/2). Substituting these values for z, the expression from Lemma 8 becomes:
\begin{align*}
\Lambda^*(xy\cos\theta, x^2, y^2) = \ln\pi + \frac{x^2}{2} + \frac{y^2}{2} + \sup_{\lambda_1,\,\beta>1}\bigg\{ &(\lambda_1\cos\theta - |\lambda_1|\beta)xy + \frac{\ln(\beta^2 - 1)}{2} && (45)\\
&+ \ln|\lambda_1| - \ln\!\left(\pi + 2\arctan\frac{\lambda_1}{|\lambda_1|\sqrt{\beta^2 - 1}}\right)\bigg\}. && (46)
\end{align*}
The remaining optimization over λ_1, β now takes slightly different forms depending on whether λ_1 < 0 or λ_1 > 0. We will tackle these two cases separately, based on the identity:
\[
\Lambda^*(z) = \max\Big\{\sup_{\lambda\in\mathbb{R}^3,\,\lambda_1>0}\{\langle\lambda, z\rangle - \Lambda(\lambda)\},\; \sup_{\lambda\in\mathbb{R}^3,\,\lambda_1<0}\{\langle\lambda, z\rangle - \Lambda(\lambda)\}\Big\} = \max\{\Lambda^*_+(z), \Lambda^*_-(z)\}.
\]
Lemma 9 (Computing Λ*(z) for positive λ_1). Let z = (xy cos θ, x², y²) with x, y > 0 and θ ∈ (0, π/2). For θ ∈ (0, arccos(2/π)), let β_0 = β_0(θ) ∈ (1, ∞) be the unique solution to (1). Then the Fenchel–Legendre transform Λ* at z, restricted to λ_1 > 0, satisfies
\[
\Lambda^*_+(z) = \frac{x^2}{2} + \frac{y^2}{2} - 1 - \ln(xy) +
\begin{cases}
\ln\dfrac{\pi\beta_0(\beta_0\cos\theta - 1)}{2(\beta_0 - \cos\theta)}, & \text{if } \theta \in (0, \arccos\tfrac{2}{\pi});\\[2mm]
0, & \text{if } \theta \in [\arccos\tfrac{2}{\pi}, \tfrac{\pi}{2}).
\end{cases} \qquad (47)
\]
Proof. Substituting λ_1 > 0 into (46), we obtain:
\begin{align*}
\Lambda^*_+(xy\cos\theta, x^2, y^2) &= \ln\pi + \frac{x^2}{2} + \frac{y^2}{2} + \sup_{\lambda_1>0,\,\beta>1}\big\{g_+(\lambda_1, \beta)\big\}, && (48)\\
g_+(\lambda_1, \beta) &= (\cos\theta - \beta)\lambda_1 xy + \frac{\ln(\beta^2 - 1)}{2} + \ln\lambda_1 - \ln\!\left(\pi + 2\arctan\frac{1}{\sqrt{\beta^2 - 1}}\right). && (49)
\end{align*}
Differentiating w.r.t. λ_1 gives (cos θ − β)xy + 1/λ_1. Recall that β > 1 > cos θ. For λ_1 → 0⁺ the derivative is therefore positive, for λ_1 → ∞ it is negative, and there is a global maximum at the only root λ_1 = 1/((β − cos θ)xy). In that case, the expression further simplifies and we can pull out more terms that do not depend on β, to obtain:
\begin{align*}
\Lambda^*_+(xy\cos\theta, x^2, y^2) &= \ln\pi + \frac{x^2}{2} + \frac{y^2}{2} - 1 - \ln(xy) + \sup_{\beta>1}\big\{g_+(\beta)\big\}, && (50)\\
g_+(\beta) &= \ln\frac{\sqrt{\beta^2 - 1}}{(\beta - \cos\theta)\left(\pi + 2\arcsin\frac{1}{\beta}\right)} = \ln h_+(\beta). && (51)
\end{align*}
Here we used the identity arctan(1/√(β² − 1)) = arcsin(1/β). Now, for β → 1⁺ we have h₊(β) → 0⁺, while for β → ∞, we have
\[ h_+(\beta) = \frac{1}{\pi} + \frac{1}{\pi\beta}\left(\cos\theta - \frac{2}{\pi}\right) + O\!\left(\frac{1}{\beta^2}\right). \qquad (52) \]
In other words, if cos θ ≤ 2/π or θ ≥ arccos(2/π), we have h₊(β) → (1/π)⁻ (the second order term is negative for cos θ = 2/π), while for θ < arccos(2/π) we approach the same limit from above as h₊(β) → (1/π)⁺. For θ < arccos(2/π) there is a non-trivial maximum at some value β = β_0 ∈ (1, ∞), while for θ ≥ arccos(2/π), we can see from the derivative h′₊(β) that h₊(β) is strictly increasing on (1, ∞), and the supremum is attained at β → ∞. We therefore obtain two different results, depending on whether θ < arccos(2/π) or θ ≥ arccos(2/π).
Case 1: arccos(2/π) ≤ θ < π/2. The supremum is attained in the limit of β → ∞, which leads to h₊(β) → 1/π and the stated expression for Λ*₊(xy cos θ, x², y²).
Case 2: 0 < θ < arccos(2/π). In this case there is a non-trivial maximum at some value β = β_0, namely there where the derivative h′₊(β_0) = 0. After computing the derivative, eliminating the (positive) denominator and rewriting, this condition is equivalent to (1). This allows us to rewrite g and Λ* in terms of β_0, by substituting the given expression for arcsin(1/β_0), which ultimately leads to the stated formula for Λ*₊.
Lemma 10 (Computing Λ*(z) for negative λ_1). Let z = (xy cos θ, x², y²) with x, y > 0 and θ ∈ (0, π/2). For θ ∈ (arccos(2/π), π/3), let β_1 ∈ (1, ∞) be the unique solution to (1). Then the Fenchel–Legendre transform Λ* at z, restricted to λ_1 < 0, satisfies
\[
\Lambda^*_-(z) = \frac{x^2}{2} + \frac{y^2}{2} - 1 - \ln(xy) +
\begin{cases}
0, & \text{if } \theta \in (0, \arccos\tfrac{2}{\pi}];\\[1mm]
\ln\dfrac{\pi\beta_1(\beta_1\cos\theta + 1)}{2(\cos\theta + \beta_1)}, & \text{if } \theta \in (\arccos\tfrac{2}{\pi}, \tfrac{\pi}{3});\\[2mm]
\ln\dfrac{\pi}{2(1 + \cos\theta)}, & \text{if } \theta \in [\tfrac{\pi}{3}, \tfrac{\pi}{2}).
\end{cases} \qquad (53)
\]
Proof. We again start by substituting λ_1 < 0 into (46):
\begin{align*}
\Lambda^*_-(xy\cos\theta, x^2, y^2) &= \ln\pi + \frac{x^2}{2} + \frac{y^2}{2} + \sup_{\lambda_1<0,\,\beta>1}\big\{g_-(\lambda_1, \beta)\big\},\\
g_-(\lambda_1, \beta) &= (\cos\theta + \beta)\lambda_1 xy + \frac{\ln(\beta^2 - 1)}{2} + \ln(-\lambda_1) - \ln\!\left(\pi + 2\arctan\frac{-1}{\sqrt{\beta^2 - 1}}\right). && (54)
\end{align*}
Differentiating w.r.t. λ_1 gives (cos θ + β)xy + 1/λ_1. For λ_1 → −∞ this is positive, for λ_1 → 0⁻ this is negative, and so the maximum is at λ_1 = −1/((cos θ + β)xy). Substituting this value for λ_1, and pulling out terms which do not depend on β yields:
\begin{align*}
\Lambda^*_-(xy\cos\theta, x^2, y^2) &= \ln\frac{\pi}{2} + \frac{x^2}{2} + \frac{y^2}{2} - 1 - \ln(xy) + \sup_{\beta>1}\big\{g_-(\beta)\big\}, && (55)\\
g_-(\beta) &= \ln\frac{\sqrt{\beta^2 - 1}}{(\cos\theta + \beta)\arccos\frac{1}{\beta}} = \ln h_-(\beta).
\end{align*}
Above we used the identity π + 2 arctan(−1/√(β² − 1)) = 2 arccos(1/β), where the factor 2 has been pulled outside the supremum. Now, differentiating h₋ w.r.t. β results in:
\[
h'_-(\beta) = \frac{\sqrt{\beta^2 - 1}\,(\beta\cos\theta + 1)\arccos\frac{1}{\beta} - \frac{\beta^2 - 1}{\beta}(\cos\theta + \beta)}{(\beta^2 - 1)(\cos\theta + \beta)^2\arccos^2\frac{1}{\beta}}. \qquad (56)
\]
Clearly the denominator is positive, while for β → 1⁺ the limit is negative iff cos θ < 1/2. For β → ∞ we further have h′₋(β) → 0⁻ for cos θ ≤ 2/π and h′₋(β) → 0⁺ for cos θ > 2/π. We therefore analyze three cases separately below.
Case 1: π/3 ≤ θ < π/2. In this parameter range, h′₋(β) is negative for all β > 1, and the supremum lies at β → 1⁺ with limiting value h₋(β) → 1/(1 + cos θ). This yields the given expression for Λ*₋.
Case 2: arccos(2/π) < θ < π/3. For θ in this range, h′₋(β) is positive for β → 1⁺ and negative for β → ∞, and changes sign exactly once, where it attains its maximum. After some rewriting, we find that this is at the value β = β_1(θ) ∈ (1, ∞) satisfying the relation from (1). Substituting this expression for arccos(1/β_1) into h₋, we obtain the result for Λ*₋.
Case 3: 0 < θ ≤ arccos(2/π). In this case h′₋ is positive for all β > 1, and the supremum lies at β → ∞. For β → ∞ we have h₋(β) → 2/π (regardless of θ) and we therefore get the final claimed result.
Proof (Proof of Theorem 1). Combining the previous two results with Lemma 6 and Equation (47), we obtain explicit asymptotics for P(Z^d ≈ (xy cos θ, x², y²)). What remains is a maximization over x, y > 0 of p, which translates to a minimization of Λ*. As x²/2 + y²/2 − 1 − ln(xy) attains its minimum at x = y = 1 with value 0, we obtain Theorem 1.
B
Proof of Proposition 4
We will assume the reader is familiar with (the notation from) [Laa15]. Let t = 2^{c_t d + o(d)} denote the number of hash tables, and n = (4/3)^{d/2+o(d)}. Going through the proofs of [Laa15, Appendix A] and replacing the explicit instantiation of the collision probabilities (1 − θ/π) by an arbitrary function p(θ), we get that the optimal number of hash functions concatenated into one function for each hash table, denoted k, satisfies
\[ k = \frac{\ln t}{-\ln p(\theta_1)} = \frac{c_t\, d}{d_0 \log_2(\pi/\sqrt{3})}. \qquad (57) \]
The latter equality follows when substituting θ_1 = π/3 and substituting the collision probabilities for partial hypercube LSH in some dimension d_0 ≤ d. As we need k ≥ 1, the previous relation translates to a condition on d_0 as d_0 ≤ \frac{c_t}{\log_2(\pi/\sqrt{3})}\, d. As we expect the collision probabilities to be closer to those of full-dimensional hypercube LSH when d_0 is closer to d, we replace the above inequality by an equality, and what remains is finding the minimum value c_t satisfying the given constraints.
By carefully checking the proofs of [Laa15, Appendix A.2–A.3], the exact condition on c_t to obtain the minimum asymptotic time complexity is the following:
\[ -c_n = \max_{\theta_2\in(0,\pi)}\left(\log_2\sin\theta_2 - \frac{c_t}{\rho(\tfrac{\pi}{3}, \theta_2)}\right). \qquad (58) \]
Here c_n = ½ log₂(4/3) ≈ 0.20752, and ρ(θ_1, θ_2) = ln p(θ_1)/ln p(θ_2) corresponds to the exponent ρ for given angles θ_1, θ_2. Note that in the above equation, only c_t is an unknown. Substituting the asymptotic collision probabilities from Theorem 1, we find a solution at c_t ≈ 0.11464, with maximizing angle θ_2 ≈ 0.45739π. This corresponds to a time and space complexity of (n · t)^{1+o(1)} = 2^{(c_n + c_t)d + o(d)} ≈ 2^{0.32216d+o(d)} as claimed.
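The condition (58) can also be solved numerically. The sketch below is my own illustration, not the paper's code: reproducing the hypercube exponents would require solving (1), so as a stand-in it plugs in the hyperplane LSH collision probabilities p(θ) = 1 − θ/π (per hash); under the reconstruction of (58) above, the resulting c_n + c_t should come out close to the 0.3366 exponent quoted for hyperplane-LSH sieving in Section 5.

```python
# Hedged sketch: bisection for the smallest c_t satisfying condition (58).
import numpy as np

c_n = 0.5 * np.log2(4.0 / 3.0)                       # ~ 0.20752

def log2_p(theta):                                   # per-hash collision exponent (stand-in:
    return np.log2(1.0 - theta / np.pi)              # hyperplane LSH; replace for hypercube)

def rho(theta1, theta2):
    return log2_p(theta1) / log2_p(theta2)

thetas = np.linspace(0.01, np.pi - 0.01, 20000)

def excess(c_t):                                     # max_theta2 {log2 sin - c_t/rho} + c_n
    vals = np.log2(np.sin(thetas)) - c_t / rho(np.pi / 3, thetas)
    return vals.max() + c_n

lo, hi = 0.0, 1.0                                    # excess is decreasing in c_t
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if excess(mid) > 0 else (lo, mid)
print("c_t ~", hi, " time/space exponent c_n + c_t ~", c_n + hi)
```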
| 8 |
arXiv:1609.07450v3 [cs.DM] 21 Feb 2018
Finding long simple paths in a weighted digraph using
pseudo-topological orderings
Miguel Raggi ∗
mraggi@gmail.com
Escuela Nacional de Estudios Superiores
Universidad Nacional Autónoma de México
Abstract
Given a weighted digraph, finding the longest path without repeated vertices is well known to be NP-hard. Furthermore, even giving a reasonable (in a certain sense) approximation algorithm is known to be NP-hard. In this paper we describe an efficient heuristic algorithm for finding long simple paths, using a hybrid approach of heuristic depth-first search and pseudo-topological orders, which are a generalization of topological orders to non-acyclic graphs, via a process we call “opening edges”.
Keywords: long paths, graphs, graph algorithms, weighted directed graphs, long simple
paths, heuristic algorithms.
1
Introduction
We focus on the following problem: Given a weighted digraph D = (V, E) with weight
w : E → R+ , find a simple path with high weight. The weight of a path is the sum of
the individual weights of the edges belonging to the path. A path is said to be simple if it
contains no repeated vertices.
Possible applications of this problem include motion planning, timing analysis in a VLSI
circuit and DNA sequencing.
The problem of finding long paths in graphs is well known to be NP-hard, as it is trivially
a generalization of HAMILTON PATH. Furthermore, it was proved by Björklund, Husfeldt
and Khanna in [BHK04] that the longest path cannot be approximated in polynomial time within n^{1−ε} for any ε > 0 unless P = NP.
∗
Research supported in part by PAPIIT IA106316, UNAM.
Orcid ID: 0000-0001-9100-1655.
While LONGEST SIMPLE PATH has been studied extensively in theory for simple
graphs (for example in [Scu03], [ZL07], [PD12]), not many efficient heuristic algorithms exist
even for simple undirected graphs, much less for weighted directed graphs. A nice survey
from 1999 can be found at [Sch99]. A more recent comparison of 4 distinct genetic algorithms
for approximating a long simple path can be found in [PAR10].
An implementation of the proposed algorithm won the Oracle MDC coding competition
in 2015. In the problem proposed by Oracle in the challenge “Longest overlapping movie
names”, one needed to find the largest concatenation of overlapping strings following certain
rules, which could be easily transformed to a problem of finding the longest simple path in
a directed weighted graph. The graph had around 13,300 vertices.
Our contribution lies in a novel method of improving existing paths, and an efficient
implementation of said method. The proposed algorithm consists of two parts: finding
good candidate paths using heuristic DFS and then improving upon those candidates by
attempting to either replace some vertices in the path by longer subpaths–or simply insert
some subpaths when possible–by using pseudo-topological orders.
The full C++ source code can be downloaded from
http://github.com/mraggi/LongestSimplePath.
In Section 2 we give some basic definitions. We describe the proposed algorithm in
Section 3. Finally, we give some implementation details and show the result of some experimental data in Section 4.
It should be noted that for this particular problem it is generally easy to quickly construct
somewhat long paths, but only up to a point. After this point even minor improvements get
progressively harder.
2
Preliminaries
Definition 2.1. A directed acyclic graph (or DAG) D is a directed graph with no (directed)
cycles.
In a directed acyclic graph, one can define a partial order ≺ of the vertices, in which we
say v ≺ u iff there is a directed path from v to u.
Definition 2.2. A topological ordering for a directed acyclic graph D is a total order of the
vertices of D that is consistent with the partial order described above. In other words, it is
an ordering of the vertices such that there are no edges of D which go from a “high” vertex
to a “low” vertex.
Definition 2.3. Given a digraph D, a strongly connected component C is a maximal set
of vertices with the following property: for each pair of vertices x, y ∈ C, there exists a
(directed) path from x to y and one from y to x. A weakly connected component is a
connected component of the associated simple graph.
[Figure 1: Two different topological orders of the same digraph.]
Definition 2.4. Given a digraph D, the skeleton S of D is the graph constructed as follows:
The vertices of S are the strongly connected components of D. Given x, y ∈ V (S), there is
an edge x → y iff there exists a ∈ x and b ∈ y such that a → b is an edge of D.
It can be observed that S is always a directed acyclic graph.
Definition 2.5. Denote by v̄ the strongly connected component of D which contains v. Given a vertex v, we define the out-rank of v as the length of the longest path of S that starts at v̄. Similarly, we define the in-rank of v as the length of the longest path of S which ends at v̄.
2.1
Longest simple path on DAGs
In case the digraph is acyclic, a well-known algorithm that uses dynamic programming can find the optimal path in O(V + E) time.
We describe Dijkstra’s algorithm adapted to finding the longest simple path in a DAG.
As this algorithm is an essential building block of the algorithm described in Section 3.2,
we add a short description here for convenience. For a longer discussion see, for example,
[SW11].
1. Associate to each vertex v a real number x[v], which will end up representing the weight
of the longest simple path that ends at v.
2. Find a topological ordering of the vertices of D.
3. In said topological order, iteratively set x[v] to the max of x[p] + w(p → v) where
p → v is an edge, or 0 otherwise.
4. Once we have x[v] for every vertex v, reconstruct the path by backtracking, starting
from the vertex v with the highest x[v].
In more detail,
Algorithm 1 Longest simple path in a DAG
Input: A DAG D with weight function w : E(D) → R.
Output: A longest simple path P in D.
function LSP DAG(D)
x is an array of size |V (D)|, initialized with zeroes.
Find a topological order T for V (D)
for v ∈ T do
x[v] := max{x[p] + w(p → v) : p → v is an edge}, or 0 if v has no in-edges
v := argmax(x)
P := path with only v
T 0 := reverse(T )
while x[v] ≠ 0 do
u := an in-neighbor of v for which x[u] + w(u → v) = x[v]
Add u to the front of P
v := u
return P
This algorithm is simple to implement and efficient. Its running time is O(E + V ), where
E is the number of edges of D.
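The following is a minimal Python sketch of Algorithm 1 (the implementation referenced in the introduction is in C++; this version is purely illustrative): a topological order is computed with Kahn's algorithm, the values x[v] are filled in by dynamic programming, and the path is recovered by backtracking through stored parents.

```python
# Hedged sketch of Algorithm 1: longest weighted path in a DAG.
from collections import defaultdict

def longest_path_dag(n, edges):
    """edges: list of (u, v, w) with u, v in 0..n-1; the digraph must be acyclic."""
    out_edges, indeg = defaultdict(list), [0] * n
    for u, v, w in edges:
        out_edges[u].append((v, w))
        indeg[v] += 1
    order, stack = [], [v for v in range(n) if indeg[v] == 0]   # Kahn's algorithm
    while stack:
        u = stack.pop()
        order.append(u)
        for v, _ in out_edges[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    x, parent = [0.0] * n, [None] * n        # x[v] = weight of the best path ending at v
    for u in order:
        for v, w in out_edges[u]:
            if x[u] + w > x[v]:
                x[v], parent[v] = x[u] + w, u
    v = max(range(n), key=lambda i: x[i])    # backtrack from the best endpoint
    path = [v]
    while parent[v] is not None:
        v = parent[v]
        path.append(v)
    return path[::-1], max(x)

print(longest_path_dag(4, [(0, 1, 1.0), (1, 2, 2.0), (0, 3, 5.0), (3, 2, 1.0)]))
# -> ([0, 3, 2], 6.0)
```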
3
The Algorithm
In what follows we shall assume we have preprocessed the graph and found the weakly
connected components, the strongly connected components, and have fast access to both
the outgoing edges and incoming edges for each vertex. As we may perform the following
algorithm on each weakly connected component, without loss of generality assume D is
weakly connected.
Our proposed algorithm has two main parts: In the first part we find long paths using
heuristic depth first search, choosing in a smart way which vertices to explore first, and in
the second part we discuss a strategy to improve the paths obtained in the first part. Since
the idea based on DFS is somewhat standard or straightforward, the main contribution of
this paper lies in the ideas presented in the second part.
3.1
Depth-first search
We describe a variation on depth-first search (DFS).
The standard way of implementing a depth-first search is to either use a stack (commonly
refered to as the frontier or fringe) of vertices to store unexplored avenues, or to use recursive
calls (effectively using the callstack in lieu of the stack).
If the graph is not acyclic, DFS may get stuck on a cycle. The standard way of dealing
with this problem, when one simply wishes to see every vertex (and not every simple path,
as in our case), is to store previously explored vertices in a data structure that allows us to
quickly check if a vertex has been explored before or not.
However, for the problem of finding the longest simple path, it is not enough to simply ignore previously explored vertices: we may need to explore the same vertex many times, since we may arrive at it via many different paths, and these need to be considered separately. Thus, for this problem, it is not possible to reconstruct the path by backtracking through visited vertices, as is done in many other problems.
This could be solved simply by modifying DFS slightly: make the frontier data structure contain paths instead of only vertices. However, storing paths uses a large amount of
memory and all the extra allocations might slow down the search considerably.
We propose a faster approach that results from modifying a single path in place. This is very likely not an original idea, but an extensive search in the literature did not reveal any article that considers depth-first search in this manner, probably because for most problems the recursive or stack implementations are quite efficient, as they only need to deal with stacks of vertices and not stacks of paths.
In this approach, instead of maintaining a stack of unexplored avenues, assume for each
vertex the outgoing edges are sorted in a predictable manner. Later we will sort the outgoing
edges in a way that explores the vertices with high probability of producing long paths first,
but for now just assume any order that we know. Since this approach modifies the path in
place, always make a copy of the best path found so far before any operation that might
destroy the path.
Furthermore, assume we have a function NextUnexploredEdge(P, {u, v}) that takes as input a path P and an edge {u, v}, in which the last vertex of P is u, and returns the next edge {u, w} in the order mentioned above for which w ∉ P. This can be found using binary search, or even adding extra information to the edge, so that each edge remembers its index in the adjacency list of the first vertex. If there is no such edge, the function should return null. If no parameter {u, v} is provided, it should return the first edge {u, w} for which w ∉ P.
We will construct a single path P and modify it repeatedly by adding vertices to the
back of P .
Algorithm 2 Next Path in a DFS manner
Input: A weighted digraph D and a path P , which will be modified in place
Output: Either done or not done
function NextPath(P )
last := last vertex of P
t := NextUnexploredEdge(P )
while t = null and |P | > 1 do
last := last vertex of P
Remove last from P
newLast := last vertex of P
t := NextUnexploredEdge(P, {newLast, last})
if t = null then
return done
Add t to the back of P
return not done
By repeatedly applying this procedure we can explore every path that starts at a given
vertex in an efficient manner, but there are still too many possible paths to explore, so we
must call off the search after a specified amount of time, or perhaps after a specified amount
of time has passed without any improvements on the best so far.
Finally, we can do both forward and backward search with a minor modification to this
procedure. So once a path cannot be extended forward any more, we can check if it can
be extended backward. We found experimentally that erasing the first few edges of the path before starting the backward search often produces better results.
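A compact Python sketch of this in-place enumeration follows (the actual implementation referenced above is in C++; the helper names here are illustrative). Out-neighbours are assumed to be pre-sorted by the search heuristic, and next_unexplored_edge scans them past a given position for a vertex not already on the path.

```python
# Hedged sketch of Algorithm 2: advancing a single path in place, in DFS order.
def next_unexplored_edge(adj, on_path, u, after=-1):
    """Return index i > after into adj[u] with adj[u][i] not on the path, else None."""
    for i in range(after + 1, len(adj[u])):
        if not on_path[adj[u][i]]:
            return i
    return None

def next_path(adj, path, on_path):
    """Advance (path, on_path) to the next path in DFS order; return False when done."""
    i = next_unexplored_edge(adj, on_path, path[-1])
    while i is None and len(path) > 1:
        last = path.pop()                       # backtrack one step
        on_path[last] = False
        prev_idx = adj[path[-1]].index(last)    # position of the edge we came from
        i = next_unexplored_edge(adj, on_path, path[-1], prev_idx)
    if i is None:
        return False
    v = adj[path[-1]][i]
    path.append(v)
    on_path[v] = True
    return True

# Enumerate all simple paths starting at vertex 0 of a tiny digraph, keeping the longest.
adj = {0: [1, 2], 1: [2], 2: [0]}
path, on_path = [0], {v: False for v in adj}
on_path[0] = True
best = list(path)
while next_path(adj, path, on_path):
    if len(path) > len(best):
        best = list(path)
print(best)                                     # [0, 1, 2]
```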
3.1.1
Choosing the next vertex
We give the details for efficiently searching forward, as searching backward is analogous.
So we are left with the following two questions: At which vertex do we start our search
at? And then, while performing DFS, which vertices do we explore first? That is, how do
we find a “good” order of the outgoing edges, so that good paths have a higher chance of
being found quickly?
The first question is easily answered: start at vertices with high out-rank.
To answer the second question, we use a variety of information we collect on each vertex
before starting the search:
1. The out-rank and in-rank.
2. The (weighted) out-degree and (weighted) in-degree.
3. A score described below.
Once we find the score of each vertex, order the out-neighbors by rank first and score
second, with some exceptions we will mention below. The score should not depend on any
path, only on local information about the vertex, and should be fast to calculate.
Formally, let k be a constant (small) positive integer. For each vertex v, let A_k(v) be the sum of weights of all paths of length k starting at v. For example, A_1(v) is the weighted out-degree.
Given a choice of parameters a_1, a_2, ..., a_k ∈ R⁺, construct the (out) score for vertex v as
\[ \mathrm{score}_{\mathrm{out}}(v) = \sum_{i=1}^{k} a_i A_i(v). \]
Intuitively, the score of each vertex tries to heuristically capture the number (and quality)
of paths starting at that vertex. High score means more paths start at a vertex.
When performing forward search, perhaps counter-intuitively, giving high priority to vertices with low score (as long as it is not 0) consistently finds better paths than giving high priority to vertices with high score. The reason is that exploring vertices with low score first means saving the good vertices (those with high scores) for later use, once more restrictions are in place. Low-scoring vertices are usually quickly discarded when they lead nowhere, so it makes sense to leave vertices with high degree for later, when the path is longer and there are more restrictions on which vertices can still be used. An exception is if a vertex has out-degree 0: in this case we give the vertex a low priority, as no further paths are possible.
Another exception is to give higher priority to vertices with very low indegree (for example, indegree 1), since if they are not explored in a path when first finding their parents,
they will never be used again later in the path.
In addition, we also use the in-degree information in an analogous way.
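A small sketch of how these scores could be computed is given below; it is illustrative code of my own (not the C++ implementation), and it takes A_k(v) to be the total weight of all length-k walks leaving v, computed with the recursion N_{i+1}(v) = Σ_{v→u} N_i(u) and A_{i+1}(v) = Σ_{v→u} (w(v,u)·N_i(u) + A_i(u)), with N_0 = 1 and A_0 = 0. The coefficients a_i are tuning parameters.

```python
# Hedged sketch: out-scores score_out(v) = sum_i a_i * A_i(v) over length-i walks.
def out_scores(adj, n, coeffs):
    """adj[u] = list of (v, w) out-edges; coeffs = (a_1, ..., a_k)."""
    N = [1] * n                                  # N_0(v) = 1 (the empty walk)
    A = [0.0] * n                                # A_0(v) = 0
    score = [0.0] * n
    for a in coeffs:
        N_next = [sum(N[v] for v, _ in adj[u]) for u in range(n)]
        A_next = [sum(w * N[v] + A[v] for v, w in adj[u]) for u in range(n)]
        N, A = N_next, A_next
        for u in range(n):
            score[u] += a * A[u]
    return score

# Example: a path 0 -> 1 -> 2 with unit weights, k = 2 and a = (1, 1).
adj = {0: [(1, 1.0)], 1: [(2, 1.0)], 2: []}
print(out_scores(adj, 3, (1.0, 1.0)))            # A_1 = [1,1,0], A_2 = [2,0,0] -> [3.0, 1.0, 0.0]
```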
3.2
Pseudo-topological order
The idea behind the second part of the algorithm is to try to improve paths by either inserting
some short paths in-between or replacing some vertices by some short paths in an efficient
way that covers both.
We begin by introducing some definitions.
Definition 3.1. Given a digraph D, a weak pseudo-topological ordering ≺ of the vertices of
D is a total order of the vertices in which whenever x ≺ y and there is an edge y → x, then
x and y are in the same strongly connected component.
In other words, a weak pseudo-topological order is a total order that is consistent with
the partial order given by the skeleton.
Definition 3.2. Given a digraph D, a strong pseudo-topological ordering ≺ of the vertices
of D is a total order of the vertices in which whenever x ≺ y and there is an edge y → x,
every vertex in the interval [x, y] is in the same strongly connected component.
In other words, a strong pseudo-topological order is a weak pseudo-topological order in which the strongly connected components are not intermixed.
[Figure 2: A (strong) pseudo-topological ordering.]
From here on, whenever we mention a pseudo-topological ordering, we mean a strong
pseudo-topological ordering.
An easy way to get a random pseudo-topological ordering is to get a random topological
ordering of the skeleton of the graph, and then “explode” the strongly connected components,
choosing a random permutation of vertices in each component.
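The following is a hedged Python sketch of this construction (not the paper's C++ code); networkx is used purely to obtain the strongly connected components and the condensation, and randomising among incomparable components is omitted for brevity.

```python
# Hedged sketch: a random (strong) pseudo-topological order via the skeleton.
import random
import networkx as nx

def random_pseudo_topological_order(G, rng=random):
    """G: a networkx DiGraph. Returns a list of the vertices of G."""
    C = nx.condensation(G)                        # DAG of strongly connected components
    order = []
    for c in nx.topological_sort(C):              # a topological order of the skeleton
        members = list(C.nodes[c]["members"])     # vertices inside this SCC
        rng.shuffle(members)                      # "explode" the component randomly
        order.extend(members)
    return order

G = nx.DiGraph([(1, 2), (2, 3), (3, 1), (3, 4)])  # {1, 2, 3} form an SCC, then 4
print(random_pseudo_topological_order(G))         # e.g. [2, 1, 3, 4]
```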
We can think of a pseudo-topological ordering as a topological ordering of the digraph
in which we erase all edges that go from a “high” vertex to a “low” vertex, thus considering
an acyclic subdigraph of the original. We call this graph the subjacent DAG of the pseudo-topological order. Thus, we may apply Algorithm 1 to this acyclic digraph and find its
longest simple path.
As can be expected, the results obtained in this fashion are very poor compared to even
a slow, non-heuristic recursive depth-first search. However, if we combine the two approaches
we get better paths.
3.2.1
Combining the two approaches
Definition 3.3. Given a path P and a pseudo-topological ordering T, the imposition of P on T is the pseudo-topological ordering T_P in which every vertex not in P stays in the same position as in T, while the vertices in P are permuted to match the order given by P.
For example, say we start with path P = 3 → 1 → 5 → 8. Consider any pseudo-topological ordering, say, T = (1, 8, 7, 4, 3, 6, 5, 2). Then imposing the order defined by P on T gives rise to T_P = (3, 1, 7, 4, 5, 6, 8, 2).
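The imposition itself is a few lines of code; the sketch below (my own, illustrative) reproduces the example above.

```python
# Hedged sketch of Definition 3.3: impose the order of a path P on an ordering T.
def impose(path, T):
    on_path = set(path)
    positions = [i for i, v in enumerate(T) if v in on_path]   # slots occupied by P in T
    T_new = list(T)
    for slot, v in zip(positions, path):                        # rewrite those slots in P's order
        T_new[slot] = v
    return T_new

# The example from the text: P = 3 -> 1 -> 5 -> 8 and T = (1, 8, 7, 4, 3, 6, 5, 2).
print(impose([3, 1, 5, 8], [1, 8, 7, 4, 3, 6, 5, 2]))
# -> [3, 1, 7, 4, 5, 6, 8, 2]
```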
Lemma 3.4. TP as constructed above is also a (strong) pseudo-topological order.
Proof. As T is a strong pseudo-topological order, consider S1 , S2 , ... , Sc the strongly
connected components in the order they appear in T. Denote by s(v) the index of the strongly connected component of v. It suffices to prove that the vertices only move inside their own
strongly connected components when going from T to TP .
[Figure 3: No backward edges can jump between strongly connected components.]
Let (p1 , p2 , ..., pk ) be the vertices of path P . Note that there is no i < j for which
s(pi ) > s(pj ) since this would mean there exists a path from a vertex in Ss(pi ) to a vertex in
Ss(pj ) , but if s(pi ) > s(pj ), no such path is possible in a pseudo-topological order, since this
violates the order in the skeleton. This means that when imposing the order of P into T to
get TP , no vertex can jump out of their strongly connected component, and thus TP is also
a strong pseudo-topological order.
The previous lemma ensures that we may run Algorithm 1 with order T_P, and get a path that is at least as good as (and hopefully better than) path P, since all edges of P remain in the subjacent DAG of T_P.
If after applying this technique we do find an improved path P′, we can repeat the process with P′, by again taking a random pseudo-topological ordering, imposing the order of P′ on this new ordering, and so on, until there is no more improvement.
The idea then is to construct long paths quickly with DFS and then use these paths as
starting points for imposing on random pseudo-topological orders.
This approach does indeed end up producing moderately better paths than only doing
DFS, even when starting from scratch with the trivial path and a random pseudo-topological
order, albeit taking longer. However, we can do better.
3.2.2
Opening the edges
Again, we are in the setting where we have a path P which we wish to improve.
Now, instead of just imposing path P on multiple random pseudo-topological orders to
find one that gets an improvement, construct orders as follows: Pick an edge pi → pi+1
of path P and construct a random pseudo-topological order that is consistent with P and
furthermore, for which pi and pi+1 are as far apart as possible.
Figure 4: The process of opening an edge.
This is achieved by putting all vertices not in P that lie in strongly connected components between p_i and p_{i+1} in between p_i and p_{i+1}. In the figure above, the “large” vertices are vertices in P and the “small” vertices are all other vertices not in P in the same connected component as p_i and p_{i+1}. If p_i and p_{i+1} are not in the same connected component, then place every vertex in either connected component, and also every vertex that belongs to a connected component between the component of p_i and the component of p_{i+1}, between the two vertices, in such a way that the order is still a strong pseudo-topological order.
We may repeat this process for each edge in P .
The process of opening an edge is relatively expensive, since we must run Algorithm 1
each time.
We now make an attempt at explaining why opening edges works as well as it does.
Consider:
1. If there exists a vertex v that can be inserted into P , opening the corresponding edge
finds this improvement.
2. If there exists a vertex v that can replace a vertex p in P and make the path longer
(by means of edge weights), this process will find it when opening the edge to either
side of p.
3. Any (small) path of size k that can be inserted into P , perhaps even replacing a few
vertices of P , has probability at least 1/k! of being found if the corresponding vertices
in the small path happen to be in the correct order.
In the next section we try to heuristically maximize the probability that inserting or
replacing paths will be found.
3.2.3
Opening the edges eXtreme
In the previous section, when opening up each edge, we put all the remaining vertices in the
same connected component in a random order (consistent with pseudo-topological orders).
We now consider the question of which order to give to those unused vertices. We discuss
three different approaches: one heuristic that is quick and practical, one powerful and somewhat slow, and one purely theoretical using sequence covering arrays but which provides
some theoretical guarantees.
Let B be the inbetween vertices (i.e. vertices between pi and pi+1 when opening this
edge). Since every other vertex will remain in their place, we face the problem of giving B
an ordering with a good chance of delivering an improvement. We only consider orders of
B that leaves the total order a strongly pseudo-topological order. That is, we only permute
the vertices of B within their own strongly connected components of the full graph.
Consider the induced subdigraph on B:
Recall that running Algorithm 1 with a pseudo-topological order is equivalent to finding
the longest path on the digraph that results by erasing all the edges that go backward.
Therefore, we must consider only orders of B that are themselves pseudo-topological orders
of the induced subgraph on B.
We describe three approaches to choosing an order of B.
3.2.4
The powerful approach
The “powerful” approach is to recursively repeat the process on the induced subgraph on
B. That is, repeat for B the whole process of finding the strongly connected components,
performing DFS as described in a previous section, finding suitable pseudo-topological orders,
opening their respective edges, and recursively doing the same for the induced subgraphs in
order to find good pseudo-topological orders and so on.
The problem with this approach is that with any standard cache-friendly graph data
structure we would need to reconstruct the induced digraph (i.e. rename the vertices of B
but keep track of their original names), and the whole process is slow. Of course, we would
only need to do this process once per connected component and then we can use the results
for each edge of the path.
The advantage of this approach is that we are precisely recursively finding good pseudotopological orders of B, which means it’s likely many long paths can be inserted in our
original P .
3.2.5
The heuristic approach
Instead of attempting to repeat the whole algorithm on the induced subgraph, we try to
mimic heuristically its results. Consider the following operation on the inbetween vertices:
pick a vertex u at random, and exchange its position with some out-neighbor v of u which
appeared before u in the pseudo-topological order. If no such neighbor exists, simply find
pick another u.
Repeat this operation (which is quite inexpensive) as many times as the time limit allows.
The following theorem ensures that this process will likely end in a (weak) pseudo-topological
order.
Theorem 3.5. With probability approaching 1 as the number of operations approaches infinity, the order constructed above is a weak pseudo-topological order of the induced subgraph on B.
Proof: For any digraph, given a total order of the vertices, call a pair of vertices (a, b)
bad if a appears before b in the order, there is a path from b to a but not one from a to b. In
other words, if the strongly connected component which contains b is (strictly) less than the
strongly connected component which contains a in the partial order of the skeleton of D.
Thus, we only need to prove that the number of bad pairs never increases after an
operation, and that with some positive probability, it decreases.
Suppose we do an operation, and u is the chosen vertex, which is exchanged with v (so
u was after v in the order, but there is an edge from u to v). Then only pairs of vertices of
the form (a, u) and (v, a) could have changed status (and only when a is between u and v in
the order).
Let U, V, A be the strongly connected components containing u, v, a respectively. If A
is before U , then indeed the pair (a, u) is now bad after the exchange, but since we are
assuming there is an edge from u to v, if A is before U , then it is before V , and so the pair
(v, a) used to be bad, but is now good. When (v, a) becomes bad, the process is analogous,
and (a, u) becomes good.
The above process then gives a randomized approximate algorithm to calculate weak pseudo-topological orders without calculating the strongly connected components.
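A hedged sketch of this swap heuristic follows (illustrative code of my own, not the C++ implementation): a vertex u is repeatedly exchanged with an out-neighbour that currently sits before it, so that the order drifts towards a weak pseudo-topological order of the subgraph.

```python
# Hedged sketch of the swap heuristic of Section 3.2.5.
import random

def swap_heuristic(adj, order, iterations, rng=random):
    """adj[u] = iterable of out-neighbours of u; order is permuted in place."""
    pos = {v: i for i, v in enumerate(order)}
    vertices = list(order)
    for _ in range(iterations):
        u = rng.choice(vertices)
        earlier = [v for v in adj.get(u, ()) if pos[v] < pos[u]]   # out-neighbours before u
        if not earlier:
            continue
        v = rng.choice(earlier)
        i, j = pos[u], pos[v]
        order[i], order[j] = order[j], order[i]                    # exchange their positions
        pos[u], pos[v] = j, i
    return order

adj = {0: [1], 1: [2], 2: []}
print(swap_heuristic(adj, [2, 1, 0], 200, random.Random(0)))       # tends to [0, 1, 2]
```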
3.2.6
The theoretical approach
Denote E(P ) the edge set of a path P .
For a positive integer k, we wish to find P that is maximal in the sense that if Q is
another path for which |E(Q) \ E(P )| ≤ k, then the total weight of Q is less than or equal
to the total weight of P .
Given the edge opening process, this problem can be reduced to the following: we wish for
a minimal set of permutations in Sₙ for which every k-subset of {1, . . . , n} appears in every possible
order. The idea is to try an edge opening for every edge in the path with the order of B
given by each element of the set of permutations.
This problem has been worked on by Colbourn et al. on [CCHZ13] and [MC14], where
they named any such set of permutations a sequence covering array. They give a mostly
impractical algorithm for finding covering arrays, that works in practice up to n ≈ 100.
However, an easy probabilistic argument yields that taking Θ(log(n)) permutations randomly
gives a non-zero chance of ending up with a covering array. This suggests that merely taking
many random permutations would yield (probabilistically) the desired result. Unsurprisingly,
this approach is not nearly as efficient as the other two.
For k = 2, however, a covering array is easy to find: take any permutation and its
reverse. So by opening every edge and taking any permutation of the inbetween vertices and
its reverse, we ensure the found path is optimal in this very limited sense: There exists no
other path with higher total weight all whose edges, except one, are a subset of the edges of
P plus one more.
4
4.1
Some details about the implementation
Preprocessing the graph
The data structure we use for storing the graph is a simple pair of adjacency lists: one
for the out-neighbors and one for the in-neighbors, so we can access either efficiently. The
vertices are numbered from 0 to n − 1 and we store an array of size n, each element of which
is an array storing both the neighbors of the corresponding vertex and the corresponding
edge-weights.
Next, we find connected components. While it is true that finding the weakly connected
components, strongly connected components and skeleton might require some non-trivial
processing, this pales in comparison to the NP-hard problem of finding the longest path. An
efficient implementation (for example, using C++ boost graph library) can find the weakly
and strongly connected components on graphs with millions of edges in a fraction of a second.
Then, we find the out-heuristic and the in-heuristic scores for each vertex, as described
in Section 3.1, and sort the neighbors of each vertex according to the heuristic.
In our experiments, the whole preprocessing step took about 0.2 seconds on the Oracle
graph, which has ∼13,300 vertices. This time includes reading the file of strings, figuring out
which concatenations are possible and constructing the graph. Experiments with randomly
generated graphs of comparable size take less than 0.1 seconds if the graph is already in
memory.
If one has a training set of a class of graphs, one could use some rudimentary machine
learning to try to find the optimal parameters so that on average good paths are found. In
fact, for the contest, we did just that, which provided a slight boost. The code includes a
trainer for this purpose, but experimental results on the benefits of this are sketchy at best
and do not (usually) warrant the long time it takes to train.
4.2
Pseudo-Topological orders
Once we have a pseudo-topological order T , we construct its inverse for fast access, so in
addition of being able to answer the query “which vertex is in position i of T ?” we can also
answer the query “at which position is vertex v in T ?” efficiently. Therefore, any operation
we do on T must be mirrored to the inverse.
In addition, since we are constantly changing T and having to rerun Algorithm 1, it is
worth it to store xv for each v, and just reprocess from the first vertex whose order was
modified and onwards.
Fortunately, when performing the edge opening process, much of the order has been
preserved, so we can use this information and recalculate from the first modification onwards,
speeding up the calculation considerably.
Finally, opening the edges is just a matter of rearranging the vertices to satisfy the
condition, which is straightforward. We found experimentally that the process finds good
paths faster if the edges of the path are opened in a random order and not sequentially. This
makes intuitive sense. If a path cannot be improved by a certain edge opening, it’s unlikely
(but possible) an edge that is near will yield an improvement.
Our implementation of the “powerful” approach described in Section 3.2.4 was by constructing a completely new graph and running the algorithm on the subgraph recursively, and so it was prohibitively slow, although it did tend to find longer subpath insertions with fewer edge openings. Perhaps this can be improved. The implementation of the heuristic approach of Section 3.2.5 was considerably more efficient than the random approach described in Section 3.2.6.
5
Experimental data
We compare this algorithm to one other found in the literature, by Portugal et. al. [PAR10],
as the authors have kindly provided us with the code. There is a scarcity of heuristic
algorithms for this problem, and the code for some, such as [Sch99] appears to have been lost,
so a direct comparison turns out to be impractical. Unfortunately, an extensive literature
search did not provide any other accessible source code for this problem, making the code
in the link of the introduction the only open source and readily available implementation we
are aware of that heuristically finds long simple paths.
In [PAR10], the authors compare four approaches based on genetic algorithms. The
biggest graph they used as an example consists of 134 vertices. The result was that their
fastest algorithm was able to find the optimal solution more than half of the time in around
22 seconds. For comparison, our program took less than a millisecond for the same graph (and indeed for all the graphs in [PAR10]) and found the longest simple path in 100% of the test runs. Please bear in mind that the comparison might be somewhat unfair, since their implementation was in Matlab instead of C++.
5.1
Tests in large random graphs where we know the longest path
size
Given n and m, consider the following graph generation process for a graph with n vertices
and m edges.
Consider any random permutation of the vertices v1 , v2 , ..., vn and add all edges vi → vi+1 .
Then pick the remaining m − n + 1 edges uniformly at random. All edge weights were set to 1, so
we know for certain the longest simple path has size n − 1.
For example, in our experiments, for n = 10, 000 and m = 100, 000, the whole process
(including reading the file containing the graph) took on average 1.28 seconds to find the
longest simple path.
6
Acknowledgements
We would like to thank the organizers of the Oracle MDC coding challenge for providing a
very interesting problem to work on (and for the prize of a drone and camera, of course).
Furthermore, we would like to thank the other participants, specially Miguel Ángel Sánchez
Pérez and David Felipe Castillo Velázquez for the fierce competition. Also, we are grateful
to Marisol Flores and Edgardo Roldán for their helpful comments on the paper, as well as
David Portugal for providing the source code from their work. This research was partially
supported by PAPIIT IA106316.
References
[BHK04] Andreas Björklund, Thore Husfeldt, and Sanjeev Khanna, Approximating longest
directed paths and cycles, Automata, Languages and Programming, Springer,
2004, pp. 222–233.
[CCHZ13] Yeow Meng Chee, Charles J Colbourn, Daniel Horsley, and Junling Zhou, Sequence covering arrays, SIAM Journal on Discrete Mathematics 27 (2013), no. 4,
1844–1861.
[MC14] Patrick C Murray and Charles J Colbourn, Sequence covering arrays and linear
extensions, Combinatorial Algorithms, Springer, 2014, pp. 274–285.
[PAR10] David Portugal, Carlos Henggeler Antunes, and Rui Rocha, A study of genetic
algorithms for approximating the longest path in generic graphs, Systems Man
and Cybernetics (SMC), 2010 IEEE International Conference on, IEEE, 2010,
pp. 2539–2544.
[PD12] Quang Dung Pham and Yves Deville, Solving the longest simple path problem with constraint-based techniques, Integration of AI and OR Techniques in Constraint Programming for Combinatorial Optimization Problems: 9th International Conference, CPAIOR 2012, Nantes, France, May 28 – June 1, 2012, Proceedings, pp. 292–306, Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.
[Sch99] John Kenneth Scholvin, Approximating the longest path problem with heuristics:
a survey, Ph.D. thesis, University of Illinois at Chicago, 1999.
[Scu03] Maria Grazia Scutella, An approximation algorithm for computing longest paths,
European Journal of Operational Research 148 (2003), no. 3, 584–590.
[SW11] R. Sedgewick and K. Wayne, Algorithms, Pearson Education, 2011.
[ZL07] Zhao Zhang and Hao Li, Algorithms for long paths in graphs, Theoretical Computer Science 377 (2007), no. 1, 25–34.
Seshadri constants via Okounkov functions and the
Segre-Harbourne-Gimigliano-Hirschowitz Conjecture
M. Dumnicki, A. Küronya∗, C. Maclean, T. Szemberg†
arXiv:1304.0249v1 [math.AG] 31 Mar 2013
April 2, 2013
Abstract
In this paper we relate the SHGH Conjecture to the rationality of onepoint Seshadri constants on blow ups of the projective plane, and explain how
rationality of Seshadri constants can be tested with the help of functions on
Newton–Okounkov bodies.
Keywords Nagata Conjecture, SHGH Conjecture, Seshadri constants,
Okounkov bodies
Mathematics Subject Classification (2000) MSC 14C20
1
Introduction
Nagata’s conjecture and its generalizations have been a central problem in the theory
of surfaces for many years, and much work has been done towards verifying them
[19], [8], [13], [23], [9]. In this paper we open a new line of attack in which we
relate Nagata-type statements to the rationality of one-point Seshadri constants and
invariants of functions on Newton–Okounkov bodies. We obtain as a consequence
of our approach some evidence that certain Nagata-type questions might be false.
Seshadri constants were first introduced by Demailly in the course of his work
on Fujita’s conjecture [10] in the late 80’s and have been the object of considerable
interest ever since. Recall that given a smooth projective variety X and a nef line
bundle L on X, the Seshadri constant of L at a point x ∈ X is the real number
\[
  \varepsilon(L;x) \;=_{\mathrm{def}}\; \inf_{C} \frac{L\cdot C}{\operatorname{mult}_x C}, \qquad (1)
\]
where the infimum is taken over all irreducible curves passing through x. An intriguing and notoriously difficult problem about Seshadri constants on surfaces is
the question whether these invariants are rational numbers, see [17, Remark 5.1.13].
It follows quickly from their definition that if a Seshadri constant is irrational, then it must be
ε(L; x) = √(L²), see e.g. [3, Theorem 2.1.5]. It is also known that the Seshadri
constants of a fixed line bundle L take their maximal value on a subset of X which
is the complement of at most countably many Zariski closed proper subsets of X.
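As a quick illustration of the definition (our own standard example, not taken from this paper): for L = O_{P²}(1) and any point x ∈ P², every irreducible curve C satisfies L · C = deg C ≥ mult_x C, with equality for a line through x, so
\[
  \varepsilon\bigl(\mathcal{O}_{\mathbb{P}^2}(1); x\bigr) \;=\; \inf_C \frac{\deg C}{\operatorname{mult}_x C} \;=\; 1,
\]
which is both rational and equal to √(L²) = 1 in this degenerate case.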
∗
During this project Alex Küronya was partially supported by DFG-Forschergruppe 790 “Classification of Algebraic Surfaces and Compact Complex Manifolds”, and the OTKA Grants 77476
and 81203 by the Hungarian Academy of Sciences.
†
Szemberg research was partially supported by NCN grant UMO-2011/01/B/ST1/04875
We denote this maximum by ε(L; 1). Similar notation is used for multi-point
Seshadri constants, see [3, Definition 1.9]. In particular, if ε(L; x) = √(L²) at some
point, then the same holds at a very general point of X and ε(L; 1) = √(L²).
From a slightly different point of view, Seshadri constants reveal information on
the structure of the nef cone on the blow-up of X at x, hence their study is closely
related to our attempts to understand Mori cones of surfaces.
An even older problem concerning linear series on algebraic surfaces is the conjecture formulated by Beniamino Segre in 1961 and rediscovered, made more precise
and reformulated by Harbourne 1986, Gimigliano 1987 and Hirschowitz 1988. (See
[8] for a very nice account of this development and related subjects.) In particular,
it is known [8, Remark 5.12] that the SHGH Conjecture implies the Nagata Conjecture. We now recall this conjecture, using Gimigliano's formulation, which will
be the most convenient form for us [12, Conjecture 3.3].
SHGH Conjecture. Let X be the blow up of the projective plane P² in s general
points with exceptional divisors E1, . . . , Es. Let H denote the pullback to X of the
hyperplane bundle O_{P²}(1) on P². Let integers d, m1 ≥ . . . ≥ ms ≥ −1 with
d ≥ m1 + m2 + m3 be given. Then the line bundle
\[
  dH - \sum_{i=1}^{s} m_i E_i
\]
is non-special.
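For orientation, a standard example (ours, not from this paper) showing why a hypothesis on d and the m_i is needed: for curves of degree d = 2 with two assigned points of multiplicity m1 = m2 = 2, the expected dimension is
\[
  \frac{d(d+3)}{2} - \sum_i \frac{m_i(m_i+1)}{2} \;=\; 5 - 3 - 3 \;=\; -1,
\]
yet the system is non-empty (twice the line through the two points is a member), hence special; note that the assumption d ≥ m1 + m2 + m3 rules this configuration out.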
The main result of this note is the following somewhat unexpected relation between
the SHGH Conjecture and the rationality problem for Seshadri constants.
Theorem 1.1. Let s ≥ 9 be an integer for which the SHGH Conjecture holds true.
Let X be the blow up of the projective plane P2 in s general points. Then
a) either there exists on X an ample line bundle whose Seshadri constant at a
very general point is irrational;
b) or the SHGH Conjecture fails for s + 1 points.
Note that it is known that the SHGH conjecture holds true for s ≤ 9, [8, Theorem
5.1]. It is also known that Seshadri constants of ample line bundles on del Pezzo
surfaces (i.e. for s ≤ 8) are rational, see [22, Theorem 1.6]. In any case, the statement
of the Theorem is interesting (and non-empty) for s = 9. (See the challenge at the
end of the article.)
Corollary 1.2. If all one-point Seshadri constants on the blow-up of P2 in nine
general points are rational, then the SHGH conjecture fails for ten points.
An interesting feature of our proof is that the role played by the general position
of the points at which we blow up becomes clear.
In a different direction, we study the connection between functions on Newton–
Okounkov bodies defined by orders of vanishing, and Seshadri-type invariants. Our
main result along these lines is the following.
Theorem 1.3. Let X be a smooth projective surface, Y• an admissible flag, L a
big line bundle on X, and let P ∈ X be an arbitrary point.
If
\[
  \max_{x\in\Delta_{Y_\bullet}(L)} \varphi_{\operatorname{ord}_P}(x) \in \mathbb{Q},
\]
then ε(L; P) ∈ Q.
Acknowledgements. We thank Cristiano Bocci, Sébastien Boucksom and Patrick
Graf for helpful discussions. Part of this work was done while the second author
was visiting the Uniwersytet Pedagogiczny in Cracow. We would like to thank the
Uniwersytet Pedagogiczny for the excellent working conditions.
2
Rationality of one point Seshadri constants and the SHGH
Conjecture
In this section we prove Theorem 1.1; we start with notation and preliminary
lemmas. Let f : X → P² be the blow up of P² at s ≥ 9 general points P1, . . . , Ps
with exceptional divisors E1, . . . , Es. We denote as usual by H = f*(O_{P²}(1)) the
pull-back of the hyperplane bundle and we let E = E1 + · · · + Es be the sum of the exceptional divisors. We consider the blow up g : Y → X of X at a point P with exceptional
divisor F. While the following result is well known, we include it for lack of a
proper reference.
Lemma 2.1. If there exists a curve C ⊂ X in the linear system dH − ∑_{i=1}^{s} m_i E_i
computing the Seshadri constant of a Q-line bundle L = H − αE, then there exists a
divisor Γ with mult_{P1} Γ = . . . = mult_{Ps} Γ = M computing the Seshadri constant of L at
P, i.e.
\[
  \frac{L\cdot\Gamma}{\operatorname{mult}_P \Gamma} \;=\; \frac{L\cdot C}{\operatorname{mult}_P C} \;=\; \varepsilon(L;P).
\]
Proof. Since the points P1 , . . . , Ps are general, there exist curves
\[
  C_\sigma = dH - \sum_{j=1}^{s} m_{\sigma(j)} E_j
\]
for all permutations σ ∈ Σs . Since the point P is general, we may take all these
curves to have the same multiplicity m at P . Summing over a cycle σ of length s in
Σs , we obtain a divisor
\[
  \Gamma \;=\; \sum_{i=1}^{s} C_{\sigma^i} \;=\; sdH - \sum_{i=1}^{s}\sum_{j=1}^{s} m_{\sigma^i(j)} E_j \;=\; sdH - ME,
\]
with M = m1 + . . . + ms . Note that the multiplicity of Γ at P equals sm. Taking
the Seshadri quotient for Γ we have
\[
  \frac{L\cdot\Gamma}{\operatorname{mult}_P \Gamma} \;=\; \frac{sd - \alpha s M}{sm} \;=\; \frac{d - \alpha M}{m} \;=\; \varepsilon(L;P),
\]
hence Γ satisfies the assertions of the Lemma.
The following auxiliary Lemma will be used in the proof of Theorem 1.1. We
postpone its proof to the end of this section.
Lemma 2.2. Let s ≥ 9 be an integer. The function
\[
  f(\delta) \;=\; \bigl(2\sqrt{s+1} - s\bigr)\sqrt{1 - s\delta^2} \;+\; s\bigl(1 - \sqrt{s+1}\bigr)\delta \;+\; s - 2 \qquad (2)
\]
is non-negative for δ satisfying
\[
  \frac{1}{\sqrt{s+1}} \;<\; \delta \;<\; \frac{1}{\sqrt{s}}. \qquad (3)
\]
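As a quick numerical sanity check (ours, not from the paper), take s = 9: the interval (3) becomes 1/√10 < δ < 1/3, roughly 0.3163 < δ < 0.3333, and at the left endpoint
\[
  f\Bigl(\tfrac{1}{\sqrt{10}}\Bigr) \;=\; \bigl(2\sqrt{10} - 9\bigr)\tfrac{1}{\sqrt{10}} \;+\; 9\bigl(1 - \sqrt{10}\bigr)\tfrac{1}{\sqrt{10}} \;+\; 7 \;=\; \tfrac{-7\sqrt{10}}{\sqrt{10}} + 7 \;=\; 0,
\]
consistent with the proof of Lemma 2.2 given at the end of this section, which shows that f is increasing on this interval.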
Proof of Theorem 1.1. Let δ be a rational number satisfying (3). Note that the
SHGH Conjecture implies the Nagata Conjecture [8, Remark 5.12] so that
\[
  \varepsilon\bigl(\mathcal{O}_{\mathbb{P}^2}(1); s\bigr) \;=\; \frac{1}{\sqrt{s}},
\]
and hence the Q–divisor L = H − δE is ample. If ε(L; 1) is irrational, then we are
done.
So we proceed assuming that ε(L; 1) is rational and that it is not equal to √(L²)
(this can be achieved by changing δ a little if necessary). In particular, by Lemma 2.1,
for a general point P ∈ X there is a divisor Γ ⊂ P² of degree γ with
M = mult_{P1} Γ = . . . = mult_{Ps} Γ and m = mult_P Γ whose proper
transform Γ̃ on X computes the Seshadri constant
\[
  \varepsilon(L;P) \;=\; \frac{L\cdot\widetilde{\Gamma}}{m} \;=\; \frac{\gamma - \delta s M}{m} \;<\; \sqrt{1 - s\delta^2}.
\]
This gives an upper bound on γ
\[
  \gamma \;<\; m\sqrt{1 - s\delta^2} + \delta s M. \qquad (4)
\]
We need to prove that statement b) in Theorem 1.1 holds. Suppose not: the SHGH
Conjecture then holds for s + 1 points in P2 . The Nagata Conjecture then also holds
for s + 1 points and this gives a lower bound for γ, since for Γ we must have that
\[
  \frac{\gamma}{sM + m} \;\geq\; \frac{1}{\sqrt{s+1}}. \qquad (5)
\]
We now claim that
\[
  \gamma \;\geq\; 2M + m. \qquad (6)
\]
Suppose not. We then have that
\[
  \gamma \;<\; 2M + m. \qquad (7)
\]
The real numbers
\[
  a := \frac{2\sqrt{s+1} - s}{2 - \delta s} \qquad\text{and}\qquad b := \frac{s - \delta s\sqrt{s+1}}{2 - \delta s}
\]
are positive. Multiplying (4) by a and (7) by b and adding, we obtain
\[
  sM + m \;\leq\; \gamma\sqrt{s+1} \;<\; sM + \bigl(b + a\sqrt{1 - s\delta^2}\bigr)m,
\]
where the first inequality follows from (5). Subtracting sM from both sides and dividing by m we obtain
\[
  1 \;<\; b + a\sqrt{1 - s\delta^2}.
\]
Plugging in the definitions of a and b and rearranging terms we obtain
\[
  \bigl(2\sqrt{s+1} - s\bigr)\sqrt{1 - s\delta^2} + s - \delta s\sqrt{s+1} \;<\; 2 - \delta s,
\]
which contradicts Lemma 2.2. Hence (6) holds.
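For the reader's convenience, a step left implicit above (our own verification): the coefficients a and b are chosen precisely so that
\[
  a + b = \frac{2\sqrt{s+1} - s + s - \delta s\sqrt{s+1}}{2 - \delta s} = \sqrt{s+1}
  \qquad\text{and}\qquad
  a\,\delta s + 2b = \frac{2\delta s\sqrt{s+1} - \delta s^2 + 2s - 2\delta s\sqrt{s+1}}{2 - \delta s} = s,
\]
which is why adding a·(4) and b·(7) produces the coefficient √(s+1) in front of γ and s in front of M.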
It follows now from the SHGH conjecture for s + 1 points (in the form stated in
the introduction) that the linear system
γH − M E − mF
on Y is non-special. Indeed, the condition γ ≥ 2M + m is (6), and the condition
γ ≥ 3M is satisfied since γ/(sM) ≥ 1/√s (because the Nagata Conjecture holds for s
by hypothesis) and because we have assumed that s ≥ 9. This system is also non-empty because the proper transform of Γ under g is one of its members. Thus by a standard
dimension count
\[
  0 \;\leq\; \gamma(\gamma + 3) - sM(M+1) - m(m+1).
\]
The upper bound on γ (4) together with the above inequality yields
\[
  0 \;\leq\; \bigl(s\delta M + m\sqrt{1 - s\delta^2}\bigr)\bigl(s\delta M + m\sqrt{1 - s\delta^2} + 3\bigr) - m^2 - m - sM - sM^2. \qquad (8)
\]
Note that the quadratic term in (8) is a negative semi-definite form
\[
  (s^2\delta^2 - s)M^2 + 2s\delta\sqrt{1 - s\delta^2}\,Mm - s\delta^2 m^2.
\]
Indeed, the restrictions on δ made in (3) imply that the coefficient of M² is negative,
and the determinant of the associated symmetric matrix vanishes; these two conditions
together imply that the form is negative semi-definite. In particular this term of (8)
is non-positive. The linear part in turn is
\[
  (3s\delta - s)M + \bigl(3\sqrt{1 - s\delta^2} - 1\bigr)m,
\]
which is easily seen to be negative (for s ≥ 9 and δ as in (3) we have 3δ < 3/√s ≤ 1 and 3√(1 − sδ²) < 3/√(s+1) ≤ 3/√10 < 1). This provides the desired contradiction and
finishes the proof of the Theorem.
Remark 2.3. As it is well known, Nagata’s conjecture can be interpreted in terms of
the nef and Mori cones of the blow-up X of P2 at s general points. More precisely,
consider the following question: for what t > 0 does the ray H − tE meet the
boundary of the nef cone? The conjecture predicts that this ray should intersect the
boundaries of the nef cone and the effective cone at the same time.
Considering the Zariski chamber structure of X (see [5]), we see that this is
equivalent to requiring that H − tE crosses exactly one Zariski chamber (the nef
cone itself). Surprisingly, it is easy to prove that H − tE cannot cross more than
two chambers.
Proposition 2.4. Let f : X → P2 be the blow up of P 2 in s general points with
exceptional divisors E1 , . . . , Es . Let H be the pull-back of the hyperplane bundle and
E = E1 + . . . + Es . The ray R = H − tE meets at most two Zariski chambers on X.
Proof. If ε = ε(O_{P²}(1); s) = 1/√s, i.e. if this multi-point Seshadri constant is maximal,
then the ray crosses only the nef cone.
If ε is submaximal, then there is a curve C = dH − ∑ m_i E_i computing this
Seshadri constant, i.e. ε = d/∑ m_i.
If this curve is homogeneous, i.e. m = m1 = · · · = ms , then we claim first that
µ = µ(OP2 (1); E) = m/d. Indeed, this is an effective divisor on the ray R and it is
not big (because big divisors on surfaces intersect all nef divisors positively, see [6,
Corollary 3.3]), so it must be the point where the ray leaves the big cone.
Now, suppose that for some ε < δ < µ the ray R crosses another Zariski chamber
wall. This means that there is a divisor eH −kE (obtained after possible symmetrization of a curve D with 0 intersection number with H − δE) with
(H − δE) · (eH − kE) = 0.
Hence e = δks < µks. On the other hand,
(eH − kE) · (dH − µE) = e − kµs < 0
implies that C is a component of eH − kE, which is not possible. Hence only
two Zariski chambers meet the ray R in this case.
If the curve C is not homogeneous, then since the points are general there exist at
least (and also at most) s different irreducible curves computing ε. All these curves
are in the support of the negative part of the Zariski decomposition of H − λE for
λ > ε. Hence their intersection matrix is negative definite and this is a matrix
of maximal dimension (namely s) with this property. This implies that R cannot
meet another Zariski chamber because the support of the negative part of Zariski
decompositions grows only when encountering new chambers, see [5].
It is interesting to compare this result with the following easy example, which
constructs rays meeting a maximal number of chambers.
Example 2.5. Keeping the notation from Proposition 2.4 let L = ( s(s+1)
+ 1)H −
2
E1 − 2E2 − . . .− sEs is an ample divisor on X and the ray R = L + λE crosses s + 1 =
ρ(X) Zariski chambers. Indeed, with λ growing, exceptional divisors E1 , E2 , . . . , Es
join the support of the Zariski decomposition of L − λE one by one. We leave the
details to the reader.
We conclude this section with the proof of Lemma 2.2.
Proof of Lemma 2.2. Since f(1/√(s+1)) = 0, it is enough to show that f(δ) is increasing for 1/√(s+1) ≤ δ ≤ 1/√s. Consider the derivative
\[
  f'(\delta) \;=\; s\left(1 + \frac{\delta}{\sqrt{1 - s\delta^2}}\bigl(s - 2\sqrt{s+1}\bigr) - \sqrt{s+1}\right). \qquad (9)
\]
The function h(δ) = δ/√(1 − sδ²) is increasing for 1/√(s+1) ≤ δ ≤ 1/√s, since the numerator is an increasing function of δ and the denominator is a decreasing function of δ. We have h(1/√(s+1)) = 1, so that h(δ) ≥ 1 holds for all such δ. Since the coefficient at h(δ) in (9) is positive, we have
\[
  f'(\delta) \;\geq\; s\Bigl(1 + \bigl(s - 2\sqrt{s+1}\bigr) - \sqrt{s+1}\Bigr) \;=\; s\bigl(1 + s - 3\sqrt{s+1}\bigr) \;>\; 0,
\]
3
Rationality of Seshadri constants and functions on Okounkov bodies
The theory of Newton–Okounkov bodies has emerged recently with work by Okounkov [21], Kaveh–Khovanskii [14], and Lazarsfeld–Mustaţă [18]. Shortly thereafter, Boucksom–Chen [7] and Witt-Nyström [20] have shown ways of constructing
geometrically significant functions on Okounkov bodies, that were further studied
in [16]. In the context of this note the study of Okounkov functions was pursued
by the last three authors in [16]. We refer to [16] for construction and properties of
Okounkov functions.
In this section we consider an arbitrary smooth projective surface X and an
ample line bundle L on X. Let p ∈ X be an arbitrary point and let π : Y → X be
the blow up of p with exceptional divisor E. Recall that the Seshadri constant of L
at p can equivalently be defined as
ε(L; p) = sup {t > 0 | π ∗ L − tE is nef } .
There is a related invariant
def
µ(L; p) = sup {t > 0 | π ∗ L − tE is pseudo-effective} = sup {t > 0 | π ∗ L − tE is big} .
The invariant ε(L; p) is the value of the parameter λ where the ray π ∗ L − λE meets
the boundary of the nef cone of Y , and µ(L; p) is the value of λ where the ray meets
the boundary of the pseudo-effective cone. The following relation between the two
invariants is important in our considerations.
Remark 3.1. If ǫ(L; p) is irrational, then
ǫ(L; p) = µ(L; p) .
In particular, if µ(L; p) is rational, then so is ǫ(L; p).
Rationality of µ(L; p) implies rationality of the associated Seshadri constants on
surfaces. This invariant appears in the study of the concave function ϕordp associated
to the geometric valuation on X defined by the order of vanishing ordp at p. We
fix some flag Y• : X ⊇ C ⊇ {x0 } and consider the Okounkov body ∆Y• (L) defined
with respect to that flag. We define also a multiplicative filtration determined by the
geometrical valuation ordP on the graded algebra V = ⊕k>0 Vk with Vk = H 0 (X, kL)
by
Ft (V ) = {s ∈ V : ordP (s) > t} ,
see [16, Example 3.7] for details. (All the above remains valid in the more general
context of graded linear series.) There is an induced filtration F• (Vk ) on every
summand of V and one defines the maximal jumping numbers of both filtrations as
emax (V, F• ) = sup {t ∈ R : ∃kFktVk 6= 0} and emax (Vk , F• ) = sup {t ∈ R : Ft Vk 6= 0}
respectively. Let ϕordP (x) = ϕF• (x) be the Okounkov function on ∆Y• (L) determined by filtration F• , see [16, Definition 4.8]. It turns out that µ(L; p) is the
maximum of the Okounkov function ϕordP .
Proposition 3.2. With notation as above we have that
\[
  \mu(L;p) \;=\; \limsup_{m\to\infty} \frac{\max\bigl\{\operatorname{ord}_p(s) \;\bigm|\; s \in H^0\bigl(X, \mathcal{O}_X(mL)\bigr)\bigr\}}{m} \;=\; \max_{x\in\Delta_{Y_\bullet}(L)} \varphi_{\operatorname{ord}_p}(x).
\]
Proof. Observe that
ordp (s) = ordE (π ∗ s) = max {m ∈ N | div(π ∗ s) − mE is effective} .
Consequently,
\[
  \mu(L;p) \;=\; \sup\bigl\{t\in\mathbb{R}_{>0} \;\bigm|\; \pi^*L - tE \text{ is pseudo-effective}\bigr\}
  \;=\; \limsup_{m\to\infty} \frac{\max\bigl\{\operatorname{ord}_p(s) \;\bigm|\; s \in H^0\bigl(X, \mathcal{O}_X(mL)\bigr)\bigr\}}{m},
\]
which gives the first equality.
For the second equality, we observe first that
\[
  \max\bigl\{\operatorname{ord}_p(s) \;\bigm|\; s \in H^0\bigl(X, \mathcal{O}_X(mL)\bigr)\bigr\} \;=\; e_{\max}(V_m, \mathcal{F}_\bullet),
\]
and hence
\[
  \limsup_{m\to\infty} \frac{\max\bigl\{\operatorname{ord}_p(s) \;\bigm|\; s \in H^0\bigl(X, \mathcal{O}_X(mL)\bigr)\bigr\}}{m} \;=\; e_{\max}(V, \mathcal{F}_\bullet).
\]
Since
emax (V, F• ) =
max
x∈∆Y• (L)
ϕordp (x)
by Theorem 3.4, we are done.
3.1
Independence of the maximum of an Okounkov function on the flag
In the course of this section the projective variety X can have arbitrary dimension.
Boucksom and Chen proved that though ϕF• and ∆(V• ) depend on the flag Y• ,
the integral of ϕF• over ∆(V• ) is independent of Y• , [7, Remark 1.12 (ii)]. We prove
now that the maximum of the Okounkov function does not depend on the flag. This
fact is valid in the general setting of arbitrary multiplicative filtration F defined on
a graded linear series V• .
Remark 3.3. Note that in general the functions ϕF• are only upper-semicontinuous
and concave, but not continuous on the whole Newton–Okounkov body as explained
in [16, Theorem 1.1]. They are however continuous provided the underlying body
∆(V• ) is a polytope (see again [16, Theorem 1.1]), which is the case for complete
linear series on surfaces [15].
Theorem 3.4 (Maximum of Okounkov functions). With the above notation, we
have that
\[
  \max_{x\in\Delta_{Y_\bullet}(L)} \varphi_{\mathcal{F}_\bullet}(x) \;=\; e_{\max}(L, \mathcal{F}_\bullet).
\]
In particular the left hand side does not depend on the flag Y• .
Proof. For any real t > 0, we consider the partial Okounkov body ∆t,Y• (L) associated
the graded linear series Vt,k ⊂ H 0 (kL) given by
def
Vt,k = Fkt (H 0 (kL)) .
Note that by definition
\[
  e_{\max}(L, \mathcal{F}_\bullet) = \sup\bigl\{t \in \mathbb{R} \;\bigm|\; \textstyle\bigcup_k V_{t,k} \neq 0\bigr\}.
\]
In other words,
\[
  e_{\max}(L, \mathcal{F}_\bullet) = \sup\bigl\{t \in \mathbb{R} \;\bigm|\; \Delta_{t,Y_\bullet}(L) \neq \emptyset\bigr\}.
\]
Recall that by definition
\[
  \varphi_{\mathcal{F}_\bullet}(x) = \sup\bigl\{t \in \mathbb{R} \;\bigm|\; x \in \Delta_{t,Y_\bullet}(L)\bigr\},
\]
and it is therefore immediate that ϕF•(x) ≤ emax(L, F•) for every x, from which it follows that
\[
  \max_{x\in\Delta_{Y_\bullet}(L)} \varphi_{\mathcal{F}_\bullet}(x) \;\leq\; e_{\max}(L, \mathcal{F}_\bullet).
\]
Since the bodies ∆_{t,Y•}(L) form a decreasing family of closed subsets of R^d, we have that
\[
  \bigcap_{t \,:\, \Delta_{t,Y_\bullet}(L) \neq \emptyset} \Delta_{t,Y_\bullet}(L) \;\neq\; \emptyset.
\]
Consider a point y in this intersection. We then have that
\[
  y \in \Delta_{t,Y_\bullet}(L) \;\Longleftrightarrow\; \Delta_{t,Y_\bullet}(L) \neq \emptyset,
\]
and hence
\[
  \sup\bigl\{t \in \mathbb{R} \;\bigm|\; y \in \Delta_{t,Y_\bullet}(L)\bigr\} = \sup\bigl\{t \in \mathbb{R} \;\bigm|\; \Delta_{t,Y_\bullet}(L) \neq \emptyset\bigr\},
\]
or in other words
\[
  \varphi_{\mathcal{F}_\bullet}(y) = e_{\max}(L, \mathcal{F}_\bullet),
\]
from which it follows that
\[
  \max_{x\in\Delta_{Y_\bullet}(L)} \varphi_{\mathcal{F}_\bullet}(x) \;\geq\; e_{\max}(L, \mathcal{F}_\bullet).
\]
This completes the proof of the theorem.
4
The effect of blowing up on Okounkov bodies and functions
We begin with an observation (valid in fact in arbitrary dimension, though we state
and prove it here only for surfaces.)
Proposition 4.1. Let S be an arbitrary surface with a fixed flag Y• and let f : X →
S be the blow up of S at a point P not contained in the divisorial part of the flag,
P ∈
/ Y1 . Let E be the exceptional divisor. And finally let D be a big divisor on S.
For any rational number λ such that 0 6 λ < µ(L; P ) we let Dλ be the Q–divisor
f ∗ D − λE. There is then a natural inclusion
∆Y• (Dλ ) ⊂ ∆Y• (D).
Moreover, a filtration F• on the graded algebra ⊕k>0 H 0 (S; kD) induces a filtration
F•λ on the graded (sub)algebra ⊕H 0 (X, kDλ ), where the sum is taken over all k
divisible enough. For associated Okounkov functions we have
ϕF•λ (x) 6 ϕF• (x)
(10)
for all x ∈ ∆Y• (Dλ ).
Remark 4.2. The best case scenario is that the functions φ are piecewise linear with
rational coefficients over a rational polytope. Of these properties, some evidence for
the first was given by Donaldson [11] in the toric situation. For the second condition,
it was proven in [1] that every line bundle on a surface has an Okounkov body which
is a rational polytope.
Proof. Note first that since the blow up center is disjoint from all elements in the
flag, one can take Y• to be an admissible flag on X. (Strictly speaking one takes
f ∗ Y• as the flag, but it should cause no confusion to identify flag elements upstairs
and downstairs.)
Then, if k is sufficiently divisible we have
H 0 (X, f ∗ kD − kλE) = s ∈ H 0 (S, kD) : ordP (s) > E ⊂ H 0 (S, kD).
The inclusion of the Okounkov bodies follows immediately under this identification.
We can view the algebra associated to f ∗ D − E as a graded linear series on S.
The claim about the Okounkov functions follows from their definition, see [16,
Definition 4.8]. Indeed, the supremum arising in the definition of ϕF•λ is taken over
a smaller set of sections than it is for ϕF• .
The following examples illustrate various situations arising in the setting of
Proposition 4.1.
Example 4.3. Let ℓ be a line in X0 = P2 and let P0 ∈ ℓ be a point. We fix the flag
Y• : X0 ⊃ ℓ ⊃ {P0 } .
Let D = OP2 (1). Then ∆Y• (D) is simply the standard simplex in R2 .
Let F• be the filtration on the complete linear series of D imposed by the geometric
valuation ν = ordP0 and let ϕν be the associated Okounkov function. Then
ϕν (a, b) = a + b.
Indeed, given a point (a, b) with rational coordinates, we pass to the integral point
(ka, kb). This valuation vector can be realized geometrically by a global section in
H 0 (P2 , OP2 (k)) vanishing exactly with multiplicity ka along ℓ, exactly with multiplicity kb along a line passing through P0 different from ℓ and along a curve of degree
k(1 − a − b) not passing through P0 .
The next example shows that even when the Okounkov body changes in the
course of blowing up, the Okounkov function may remain the same.
Example 4.4. Keeping the notation from the previous Example and from Proposition 4.1 let f : X1 = BlP1 P2 → X0 = P2 be the blow up of the projective plane in
a point P1 not contained in the flag line ℓ with the exceptional divisor E1 . We work
now with a Q–divisor Dλ = f ∗ (OP2 (1)) − λE1 = H − λE1 , for some fixed λ ∈ [0, 1].
A direct computation using [18, Theorem 6.2] gives that the Okounkov body has
the shape
Thus we see that the Okounkov body of Dλ is obtained from that of D by intersecting
with a closed halfspace.
For the valuation ν = ordP0 , we get as above
ϕν (a, b) = a + b.
Let now k be an integer such that the point (ka, kb) is integral and kλ is also an
integer. Now we need to exhibit a section s in H 0 (P2 , OP2 (k)) satisfying the following
conditions:
a) s vanishes along ℓ exactly to order a;
b) s vanishes in the point P1 to order at least kλ;
c) s vanishes in the point P0 exactly to order b.
We let the divisor of s to consist of a copies of ℓ (there is no other choice here), of
b copies of the line through P0 and P1 , of kλ − b copies of any other line passing
through P1 (if this number is negative then this condition is empty) and of a curve
of degree k(1 − a − max {b, λ}) passing neither through P0 , nor through P1 .
Remark 4.5. Note that in the setting of Proposition 4.1 Okounkov bodies of divisors
Dλ always result from those of D by cutting with finitely many halfplanes. This is
an immediate consequence of [15, Theorem 5].
We conclude by showing that the inequality in (10) can be sharp, i.e. the blow
up process can influence the Okounkov function as well as the Okounkov body.
Example 4.6. Keeping the notation from the previous examples, let f : X6 → P2 be
the blow up of six general points P1 , . . . , P6 not contained in ℓ and chosen so that the
points P0 , P1 , . . . , P6 are also general. Let E1 , . . . , E6 denote the exceptional divisors
and set E = E1 +. . .+E6 . We consider the divisor D = H − 52 E. A direct computation
using [18, Theorem 6.2] (this requires computing Zariski decompositions this time,
see [2] for an effective approach) yields the triangle with vertices at the origin and
in points (0, 1) and (1/25, 0) as ∆Y• (D). For the valuation ν = ordP0 we get now
ϕν (a, b) 6 4/15 < a + b
for (a, b) ∈ Ω = (x, y) ∈ R2 : x ∈ [0, 11/360) and b ∈ (4/15 − a, 1 − 25a] .
(11)
12
1
Ω
11 17
( 360
, 72 )
0
1
25
The reason for the above inequality is the following. Let (a, b) ∈ Ω be a valuation vector. Assume to the contrary that ϕ(a, b) > 4/15. It is well known that
ε(O_{P²}(1); P0, . . . , P6) = 3/8, see for instance [23, Example 2.4]. On the other hand a
section with the above valuation vector would have (after scaling to O(1)) multiplicities 2/5 at P1 , . . . , P6 and ϕν (a, b) > 4/15 at P0 . It would give Seshadri quotient
\[
  \frac{1}{6\cdot\frac{2}{5} + \varphi_\nu(a,b)} \;<\; \frac{3}{8},
\]
a contradiction. This proves (11).
Since the SHGH Conjecture holds for 9 points, the first challenge arising in the
view of our Theorem would be to compute the Okounkov body and the Okounkov
function associated to ordP0 as above for the system
22H − 7(E1 + · · · + E9 ).
References
[1] Anderson, D., Küronya, A., Lozovanu, V.: Okounkov bodies of finitely generated divisors,
International Mathematics Research Notices 2013; doi: 10.1093/imrn/rns286.
[2] Bauer, Th.: A simple proof for the existence of Zariski decompositions on surfaces: J. Alg.
Geom. 18 (2009), 789-793
[3] Bauer, Th., Di Rocco, S., Harbourne, B., Kapustka, M., Knutsen, A. L., Syzdek, W., Szemberg T.: A primer on Seshadri constants, Interactions of Classical and Numerical Algebraic
Geometry, Proceedings of a conference in honor of A. J. Sommese, held at Notre Dame, May
22–24 2008. Contemporary Mathematics vol. 496, 2009, eds. D. J. Bates, G-M. Besana, S. Di
Rocco, and C. W. Wampler, 362 pp.
[4] Bauer, Th., Bocci, C., Cooper, S., Di Rocco, S., Dumnicki, M., Harbourne, B., Jabbusch,
K., Knutsen, A.L., Küronya, A., Miranda, R., Roe, J., Schenck, H., Szemberg, T., Teitler,
Z.: Recent developments and open problems in linear series, In ”Contributions to Algebraic
Geometry”, 93–140, IMPANGA Lecture Notes (Piotr Pragacz , ed.), EMS Series of Congress
Reports, edited by the European Mathematical Society Publishing House 2012.
[5] Bauer, Th., Küronya, A., Szemberg, T.: Zariski chambers, volumes and stable loci, Journal
für die reine und angewandte Mathematik, 576 (2004), 209–233.
[6] Bauer, Th., Schmitz, D.: Volumes of Zariski chambers, arXiv:1205.3817
[7] Boucksom, S., Chen, H.: Okounkov bodies of filtered linear series Compositio Math. 147
(2011), 1205–1229
[8] Ciliberto, C.: Geometric aspects of polynomial interpolation in more variables and of Waring’s
problem. European Congress of Mathematics, Vol. I (Barcelona, 2000), 289-316, Progr. Math.,
201, Birkhuser, Basel, 2001
[9] Ciliberto, C., Harbourne, B., Miranda, R., Roé, J.: Variations on Nagata's Conjecture,
arXiv:1202.0475
[10] Demailly, J.-P.: Singular Hermitian metrics on positive line bundles. Complex algebraic varieties (Bayreuth, 1990), Lect. Notes Math. 1507, Springer-Verlag, 1992, pp. 87–104
[11] Donaldson, S. K.: Scalar curvature and stability of toric varieties. J. Differential Geom. 62
(2002), no. 2, 289–349.
[12] Gimigliano, A.: Our thin knowledge of fat points. The Curves Seminar at Queen’s, Vol. VI
(Kingston, ON, 1989), Exp. No. B, 50 pp., Queen’s Papers in Pure and Appl. Math., 83,
Queen’s Univ., Kingston, ON, 1989
[13] Harbourne, B.: On Nagata’s conjecture. J. Algebra 236 (2001), 692-702
[14] Kaveh, K., Khovanskii, A.: Newton-Okounkov bodies, semigroups of integral points, graded
algebras and intersection theory. Annals of Mathematics 176 (2012), 1–54
[15] Küronya, A., Lozovanu, V., Maclean, C.: Convex bodies appearing as Okounkov bodies of
divisors, Advances in Mathematics 229 (2012), no. 5, 2622–2639.
[16] Küronya, A., Maclean, C., Szemberg, T.: Functions on Okounkov bodies coming from geometric valuations (with an appendix by Sébastien Boucksom), arXiv:1210.3523v2.
[17] Lazarsfeld, R.: Positivity in Algebraic Geometry. I.-II. Ergebnisse der Mathematik und ihrer
Grenzgebiete, Vols. 48–49., Springer Verlag, Berlin, 2004.
[18] Lazarsfeld, R., Mustaţă, M.: Convex bodies associated to linear series, Ann. Scient. Éc. Norm.
Sup., 4 série, t. 42, (2009), 783–835.
[19] Nagata, M.: On the 14-th problem of Hilbert. Amer. J. Math. 81 (1959), 766–772.
[20] Nyström, D., W.: Transforming metrics on a line bundle to the Okounkov body. preprint,
arXiv:0903.5167v1.
[21] Okounkov, A.: Brunn-Minkowski inequalities for multiplicities, Invent. Math 125 (1996) pp
405–411.
[22] Sano, T.: Seshadri constants on rational surfaces with anticanonical pencils, arXiv:0908.4502v4.
[23] Strycharz-Szemberg, B., Szemberg, T.: Remarks on the Nagata Conjecture, Serdica Math. J.
30 (2004), 405-430
Marcin Dumnicki, Jagiellonian University, Institute of Mathematics, Lojasiewicza 6,
PL-30-348 Kraków, Poland
E-mail address: Marcin.Dumnicki@im.uj.edu.pl
Alex Küronya, Budapest University of Technology and Economics, Mathematical Institute, Department of Algebra, Pf. 91, H-1521 Budapest, Hungary.
E-mail address: alex.kuronya@math.bme.hu
Current address: Alex Küronya, Albert-Ludwigs-Universität Freiburg, Mathematisches
Institut, Eckerstraße 1, D-79104 Freiburg, Germany.
Catriona Maclean, Institut Fourier, CNRS UMR 5582 Université de Grenoble, 100 rue
des Maths, F-38402 Saint-Martin d’Héres cedex, France
E-mail address: catriona@fourier.ujf-grenoble.fr
Tomasz Szemberg, Instytut Matematyki UP, Podchora̧żych 2, PL-30-084 Kraków, Poland.
E-mail address: szemberg@up.krakow.pl
A Supervisory Control Algorithm Based on
Property-Directed Reachability⋆
arXiv:1711.06501v1 [] 17 Nov 2017
Koen Claessen1 , Jonatan Kilhamn1 , Laura Kovács13 , and Bengt Lennartson2
1
Department of Computer Science and Engineering,
2
Department of Electrical Engineering,
Chalmers University of Technology
3
Faculty of Informatics, Vienna University of Technology
{koen, jonkil, laura.kovacs, bengt.lennartson}@chalmers.se
Abstract. We present an algorithm for synthesising a controller (supervisor) for
a discrete event system (DES) based on the property-directed reachability (PDR)
model checking algorithm. The discrete event systems framework is useful in
both software, automation and manufacturing, as problems from those domains
can be modelled as discrete supervisory control problems. As a formal framework, DES is also similar to domains for which the field of formal methods for
computer science has developed techniques and tools. In this paper, we attempt
to marry the two by adapting PDR to the problem of controller synthesis. The
resulting algorithm takes as input a transition system with forbidden states and
uncontrollable transitions, and synthesises a safe and minimally-restrictive controller, correct-by-design. We also present an implementation along with experimental results, showing that the algorithm has potential as a part of the solution
to the greater effort of formal supervisory controller synthesis and verification.
Keywords: Supervisory control ·Discrete-event systems ·Property-directed reachability ·Synthesis ·Verification ·Symbolic transition system
1 Introduction
Supervisory control theory deals with the problems of finding and verifying controllers
to given systems. One particular problem is that of controller synthesis: given a system
and some desired properties—safety, liveness, controllability—automatically change
the system so that it fulfills the properties. There are several approaches to this problem,
including ones based on binary decision diagrams (BDD) [14, 6], predicates [11] and
the formal safety checker IC3 [18].
In this work we revisit the application of IC3 to supervisory control theory. Namely,
we present an algorithm for synthesising a controller (supervisor) for a discrete event
system (DES), based on property-directed reachability [4] (PDR, a.k.a. the method underlying IC3 [2]). Given a system with a safety property and uncontrollable transitions,
the synthesised controller is provably safe, controllable and minimally restrictive [16].
⋆
The final publication is available at Springer via https://doi.org/10.1007/978-3-319-70389-3_8.
1.1 An illustrative example
Let us explain our contributions by starting with an example. Figure 1 shows the transition system of a finite state machine extended with integer variables x and y. The
formulas on the edges denote guards (transition cannot happen unless formula is true)
and updates (after transition, x takes the value specified for x′ in the formula). This
represents a simple but typical problem from the domain of control theory, and is taken
from [17].
Fig. 1. The transition system of the example: locations l0–l5 (l5 forbidden, drawn dashed), events a, b, c, α, ω, and initial values x = 0, y = 0.
In a controller synthesis problem, a system such as this is the input. The end result
is a restriction of the original system, i.e. one whose reachable state space is a subset of
that of the original one. In this extended finite state machine (denoted as EFSM) representation, this is written as new and stronger guard formulas on some of the transitions.
Our example has two more features: the location l5 , a dashed circle in the figure, is
forbidden, while the event α is uncontrollable. The latter feature means that the synthesised controller must not restrict any transition marked with the event α.
To solve this problem, we introduce an algorithm based on PDR [4] used in a software model checker (Section 3). Intuitively, what our algorithm does is to incrementally
build an inductive invariant which in turn implies the safety of the system. This invariant
is constructed by ruling out paths leading into the bad state, either by proving these bad
states unreachable from the initial states, or by making them unreachable via strengthening the guards.
In our example, the bad state l5 is found to have a preimage under the transition
relation T in l3 ∧ y = 2 ∧ x > 2. The transition from l3 to l5 is uncontrollable, so in
order to guarantee safety, we must treat this prior state as unsafe too. The transitions
leading into l3 are augmented with new guards, so that the system may only visit l3 if
the variables make a subsequent transition to l5 impossible. By applying our work, we
refined Figure 1 with the necessary transition guards and a proof that the new system is
safe. We show the refined system obtained by our approach in Figure 2.
Fig. 2. The transition system from the example, with guards updated to reflect the controlled system: the controllable transitions into l3 now carry the additional guard y ≠ 2 ∨ x ≯ 2.
1.2 Our Contributions
1. In this paper we present a novel algorithm based on PDR for controller synthesis
(Section 3) and prove correctness and termination of our approach (Section 4). To
the best of our knowledge, PDR has not yet been applied to supervisory control
systems in this fashion. We prove that our algorithm terminates (given finite variable domains) and that the synthesised controller is safe, minimally-restrictive, and
respects the controllability constraints of the system. Our algorithm encodes system
variables in the SAT domain; we however believe that our work can be extended by
using satisfiability modulo theory (SMT) reasoning instead of SAT.
2. We implemented our algorithm in the model checker Tip [5]. We evaluated our
implementation on a number of control theory problems and give practical evidence
of the benefits of our work (see Section 6).
2 Background
We use standard terminology and notation from first-order logic (FOL) and restrict
formulas mainly to quantifier-free formulas. We reserve P, R, T, I to denote formulas describing, respectively, safety properties, “frames” approximating reachable sets,
transition relations and initial properties of control systems; all other formulas will be
denoted with φ, ψ, possibly with indices. We write variables as x, y and sets of variables as X, Y . A literal is an atom or its negation, a clause a disjunction of literals, and
a cube a conjunction of literals. We use R to denote a set of clauses, intended to be
read as the conjunction of those clauses. When a formula ranges over variables in two
or more variable sets, we take φ(X, Y ) to mean φ(X ∪ Y ).
For every variable x in the control system, we assume the existence of a unique
variable x′ representing the next-state value of x. Similarly, the set X ′ is the set {x′ |x ∈
X}. As we may sometimes drop the variable set from a formula if it is clear from the
context, i.e. write φ instead of φ(X), we take φ′ to mean φ(X ′ ) in a similar fashion.
2.1 Modelling Discrete Event Systems
A given DES can be represented in several different ways. The simple, basic model
is the finite state machine (FSM) [10]. A state machine is denoted by the tuple G =
hQ, Σ, δ, Qi i, where Q is a finite set of states, Σ the finite set of events (alphabet),
δ ⊆ Q × Σ × Q the transition relation, and Qi ⊆ Q the set of initial states.
In this notation, a controller can be represented as a function C : Q → 2Σ denoting
which events are enabled in a given state. For any σ ∈ Σ and q ∈ Q, the statement
σ ∈ C(q) means that the controller allows transitions with the event σ to happen when
in q; conversely, σ ∈
/ C(q) means those transitions are prohibited.
Extended Finite State Machine. The state machine representation is general and monolithic. In order to more intuitively describe real supervisory control problems, other formalisms are also used. Firstly, we have the extended finite state machine (EFSM), which
is an FSM extended with FOL formulas over variables. In effect, we split the states into
locations and variables, and represent the system by the tuple A = hX, L, Σ, ∆, li, Θi.
Here, X is a set of variables, L a set of locations, Σ the alphabet, ∆ the set of transitions, li ∈ L the initial location and Θ(X) a formula describing the initial values of the
variables.
A transition in ∆ is now a tuple hl, a, mi where l, m are the entry and exit locations,
respectively, while the action a = (σ, φ) consists of the event σ ∈ Σ and φ(X, X ′ ).
The interpretation of this is that the system can make the transition from l to m if the
formula φ(X, X ′ ) holds. Since the formula can include next-state variables—φ may
contain arbitrary linear expressions over both X and X ′ —the transition can specify
updated variable values for the new state.
We have now defined almost all of the notation used in the example in Figure 1.
In the figure, we write σ : φ to denote the action (σ, φ). Furthermore, the figure is
simplified greatly by omitting next-state assignments on the form x′ = x, i.e. x keeping
its current value. If a variable does not appear in primed form in a transition formula,
that formula is implied to have such an assignment.
Symbolic Representation. Moving from FSM to EFSM can be seen as “splitting” the
state space into two spaces: the locations and the variables. A given feature of an FSM
can be represented as either one (although we note that one purpose for using variables
is to easier extend the model to cover an infinite state space). Using this insight we can
move to the “other extreme” of the symbolic transition system (STS): a representation
with only variables and no locations.
The system is here represented by the tuple SA = hX̂, T(X̂, X̂ ′ ), I(X̂)i where
X̂ is the set of variables extended by two new variables xL and xΣ with domains L
and Σ, respectively. With some abuse of notation, we use event and variable names to
denote formulas over those variables, such as ln for the literal xL = ln and ¬σ for the
literal xΣ 6= σ. The initial formula I and transition formula T are constructed from the
corresponding EFSM representation as I(X̂) = (xL = li) ∧ Θ(X) and
\[
  \mathbf{T}(\hat X, \hat X') \;=\; \bigvee_{\langle l,(\sigma,\phi),m\rangle \in \Delta} \bigl(l \wedge \sigma \wedge \phi(X, X') \wedge m'\bigr).
\]
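For instance, a schematic sketch of this construction in Python (our own illustration with hypothetical names, not the paper's tooling):

```python
def build_transition_formula(delta):
    """Build T(X, X') as a disjunction over the EFSM transitions.

    Each element of delta is (l, (sigma, phi), m), where phi is a predicate
    over the current and next-state variable assignments X and Xp."""
    def T(xL, xSigma, X, xLp, Xp):
        return any(
            xL == l and xSigma == sigma and phi(X, Xp) and xLp == m
            for (l, (sigma, phi), m) in delta
        )
    return T
```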
In this paper, we will switch freely between the EFSM and STS representations of
the same system, depending on which is the best fit for the situation. Additionally, we
will at times refer to X̂ as only X, as long as the meaning is clear from context. In
either representation, we will use state to refer to a single assignment of location and
variables, and path for a sequence of states s0 , s1 , ..., sk .
2.2 Supervisory Control
The general problem of supervisory control theory is this: to take a transition system,
such as the ones we have described so far, and modify it so that it fulfils some property which the unmodified system does not. There are several terms in this informal
description that require further explanation.
The properties that we are interested in are generally safety, non-blocking, and/or
liveness, which can be seen as a stronger form of non-blocking. Controlling for a safety
property means that in the controlled system, there should be no sequence of events
which enables transitions leading from an initial state to a forbidden state.
Non-blocking and liveness are defined relative to a set of marked states. The former
means that at least one such state is reachable from every state which is reachable from
the initial states. The latter, liveness, implies non-blocking, as it is the guarantee that
the system not only can reach but will return to a marked state infinitely often. In this
work we have reduced the scope of the problem by considering only safety.
Furthermore, we talk about the property of controllability. This is the notion that
some events in a DES are uncontrollable, which puts a restriction on any proposed
controller: in order to be valid, the transitions involving uncontrollable events must
not be restricted. Formally, in an (E)FSM it is enough to split the alphabet into the
uncontrollable Σu ⊆ Σ and the controllable Σc = Σ \ Σu. In an STS, this is expressed
by the transition relation taking the form T = Tu ∨ Tc, where Tc and Tu include
literals xΣ = σ for, respectively, only controllable and only uncontrollable events σ.
Finally there is the question of what form this “controlled system” takes, since a
controller function C : Q → 2Σ can be impractical. A common method is that of
designating a separate state machine as the supervisor, and taking the controlled system
to be the synchronous composition of the original system and the supervisor [8]. In
short, this means running them both in parallel, but only allowing a transition with a
shared event σ to occur simultaneously in both sub-systems.
However, the formidable theory of synchronised automata is not necessary for the
present work. Instead, we take the view that the controlled system is the original system,
either in the EFSM or STS formulation, with some additions.
In the EFSM case, the controlled system has the exact same locations and transitions, but additional guards and updates may be added. In other words, the controlled
system augments each controllable transition by replacing the original transition formula φ with the new formula φs = φ ∧ φnew . The uncontrollable transitions are left
unchanged. In the STS case, the new transition function is TS = Tu ∨ TSc, where
TSc = Tc ∧ Tcnew. This way, all uncontrollable transitions are guaranteed to be unmodified in the controlled system.
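Schematically (our own notation, not the paper's), the controlled relation never touches the uncontrollable part:

```python
def controlled_transition(T_u, T_c, T_c_new):
    """Return TS = Tu ∨ (Tc ∧ Tc_new): only controllable moves are restricted."""
    def TS(state, next_state):
        return T_u(state, next_state) or (
            T_c(state, next_state) and T_c_new(state, next_state)
        )
    return TS
```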
Finally, a controlled system, regardless of which properties the controller is set out
to guarantee, is often desired to be minimally restrictive (equivalently, maximally permissive).
The restrictiveness of a controlled system is defined as follows: out of two controlled
versions S1 and S2 of the same original system S, S1 is more restrictive than S2 if there
is at least one state, reachable under the original transition function T, which is reachable under TS2 but unreachable under TS1 . A controlled system is minimally restrictive
if no other (viable) controlled system exists which is less restrictive. The word “viable”
in brackets shows that one can talk about the minimally restrictive safe controller, the
minimally restrictive non-blocking controller and so on; for each combination of properties, the minimally restrictive controller for those properties is different.
3 PDRC: Property-Driven Reachability-Based Control
Property-driven reachability (PDR) [4] is a name for the method underlying IC3 [2],
used to verify safety properties in transition systems. In this paper we present Property-Driven Reachability-based Control (PDRC), which extends PDR from verifying safety
to synthesising a controller which makes the system safe. In order to explain PDRC, we
first review the main ingredients of PDR.
PDR works by successively blocking states that are shown to lead to unsafe states in
a certain number of steps. Blocking a state at step k here means performing SAT queries
to show that the state is unreachable from the relevant frame Rk . A frame Rk is a
predicate over-approximating the set of states reachable from the initial states I in k
steps.
When a state is blocked—i.e. shown to be unreachable—the relevant frame is updated by excluding that state from the reachable-set approximation. If a state cannot
be blocked at Rk , the algorithm finds its preimage s and proceeds to block s at Rk−1 .
If a state that needs to be blocked intersects with the initial states, the safety property
of the system has been proven false. Conversely, if two adjacent frames Ri , Ri+1 are
identical after an iteration, we have reached a fixed-point and a proof of the property P
in one of them entails a proof of P for the whole system.
With PDRC, we focus on the step where PDR has found a bad cube s (representing unsafe states) in frame Rk , and proceeds to check whether it is reachable from
the previous frame Rk−1 . If it is not, this particular cube was a false alarm: it was in
the over-approximation of k-reachable states, but after performing this check we can
sharpen that approximation to exclude s. If s was reachable, PDR proceeds to find its
preimage t which is in Rk−1 . Note that t is also a bad cube, since there is a path from
t to an unsafe state. However, in a supervisory control setting, there is no reason not
to immediately control the system by restricting all controllable transitions from t to s.
This observation is the basis of our PDRC algorithm.
3.1 Formal Description of PDRC
As PDRC is very similar to PDR, this description and the pseudocode procedures draw
heavily from [4].
Our PDRC algorithm is given in Algorithm 1. As input, we take a transition system
that can be represented by a transition function T(X, X ′ ) = Tc ∨ Tu , i.e. one where
each possible transition is either controllable or uncontrollable; and a safety property
Algorithm 1: Blocking and propagation for one iteration of N.
  // finding and blocking bad states
  1  while SAT[RN ∧ ¬P] do
  2      extract a bad state m from the SAT model;
  3      generalise m to a cube s;
  4      recursively block s as per block(s, N);
         // at this point R and/or T have been updated to rule out m
  5  end
  // propagation of proven clauses
  6  add new empty frame RN+1;
  7  for k ∈ [1, N] and c ∈ Rk do
  8      if Rk ⊨ c′ then
  9          add c to Rk+1;
  10     end
  11 end
P(X). The variables in X are boolean, in order to allow the use of a SAT solver –
although see Section 3.2 describing an extension from SAT to SMT.
Throughout the run of the algorithm, we keep a trace: a series of frames Ri , 0 ≤
i ≤ N . Each Ri (X) is a predicate that over-approximates the set of states reachable
from I in i steps or less. R0 = I, where I is a formula encoding the initial states.
Each frame Ri, i > 0 can be represented by a set of clauses Ri = {cij}j, such that
⋀j cij(X) = Ri(X). An empty frame Rj = {} is considered to encode ⊤, i.e. the
most over-approximating set possible.
We maintain the following invariants:
1. Ri → Ri+1
2. Ri → P, except for i = N
3. Ri+1 is an over-approximation of the image of Ri under T
Starting with N = 1 and R1 = {}, we proceed to do the first iteration of the
blocking and propagation steps, as shown in Algorithm 1.
The “blocking step” consists of the while-loop (lines 1–5) of Algorithm 1, and coming out of that loop we know that RN → P. The propagation step follows (lines 6–11),
and here we consider for each clause in some frame of the trace whether it also holds in
the next frame.
Afterwards, we check for a fix-point in Ri ; i.e. two syntactically equal adjacent
frames Ri = Ri+1 . Unless such a pair is found, we increment N by 1 and repeat the
procedure.
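To make the control flow concrete, here is a compact Python-style sketch of this outer loop (our own rendering, not the Tip implementation; `find_bad_state`, `generalise`, `block` and `propagates` are hypothetical callbacks hiding the SAT queries and the blocking routine described above):

```python
def pdrc(I, P, T_c, T_u, find_bad_state, generalise, block, propagates):
    """Sketch of the outer PDRC loop: Algorithm 1 repeated until a fixed point.

    R is the trace of frames; each frame beyond R[0] is a set of clauses,
    R[0] encodes the initial states I, and an empty frame stands for 'true'.
      find_bad_state(frame, P) -> a state of the frame violating P, or None
      generalise(state)        -> a cube containing the state
      block(cube, k, R, T_c, T_u) -> Algorithm 2 (may strengthen T_c)
      propagates(frame, clause)   -> True if the clause also holds one frame later
    """
    R = [I, set()]                         # N = 1, R[1] = {} i.e. ⊤
    while True:
        N = len(R) - 1
        # Blocking step (lines 1-5 of Algorithm 1).
        while True:
            m = find_bad_state(R[N], P)
            if m is None:
                break
            block(generalise(m), N, R, T_c, T_u)
        # Propagation step (lines 6-11 of Algorithm 1).
        R.append(set())
        for k in range(1, N + 1):
            for c in list(R[k]):
                if propagates(R[k], c):
                    R[k + 1].add(c)
        # Fixed-point check: two syntactically equal adjacent frames prove safety.
        for k in range(1, N + 1):
            if R[k] == R[k + 1]:
                return R[k], T_c           # inductive invariant and controller
```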
The most important step inside the while loop is the call to block (line 4). This
routine is shown in Algorithm 2. Here, we take care of the bad states in a straightforward way. First, we consider its preimage under the controllable transition function
Tc (line 2). The preimage cube t can be found by taking a model of the satisfiable
query Rk−1 ∧ ¬s ∧ Tc ∧ s′ and dropping the primed variables. Each such cube encodes
Algorithm 2: The blocking routine, which updates the supervisor.
  Data: A cube s and a frame index k
  // first consider the controllable transitions:
  1  while SAT[Rk−1 ∧ ¬s ∧ Tc ∧ s′] do
  2      extract and generalise a bad cube t in the preimage of Tc;
  3      update Tc := Tc ∧ ¬t;
  4  end
  // then consider the uncontrollable transitions:
  5  while SAT[Rk−1 ∧ ¬s ∧ Tu ∧ s′] do
  6      if k = 1 then
  7          throw error: system uncontrollable;
  8      end
  9      extract and generalise a bad cube t in the preimage of Tu;
  10     call block(t, k − 1);
  11 end
  12 add ¬s to Ri, i ≤ k;
states from which a bad state is reachable in one step. Thus, we update the supervisor to
disallow transitions from those bad states (line 3). This accounts for the first while-loop
in Algorithm 2.
The second while-loop (lines 5–11) is very similar, but considers the uncontrollable
transitions, encoded by Tu , instead. If a preimage cube is found here, we cannot rule it
out by updating the supervisor. That preimage instead becomes a bad state on its own,
to be controlled in the previous frame k − 1.
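A matching Python sketch of this routine (again our own illustration; `preimage_cube` and `UncontrollableError` are hypothetical helpers, where `preimage_cube(frame, s, T)` returns a generalised cube t making frame ∧ ¬s ∧ T ∧ s′ satisfiable, or None, and T_c is a mutable list whose conjunction is the controllable transition relation):

```python
class UncontrollableError(Exception):
    """Raised when an unsafe state is uncontrollably reachable from I."""

def block(s, k, R, T_c, T_u, preimage_cube):
    """Sketch of Algorithm 2: block the bad cube s at frame k."""
    # Controllable predecessors: cut them off by strengthening the guard.
    while True:
        t = preimage_cube(R[k - 1], s, T_c)
        if t is None:
            break
        T_c.append(("not", t))             # T_c := T_c ∧ ¬t
    # Uncontrollable predecessors become bad cubes one frame earlier.
    while True:
        t = preimage_cube(R[k - 1], s, T_u)
        if t is None:
            break
        if k == 1:
            raise UncontrollableError(t)   # uncontrollably unsafe from I
        block(t, k - 1, R, T_c, T_u, preimage_cube)
    # Record that s is unreachable in up to k steps.
    for i in range(1, k + 1):
        R[i].add(("not", s))               # add the clause ¬s to R_i
```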
Example 1. Example, revisited. Recall the example in Figure 1. Since it uses integer
variables it seems to require an SMT-based version of PDRC. This particular example is
so simple, however, that “bit-blasting” the problem into SAT by treating the proposition
x < i as a separate boolean variable for each value of i in the domain of x will yield
the same solution.
PDRC requires 3 iterations to completely supervise the system. In the first, the
clause ¬l5 is added to the first frame R1 , after proving that it is not in the initial states.
In the second, ¬l5 is found again but this time the uncontrollable transition from l3 is
followed backwards, and the clause ¬α ∨ ¬l3 ∨ y ≠ 2 ∨ x ≯ 2 is also added to R1,
which allows us to add ¬l5 to R2 . Finally, in the third, the trace of preimages lead
to the controllable transitions l1 → l3 and l2 → l3 , and we add new guards to both
(technically, we add new constraints to the transition function).
The updated system is the one shown in Figure 2. The third iteration also proves the
system safe, as we have R1 = R2. These frames then hold the invariant
(¬α ∨ ¬l3 ∨ y ≠ 2 ∨ x ≯ 2) ∧ ¬l5, which implies P and is inductive under the updated T.
3.2 Extension to SMT
Our PDRC algorithm in Algorithm 1 uses SAT queries, and is straightforward to use
with a regular SAT solver on systems with a propositional transition function. However,
like in [3, 9] it is possible to extend it to other theories, such as Linear Integer Arithmetic, using an SMT solver. The SAT query in Algorithm 1 presents no difficulty, but
some extra thought is required for the ones in the blocking procedure, which follow this
pattern:
while SAT[Ri ∧ ¬s ∧ T ∧ s′ ] do
extract and generalise a bad cube t in the preimage of T;
If one only replaces the SAT solver by an SMT solver capable of handling the
theory in question, one can extract a satisfying assignment of theory literals. However,
each of these might contain both primed and unprimed variables, such as the next-state
assignment x′ = x + 1.
These lines effectively ask the solver to generalise a state m—an assignment of
theory literals satisfying some formula F—into a more general cube t, ideally choosing
the t that covers the maximal amount of discrete states, while still guaranteeing t → F.
In the SAT case, this is achieved by dropping literals of t that do not affect the validity
of F(t). An alternate method based on ternary simulation, that is useful when the query
is for a preimage of a transition function T, is given in [4]. For the SMT case, however,
the extent of generalisation depends on the theory and the solver.
In the worst case of a solver that cannot generalise at all, the algorithm is consigned
to blocking a single state m in each iteration. This means that the state space simplification gained from using a symbolic transition function in the first place is lost, since the
reachability analysis checks states one by one. In conclusion, PDRC could be implemented for systems with boolean variables using a SAT-solver with no further issues,
while an SMT version would require carefully selecting the right solver for the domain.
We leave this problem as an interesting task for future work.
4 Properties of PDRC
In this section we prove the soundness and termination of our PDRC algorithm.
4.1 Termination
Theorem 1. For systems with state variables whose domains are finite, the PDRC algorithm always terminates.
The termination of regular PDR is proven in [4]. In the case of an unsafe system—
which for us corresponds to an uncontrollable system—the counterexample proving this
must be finite in length, and thus found in finite time. In the case of a safe system, the
proof is based on the following observations: that each proof-obligation (call to block)
must block at least one state in at least one frame; that there are a finite number of frames
for each iteration (value of N ); that there are a finite number of states of the system;
and that each Ri+1 must either block at least one more state than Ri , or they are equal.
All these observations remain true for PDRC, substituting “uncontrollable” for “unsafe”. This means that the proof of termination from [4] can be used for PDRC with
minimal modification.
4.2 Correctness
We claim that the algorithm described above synthesises a minimally restrictive safe
controller for the original system.
Theorem 2. If there exists any safe controller for the system, the controller synthesised
by the PDRC algorithm is safe.
Proof. We prove Theorem 2 by contradiction. Assume there is an unsafe state s, i.e. we
have ¬P(s), that is reachable from an I-state in k steps. We must then have k ≥ N ,
since invariant (2) states that Ri → P, i < N . Let M be the index of the discovered
fix point RM = RM+1 .
Invariant (1) (from Section 3.1) states that Ri → Ri+1 , and this applies for all
values 0 ≤ i ≤ M . Repeated application of this means that any state in any Ri , i < M
is also contained in RM .
Invariant (3) states that Ri+1 is an over-approximation of the image of Ri . This
means that any state reachable from RM should be in RM+1 . Since RM = RM+1 ,
such a state is also in RM itself. Repeated application of this allows us to extend the
trace all the way to Rk = Rk−1 = · · · = RM .
Now, for the bad state s, regardless of the number of steps k needed to reach it, we
know that s is contained in Rk and therefore in RM . Yet when the algorithm terminated
it had at one point found RM ∧ ¬P to be UNSAT. The state s, which is both in RM and
¬P, would constitute a satisfying assignment to this query. This contradiction proves
that s cannot exist.
⊓
⊔
Theorem 3. A controller synthesised by the PDRC algorithm is minimally restrictive.
Proof. We prove Theorem 3 also by contradiction. Assume there is a safe path π =
s0 , s1, . . . , sk through the original system (with transition function T), which is not
possible using the controlled transition function TPDRC ; yet there exists another safe,
controllable supervisor represented by TS where π is possible. By deriving a contradiction, we will prove that no such TS can exist.
Consider the first step of π that is not allowed by TPDRC ; in other words, a pair
(si , si+1 ) where we have ¬TPDRC (si , si+1 ) while we do have both TS (si , si+1 ) and
T(si , si+1 ). The only way that TPDRC is more restrictive than T is due to strengthenings of the form TcPDRC = Tc ∧ ¬m, for some cube m. This means that si must be in
some cube m that PDRC supervised in this fashion.
This happened inside a call block(m, j). Since π is safe, this call cannot have
been made because m itself encoded unsafe states. Instead, there must have been a
previous call block(n, j + 1), where m is a minterm of the preimage of n under Tu .
This cube n is either itself a bad cube, or it can be traced to a bad cube by following the
trace of block calls. Since each step in this block chain only uses Tu , we can find a
series of uncontrollable transitions, starting in some s̃i+1 ∈ n, leading to some cube p
which is a generalisation of a satisfying assignment to the query RN ∧ ¬P.
This proves that TS , whose TSc does not restrict transitions from si , allows for
the system to enter a state s̃i+1 , from which there is an uncontrollable path to an unsafe
state. This contradicts the assumption that TS was safe, proving that the combination of
π and TS cannot exist. This proves that the controller encoded by TPDRC is minimally
restrictive.
□
5 Implementation
We have implemented a prototype of PDRC in the model checker Tip (Temporal Inductive Prover [5]). The input format supported by Tip is AIGER [1], where the transition
system is represented as a circuit, which is not a very intuitive way to view an EFSM or
STS. For this reason, our prototype also includes Haskell modules for creating a transition system in a control-theory-friendly representation, converting it to AIGER, and
using the output from the Tip-PDRC to reflect the new, controlled system synthesised by
PDRC. Finally, it also includes a parser from the .wmod format used by WATERS and
Supremica [13], into our Haskell representation. Altogether, our implementation consists of about 150 lines of code added or changed in the Tip source, and about 1600 lines
of Haskell code. Our tools, together with the benchmarks we used, are available through
github.com/JonatanKilhamn/supermini and github.com/JonatanKilhamn/tipcheck.
When converting transition systems into circuits, certain choices have to be made.
Our encoding allows for synchronised automata with one-hot-encoded locations (e.g.
location l3 out of 5 is represented by the bits [0, 0, 1, 0, 0]) and unary-encoded integer
variables (e.g. a variable ranging from 0 to 5 currently having the value 3 is represented
by [1, 1, 1, 0, 0]). Each of these encodings has a corresponding invariant: with one-hot,
exactly one bit must be set to 1; with unary, each bit implies the previous one. However,
these invariants need not be explicitly enforced by the transition relation (i.e. as guards
on every transition), rather, it is enough that they are preserved by all variable updates.
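As a concrete illustration of the two encodings and their invariants, here is a small Python sketch. The function names are ours and are for illustration only; the actual circuit-level encoding in our prototype happens in the Haskell modules when the system is translated to AIGER.

```python
def one_hot(value, size):
    """One-hot encode a location index: exactly one bit is 1."""
    return [1 if i == value else 0 for i in range(size)]

def unary(value, size):
    """Unary (thermometer) encode an integer in [0, size]: bits 0..value-1 are 1."""
    return [1 if i < value else 0 for i in range(size)]

def one_hot_invariant(bits):
    """Exactly one bit set."""
    return sum(bits) == 1

def unary_invariant(bits):
    """Each set bit implies that the previous bit is also set."""
    return all(bits[i - 1] >= bits[i] for i in range(1, len(bits)))

# Examples from the text: location l3 out of 5, and an integer value 3 in 0..5.
assert one_hot(2, 5) == [0, 0, 1, 0, 0] and one_hot_invariant(one_hot(2, 5))
assert unary(3, 5) == [1, 1, 1, 0, 0] and unary_invariant(unary(3, 5))
```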
It should be noted that although PDRC on a theoretical level works equally well on STS as on EFSM, our implementation does assume the EFSM division between
locations and variables for the input system. However, our implementation retains the
generality of PDRC in how the state space is explored—the algorithm described in
Section 3 is run on the circuit representation, where the only difference between the
location variable xL and any other variable is the choice of encoding.
6 Experiments
For an empirical evaluation, we ran PDRC on several standard benchmark problems:
the extended dining philosophers (EDP) [15], the cat and mouse tower (CMT) [15] and
the parallel manufacturing example (PME) [12]. The runtimes of these experiments
are shown in Table 1 below. The benchmarks were performed on a computer with a 2.7
GHz Intel Core i5 processor and 8GB of available memory.
6.1 Problems
For the dining philosophers, EDP(n, k) denotes the problem of synthesising a safe
controller for n philosophers and k intermediary states that each philosopher must go
through between taking their left fork and taking their right one. The transition system
is written so that all philosophers respect when their neighbours are holding the forks,
except for the even-numbered ones who will try to take the fork to their left even if it is
held, which leads (uncontrollably) to a forbidden state.
For the cat and mouse problem, CMT(n, k) similarly denotes the problem with n
floors of the tower, k cats and k mice. Again, the transition system already prohibits
cats and mice from entering the same room (forbidden state) except by a few specified
uncontrollable pathways.
Finally, the parallel manufacturing example (PME) represents an automated factory,
with an industrial robot and several shared resources. It differs from the other in that
its scale comes mainly from the number of different synchronised automata. In return,
it does not have a natural parameter that can be set to higher values to increase the
complexity further.
6.2 Results
We compare PDRC to Symbolic Supervisory Control using BDD (SC-BDD) [14, 6],
which is implemented within Supremica. We wanted to include the Incremental, Inductive Supervisory Control (IISC) algorithm [18], which also uses PDR but in another
way. However, the IISC implementation from [18] is no longer maintained. Despite this
failed replication, we include figures for IISC taken directly from [18]—with all the
caveats that apply when comparing runtimes obtained from different machines. Table 1
shows runtimes, where the problems are denoted as above and “×” indicates time-out
(5 min). The parameters for EDP and CMT were chosen to show a wide range from
small to large problems, while still mostly choosing values for which [18] reports runtimes for IISC. We see that while SC-BDD might have the advantage on certain small
problems, PDRC quickly outpaces it as the problems grow larger.
Table 1. Performance of PDRC (our contribution), SC-BDD and IISC on standard benchmark
problems. Note that the IISC implementation was not reproducible by us; the numbers here are
lifted from [18]. “×” indicates timeout (5 min), and “–” means this particular problem was not
included in [18].
Model         PDRC    IISC [18]  SC-BDD
CMT(1,5)      0.09    0.13       0.007
CMT(3,3)      1.3     0.43       1.12
CMT(5,5)      8.3     0.73       ×
CMT(7,7)      30.02   0.98       ×
EDP(5,10)     0.03    0.98       0.031
EDP(10,10)    0.15    –          0.10
EDP(5,50)     0.03    0.12       0.26
EDP(5,200)    0.06    0.12       ×
EDP(5,10e3)   0.19    0.12       ×
PME           0.72    2.3        8.1
7 Discussion
In this section, we relate briefly how SC-BDD [14, 6] and IISC [18] work, in order to
compare and contrast to PDRC.
7.1 SC-BDD
SC-BDD works by modelling an FSM as a binary decision diagram (BDD). The algorithm generates a BDD, representing the safe states, by searching backwards from the
forbidden states. However, the size of this BDD grows with the domain of the integer
variables. The reason is that the size of the BDD is quite sensitive to the number of
binary variables, and also to the ordering of the variables in the BDD. Even when more recent techniques for partitioning the problem are used [6], the size of the BDD blows
up, and we see in Table 1 that SC-BDD very quickly goes from good performance to
time-out.
7.2 IISC
It is natural to compare PDRC to IISC [18], since the latter is also inspired by PDR
(albeit under the name IC3). In theory, PDRC has some advantages.
The first advantage is one of representation. IISC is built on the EFSM’s separation
between locations and variables, as described in Section 2.1. PDRC, on the other hand, handles
the more general STS representation. Specifically, IISC explicitly unrolls the entire substate-space spanned by the locations. This sub-space can itself suffer a space explosion
when synchronising a large number of automata.
To once again revisit our example (Figure 1): IISC would unroll the graph, starting
in l0 , into an abstract reachability tree. Each node in such a tree can cover any combination of variable values, but only one location. Thus, IISC effectively does a forwards
search for bad locations, and the full power of PDR (IC3) is only brought to bear on the
assignment of variables along a particular error trace. Thus, a bad representation choice
w.r.t. which parts of the system are encoded as locations versus as variables can hurt
IISC, while PDRC is not so vulnerable.
PDRC, in contrast, leverages PDR’s combination of forwards and backwards search:
exploring the state space backwards from the bad states in order to construct an inductive invariant which holds in the initial states. One disadvantage of the backwards search
is that PDRC might add redundant safeguards. For example, the safeguard on the transition from l1 to l3 in Figure 2 is technically redundant, as there is no way to reach
l2 with the restricted variable values from the initial states. As shown in [18], IISC
does not add this particular guard. However, since both methods are proven to yield
minimally-restrictive supervisors, any extra guards added by PDRC are guaranteed not
to affect the behaviour of the final system.
The gain, on the other hand, is that one does not need to unroll the whole path from
the initial state to the forbidden state in order to supervise it. Consider: each such error
path must have a “point of no return”—the last controllable transition. When synthesising for safety, this transition must never be left enabled (our proof of Theorem 3 hinges
upon this). In order to find this point, PDRC traverses only the path between the point
of no return and the forbidden state, whereas IISC traverses the whole path. In a sense,
PDRC does not care about how one might end up close to a forbidden state, but only
where to put up the fence.
In practice, our results have IISC outperforming PDRC on both EDP and CMT.
We believe the main reason is that unlike IISC which uses IC3 extended to SMT [3],
our implementation of PDRC works in SAT. This means that while both algorithms
are theoretically equipped to abstract away large swathes of the state space, IISC does so
much more easily on integer variables than PDRC, which needs to e.g. represent each possible
value of a variable as a separate gate.
The one point where PDRC succeeds also in practice is on the PME problem. Here,
most of the system’s complexity comes from the number of different locations across
the synchronised automata, rather than from large variable domains. In order to further
explore this difference in problem type, we would have liked to evaluate PDRC and
IISC on more problems with more synchronised automata, such as EDP(10,10). Sadly,
this was impossible since the IISC implementation is no longer maintained.
8 Conclusions and Future Work
We have presented PDRC, an algorithm for controller synthesis of discrete event systems with uncontrollable transitions, based on property directed reachability. The algorithm is proven to terminate on all solvable problem instances, and its synthesised controllers are proven to be safe and minimally restrictive. We have also implemented a
prototype in the SAT-based model checker Tip. Our experiments show that even this
SAT-based implementation outperforms a comparable BDD-based approach, but not
the more recent IISC. However, since the implementation of IISC we compare against
uses an SMT solver, not to mention that it is not maintained anymore, we must declare
the algorithm-level comparison inconclusive.
The clearest direction for future research would be to implement PDRC using an
SMT solver, to see if this indeed does realise further potential of the algorithm like we
believe. Both [3] and [9] provide good insights for this task. However, another interesting direction is to use both PDRC and IISC as a starting point for tackling the larger
problem: safe and nonblocking controller synthesis. Expanding the problem domain
like this cannot be done by a trivial change to PDRC, but hopefully the insights from
this work can contribute to a new algorithm. Another technique to draw from is that
of IICTL [7]. As discussed in Section 2.2, by restricting our problem to only safety,
we remove ourselves from real-world applications. For this reason, we do not present
PDRC as a contender for any sort of throne, but as a stepping stone towards the real
goal: formal, symbolic synthesis and verification of discrete supervisory control.
References
[1] Armin Biere. AIGER. 2014. URL: http://fmv.jku.at/aiger/ (visited
on 07/24/2017).
[2] Aaron R. Bradley. “SAT-Based Model Checking without Unrolling”. In: Verification, Model Checking, and Abstract Interpretation: 12th International Conference, VMCAI 2011, Austin, TX, USA, January 23-25, 2011. Proceedings. Ed. by
Ranjit Jhala and David Schmidt. Berlin, Heidelberg: Springer Berlin Heidelberg,
2011, pp. 70–87. ISBN: 978-3-642-18275-4. DOI: 10.1007/978-3-642-18275-4_7.
[3] Alessandro Cimatti and Alberto Griggio. “Software Model Checking via IC3”.
In: Computer Aided Verification: 24th International Conference, CAV 2012, Berkeley, CA, USA, July 7-13, 2012 Proceedings. Ed. by P. Madhusudan and Sanjit
A. Seshia. Berlin, Heidelberg: Springer Berlin Heidelberg, 2012, pp. 277–293.
ISBN : 978-3-642-31424-7. DOI: 10.1007/978-3-642-31424-7_23.
[4] Niklas Eén, Alan Mishchenko, and Robert Brayton. “Efficient Implementation
of Property Directed Reachability”. In: Proceedings of the International Conference on Formal Methods in Computer-Aided Design. FMCAD ’11. Austin,
Texas: FMCAD Inc, 2011, pp. 125–134. ISBN: 978-0-9835678-1-3. URL: http://dl.acm.org/citation.
[5] Niklas Eén and Niklas Sörensson. “Temporal Induction by Incremental SAT
Solving”. In: Electronic Notes in Theoretical Computer Science 89.4 (2003),
pp. 543–560. ISSN: 1571-0661. DOI: http://dx.doi.org/10.1016/S1571-0661(05)82542-3.
[6] Z. Fei et al. “A symbolic approach to large-scale discrete event systems modeled
as finite automata with variables”. In: 2012 IEEE International Conference on
Automation Science and Engineering (CASE). Aug. 2012, pp. 502–507. DOI:
10.1109/CoASE.2012.6386479.
[7] Zyad Hassan, Aaron R. Bradley, and Fabio Somenzi. “Incremental, Inductive
CTL Model Checking”. In: Proceedings of the 24th International Conference on
Computer Aided Verification. CAV’12. Springer-Verlag, 2012, pp. 532–547.
[8] C. A. R. Hoare. Communicating Sequential Processes. Upper Saddle River, NJ,
USA: Prentice-Hall, Inc., 1985. ISBN: 0-13-153271-5.
[9] Kryštof Hoder and Nikolaj Bjørner. “Generalized Property Directed Reachability”. In: Proceedings of the 15th International Conference on Theory and Applications of Satisfiability Testing. SAT’12. Trento, Italy: Springer-Verlag, 2012,
pp. 157–171. ISBN: 978-3-642-31611-1. DOI: 10.1007/978-3-642-31612-8_13.
[10] John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. Introduction to Automata Theory, Languages, and Computation (3rd Edition). Boston, MA, USA:
Addison-Wesley Longman Publishing Co., Inc., 2006. ISBN: 0321462254.
[11] R. Kumar, V. Garg, and S. I. Marcus. “Predicates and predicate transformers for
supervisory control of discrete event dynamical systems”. In: IEEE Transactions
on Automatic Control 38.2 (Feb. 1993), pp. 232–247. ISSN: 0018-9286. DOI:
10.1109/9.250512.
[12] R. J. Leduc, M. Lawford, and W. M. Wonham. “Hierarchical interface-based
supervisory control-part II: parallel case”. In: IEEE Transactions on Automatic
Control 50.9 (Sept. 2005), pp. 1336–1348. ISSN: 0018-9286. DOI: 10.1109/TAC.2005.854612.
[13] Robi Malik. Waters/Supremica IDE. 2014. URL: http://www.cs.waikato.ac.nz/˜robi/download_w
(visited on 07/24/2017).
[14] S. Miremadi, B. Lennartson, and K. Akesson. “A BDD-Based Approach for
Modeling Plant and Supervisor by Extended Finite Automata”. In: IEEE Transactions on Control Systems Technology 20.6 (Nov. 2012), pp. 1421–1435. ISSN:
1063-6536. DOI: 10.1109/TCST.2011.2167150.
[15] Sajed Miremadi, Knut Akesson, et al. “Solving two supervisory control benchmark problems using Supremica”. In: 2008 9th International Workshop on Discrete Event Systems. May 2008, pp. 131–136. DOI: 10.1109/WODES.2008.4605934.
[16] P.J. Ramadge and W.M. Wonham. “The control of discrete event systems”. In:
Proceedings of the IEEE, Special Issue on Discrete Event Dynamic Systems 77.1
(1989), pp. 81–98. ISSN: 0018-9219.
[17] Mohammad Reza Shoaei. Incremental and Hierarchical Deadlock-Free Control
of Discrete Event Systems with Variables: A Symbolic and Inductive Approach.
PhD thesis, Series 3827. Chalmers University of Technology, Dept. of Signals
and Systems, Automation, 2015, pp. 44–45. ISBN: 978-91-7597-146-9.
[18] Mohammad Reza Shoaei, Laura Kovács, and Bengt Lennartson. “Supervisory
Control of Discrete-Event Systems via IC3”. In: Hardware and Software: Verification and Testing: 10th International Haifa Verification Conference, HVC 2014,
Haifa, Israel, November 18-20, 2014. Proceedings. Ed. by Eran Yahav. Springer
International Publishing, 2014, pp. 252–266.
Throughput-Optimal Broadcast in Wireless Networks with
Dynamic Topology
Abhishek Sinha
Laboratory for Information and Decision Systems, MIT
sinhaa@mit.edu

Leandros Tassiulas
Electrical Engg. and Yale Institute of Network Science, Yale University
leandros.tassiulas@yale.edu

Eytan Modiano
Laboratory for Information and Decision Systems, MIT
modiano@mit.edu

arXiv:1604.00576v1 [] 3 Apr 2016
ABSTRACT
We consider the problem of throughput-optimal broadcasting in time-varying wireless networks, whose underlying topology is restricted to Directed Acyclic Graphs (DAG). Previous broadcast algorithms route packets along spanning trees.
In large networks with time-varying connectivities, these
trees are difficult to compute and maintain. In this paper
we propose a new online throughput-optimal broadcast algorithm which makes packet-by-packet scheduling and routing decisions, obviating the need for maintaining any global
topological structures, such as spanning-trees. Our algorithm relies on system-state information for making transmission decisions and hence, may be thought of as a generalization of the well-known back-pressure algorithm which
makes point-to-point unicast transmission decisions based
on queue-length information, without requiring knowledge
of end-to-end paths. Technically, the back-pressure algorithm is derived by stochastically stabilizing the network-queues. However, because of packet-duplications associated
with broadcast, the work-conservation principle is violated
and queuing processes are difficult to define in the broadcast
problem. To address this fundamental issue, we identify certain state-variables which behave like virtual queues in the
broadcast setting. By stochastically stabilizing these virtual queues, we devise a throughput-optimal broadcast policy. We also derive new characterizations of the broadcast-capacity of time-varying wireless DAGs and derive an efficient algorithm to compute the capacity exactly under certain assumptions, and a poly-time approximation algorithm
for computing the capacity under less restrictive assumptions.
1. INTRODUCTION
The problem of efficiently disseminating packets, arriving
at a source node, to a subset of nodes in a network, is known
as the Multicast problem. In the special case when the packets are to be distributed among all nodes, the corresponding
problem is referred to as the Broadcast problem. Multicast-
ing and broadcasting is considered to be a fundamental network functionality, which enjoys numerous practical applications ranging from military communications [16], disaster
management using mobile adhoc networks (MANET) [9], to
streaming services for live web television [24] etc.
There exists a substantial body of literature addressing different aspects of this problem in various networking settings.
An extensive survey of various multicast routing protocols
for MANET is provided in [12]. The authors of [8] consider
the problem of minimum latency broadcast of a finite set of
messages in MANET. This problem is shown to be NP-hard.
To address this issue, several approximation algorithms are
proposed in [11], all of which rely on construction of certain network-wide broadcast-trees. Cross-layer solutions for
multi-hop multicasting in wireless network are given in [29]
and [10]. These algorithms involve network coding, which introduces additional complexity and exacerbates end-to-end
delay. The authors of [21] propose a multicast scheduling
and routing protocol which balances load among a set of
pre-computed spanning trees, which are challenging to compute and maintain in a scalable fashion. The authors of [26]
propose a local control algorithm for broadcasting in a wireless network for the so called scheduling-free model, in which
an oracle is assumed to make interference-free scheduling
decisions. This assumption, as noted by the authors themselves, is not practically viable.
In this paper we build upon the recent work of [23] and consider the problem of throughput-optimal broadcasting in a
wireless network with time-varying connectivity. Throughout the paper, the overall network-topology will be restricted
to a directed acyclic graph (DAG). We first characterize the
broadcast-capacity of time-varying wireless networks and
propose an exact and an approximation algorithm to compute it efficiently. Then we propose a dynamic link-activation
and packet-scheduling algorithm that, unlike any previous
algorithms, obviates the need to maintain any global topological structures, such as spanning trees, yet achieves the
capacity. In addition to throughput-optimality, the proposed algorithm enjoys the attractive property of in-order
packet-delivery, which makes it particularly useful in various
online applications, e.g. VoIP and live multimedia communication [4]. Our algorithm is model-oblivious in the sense that
its operation does not rely on detailed statistics of the random arrival or network-connectivity processes. We also show
that the throughput-optimality of our algorithm is retained
when the control decisions are made using locally available
and possibly imperfect, state information.
Notwithstanding the vast literature on the general topic of
broadcasting, to the best of our knowledge, this is the first
work addressing throughput-optimal broadcasting in timevarying wireless networks with store and forward routing.
Our main technical contributions are the following:
• We define the broadcast-capacity for wireless networks
with time-varying connectivity and characterize it mathematically and algorithmically. We show that broadcastcapacity of time-varying wireless directed acyclic networks can be computed efficiently under some assumptions. We also derive a tight-bound for the capacity
for a general setting and utilize it to derive an efficient
approximation algorithm to compute it.
• We propose a throughput-optimal dynamic routing and
scheduling algorithm for broadcasting in a wireless DAGs
with time-varying connectivity. This algorithm is of
Max-Weight type and uses the idea of in-order delivery to simplify its operation. To the best of our knowledge, this is the first throughput-optimal dynamic algorithm proposed for the broadcast problem in wireless
networks.
• We extend our algorithm to the setting when the nodes
have access to infrequent state updates. We show
that the throughput-optimality of our algorithm is preserved even when the rate of inter-node communication
is made arbitrarily small.
• We illustrate our theoretical findings through numerical simulations.
The rest of the paper is organized as follows. Section 2
introduces the wireless network model. Section 3 defines
and characterizes the broadcast capacity of a wireless DAG.
It also provides an exact and an approximation algorithm
to compute the broadcast-capacity. Section 4 describes our
capacity-achieving broadcast algorithm for DAG networks.
Section 5 extends the algorithm to the setting of broadcasting with imperfect state information. Section 6 provides numerical simulation results to illustrate our theoretical findings. Finally, in section 7 we summarize our results and
conclude the paper.
2. NETWORK MODEL
2.1 Notations and Nomenclature:
All vectors in this paper are assumed to be column vectors.
For any set X ⊂ Rk , its convex-hull is denoted by conv(X ).
Let U, V \ U be a disjoint partition of the set of vertices V of the graph G, such that the source r ∈ U and U ⊊ V. Such a partition is called a proper-partition. To each proper partition corresponding to the set U, associate the proper-cut vector u ∈ R^m, defined as follows:
$$u_{i,j} = \begin{cases} c_{i,j}, & \text{if } i \in U,\ j \in V \setminus U \\ 0, & \text{otherwise} \end{cases} \qquad (1)$$
Denote the special, single-node proper-cuts by Uj ≡ V \ {j},
and the corresponding cut-vectors by uj , ∀j ∈ V \ {r}. The
set of all proper-cut vectors in the graph G is denoted by U.
The in-neighbours of a node j is defined as the set of all
nodes i ∈ V such that there is a directed edge (i, j) ∈ E. It
is denoted by the set ∂ in (j), i.e.,
∂ in (j) = {i ∈ V : (i, j) ∈ E}
(2)
Similarly, we define the out-neighbours of a node j as follows
$$\partial^{out}(j) = \{ i \in V : (j, i) \in E \} \qquad (3)$$
For any two vectors x and y in Rm , define the componentwise product z ≡ x ⊙ y to be a vector in Rm such that
zi = xi yi , 1 ≤ i ≤ m.
For any set S ⊂ Rm and any vector v ∈ Rm , v⊙S, denotes
the set of vectors obtained as the component-wise product
of the vector v and the elements of the set S, i.e.,
$$v \odot S = \{ y \in \mathbb{R}^m : y = v \odot s,\ s \in S \} \qquad (4)$$
Also, the usual dot product between two vectors x, y ∈ R^m is defined as
$$x \cdot y = \sum_{i=1}^{m} x_i y_i.$$
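The notation above is easy to exercise on a toy graph. The short Python sketch below builds the proper-cut vector u of Eqn. (1) for a given partition and evaluates expressions of the form u · (c ⊙ s); the example graph, capacities and edge ordering are ours, chosen only for illustration.

```python
# Toy illustration of proper-cut vectors (Eqn. 1), the component-wise
# product ⊙, and the dot product, on a 3-node DAG r -> a -> b, r -> b.
edges = [("r", "a"), ("a", "b"), ("r", "b")]   # fixes the coordinate order in R^m
capacity = [1.0, 1.0, 1.0]                     # c_e for each edge, in the same order

def proper_cut_vector(U):
    """Cut vector u for the proper partition (U, V \\ U): u_e = c_e on edges
    leaving U, and 0 elsewhere (the source r must be in U)."""
    return [c if (i in U and j not in U) else 0.0
            for (i, j), c in zip(edges, capacity)]

def hadamard(x, y):
    """Component-wise product x ⊙ y."""
    return [xi * yi for xi, yi in zip(x, y)]

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

u_b = proper_cut_vector({"r", "a"})        # single-node cut isolating node b
s = [1, 0, 0]                              # an activation vector (edge r->a only)
print(u_b)                                 # [0.0, 1.0, 1.0]
print(dot(u_b, hadamard(capacity, s)))     # 0.0: no activated capacity crosses the cut
```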
2.2 Model of Time-varying Wireless Connectivity
First we describe the basic wireless network model without time-variation. Subsequently, we will incorporate timevariation in the basic model. A static wireless network is
modeled by a directed graph G = (V, E, c, M), where V is
the set of nodes, E is the set of directed point-to-point links1 ,
the vector c = (cij ) denotes capacities of the edges when
the corresponding links are activated and M ⊂ {0, 1}|E|
is the set of incidence-vectors corresponding to all feasible
link-activations complying with the interference-constraints.
The structure of the activation-set M depends on the interference model, e.g., under the primary or node-exclusive interference model [14], M corresponds to the set of all matchings on the graph G. There are a total of |V | = n nodes and
|E| = m edges in the network. Time is slotted and at timeslot t, any subset of links complying with the underlying
interference-constraint may be activated. At most cij packets can be transmitted in a slot from node i to node j, when link (i, j) is activated. Let r ∈ V be the source node. At slot t, A(t) packets arrive at the source. The arrivals A(t) are i.i.d. over slots with mean E(A(t)) = λ. Our problem is to efficiently disseminate the packets to all nodes in the network.
Now we incorporate time-variation into our basic framework described above. In a wireless network, the channel-SINRs vary with time because of random fading, shadowing
and mobility [27]. To model this, we consider a simple ONOFF model where an individual link can be in one of the
two states, namely ON and OFF. In an OFF state, the capacity of a link is zero 2 . Thus at a given time, the network
can be in any one configuration, out of the set of all possible
network configurations Ξ. Each element σ ∈ Ξ corresponds
to a sub-graph G(V, Eσ ) ⊂ G(V, E), with Eσ ⊂ E, denoting
the set of links that are ON. At a given time-slot t, one of the configurations σ(t) ∈ Ξ is realized. The configuration at time t is represented by the vector σ(t) ∈ {0, 1}^{|E|}, where
$$\sigma(e, t) = \begin{cases} 1, & \text{if } e \in E_{\sigma(t)} \\ 0, & \text{otherwise.} \end{cases}$$
Footnote 1: We assume all transmit and receiving antennas to be directed and hence all transmissions to be point-to-point [3].
Footnote 2: Generalization of the ON-OFF model to multi-level discretization of link-capacity is straightforward.
At a given time-slot t, the network controller may activate
a set of non-interfering links that are ON.
The network-configuration process {σ(t)}_{t≥1} evolves in discrete time according to a stationary ergodic process with the stationary distribution {p(σ)}_{σ∈Ξ} [13], where
$$\sum_{\sigma \in \Xi} p(\sigma) = 1, \qquad p(\sigma) > 0, \ \forall \sigma \in \Xi \qquad (5)$$
Since the underlying physical processes responsible for
time-variation are often spatially-correlated [1], [19], the distribution of the link-states is assumed to follow an arbitrary
joint-distribution. The detailed parameters of this process
depend on the ambient physical environment, which is often difficult to measure. In particular, it is unrealistic to
assume that the broadcast-algorithm has knowledge of the
parameters of the process σ(t). Fortunately, our proposed
dynamic throughput-optimal broadcast algorithm does not
require the statistical characterization of the configurationprocess σ(t) or its stationary-distribution p(σ). This makes
our algorithm robust and suitable for use in time-varying
wireless networks.
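For simulation purposes, one concrete (and deliberately simple) instance of such a configuration process is to draw σ(t) i.i.d. across slots from a given joint distribution p(σ) over the sub-graphs in Ξ. This is only one member of the stationary ergodic family allowed above, and the two-link distribution in the Python sketch below is our own toy example.

```python
import random

# Toy configuration process: two links, four configurations, drawn i.i.d.
# across slots from a joint distribution p(σ).  Each configuration is the
# tuple of ON/OFF indicators (σ(e1), σ(e2)), allowing arbitrary correlation
# between the two links.
configurations = [(1, 0), (0, 1), (1, 1), (0, 0)]     # σ1, σ2, σ3, σ4
p = [0.25, 0.25, 0.25, 0.25]                          # zero-correlation example

def sample_configuration(rng=random):
    """Return one realisation σ(t) of the network configuration."""
    return rng.choices(configurations, weights=p, k=1)[0]

# Empirical marginals should match: each link is ON half of the time.
samples = [sample_configuration() for _ in range(10000)]
print(sum(s[0] for s in samples) / len(samples))   # ≈ 0.5
print(sum(s[1] for s in samples) / len(samples))   # ≈ 0.5
```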
3. DEFINITION AND CHARACTERIZATION OF BROADCAST CAPACITY
Intuitively, a network supports a broadcast rate λ if there
exists a scheduling policy under which all network nodes
receive distinct packets at rate λ. The broadcast-capacity
of a network is the maximally supportable broadcast rate
by any policy. Formally, we consider a class Π of scheduling
policies where each policy π ∈ Π consists of a sequence of
actions {πt }t≥1 , executed at every slot t. Each action πt
consists of two operations:
• The scheduler observes the current network-configuration
σ(t) and activates a subset of links by choosing a feasible activation vector s(t) ∈ Mσ(t) . Here Mσ denotes
the set of all feasible link-activation vectors in the subgraph G(V, Eσ ), complying with the underlying interference constraints. As an example, under the primary
interference constraint, Mσ is given by the set of all
matchings [6] of the sub-graph G(V, Eσ ).
Analytically, elements from the set Mσ will be denoted by their corresponding |E|-dimensional binary
incidence-vectors, whose component corresponding to
edge e is identically zero if e ∈
/ Eσ .
• Each node i forwards a subset of packets (possibly
empty) to node j over an activated link (i, j) ∈ σ(t),
subject to the link capacity constraint. The class Π
includes policies that may use all past and future information, and may forward any subset of packets over
a link, subject to the link-capacity constraint.
To formally introduce the notion of broadcast capacity,
we define the random variable Riπ (T ) to be the number of
distinct packets received by node i ∈ V up to time T , under
a policy π ∈ Π. The time average lim inf T →∞ Riπ (T )/T is
the rate of packet-reception at node i.
Definition 1. A policy π ∈ Π is called a “broadcast policy of rate λ” if all nodes receive distinct packets at rate λ,
i.e.,
$$\min_{i \in V} \liminf_{T \to \infty} \frac{1}{T} R_i^{\pi}(T) = \lambda, \qquad \text{w.p. } 1 \qquad (6)$$
where λ is the packet arrival rate at the source node r.
Definition 2. The broadcast capacity λ∗ of a network is
defined to be the supremum of all arrival rates λ, for which
there exists a broadcast policy π ∈ Π of rate λ.
In the following subsection, we derive an upper-bound on
broadcast-capacity, which immediately follows from the previous definition.
3.1 An Upper-bound on Broadcast Capacity
Consider a policy π ∈ Π that achieves a broadcast rate of
at least λ∗ − ǫ, for an ǫ > 0. Such a policy π exists due to
the definition of the broadcast capacity λ∗ in Definition 2.
Now consider any proper-cut U of the network G. By definition of a proper-cut, there exists a node i ∉ U. Let s^π(t, σ(t)) = (s^π_e(t), e ∈ E) be the link-activation vector chosen by policy π in slot t, upon observing the current configuration σ(t). The maximum number of packets that can be transmitted across the cut U in slot t is upper-bounded by the total capacity of all activated links across the cut-set U, which is given by $\sum_{e \in E_U} c_e s^{\pi}_e(t, \sigma(t))$. Hence, the number of distinct packets received by node i by time T is upper-bounded by the total available capacity across the cut U up to time T, subject to link-activation decisions of the policy π. In other words, we have
$$R_i^{\pi}(T) \le \sum_{t=1}^{T} \sum_{e \in E_U} c_e s_e^{\pi}(t, \sigma(t)) = u \cdot \sum_{t=1}^{T} s^{\pi}(t, \sigma(t)) \qquad (7)$$
i.e.,
$$\frac{R_i^{\pi}(T)}{T} \le u \cdot \Big( \frac{1}{T} \sum_{t=1}^{T} s^{\pi}(t, \sigma(t)) \Big),$$
where the cut-vector u ∈ R^m corresponds to the cut-set U, as in Eqn. (1). It follows that
$$\lambda^* - \epsilon \ \overset{(a)}{\le}\ \min_{j \in V} \liminf_{T \to \infty} \frac{R_j^{\pi}(T)}{T} \ \le\ \liminf_{T \to \infty} \frac{R_i^{\pi}(T)}{T} \ \le\ \liminf_{T \to \infty} u \cdot \Big( \frac{1}{T} \sum_{t=1}^{T} s^{\pi}(t, \sigma(t)) \Big), \qquad (8)$$
where (a) follows from the fact that π is a broadcast policy
of rate at least λ∗ − ǫ. Since the above inequality holds for
all proper-cuts u, we have
$$\lambda^* - \epsilon \le \min_{u \in \mathcal{U}} \liminf_{T \to \infty} u \cdot \Big( \frac{1}{T} \sum_{t=1}^{T} s^{\pi}(t, \sigma(t)) \Big) \qquad (9)$$
The following technical lemma will prove to be useful for
deriving an upper-bound on the broadcast-capacity.
Lemma 1. For any policy π ∈ Π and any proper-cut vector u, there exists a collection of vectors {β_σ^π ∈ conv(M_σ)}_{σ∈Ξ}, such that the following holds w.p. 1:
$$\min_{u \in \mathcal{U}} \liminf_{T \to \infty} u \cdot \frac{1}{T} \sum_{t=1}^{T} s^{\pi}(t, \sigma(t)) \ =\ \min_{u \in \mathcal{U}} u \cdot \sum_{\sigma \in \Xi} p(\sigma) \beta_{\sigma}^{\pi}$$
The above lemma essentially replaces the minimum cut-set bound of arbitrary activations in (9) by the minimum cut-set bound of a stationary randomized activation, which is easier to handle. Combining Lemma 1 with Eqn. (9), we conclude that for the policy π ∈ Π, there exists a collection of vectors {β_σ^π ∈ conv(M_σ)}_{σ∈Ξ} such that
$$\lambda^* - \epsilon \le \min_{u \in \mathcal{U}} u \cdot \sum_{\sigma \in \Xi} p(\sigma) \beta_{\sigma}^{\pi} \qquad (10)$$
Maximizing the RHS of Eqn. (10) over all vectors βσ ∈
conv(Mσ ), σ ∈ Ξ and letting ǫ ց 0, we have the following
universal upper-bound on the broadcast capacity λ∗
$$\lambda^* \le \max_{\beta_\sigma \in \mathrm{conv}(M_\sigma)} \ \min_{u \in \mathcal{U}} \ u \cdot \sum_{\sigma \in \Xi} p(\sigma) \beta_\sigma \qquad (11)$$
Specializing the above bound for single-node cuts of the form
Uj = (V \ {j}) → {j}, ∀j ∈ V \ {r}, we have the following
upper-bound
$$\lambda^* \le \max_{\beta_\sigma \in \mathrm{conv}(M_\sigma)} \ \min_{j \in V \setminus \{r\}} \ u_j \cdot \sum_{\sigma \in \Xi} p(\sigma) \beta_\sigma \qquad (12)$$
It will be shown in Section 4 that in a DAG, our throughputoptimal policy π ∗ achieves a broadcast-rate equal to the RHS
of the bound (12). Thus we have the following theorem
Theorem 3.1. The broadcast-capacity λ∗DAG of a
time-varying wireless DAG is given by:
$$\lambda^*_{\mathrm{DAG}} = \max_{\beta_\sigma \in \mathrm{conv}(M_\sigma),\, \sigma \in \Xi} \ \min_{j \in V \setminus \{r\}} \ u_j \cdot \sum_{\sigma \in \Xi} p(\sigma) \beta_\sigma \qquad (13)$$
The above theorem shows that for computing the broadcast-capacity of a wireless DAG, taking the minimum over the single-node cut-sets {uj, j ∈ V \ {r}} suffices (c.f. Eqn. (11)).
3.2 An Illustrative Example of Capacity Computation
In this section, we work out a simple example to illustrate
the previous results.
Consider the simple wireless network shown in Figure (1), with node r being the source. The possible network configurations σi, i = 1, 2, 3, 4 are also shown. One packet can be transmitted over a link if it is ON. Moreover, since the links are assumed to be point-to-point, even if both the links ra and rb are ON at a slot t (i.e., σ(t) = σ3), a packet can be transmitted over one of the links only. Hence, the sets of feasible activations are given as follows:
$$M_{\sigma_1} = \{(1, 0)'\}, \quad M_{\sigma_2} = \{(0, 1)'\}, \quad M_{\sigma_3} = \{(1, 0)', (0, 1)'\}, \quad M_{\sigma_4} = \emptyset.$$
Here the first coordinate corresponds to activating the edge ra and the second coordinate corresponds to activating the edge rb.
[Figure 1: A Wireless Network and its four possible configurations (panels: Wireless network, Configuration σ1, Configuration σ2, Configuration σ3, Configuration σ4).]
To illustrate the effect of link-correlations on broadcast-capacity, we consider three different joint-distributions p(σ), all of them having the following marginals:
$$p(ra = \mathrm{ON}) = p(ra = \mathrm{OFF}) = \tfrac{1}{2}, \qquad p(rb = \mathrm{ON}) = p(rb = \mathrm{OFF}) = \tfrac{1}{2}.$$
Case 1: Zero correlations.
In this case, the links ra and rb are ON w.p. 1/2 independently at every slot, i.e.,
$$p(\sigma_i) = 1/4, \quad i = 1, 2, 3, 4 \qquad (14)$$
It can be easily seen that the broadcast capacity, as given in Eqn. (13), is achieved when in configurations σ1 and σ2 the edges ra and rb are activated w.p. 1 respectively, and in the configuration σ3 the edges ra and rb are activated with probability 1/2 and 1/2. In other words, an optimal activation schedule of a corresponding stationary randomized policy is given as follows:
$$\beta^*_{\sigma_1} = (1, 0)', \quad \beta^*_{\sigma_2} = (0, 1)', \quad \beta^*_{\sigma_3} = (\tfrac{1}{2}, \tfrac{1}{2})'.$$
The optimal broadcast capacity can be computed from Eqn. (13) to be λ* = 1/4 + 0 + 1/4 × 1/2 = 3/8.
Case 2: Positive correlations.
In this case, assume that the edges ra and rb are positively correlated, i.e., we have
$$p(\sigma_1) = p(\sigma_2) = 0; \quad p(\sigma_3) = p(\sigma_4) = \tfrac{1}{2}.$$
Then it is clear that half of the slots are wasted when both the links are OFF (i.e., in the configuration σ4). When the network is in configuration σ3, an optimal randomized activation is to choose one of the two links uniformly at random and send packets over it. Thus
$$\beta^*_{\sigma_3} = (\tfrac{1}{2}, \tfrac{1}{2})'.$$
The optimal broadcast-capacity, computed from Eqn. (13), is λ* = 1/4.
Case 3: Negative correlations.
In this case, we assume that the edges ra and rb are negatively correlated, i.e., we have
$$p(\sigma_1) = p(\sigma_2) = \tfrac{1}{2}; \quad p(\sigma_3) = p(\sigma_4) = 0.$$
It is easy to see that in this case, a capacity-achieving activation strategy is to send packets over whichever link is ON. The broadcast-capacity in this case is λ* = 1/2, the highest among the above three cases.
In this example, with an arbitrary joint distribution of network-configurations {p(σi), i = 1, 2, 3, 4}, it is a matter of simple calculation to obtain the optimal activations β*_{σi} in Eqn. (13). However, it is clear that for an arbitrary network with arbitrary activations M and configuration sets Ξ, evaluating (13) is non-trivial. In the following section we study this problem under some simplifying assumptions.
p(σ1 ) = p(σ2 ) =
3.3 Efficient Computation of Broadcast Capacity
In this section we study the problem of efficient computation of the Broadcast Capacity λ∗ of a wireless DAG, given
by Eqn. (13). In particular, we show that when the number
of possible network configurations |Ξ|(n) grows polynomially
with n (the number of nodes in the network), there exists
a strongly polynomial-time algorithm to compute λ∗ under
the primary-interference constraint. Polynomially-bounded
network-configurations arise, for example, when the set Ξ(n)
consists of all subgraphs of the graph G with at most d number of edges, for some fixed integer d. In this case |Ξ(n)| can
be bounded as follows:
$$|\Xi|(n) \le \sum_{k=0}^{d} \binom{m}{k} = O(n^{2d}),$$
where m (= O(n^2)) is the number of edges in the graph G.
Theorem 3.2 (Efficient Computation of λ∗ ).
Suppose that there exists a polynomial q(n) such that,
for a wireless DAG network G with n nodes, the number
of possible network configurations |Ξ|(n) is bounded
polynomially in n, i.e., |Ξ|(n) = O(q(n)). Then, there
exists a strongly polynomial-time algorithm to compute the
broadcast-capacity of the network under the primary
interference constraints.
Although only polynomially many network configurations
are allowed, we emphasize that Theorem (3.2) is highly nontrivial. This is because, each network-configuration σ ∈ Ξ itself contains exponentially many possible activations (matchings). The key combinatorial result that leads to Theorem
(3.2) is the existence of an efficient separator oracle for the
matching-polytope for any arbitrary graph [22]. We first
reduce the problem of broadcast-capacity computation of a
DAG to an LP with exponentially many constraints. Then
invoking the above separator oracle, we show that this LP
can be solved in strongly polynomial-time.
Proof. See Appendix 9.1.
3.4 Simple Bounds on λ∗
Using Theorem (3.2) we can, in principle, compute the
broadcast-capacity λ∗ of a wireless DAG with polynomially many network configurations. However, the complexity of the exact computation of λ∗ grows substantially with
the number of the possible configurations |Ξ|(n). Moreover,
Theorem (3.2) does not apply when |Ξ|(n) can no longer
be bounded by a polynomial in n. A simple example of
exponentially large |Ξ|(n) is when a link e is ON w.p. pe
independently at every slot, for all e ∈ E.
To address this issue, we obtain bounds on λ∗ , whose computational complexity is independent of the size of |Ξ|. These
bounds are conveniently expressed in terms of the broadcastcapacity of the static network G(V, E) without time-variation,
i.e. when |Ξ| = 1 and Eσ = E, σ ∈ Ξ. Let us denote the
broadcast-capacity of the static network by λ∗stat . Specializing Eqn. (13) to this case, we obtain
$$\lambda^*_{\mathrm{stat}} = \max_{\beta \in \mathrm{conv}(M)} \ \min_{j \in V \setminus \{r\}} \ u_j \cdot \beta. \qquad (15)$$
Using Theorem (3.2), λ∗stat can be computed in poly-time
under the primary-interference constraint.
Now consider an arbitrary joint distribution p(σ) such that
each link is ON uniformly with probability p, i.e.,
$$\sum_{\sigma \in \Xi : \sigma(e) = 1} p(\sigma) = p, \quad \forall e \in E. \qquad (16)$$
We have the following bounds:
Lemma 2 (Bounds on Broadcast Capacity). p λ∗stat ≤ λ∗ ≤ λ∗stat.
Proof. See Appendix 9.3.
Generalization of the above Lemma to the setting, where
the links are ON with non-uniform probabilities, may also
be obtained in a similar fashion.
Note that, in our example of Section 3.2, the bounds in Lemma 2 are tight. In particular, here the value of the parameter is p = 1/2; the lower-bound is attained in case (2) and the upper-bound is attained in case (3).
The above lemma immediately leads to the following corollary:
Corollary 3.3. (Approximation-algorithm for
computing λ∗ ). Assume that, under the stationary
distribution p(σ), probability that any link is ON is p,
uniformly for all links. Then, there exists a poly-time
p-approximation algorithm to compute the broadcastcapacity λ∗ of a DAG, under the primary-interference
constraints.
Proof. See Appendix 9.4.
In the following section, we are concerned with designing
a dynamic and throughput-optimal broadcast policy for a
time-varying wireless DAG network.
4. THROUGHPUT-OPTIMAL BROADCAST POLICY FOR WIRELESS DAGS
The classical approach of solving the throughput-optimal
broadcast problem in the case of a static, wired network is
to compute a set of edge-disjoint spanning trees of maximum cardinality (by invoking Edmonds’ tree-packing theorem [20]) and then routing the incoming packets to all nodes
via these pre-computed trees [21]. In the time-varying wireless setting that we consider here, because of frequent and
random changes in topology, routing packets over a fixed set
of spanning trees is no longer optimal. In particular, part of the network might become disconnected from time-to-time, and it is not clear how to select an optimal set of trees
to disseminate packets. The problem becomes even more
complicated when the underlying statistical model of the
network-connectivity process (in particular, the stationary
distribution {p(σ), σ ∈ Ξ}) is unknown, which is often the
case in mobile adhoc networks. Furthermore, wireless interference constraints add another layer of complexity, rendering the optimal dynamic broadcasting problem in wireless
networks extremely challenging.
In this section we propose an online, dynamic, throughputoptimal broadcast policy for time-varying wireless DAG networks, that does not need to compute or maintain any global
topological structures, such as spanning trees. Interestingly,
we show that the broadcast-algorithm that was proposed
in [23] for static wireless networks generalizes well to the
time-varying case. As in [23], our algorithm also enjoys the
attractive feature of in-order packet delivery. The key difference between the algorithm in [23] and our dynamic algorithm is in link-scheduling. In particular, in our algorithm,
the activation sets are chosen based on current networkconfiguration σ(t).
4.1 Throughput-Optimal Broadcast Policy π ∗
All policies π ∈ Π that we consider in this paper comprise the following two sub-modules, which are executed at every
time-slot t:
• π(A) (Activation-module): activates a subset of links,
subject to the interference constraint and the current
network-configuration σ(t).
• π(S) (Packet-Scheduling module): schedules a subset of packets over the activated links.
Following the treatment in [23], we first restrict our attention
to a sub-space Πin−order , in which the broadcast-algorithm
is required to follow the so-called in-order delivery property,
defined as follows
Definition 3 (Policy-space Π^{in-order} [23]). A policy π belongs to the space Π^{in-order} if all incoming packets are serially indexed as {1, 2, 3, . . .} according to their order of arrival at the source r and a node can receive a packet p at time t, if and only if it has received the packets {1, 2, . . . , p − 1} by the time t.
As a consequence of the in-order delivery, the state of received packets in the network at time-slot t may be succinctly represented by the n-dimensional vector R(t), where
Ri (t) denotes the index of the latest packet received by node
i by time t. We emphasize that this succinct network-state
representation by the vector R(t) is valid only under the action of policies in the space Πin−order . This compact representation of the packet-state results in substantial simplification of the overall state-space description. This is because,
to completely specify the current packet-configurations in
the network in the general policy-space Π, we need to specify the identity of each individual packets that are received
by different nodes.
To exploit the special structure that a directed acyclic graph
offers, it would be useful to constrain the packet-scheduler
π(S) further to the following policy-space Π∗ ⊂ Πin−order .
Definition 4 (Policy-space Π∗ ⊂ Πin−order [23]). A
broadcast policy π belongs to the space Π∗ if π ∈ Πin−order
and π satisfies the additional constraint that a packet p can
be received by a node j at time t if all in-neighbours of the
node j have received the packet p by the time t.
The above definition is further illustrated in Figure 2. The
variables Xj (t) and i∗t (j) appearing in the Figure are defined
subsequently in Eqn. (19).
[Figure 2: A node j with in-neighbours a, b and c, where R_a(t) = 18, R_b(t) = 15, R_c(t) = 14 and R_j(t) = 10. Caption: Under a policy π ∈ Π∗, the set of packets available for transmission to node j at slot t is {11, 12, 13, 14}, which are available at all in-neighbors of node j. The in-neighbor of j inducing the smallest packet deficit is i∗_t(j) = c, and X_j(t) = 4.]
It is easy to see that for all policies π ∈ Π∗, the packet-scheduler π(S) is completely specified. Hence, to specify a policy in the space Π∗, we need to define the activation-module π(A) only.
Towards this end, let µij (t) denote the rate (in packets per
slot) allocated to the edge (i, j) in the slot t by a policy
π ∈ Π∗ , for all (i, j) ∈ E. Note that, the allocated rate µ(t)
is constrained by the current network configuration σ(t) at
slot t. In other words, we have
µ(t) ∈ c ⊙ Mσ(t) , ∀t
(17)
This implies that, under any randomized activation
Eµ(t) ∈ c ⊙ conv(Mσ(t) ), ∀t
(18)
In the following lemma, we show that for all policies π ∈ Π∗ ,
certain state-variables X(t), derived from the state-vector
R(t), satisfy so-called Lindley recursion [15] of queuing theory. Hence these variables may be thought of as virtual
queues. This technical result will play a central role in deriving a Max-Weight type throughput-optimal policy π ∗ , which
is obtained by stochastically stabilizing these virtual-queues.
For each j ∈ V \ {r}, define
$$X_j(t) = \min_{i \in \partial^{in}(j)} \big( R_i(t) - R_j(t) \big) \qquad (19)$$
$$i_t^*(j) = \arg\min_{i \in \partial^{in}(j)} \big( R_i(t) - R_j(t) \big), \qquad (20)$$
where in Eqn. (20), ties are broken lexicographically. The
variable Xj (t) denotes the minimum packet deficit of node
j with respect to any of its in-neighbours. Hence, from the
definition of the policy-space Π∗ , it is clear that Xj (t) is the
maximum number of packets that a node j can receive from
its in-neighbours at time t, under any policy in Π∗ .
The following lemma proves a “queuing-dynamics” of the
variables Xj (t), under any policy π ∈ Π∗ .
Lemma 3 ([23]). Under all policies π ∈ Π∗, we have
$$X_j(t+1) \le \Big( X_j(t) - \sum_{k \in \partial^{in}(j)} \mu_{kj}(t) \Big)^{+} + \sum_{m \in \partial^{in}(i_t^*(j))} \mu_{m\, i_t^*(j)}(t) \qquad (21)$$
Lemma (3) shows that the variables X_j(t), j ∈ V \ {r}, satisfy Lindley recursions in the policy-space Π∗. Interestingly, unlike the corresponding unicast problem [25], there is no “physical queue” in the system.
Continuing correspondence with the unicast problem, the next lemma shows that any activation module π(A) that “stabilizes” the virtual queues X(t) for all arrival rates λ < λ∗ constitutes a throughput-optimal broadcast policy for a wireless DAG network.
Lemma 4. Suppose that the underlying topology of the wireless network is a DAG. If, under the action of a broadcast policy π ∈ Π∗, for all arrival rates λ < λ∗, the virtual queue process {X(t)}∞_0 is rate-stable, i.e.,
$$\limsup_{T \to \infty} \frac{1}{T} \sum_{j \ne r} X_j(T) = 0, \quad \text{w.p. } 1,$$
then π is a throughput-optimal broadcast policy for the DAG network.
Proof. See Appendix (9.5).
Equipped with Lemma (4), we now set out to derive a dynamic activation-module π∗(A) to stabilize the virtual-queue process {X(t)}∞_0 for all arrival rates λ < λ∗. Formally, the structure of the module π∗(A) is given by a mapping of the following form:
$$\pi^*(A) : (X(t), \sigma(t)) \to M_{\sigma(t)} \qquad (22)$$
Thus, the module π∗(A) is stationary and dynamic as it depends on the current value of the state-variables and the network-configuration only. This activation-module is different from the policy described in [23] as the latter is meant for static wireless networks and hence does not take into account the time-variation of network configurations, which is the focus of this paper.
To describe π∗(A), we first define the node-set
$$K_j(t) = \{ m \in \partial^{out}(j) : j = i_t^*(m) \},$$
where the variables i∗_t(m) are defined earlier in Eqn. (20). The activation-module π∗(A) is described in Algorithm 1. The resulting policy in the space Π∗ with the activation-module π∗(A) is called π∗.
Algorithm 1: A Throughput-optimal Activation Module π∗(A)
1: To each link (i, j) ∈ E, assign a weight as follows:
$$W_{ij}(t) = \begin{cases} X_j(t) - \sum_{k \in K_j(t)} X_k(t), & \text{if } \sigma_{(i,j)}(t) = 1 \\ 0, & \text{o.w.} \end{cases} \qquad (23)$$
2: Select an activation s∗(t) ∈ M_{σ(t)} as follows:
$$s^*(t) \in \arg\max_{s \in M_{\sigma(t)}} \ s \cdot \big( c \odot W(t) \big) \qquad (24)$$
3: Allocate rates on the links as follows:
$$\mu^*(t) = c \odot s^*(t) \qquad (25)$$
Note that, in steps (1) and (2) above, the computation of link-weights and link-activations depends explicitly on the current network-configuration σ(t). As anticipated, in the following lemma, we show that the activation-module π∗(A) stochastically stabilizes the virtual-queue process {X(t)}∞_0.
Lemma 5. For all arrival rates λ < λ∗, under the action of the policy π∗ in a DAG, the virtual-queue process {X(t)}∞_0 is rate-stable, i.e.,
$$\limsup_{T \to \infty} \frac{1}{T} \sum_{j \ne r} X_j(T) = 0, \quad \text{w.p. } 1.$$
The proof of this lemma is centered around a Lyapunov-drift argument [18]. Its complete proof is provided in Appendix (9.6).
Combining the lemmas (4) and (5), we immediately obtain the main result of this section.
Theorem 4.1. The policy π∗ is a throughput-optimal broadcast policy in a time-varying wireless DAG network.
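For concreteness, the following Python sketch spells out one slot of Algorithm 1 on a small DAG. The matching enumeration is deliberately brute-force (adequate for toy instances under the primary-interference model), and the graph, capacities and state values are our own toy data, not taken from the paper's experiments.

```python
from itertools import combinations

def one_slot_activation(edges, capacity, R, sigma, in_nb, out_nb, source="r"):
    """One slot of the activation module pi*(A) (Algorithm 1).

    edges    -- list of directed edges (i, j) fixing the coordinate order
    capacity -- dict edge -> c_ij
    R        -- dict node -> index of the latest in-order packet received
    sigma    -- dict edge -> 1 if the link is currently ON, else 0
    in_nb, out_nb -- dicts node -> list of in/out-neighbours
    Returns the chosen activation s*(t), a matching of ON links.
    """
    nodes = [v for v in R if v != source]
    # Virtual queues X_j(t) and the minimising in-neighbour i*_t(j), Eqns (19)-(20).
    X = {j: min(R[i] - R[j] for i in in_nb[j]) for j in nodes}
    i_star = {j: min(in_nb[j], key=lambda i: R[i] - R[j]) for j in nodes}
    # K_j(t): out-neighbours whose minimising in-neighbour is j.
    K = {j: [m for m in out_nb.get(j, []) if m != source and i_star[m] == j]
         for j in R}
    # Step 1: link weights, Eqn (23); OFF links get weight 0.
    W = {e: (X[e[1]] - sum(X[k] for k in K[e[1]])) if sigma[e] else 0.0
         for e in edges}
    # Step 2: max-weight matching among ON links, Eqn (24) (brute force).
    on_edges = [e for e in edges if sigma[e]]
    best, best_val = [], 0.0
    for r in range(len(on_edges) + 1):
        for cand in combinations(on_edges, r):
            touched = [v for e in cand for v in e]
            if len(touched) != len(set(touched)):      # shares a node: not a matching
                continue
            val = sum(capacity[e] * W[e] for e in cand)
            if val > best_val:
                best, best_val = list(cand), val
    return best   # Step 3 then allocates mu*(t) = c on the chosen links

# Toy DAG: r -> a, r -> b, a -> b; both in-edges of b are ON, r -> a is OFF.
edges = [("r", "a"), ("r", "b"), ("a", "b")]
chosen = one_slot_activation(
    edges,
    capacity={e: 1.0 for e in edges},
    R={"r": 10, "a": 7, "b": 4},
    sigma={("r", "a"): 0, ("r", "b"): 1, ("a", "b"): 1},
    in_nb={"a": ["r"], "b": ["r", "a"]},
    out_nb={"r": ["a", "b"], "a": ["b"]},
)
print(chosen)   # one max-weight link into node b is activated
```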
5. THROUGHPUT-OPTIMAL BROADCASTING WITH INFREQUENT INTER-NODE COMMUNICATION
In practical mobile wireless networks, it is unrealistic to
assume knowledge of network-wide packet-state information
by every node at every slot. This is especially true in the case
of time-varying wireless networks, where network-connectivity
changes frequently. In this section we extend the main results of section 4 by considering the setting where the nodes
make control decisions with imperfect packet-state information that they currently possess. We will show that the dynamic broadcast-policy π ∗ retains its throughput-optimality
even in this challenging scenario.
State-Update Model.
We assume that two nodes i and j can mutually update
their knowledge of the set of packets received by the other
node, only at those slots with positive probability, when
the corresponding wireless-link (i, j) is in ON state. Otherwise, it continues working with the outdated packet stateinformation. Throughout this section, we assume that the
nodes have perfect information about the current networkconfiguration σ(t).
Suppose that, the latest time prior to time t when packetstate update was made across the link (i, j) is t − T(i,j) (t).
Here T(i,j) (t) is a random variable, supported on the set
of non-negative integers. Assume, for simplicity, that the network configuration process {σ(t)}∞_0 evolves according to a finite-state, positive recurrent Markov chain, with the stationary distribution {p(σ) > 0, σ ∈ Ξ}. With this assumption, T_{(i,j)}(t) is related to the first-passage time in the finite-state positive recurrent chain {σ(t)}∞_0. Using standard theory [7], it can be shown that the random variable $T(t) \equiv \sum_{(i,j) \in E} T_{(i,j)}(t)$ has bounded expectation for all time t.
Analysis of π ∗ with Imperfect Packet-State Information.
Consider running the policy π∗, where each node j now computes the weights W′_ij(t), given by Eqn. (23), of the incoming links (i, j) ∈ E, based on the latest packet-state information available to it. In particular, for each of its in-neighbours i ∈ ∂^in(j), the node j possesses the following information about the number of packets received by node i:
$$R_i'(t) = R_i(t - T_{(i,j)}(t)) \qquad (26)$$
Now, if the packet-scheduler module π′(S) of a broadcast-policy π′ takes scheduling decisions based on the imperfect state-information R′(t) (instead of the true state R(t)), it still retains the following useful property:
Lemma 6. π′ ∈ Π∗.
Proof. See Appendix (9.7).
The above lemma states that the policy π′ inherits the in-order delivery property and the in-neighbour packet delivery constraint of the policy-space Π∗.
From Eqn. (23) it follows that the computation of link-weights {W_ij(t), i ∈ ∂^in(j)} by node j requires packet-state information of the nodes that are located within 2 hops from the node j. Thus, it is natural to expect that, with an ergodic state-update process, the weights W′_ij(t), computed from the imperfect packet-state information, will not differ too much from the true weights W_ij(t) on average. Indeed, we can bound the difference between the link-weights W′_ij(t), used by policy π′, and the true link-weights W_ij(t), as follows.
Lemma 7. There exists a finite constant C such that the expected weight W′_ij(t) of the link (i, j), locally computed by the node j using the random update process, differs from the true link-weight W_ij(t) by at most C, i.e.
$$|\mathbb{E} W_{ij}'(t) - W_{ij}(t)| \le C \qquad (27)$$
The expectation above is taken with respect to the random packet-state update process.
Proof. See Appendix (9.8).
From Lemma (7) it follows that the policy π′, in which link-weights are computed using imperfect packet-state information, is also a throughput-optimal broadcast policy for a wireless DAG. Its proof is very similar to the proof of Theorem (4.1). However, since the policy π′ makes scheduling decisions using W′(t) instead of W(t), we need to appropriately bound the differences in drift using Lemma (7). The technical details are provided in Appendix (9.9).
Theorem 5.1. The policy π′ is a throughput-optimal broadcast algorithm in a time-varying wireless DAG.
6. NUMERICAL SIMULATION
We numerically simulate the performance of the proposed
dynamic broadcast-policy on the 3 × 3 grid network, shown
in Figure 3. All links are assumed to be of unit capacity.
Wireless link activations are subject to primary interference
constraints, i.e., at every slot, we may activate a subset of
links which form a Matching [28] of the underlying topology.
External packets arrive at the source node r according to a
Poisson process of rate λ packets per slot. The following
proposition shows that, the broadcast capacity λ∗stat of the
static 3 × 3 wireless grid (i.e., when all links are ON with
probability 1 at every slot) is 2/5.
7. CONCLUSION
Proposition 6.1. The broadcast-capacity λ∗stat of
the static 3 × 3 wireless grid-network in Figure 3 is 25 .
′
Average Broadcast-Delay Dpπ (λ)
See Appendix (9.10) for the proof.
In our numerical simulation, the time-variation of the network is modeled as follows: link-states are assumed to evolving in an i.i.d. fashion; each link is ON with probability p at
every slot, independent of everything else. Here 0 < p ≤ 1 is
the connectivity-parameter of the network. Thus, for p = 1
we recover the static network model of [23]. We also assume
that the nodes have imperfect packet-state information as in
Section 5. Hence, two nodes i and j can directly exchange
packet state-information, only when the link (i, j) (if any)
is ON.
′
The average broadcast-delay Dpπ (λ) is plotted in Figure 4
as a function of the packet arrival rate λ. The broadcastdelay of a packet is defined as the number of slots the packet
takes to reach all nodes in the network after its arrival. Because of the throughput-optimality of the policy π ′ (Theorem (5.1)), the broadcast-capacity λ∗ (p) of the network, for
a given value of p, may be empirically evaluated from the
′
λ-intercept of vertical asymptote of the Dpπ (λ) − λ curve.
As evident from the plot, for p = 1, the proposed dynamic
algorithm achieves all broadcast rates below λ∗stat = 25 = 0.4.
This shows the throughput-optimality of the algorithm π ′ .
It is evident from the Figure 4 that the broadcast capacity λ∗ (p) is non-decreasing in the connectivity-parameter p,
i.e., λ∗ (p1 ) ≥ λ∗ (p2 ) for p1 ≥ p2 . We observe that, with
i.i.d. connectivity, the capacity bounds given in Lemma (2)
are not tight, in general. Hence the lower-bound of pλ∗stat
is a pessimistic estimate of the actual broadcast capacity
′
λ∗ (p) of the DAG. The plot also reveals that, Dpπ (λ) is nondecreasing in λ for a fixed p and non-increasing in p for a
fixed λ, as expected.
p = 0.4
p = 0.6
p=1
Packet Arrival Rate λ
′
Figure 4: Plot of average broadcast-delay Dpπ (λ), as
a function of the packet arrival rates λ. The underlying wireless network is the 3 × 3 grid, shown in
Figure 3, with primary interference constraints.
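A minimal sketch of the simulation setup described above (i.i.d. ON/OFF links with connectivity parameter p, Poisson(λ) arrivals at the source r, and broadcast-delay measurement on a 3 × 3 grid DAG) is given below. The DAG orientation of the grid, the helper `schedule_pi_prime`, and its return format are assumptions of this sketch and stand in for the actual policy implementation.

# Schematic simulation loop: i.i.d. link states, Poisson arrivals at r, and
# broadcast-delay measurement. `schedule_pi_prime` is a hypothetical stand-in
# for the policy pi'; it should return, per activated link (i, j), the set of
# packet ids forwarded on that link in the current slot.
import random

def poisson(lam):
    # Knuth's method for sampling a Poisson random variable
    threshold, k, prod = pow(2.718281828459045, -lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= threshold:
            return k
        k += 1

def simulate(p, lam, T, schedule_pi_prime):
    nodes = ['r', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h']      # 3x3 grid, r = source
    edges = [('r', 'a'), ('a', 'b'), ('r', 'c'), ('a', 'd'), ('b', 'e'),
             ('c', 'd'), ('d', 'e'), ('c', 'f'), ('d', 'g'), ('e', 'h'),
             ('f', 'g'), ('g', 'h')]                            # one DAG orientation of the grid
    received = {v: set() for v in nodes}                        # packets held by each node
    arrival_slot, delays, next_id = {}, [], 0
    for t in range(T):
        for _ in range(poisson(lam)):                           # exogenous arrivals at r
            received['r'].add(next_id); arrival_slot[next_id] = t; next_id += 1
        on_links = [e for e in edges if random.random() < p]    # each link ON w.p. p
        for (i, j), pkts in schedule_pi_prime(on_links, received).items():
            received[j] |= pkts                                 # forward packets on activated links
        for pkt in list(arrival_slot):                          # broadcast delay of completed packets
            if all(pkt in received[v] for v in nodes):
                delays.append(t - arrival_slot[pkt]); del arrival_slot[pkt]
    return sum(delays) / max(len(delays), 1)

The empirical curve of this average delay against λ, for several values of p, is what Figure 4 reports for the actual policy π ′ .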
7. CONCLUSION
In this paper we studied the problem of throughput-optimal broadcasting in wireless directed acyclic networks with point-to-point links and time-varying connectivity. We characterized the broadcast-capacity of such networks and derived efficient algorithms for computing it, both exactly and approximately. Next, we proposed a throughput-optimal broadcast policy for such networks. This algorithm does not require any spanning tree to be maintained and operates based on local information, which is updated sporadically. The algorithm is robust and does not require statistics of the arrival or the connectivity process, thus making it useful for mobile wireless networks. The theoretical results are supplemented with illustrative numerical simulations. Future work would be to remove the restriction to directed acyclic topologies. It would also be interesting to design broadcast algorithms for wireless networks with point-to-multi-point links.
8. REFERENCES
[1] P. Agrawal and N. Patwari. Correlated link shadow
fading in multi-hop wireless networks. Wireless
Communications, IEEE Transactions on,
8(8):4024–4036, 2009.
[2] D. Bertsimas and J. N. Tsitsiklis. Introduction to
linear optimization, volume 6. Athena Scientific
Belmont, MA, 1997.
[3] A. Beygelzimer, A. Kershenbaum, K.-W. Lee, and
V. Pappas. The benefits of directional antennas in
heterogeneous wireless ad-hoc networks. In Mobile Ad
Hoc and Sensor Systems. MASS 2008. 5th IEEE
International Conference on, pages 442–449. IEEE.
[4] Y. Chu, S. Rao, S. Seshan, and H. Zhang. Enabling
conferencing applications on the internet using an
overlay multicast architecture. ACM SIGCOMM
computer communication review, 31(4):55–67, 2001.
[5] S. Dasgupta, C. H. Papadimitriou, and U. Vazirani.
Algorithms. McGraw-Hill, Inc., 2006.
[6] R. Diestel. Graph theory. Grad. Texts in Math, 2005.
[7] R. G. Gallager. Discrete stochastic processes, volume
321. Springer Science & Business Media, 2012.
[8] R. Gandhi, S. Parthasarathy, and A. Mishra.
Minimizing broadcast latency and redundancy in ad
hoc networks. In Proceedings of the 4th ACM
international symposium on Mobile ad hoc networking
& computing, pages 222–232. ACM, 2003.
[9] M. Ge, S. V. Krishnamurthy, and M. Faloutsos.
Overlay multicasting for ad hoc networks. In
Proceedings of Third Annual Mediterranean Ad Hoc
Networking Workshop, 2004.
[10] T. Ho and H. Viswanathan. Dynamic algorithms for
multicast with intra-session network coding. In Proc.
43rd Annual Allerton Conference on Communication,
Control, and Computing, 2005.
[11] S. C. Huang, P.-J. Wan, X. Jia, H. Du, and W. Shang.
Minimum-latency broadcast scheduling in wireless ad
hoc networks. In 26th IEEE INFOCOM 2007.
[12] L. Junhai, Y. Danxia, X. Liu, and F. Mingyu. A
survey of multicast routing protocols for mobile
ad-hoc networks. Communications Surveys Tutorials,
IEEE, 11(1):78–91, First 2009.
[13] A. Kamthe, M. A. Carreira-Perpiñán, and A. E. Cerpa. Improving wireless link simulation using multilevel Markov models. ACM Trans. Sen. Netw., 10(1):17:1–17:28, Dec. 2013.
[14] X. Lin, N. Shroff, and R. Srikant. A tutorial on cross-layer optimization in wireless networks. Selected Areas in Communications, IEEE Journal on, 24(8):1452–1463, Aug 2006.
[15] D. V. Lindley. The theory of queues with a single server. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 48, pages 277–289. Cambridge Univ Press, 1952.
[16] J. Macker, J. Klinker, and M. Corson. Reliable multicast data delivery for military networking. In Military Communications Conference, 1996. MILCOM '96, Conference Proceedings, IEEE, volume 2, pages 399–403, Oct 1996.
[17] J. Matoušek. Lectures on discrete geometry, volume 108. Springer New York, 2002.
[18] M. J. Neely. Stochastic network optimization with application to communication and queueing systems. Synthesis Lectures on Communication Networks, 3(1):1–211, 2010.
[19] N. Patwari and P. Agrawal. Effects of correlated shadowing: Connectivity, localization, and RF tomography. In Information Processing in Sensor Networks, 2008. IPSN'08. International Conference on, pages 82–93. IEEE, 2008.
[20] R. Rustin. Combinatorial Algorithms. Algorithmics Press, 1973.
[21] S. Sarkar and L. Tassiulas. A framework for routing and congestion control for multicast information flows. Information Theory, IEEE Transactions on, 48(10):2690–2708, 2002.
[22] A. Schrijver. Combinatorial optimization: polyhedra and efficiency, volume 24. Springer, 2003.
[23] A. Sinha, G. Paschos, C.-P. Li, and E. Modiano. Throughput-optimal broadcast on directed acyclic graphs. In Computer Communications (INFOCOM), 2015 IEEE Conference on, pages 1248–1256.
[24] D. Smith. IP TV bandwidth demand: Multicast and channel surfing. In INFOCOM 2007. 26th IEEE International Conference on Computer Communications, pages 2546–2550, May 2007.
[25] L. Tassiulas and A. Ephremides. Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks. Automatic Control, IEEE Transactions on, 37(12):1936–1948, 1992.
[26] D. Towsley and A. Twigg. Rate-optimal decentralized broadcasting: the wireless case, 2008.
[27] D. Tse and P. Viswanath. Fundamentals of wireless communication. Cambridge University Press, 2005.
[28] D. B. West et al. Introduction to graph theory, volume 2. Prentice Hall, Upper Saddle River, 2001.
[29] J. Yuan, Z. Li, W. Yu, and B. Li. A cross-layer optimization framework for multihop multicast in wireless mesh networks. Selected Areas in Communications, IEEE Journal on, 24(11), 2006.
9. APPENDIX
9.1 Proof of Theorem 3.2
Under the primary interference constraint, the set of feasible activations of the graphs are matchings [28]. To solve for
the optimal broadcast capacity in a time-varying network,
first we recast Eqn. (13) as an LP. Although this LP has
exponentially many constraints, using a well-known separation oracle for matchings, we show how to solve this LP in
strongly-polynomial time via the ellipsoid algorithm [2].
For a subset of edges E ′ ⊂ E, let χE′ ∈ {0, 1}|E| be its incidence vector, where χE′ (e) = 1 if e ∈ E ′ and is zero otherwise. Let
    Pmatching (G(V, E)) = convexhull({χM | M is a matching in G(V, E)}).
We have the following classical result by Edmonds [22].
Theorem 9.1. The set Pmatching (G(V, E)) is characterized by the set of all β ∈ R|E| such that:
    βe ≥ 0,   ∀e ∈ E
    Σ_{e∈∂ in (v)∪∂ out (v)} βe ≤ 1,   ∀v ∈ V                       (28)
    Σ_{e∈E[U ]} βe ≤ (|U | − 1)/2,   ∀U ⊂ V, |U | odd
Here E[U ] is the set of edges with both end points in U.
Thus, following Eqn. (13), the broadcast capacity of a DAG can be obtained by the following LP:
    max λ                                                            (29)
Subject to,
    λ ≤ Σ_{e∈∂ in (v)} ce Σ_{σ∈Ξ} p(σ)βσ,e ,   ∀v ∈ V \ {r}          (30)
    βσ ∈ Pmatching (G(V, Eσ )),   ∀σ ∈ Ξ                              (31)
The constraint corresponding to σ ∈ Ξ in (31) refers to the set of linear constraints given in Eqn. (28) corresponding to the graph G(V, Eσ ), for each σ ∈ Ξ.
Invoking the equivalence of optimization and separation due to the ellipsoid algorithm [2], it follows that the LP (29) is solvable in poly-time if there exists an efficient separator-oracle for the set of constraints (30) and (31). With our assumption of polynomially many network configurations |Ξ|(n), there are only linearly many constraints (n − 1, to be precise) in (30), with polynomially many variables in each constraint. Thus the set of constraints (30) can be separated efficiently. Next we invoke a classic result from the combinatorial-optimization literature which shows the existence of efficient separators for the matching polytopes.
Theorem 9.2. [22] There exists a strongly poly-time algorithm that, given G = (V, E) and β ∈ R|E| , determines whether β satisfies (28) or outputs an inequality from (28) that is violated by β.
Hence, there exists an efficient separator for each of the constraints in (31). Since there are only polynomially many network configurations, this directly leads to Theorem 3.2.
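For illustration, a partial separation check for the first two families of constraints in (28) can be sketched as follows. Separating the odd-set constraints exactly requires additional machinery (see the separation results collected in [22]), so this sketch is not the full oracle of Theorem 9.2; it is only a hedged example of how a violated non-negativity or degree inequality would be reported.

# Partial separation check for the matching polytope of Theorem 9.1:
# verifies the non-negativity and degree constraints of (28) and returns a
# violated inequality if one exists.  Odd-set constraints are NOT handled here.
def separate_degree_constraints(edges, beta, tol=1e-9):
    """edges: list of (u, v) pairs; beta: dict mapping each edge to its value."""
    for e in edges:
        if beta[e] < -tol:
            return ('nonnegativity', e)            # beta_e >= 0 violated
    incident = {}
    for (u, v) in edges:
        incident.setdefault(u, []).append((u, v))
        incident.setdefault(v, []).append((u, v))
    for node, inc in incident.items():
        if sum(beta[e] for e in inc) > 1 + tol:
            return ('degree', node)                # sum of beta over edges at node exceeds 1
    return None                                     # no violation found in these two families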
9.2 Proof of Lemma 1
Proof. Fix a time T. For each configuration σ ∈ Ξ, let {tσ,i }_{i=1}^{Tσ} be the indices of the time-slots up to time T such that σ(t) = σ. Clearly we have
    Σ_{σ∈Ξ} Tσ = T.                                                  (32)
Since the process σ(t) is stationary ergodic, we have
    lim_{T→∞} Tσ /T = p(σ),  ∀σ ∈ Ξ,  w.p. 1.                        (33)
Using the countability of Ξ and invoking the union bound, we can strengthen the above conclusion as follows:
    lim_{T→∞} Tσ /T = p(σ),  w.p. 1,  ∀σ ∈ Ξ.
Hence, we can rewrite the time-average of the activations under any policy π as
    (1/T ) Σ_{t=1}^{T} s^π (t, σ(t)) = Σ_{σ∈Ξ} (Tσ /T ) (1/Tσ ) Σ_{i=1}^{Tσ} s^π (tσ,i , σ).        (34)
Since p(σ) > 0, ∀σ ∈ Ξ, the above implies that Tσ ր ∞ as T ր ∞ for every σ, w.p. 1. In the rest of the proof we will concentrate on a typical sample path {σ(t)}t≥1 having the above property.
For each σ ∈ Ξ, define the sequence {ζ^π_{σ,Tσ} }_{Tσ ≥1} by
    ζ^π_{σ,Tσ} = (1/Tσ ) Σ_{i=1}^{Tσ} s^π (tσ,i , σ).                 (35)
Since s^π (tσ,i , σ) ∈ Mσ for all i ≥ 1, convexity of the set conv(Mσ ) implies that ζ^π_{σ,Tσ} ∈ conv(Mσ ) for all Tσ ≥ 1. Since the set conv(Mσ ) is closed and bounded (and hence, compact), any sequence in it has a converging sub-sequence. Consider any set of converging sub-sequences {ζ^π_{σ,Tσk} }_{k≥1} , σ ∈ Ξ, such that
    lim_{k→∞} ζ^π_{σ,Tσk} = βσπ ,  ∀σ ∈ Ξ,                            (36)
where βσπ ∈ conv(Mσ ), since conv(Mσ ) is closed. Hence, from Eqn. (34) we have, w.p. 1,
    min_{u∈U} lim inf_{T→∞} u · (1/T ) Σ_{t=1}^{T} s^π (t, σ(t))
      = min_{u∈U} Σ_{σ∈Ξ} p(σ) lim inf_{Tσ →∞} u · ζ^π_{σ,Tσ}
      = min_{u∈U} Σ_{σ∈Ξ} p(σ) lim_{k→∞} u · ζ^π_{σ,Tσk}
      = min_{u∈U} Σ_{σ∈Ξ} p(σ) u · βσπ
      = min_{u∈U} u · Σ_{σ∈Ξ} p(σ)βσπ
      ≤ max_{βσ ∈conv(Mσ )} min_{u∈U} u · Σ_{σ∈Ξ} p(σ)βσ ,
which proves the lemma.
9.3 Proof of Lemma 2
9.3.1 Proof of the Upper-bound
Proof. Note that, for all σ ∈ Ξ, we have Eσ ⊂ E. Hence, it follows that
    Mσ ⊂ M,                                                          (37)
    βσ ∈ conv(Mσ ) =⇒ βσ ∈ conv(M).                                  (38)
Let an optimal solution to Eqn. (13) be obtained at {βσ∗ , σ ∈ Ξ}. Then from Eqn. (38), it follows that
    Σ_{σ∈Ξ} p(σ)βσ∗ ∈ conv(M).                                        (39)
Hence, combining Eqn. (37), (38) and (39), we have
    λ∗ = min_{j∈V \{r}} uj · Σ_{σ∈Ξ} p(σ)βσ∗ ≤ max_{β∈conv(M)} min_{j∈V \{r}} uj · β = λ∗stat ,
where the last equality uses the characterization (15). This proves the upper-bound λ∗ ≤ λ∗stat .
9.3.2 Proof of the Lower-bound
Since Mσ ⊂ M, the expression for the broadcast-capacity (13) may be re-written as follows:
    λ∗ = max_{βσ ∈M} min_{j∈V \{r}} Σ_{e∈∂ in (j)} ce Σ_{σ∈Ξ} p(σ)βσ (e)1(e ∈ σ).
Let β ∗ ∈ M be the optimal activation achieving the RHS of (15). Hence we can lower-bound λ∗ as follows:
    λ∗ ≥ min_{j∈V \{r}} Σ_{e∈∂ in (j)} ce β ∗ (e) Σ_{σ∈Ξ} p(σ)1(e ∈ σ)
       =(a) p min_{j∈V \{r}} Σ_{e∈∂ in (j)} ce β ∗ (e)
       = p min_{j∈V \{r}} uj · β ∗
       =(b) pλ∗stat .
Equality (a) follows from the assumption (16) and equality (b) follows from the characterization (15). This proves the lower-bound.
9.4 Proof of Corollary 3.3
Consider the optimal randomized-activation vector β ∗ ∈ M, corresponding to the stationary graph G(V, E) (15). By
Theorem (3.2), β ∗ can be computed in poly-time under the
primary interference constraint. Note that, by Caratheodory’s
theorem [17], the support of β ∗ may be bounded by |E|.
Hence it follows that λ∗stat (15) may also be computed in
poly-time.
From the proof of Lemma (2), it follows that by randomly activating β ∗ (i.e., βσ (e) = β ∗ (e)1(e ∈ σ), ∀σ ∈ Ξ) we obtain
a broadcast-rate equal to pλ∗stat where λ∗stat is shown to be
an upper-bound to the broadcast capacity λ∗ in Lemma (2).
Hence it follows that pλ∗stat constitutes a p-approximation
to the broadcast capacity λ∗ , which can be computed in
poly-time.
9.5 Proof of Lemma (4)
Assume that under the policy π ∈ Π∗ , the virtual queues Xj (t) are rate stable, i.e., limT→∞ Xj (T )/T = 0 a.s. for all j. Applying the union bound, it follows that
    lim_{T→∞} (1/T ) Σ_{j≠r} Xj (T ) = 0,  w.p. 1.                    (40)
Now consider any node j ≠ r in the network. We can construct a simple path p(r = uk → uk−1 → . . . → u1 = j) from the source node r to the node j by running the following path construction algorithm on the underlying graph G(V, E).
Algorithm 2 r → j Path Construction Algorithm
Require: DAG G(V, E), node j ∈ V
1: i ← 1
2: ui ← j
3: while ui ≠ r do
4:    ui+1 ← i∗t (ui )
5:    i ← i + 1
6: end while
At time t, the algorithm chooses the parent of a node ui in the path p as the one that has the least relative packet deficit as compared to ui (i.e., ui+1 = i∗t (ui )). Since the underlying graph G(V, E) is a connected DAG (i.e., there is a path from the source to every other node in the network), the above path construction algorithm always terminates with a path p(r → j). Note that the output path of the algorithm varies with time.
The number of distinct packets received by node j up to time T can be written as a telescoping sum of relative packet deficits along the path p, i.e.,
    Rj (T ) = Ru1 (T ) = Σ_{i=1}^{k−1} (Rui (T ) − Rui+1 (T )) + Ruk (T ) =(a) − Σ_{i=1}^{k−1} Xui (T ) + Rr (T ),      (41)
where the equality (a) follows from the observation that Xui (T ) = Qui+1 ui (T ) = Rui+1 (T ) − Rui (T ).
Since the variables Xi (t) are non-negative, we have Σ_{i=1}^{k−1} Xui (T ) ≤ Σ_{j≠r} Xj (T ). Thus, for each node j,
    (1/T ) Σ_{t=0}^{T−1} A(t) − (1/T ) Σ_{j≠r} Xj (T ) ≤ (1/T ) Rj (T ) ≤ (1/T ) Σ_{t=0}^{T−1} A(t).
Taking the limit as T → ∞ and using the strong law of large numbers for the arrival process and Eqn. (40), we have
    lim_{T→∞} Rj (T )/T = λ,  ∀j,  w.p. 1.
This concludes the proof.
9.6 Proof of Lemma (5)
We begin with a preliminary lemma.
Lemma 8. If we have
    Q(t + 1) ≤ (Q(t) − µ(t))+ + A(t),                                  (42)
where all the variables are non-negative and (x)+ = max{x, 0}, then
    Q2 (t + 1) − Q2 (t) ≤ µ2 (t) + A2 (t) + 2Q(t)(A(t) − µ(t)).
Proof. Squaring both sides of Eqn. (42) yields
    Q2 (t + 1) ≤ ((Q(t) − µ(t))+ )2 + A2 (t) + 2A(t)(Q(t) − µ(t))+
              ≤ (Q(t) − µ(t))2 + A2 (t) + 2A(t)Q(t),
where we use the facts that x2 ≥ (x+ )2 , Q(t) ≥ 0, and µ(t) ≥ 0. Rearranging the above inequality finishes the proof.
Applying Lemma 8 to the dynamics (21) of Xj (t) yields, for each node j ≠ r,
    Xj2 (t + 1) − Xj2 (t) ≤ B(t) + 2Xj (t) (Σ_{m∈V} µmi∗t (t) − Σ_{k∈V} µkj (t)),                    (43)
where B(t) ≤ c2max + max{a2 (t), c2max } ≤ a2 (t) + 2c2max , a(t) is the number of exogenous packet arrivals in a slot, and cmax , maxe∈E ce is the maximum capacity of the links. We assume the arrival process a(t) has bounded second moments; thus, there exists a finite constant B > 0 such that E[B(t)] ≤ E[a2 (t)] + 2c2max < B.
We define the quadratic Lyapunov function L(X(t)) = Σ_{j≠r} Xj2 (t). From (43), the one-slot Lyapunov drift, conditioned on the current network-configuration σ(t), satisfies
    ∆(X(t)|σ(t)) , E[L(X(t + 1)) − L(X(t)) | X(t), σ(t)]
                 = Σ_{j≠r} E[Xj2 (t + 1) − Xj2 (t) | X(t), σ(t)]                                    (44)
                 ≤ B|V | + 2 Σ_{j≠r} Xj (t) E[Σ_{m∈V} µmi∗t (t) − Σ_{k∈V} µkj (t) | X(t), σ(t)]     (45)
                 = B|V | − 2 Σ_{(i,j)∈E} E[µij (t) | X(t), σ(t)] (Xj (t) − Σ_{k∈Kj (t)} Xk (t))      (46)
                 = B|V | − 2 Σ_{(i,j)∈E} Wij (t) E[µij (t) | X(t), σ(t)].                            (47)
The broadcast-policy π ∗ is chosen to minimize the upper-bound of the conditional drift, given on the right-hand side of (47), among all policies in Π∗ .
Next, we construct a randomized scheduling policy π RAND ∈ Π∗ . Let βσ∗ ∈ conv(Mσ ) be the part of an optimal solution corresponding to σ(t) ≡ σ given by Eqn. (11). From Caratheodory's theorem [17], there exist at most (|E| + 1) link-activation vectors sσk ∈ Mσ and associated non-negative scalars {ασk } with Σ_{k=1}^{|E|+1} ασk = 1, such that
    βσ∗ = Σ_{k=1}^{|E|+1} ασk sσk .                                     (48)
Define the average (unconditional) activation vector
    β ∗ = Σ_{σ∈Ξ} p(σ)βσ∗ .                                             (49)
Hence, from Eqn. (11) we have
    λ∗ ≤ min_{U : a proper cut} Σ_{e∈EU } ce βe∗ .                      (50)
Suppose that the exogenous packet arrival rate λ is strictly less than the broadcast capacity λ∗ . Then there exists an ǫ > 0 such that λ + ǫ ≤ λ∗ . From (50), we have
    λ + ǫ ≤ min_{U : a proper cut} Σ_{e∈EU } ce βe∗ .                   (51)
For any network node v ≠ r, consider the proper cuts Uv = V \ {v}. Specializing the bound in (51) to these cuts, we have
    λ + ǫ ≤ Σ_{e∈EUv } ce βe∗ ,  ∀v ≠ r.                                (52)
Since the underlying network topology G = (V, E) is a DAG, there exists a topological ordering of the network nodes so that: (i) the nodes can be labelled serially as {v1 , . . . , v|V | }, where v1 = r is the source node with no in-neighbours and v|V | has no outgoing neighbours, and (ii) all edges in E are directed from vi → vj , i < j [5]. From (52), we define ql ∈ [0, 1] for each node vl such that
    ql Σ_{e∈EUvl } ce βe∗ = λ + ǫ l/|V |,   l = 2, . . . , |V |.         (53)
Consider the randomized broadcast policy π RAND ∈ Π∗ working as follows:
Stationary Randomized Policy π RAND :
(i) If the observed network-configuration at slot t is σ(t) = σ, the policy π RAND selects the feasible activation set sσk with probability ασk ;
(ii) For each incoming selected link e = (·, vl ) to node vl such that se (t) = 1, the link e is activated independently with probability ql ;
(iii) Activated links (note, not necessarily all the selected links) are used to forward packets, subject to the constraints that define the policy class Π∗ (i.e., in-order packet delivery and that a network node is only allowed to receive packets that have been received by all of its in-neighbours).
Note that this stationary randomized policy π RAND operates independently of the state of the received packets in the network, i.e., X(t). However, it depends on the current network-configuration σ(t). Since each network node j is relabelled as vl for some l, from (53) we have, for each node j ≠ r, that the total expected incoming transmission rate to the node j under the policy π RAND , averaged over all network states σ, satisfies
    Σ_{i:(i,j)∈E} E[µπRAND_ij (t) | X(t)] = Σ_{i:(i,j)∈E} E[µπRAND_ij (t)] = ql Σ_{e∈EUvl } ce βe∗ = λ + ǫ l/|V |.      (54)
Equation (54) shows that the randomized policy π RAND provides each network node j ≠ r with a total expected incoming rate strictly larger than the packet arrival rate λ via proper random link activations conditioned on the current network configuration. According to our notational convention, we have
    Σ_{i:(i,r)∈E} E[µπRAND_ir (t) | X(t)] = E[Σ_{i:(i,r)∈E} µπRAND_ir (t)] = λ.                      (55)
From (54) and (55), if node i appears before node j in the aforementioned topological ordering, i.e., i = vli and j = vlj for some li < lj , then
    Σ_{k:(k,i)∈E} E[µπRAND_ki (t)] − Σ_{k:(k,j)∈E} E[µπRAND_kj (t)] ≤ − ǫ/|V |.                      (56)
The above inequality will be used to show the throughput optimality of the policy π ∗ .
The drift inequality (45) holds for any policy π ∈ Π∗ . The broadcast policy π ∗ observes the states (X(t), σ(t)) and seeks to greedily minimize the upper-bound of the drift (47) at every slot. Comparing the actions taken by the policy π ∗ with those taken by the randomized policy π RAND in slot t, we have
    ∆π∗ (X(t)|σ(t)) ≤ B|V | − 2 Σ_{(i,j)∈E} E[µπ∗_ij (t) | X(t), σ(t)] Wij (t)                       (57)
                    ≤ B|V | − 2 Σ_{(i,j)∈E} E[µπRAND_ij (t) | X(t), σ(t)] Wij (t)
                    =(∗) B|V | − 2 Σ_{(i,j)∈E} E[µπRAND_ij (t) | σ(t)] Wij (t),                       (58)
where (∗) holds because π RAND operates independently of X(t). Taking the expectation of both sides w.r.t. the stationary process σ(t) and rearranging, we have
    ∆π∗ (X(t)) ≤ B|V | − 2 Σ_{(i,j)∈E} E[µπRAND_ij (t)] Wij (t)                                       (59)
               ≤ B|V | + 2 Σ_{j≠r} Xj (t) (Σ_{m∈V} E[µπRAND_mi∗t (t)] − Σ_{k∈V} E[µπRAND_kj (t)])
               ≤ B|V | − (2ǫ/|V |) Σ_{j≠r} Xj (t).                                                    (60)
Note that i∗t = arg mini∈In(j) Qij (t) for a given node j. Since node i∗t is an in-neighbour of node j, i∗t must lie before j in any topological ordering of the DAG. Hence, the last inequality of (60) follows directly from (56). Taking the expectation in (60) with respect to X(t), we have
    E[L(X(t + 1))] − E[L(X(t))] ≤ B|V | − (2ǫ/|V |) E||X(t)||1 ,
where || · ||1 is the ℓ1 -norm of a vector. Summing the above inequality over t = 0, 1, 2, . . . , T − 1 yields
    E[L(X(T ))] − E[L(X(0))] ≤ B|V |T − (2ǫ/|V |) Σ_{t=0}^{T−1} E||X(t)||1 .
Dividing the above by 2T ǫ/|V | and using L(X(t)) ≥ 0, we have
    (1/T ) Σ_{t=0}^{T−1} E||X(t)||1 ≤ B|V |2 /(2ǫ) + |V | E[L(X(0))]/(2T ǫ).
Taking a lim sup of both sides yields
    lim sup_{T→∞} (1/T ) Σ_{t=0}^{T−1} Σ_{j≠r} E[Xj (t)] ≤ B|V |2 /(2ǫ),                              (61)
which implies that all virtual queues Xj (t) are strongly stable [18]. Strong stability of Xj (t) implies that all virtual queues Xj (t) are rate stable [18, Theorem 2.8].
9.7 Proof of Lemma (6)
Proof. Recall the definition of the policy-space Π∗ . For every node i, Ri (t) is a non-decreasing function of t and Ri′ (t) ≤ Ri (t), ∀j ∈ ∂ out (i). Hence, if a packet p is allowed to be transmitted to a node j at time slot t by the policy π ′ , it is certainly allowed to be transmitted by the policy π ∗ ; outdated state-information may only prevent the transmission of a packet p at a time t which would otherwise be allowed by the policy π ∗ . As a result, the policy π ′ can never transmit a packet to a node j which is not present at all in-neighbours of the node j. This shows that π ′ ∈ Π∗ .
9.8 Proof of Lemma (7)
Consider the packet-state update process at node j. Since the capacities of the links are bounded by cmax , from Eqn. (26) and the fact that Ri (t) is non-decreasing, we have
    Ri (t) − T cmax ≤ Ri′ (t) ≤ Ri (t),  ∀i ∈ ∂ in (j),                                               (62)
where T denotes the (random) time elapsed since the last state update. Hence, from Eqn. (19), it follows that
    Xj (t) − T cmax ≤ Xj′ (t) ≤ Xj (t).                                                               (63)
Taking the expectation with respect to the random update process at the node j,
    Xj (t) − cmax ET ≤ EXj′ (t) ≤ Xj (t).                                                             (64)
In a similar fashion, since every in-neighbour i of a node k ∈ ∂ out (j) is at most 2 hops away from the node j, we have
    Rk (t) − T cmax ≤ Rk′ (t) ≤ Rk (t).
It follows that for all i ∈ ∂ in (k),
    (Ri (t) − Rk (t)) − T cmax ≤ Ri′ (t) − Rk′ (t) ≤ (Ri (t) − Rk (t)) + T cmax .
Hence,
    Xk (t) − T cmax ≤ Xk′ (t) ≤ Xk (t) + T cmax .
Again taking the expectation w.r.t. the random packet-state update process,
    Xk (t) − cmax ET ≤ EXk′ (t) ≤ Xk (t) + cmax ET.                                                   (65)
Combining Eqns. (64) and (65) using the linearity of expectation and using Eqn. (23), we have
    Wij (t) − ncmax ET ≤ EWij′ (t) ≤ Wij (t) + ncmax ET.
Thus Lemma (7) follows with C ≡ ncmax ET < ∞.
9.9 Proof of Theorem (5.1)
To prove the throughput-optimality claimed in Theorem (5.1), we work with the same Lyapunov function L(X(t)) = Σ_{j≠r} Xj2 (t) as in Theorem (4.1) and follow the same steps until Eqn. (47) to obtain the following upper-bound on the conditional drift:
    ∆π′ (X(t)|X(t), X ′ (t), σ(t)) ≤ B|V | − 2 Σ_{(i,j)∈E} Wij (t) E(µπ′_ij (t)|X(t), X ′ (t), σ(t)).   (66)
Since the policy π ′ makes scheduling decisions based on the locally computed weights Wij′ (t), by the definition of the policy π ′ we have, for any policy π ∈ Π,
    Σ_{(i,j)∈E} Wij′ (t) E(µπ′_ij (t)|X(t), X ′ (t), σ(t)) ≥ Σ_{(i,j)∈E} Wij′ (t) E(µπ_ij (t)|X(t), X ′ (t), σ(t)).    (67)
Taking the expectation of both sides w.r.t. the random update process X ′ (t), conditioned on the true network state X(t) and the network configuration σ(t), we have
    Σ_{(i,j)∈E} Wij (t) E(µπ′_ij (t)|X(t), σ(t))
      ≥(a) Σ_{(i,j)∈E} EWij′ (t) E(µπ′_ij (t)|X(t), σ(t)) − Cn2 cmax /2
      ≥(b) Σ_{(i,j)∈E} EWij′ (t) E(µπ_ij (t)|X(t), σ(t)) − Cn2 cmax /2
      ≥(c) Σ_{(i,j)∈E} Wij (t) E(µπ_ij (t)|X(t), σ(t)) − Cn2 cmax .                                    (68)
Here the inequalities (a) and (c) follow from Lemma (7) and the facts that |E| ≤ n2 /2 and µij (t) ≤ cmax . Inequality (b) follows from Eqn. (67). Thus, from Eqn. (66) and (68), the expected conditional drift of the Lyapunov function under the policy π ′ , where the expectation is taken w.r.t. the random update and arrival processes, is upper-bounded as follows:
    ∆π′ (X(t)|X(t), σ(t)) ≤ B ′ − 2 Σ_{(i,j)∈E} Wij (t) E[µπ_ij (t) | X(t), σ(t)],
with the constant B ′ ≡ B|V | + 2Cn2 cmax . Since the above inequality holds for any policy π ∈ Π, we can follow exactly the same steps as in the proof of Theorem (4.1), by replacing an arbitrary π by π RAND and showing that the drift is negative.
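The per-slot scheduling step analysed in the two proofs above, namely choosing an interference-feasible activation that maximizes the total weight Σ_{(i,j)} Wij (t)µij (t), can be sketched as follows for unit-capacity links under primary interference. The use of NetworkX's max_weight_matching here is an illustrative assumption of this sketch, not the paper's own implementation.

# Sketch of the drift-minimizing activation: pick a matching of the ON links
# that maximizes the total link-weight W_ij(t).  Assumes unit link capacities;
# only links with positive weight are worth activating.
import networkx as nx

def max_weight_activation(on_links, W):
    """on_links: iterable of directed links (i, j) that are ON in this slot.
    W: dict mapping (i, j) to the link-weight W_ij(t).  Returns the chosen links."""
    G = nx.Graph()
    for (i, j) in on_links:
        if W.get((i, j), 0) > 0:
            G.add_edge(i, j, weight=W[(i, j)], link=(i, j))
    matching = nx.max_weight_matching(G, weight='weight')   # set of matched node pairs
    return [G.get_edge_data(u, v)['link'] for (u, v) in matching]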
9.10 Proof of Proposition 6.1
Like many proofs in this paper, this proof has a converse part and an achievability part. In the converse part, we obtain an upper bound of 2/5 for the broadcast capacity λ∗stat of the stationary grid network (i.e., when all links are ON w.p. 1). In the achievability part, we show that this upper bound is tight.
Part (a): Proof of the Converse: λ∗stat ≤ 2/5:
We have shown earlier that for the purpose of achieving capacity, it is sufficient to restrict our attention to stationary randomized policies only. Suppose a stationary randomized policy π achieves a broadcast rate λ and activates edge e ∈ E at every slot with probability fe . Then, for the nodes a and b to receive distinct packets at rate λ, one requires
    fra ≥ λ,  fab ≥ λ.
Applying the primary interference constraint at node a, we then obtain
    fad ≤ 1 − 2λ.
Because of the symmetry of the network topology, we also have fcd ≤ 1 − 2λ. However, to achieve a broadcast capacity of λ, the total allocated rate towards node d must be at least λ. Hence we have
    2(1 − 2λ) ≥ λ,  i.e.,  λ ≤ 2/5.
Since the above holds for any stationary randomized policy π, we conclude
    λ∗stat ≤ 2/5.                                                      (69)
Part (b): Proof of the Achievability: λ∗stat ≥ 2/5:
As usual, the achievability proof is constructive. Consider the following five activations (matchings) M1 , M2 , . . . , M5 of the underlying graph, as shown in Figure 5. Now consider a stationary policy π ∗ ∈ Π∗ that activates the matchings M1 , . . . , M5 at each slot uniformly at random, with probability 1/5 for each matching. The resulting 'time-averaged' graph is also shown in Figure 5. Using Theorem 3.1, it is clear that λ∗stat ≥ 2/5. Combining the above with the converse result in Eqn. (69), we conclude that
    λ∗stat = 2/5.
Figure 5: Some feasible activations (matchings M1 , . . . , M5 ) of the 3 × 3 grid network, which are activated uniformly at random, together with the resulting 'time-averaged' network. The component of the overall activation vector β corresponding to each edge is denoted by the numbers alongside the edges.
| 8 |
Spike Event Based Learning in Neural Networks
arXiv:1502.05777v1 [] 20 Feb 2015
J. A. Henderson, T. A. Gibson, J. Wiles
Abstract
A scheme is derived for learning connectivity in spiking neural networks.
The scheme learns instantaneous firing rates that are conditional on the activity in other parts of the network. The scheme is independent of the choice
of neuron dynamics or activation function, and network architecture. It
involves two simple, online, local learning rules that are applied only in response to occurrences of spike events. This scheme provides a direct method
for transferring ideas between the fields of deep learning and computational
neuroscience. This learning scheme is demonstrated using a layered feedforward spiking neural network trained self-supervised on a prediction and
classification task for moving MNIST images collected using a Dynamic Vision Sensor.
Keywords: Spiking Neural Networks, Learning, Vision, Prediction
1. Introduction
Methods in deep learning for training neural networks (NNs) have been
very successfully applied to a range of datasets, performing tasks at levels
approaching human performance, such as image classification [1], object detection [1] and speech recognition [2, 3, 4]. Along with these experimental
successes, the field of deep learning is rapidly developing theoretical frameworks in representation learning [5, 6, 7] including understanding the benefits of different types of non-linearities in neuron activation functions [8],
disentanglement of inputs by projecting onto hidden layer manifolds, model
averaging with techniques like maxout and dropout [9, 10] and assisting generalization through corruption of input with denoising autoencoders [11].
These types of experimental and theoretical work are necessary to effectively build and understand systems like brains that are capable of learning
to solve real world problems. Many of the successes of deep learning are a
Preprint submitted to arχiv
February 23, 2015
result of a broad inspiration from biology; however, there is a large gap in
understanding how the principles of deep learning are related to those of the
brain. Some elements of deep learning may well inspire discoveries in brain
function. Equally, deep learning systems are still inferior to the brain in
aspects such as memory, thus efforts to develop models that bridge between
deep learning and neuroscience are likely to be mutually beneficial.
The neuron models commonly used in deep learning are abstracted away
from neuron models that are used in computational neuroscience to model
biological neurons. Spiking is a salient feature of biological neurons that is
not typically present in deep learning networks. It is not yet understood why the brain uses spiking dynamics; for the purposes of machine learning it would be useful to know what advantages, if any, spiking dynamics confer on spike-based NN learning algorithms over other types of NN learning algorithms, rather than advantages that are otherwise useful in implementing algorithms in biology, such as energy efficiency and robustness. Dynamical
systems like spiking networks appear more naturally suited to processing
continuous time temporal data than state machines, as deep networks are
usually implemented, but this idea is yet to be demonstrated experimentally
on machine learning tasks.
In an effort to bridge this gap in understanding between spike- and non-spike-based NN learning systems, and to develop systems for processing event
based, continuous time data, this paper develops a scheme for learning connectivity in a spiking neural network (SNN). The scheme is based upon learning conditional instantaneous firing rates, linking it to many of the statistical
frameworks previously developed in deep learning that are based on conditional probabilities. However, our scheme is fundamentally different to most
methods used in deep learning as the learning rules are based solely on the
activity of the neurons in the network and are the same, independent of
the choice of neuron dynamics or activation function unlike gradient descent
methods [12], and they can be implemented online and do not require periods
of statistical sampling from the model unlike energy based methods [13]. In
addition, the learning scheme is local, meaning that modifying a connection
only requires knowledge of the activity of the neurons it connects, not neurons from a distant part of the network, unlike gradient descent and energy
based methods [12, 13]. From a perspective of biological plausibility, this
means neurons do not have to make assumptions about, or communicate
to each other their dynamics or activation function and associated parameters in order to correctly learn, and the system can be run online without
interrupting processing with periods of sampling for learning.
This paper describes a general scheme for event based learning in SNNs.
This scheme is demonstrated on a network similar to that commonly used in
deep learning, specifically, a feedforward layered network architecture with
rectified linear units, and piecewise constant temporal connectivity. Dropout
is utilized to show that many ideas from deep learning can be directly imported into SNNs using this learning scheme. An event based dataset of
moving MNIST digits collected using a DVS camera [14, 15, 16] is used to
train the network for both prediction and classification tasks.
2. Learning Theory
We begin by developing a method for learning the connectivity of a supervised output neuron. The discussion will be framed in reference to learning
in a network operating continuously in time with temporally delayed connectivity and temporally encoded input signals since spiking neurons are usually
modelled as dynamical systems; however, the results are also applicable to
networks operating in discrete timesteps (as is usual in implementations of
SNNs with current standard computer architecture), with or without temporally delayed connectivity and temporally encoded inputs, so they can also
be applied to traditional artificial neural networks performing static image
classification, for example.
Figure 1 shows a general network containing input neurons whose activity
is determined by an external source, hidden neurons, and supervised output
neurons. At present we do not assume any particular connection architecture,
nor do we specify the dynamics or activation functions of the individual
neurons.
We first consider learning the input, Qo (t), that a supervised output neuron o receives from the network at time t. We need to identify the mathematical quantity that o should learn - the quantity that Qo (t) should approximate.
Note that Qo (t) may be calculated from two sets of quantities only. The first
quantities are the weights W , that connect the activity of the network to o,
potentially including self connections and connections that have a temporal
delay. We call these connections weights as this is the terminology usually
used in machine learning; however, these connections can also be imagined
as propagator functions whose value varies with temporal delay. The second quantity is the history of activity of the neurons in the network, H,
which since we are primarily interested in SNNs and event based learning,
Figure 1: Schematic of a general neural network consisting of input neurons controlled by
an external source, hidden neurons and supervised output neurons. Connectivity between
neurons is unrestricted, directed connections are allowed between every pair of neurons
including self connections, as are temporally delayed connections.
we model as a set of one dimensional Dirac delta functions in time that may
be normalized to non unit integral to allow a neuron’s spike strength to be a
real value, instead of only binary as in many SNN models. However, since it
allows the use of more intuitive terminology of spike rates, instead of spike
strength rates, we assume that a spike with real valued strength is equivalent
to a sum of simultaneous unit strength spikes and possibly one partial unit
spike. Alternatively, this equivalence holds if spike strengths are restricted
to a unit value and simultaneous spikes are not allowed. In any case the
mathematical description and quantitative results are unchanged aside from
a possible conversion function if simultaneous spikes are not considered to
combine additively into a single real valued spike and vice-versa.
Assuming W is fixed after learning, the only time varying quantity that
can be used in calculating Qo (t) is H(t), so we re-parametrize Qo (t) to Qo (H).
We then propose that a sensible output of the network to o is Q(o|H), the
mean conditional instantaneous spike rate (during training) of supervised
output neuron o, given activity H in the network. Integrating Q over a time
period gives an expected number of spikes. Thus, in the case of a network
operating in discretized time, as is common in the implementation of artificial
SNNs in code, Q(o|H) can be interpreted as an expected number of spikes
o given H, where the integral over a small discrete timestep is understood.
That is, if we observe activity H in the network n times during training,
and o spikes no times during timesteps coinciding with those n occurrences
of H, then after training if we observe H again, the output that should have
been learnt and produced by the network at that timestep is no /n. If only
single unit strength spikes are allowed at each timestep, then this output
can be interpreted as P (o|H), the probability of o spiking during the given
timestep, given H. This interpretation is important in connecting the focus
on probability distributions in machine learning with the focus on spike and
rate coded networks in computational neuroscience [17].
Of course if H includes the full history of the network’s activity from
inception, then only one sample trajectory of H will be observed and used
for learning. However, we assume that W approaches zero as the connection
time delay becomes large, meaning that only some recent history of activity
in the network is used in calculating Q(o|H). Thus, a variety of different
H will be observed during training. In any practical network W will be
finitely parametrized, so for any particular Hk the parameters modified in
learning Q(o|Hk ) will in effect learn for both a range of other H that are
slight perturbations of Hk and also use the same parameters, as well as very
different H that only share a portion of the same parameters.
If we assume the neurons in the network are spiking neurons, then there
are only four events that occur within the network at which to apply learning
rules that modify Qo (H). They are (i) when an input neuron spikes, (ii)
when a hidden neuron spikes, (iii) when a supervised output neuron spikes
due to supervision, and (iv) when a supervised output neuron spikes due to
its own dynamics or activation function. Modification could also be made
continuously at all times, at randomly generated time points, or according
to a temporally periodic function; however, we proceed concentrating on the
spiking events.
Let D be a function that is applied to Qo (H) when H occurs (a combination of events (ii)), and let U be a function that is applied to Qo (H) for each supervised spike of o (an event (iii)) that co-occurs with H (see Fig. 2). After
some period of training time t, we have a series of iterated applications of D
and U applied to the initial value Q0o (H)
Qto (H) = (D ◦ U ◦ ... ◦ U ) ◦ ... ◦ (D ◦ U ◦ ... ◦ U ) ◦ Q0o (H),
(1)
where the brackets group operations for each occurrence of H.
To find a relation between U and D, suppose now that the initial value Q0o (H) = Q(o|H), as we desire. Clearly, we also require that Qto (H) ≈ Q(o|H), meaning that the application of the learning rules D and U does not cause the output to significantly deviate from the desired value. Note that we cannot require strict equality due to the stochastic nature of the event occurrences. If we choose
    U = D^{J(Q_o^{t′}(H))},                                   (2)
where superscripts indicate composition, not exponentiation, and J is an unknown function still to be determined, then with the initial value Q0o (H) = Q(o|H), Eq. (1) becomes
    Qto (H) = (D ◦ D^{J(Q_o^{t′′}(H))} ◦ ... ◦ D^{J(Q_o^{t′′}(H))}) ◦ ... ◦ (D ◦ D^{J(Q_o^{t′}(H))} ◦ ... ◦ D^{J(Q_o^{t′}(H))}) Q(o|H).     (3)
Let N be the number of occurrences of H. For Qto (H) ≈ Q(o|H), we require that all the applications of D and U approximately cancel, i.e.,
    N + Σ_{t′} J(Q_o^{t′}(H)) ≈ 0,                             (4)
where the sum runs over the applications of U.
Figure 2: Schematic showing the timing of events in the operation and learning of the
network. Neuron spike events are indicated by crosses, learning rule applications are indicated by dots. Vertical dotted lines indicate timesteps of width τ used in the network’s
implementation in code (the network can, given suitable hardware, be operated as a dynamical system in continuous time). Note that although variation in the location of spikes
within a timestep occurs, this variation is not resolved by a time stepped implementation,
additionally single neurons can spike multiple times within a single timestep. This diagram focuses upon learning for a particular hidden neuron activity pattern H1 indicated
by red crosses. (i) For output neuron o, U1o is applied each time o spikes in conjunction
with H1 , D1o is applied each time H1 is observed. (ii) For input neuron i, each time i
spikes in conjunction with the beginning of H1 , U1i is applied at the conclusion of H1 as
H1 must be observed in order to identify the connections to modify. D1i is applied each
time H1i occurs. (iii) For prediction neuron p, each time p spikes time ∆t after H1 occurs,
U1p is applied. D1p is applied each time H1 occurs.
The expected number of applications of U is N Q(o|H), and we require that
Qto (H) ≈ Q(o|H) at all points in this sequence of applications of D and U ,
so we have
J(Qo (H))N Q(o|H) ≈ −N.
(5)
However, since we do not know Q(o|H) a priori we use the network’s current
estimate Qo (H) instead and set
    J(Qo (H)) = −1/Qo (H).                                    (6)
Using Eq. (2) we now have the following relation between the function
D that is applied when H occurs, and the function U that is applied when o
spikes due to supervision
    U = D^{−1/Qo} .                                           (7)
This requires that D has a unique inverse, and D−1 can be generalized in
such a way as to be applied a fractional number of times.
In the above we required that Qto (H) ≈ Q(o|H) at all points in a sequence of applications of D and U. This implies that any single application of either D or U when Qo (H) ≈ Q(o|H) can only change Qo (H) by a small (but not necessarily fixed) amount
    Qo − ǫU ≤ U (Qo ) ≤ Qo + ǫU ,                             (8)
    Qo − ǫD ≤ D(Qo ) ≤ Qo + ǫD ,                              (9)
which using (7) leads to the relation
    ǫD = Qo ǫU .                                              (10)
The required range of Qo is [0, ∞). To ensure that ǫU remains small as Qo → 0, we require that ǫD be chosen so that, in the limit Q → 0, ǫD /Q remains finite. Alternatively it would be possible to insert noise spikes, for example Poisson noise with rate m, into the supervision to fix a minimum target value of Qo to m, hence bounding Qo > 0 and eliminating the divergence in Eq. (10). After learning, this noise can be stopped and subtracted from the learnt value of Q(o|H). In most cases the maximum value of Q will be finite, and hence ǫD and ǫU can be chosen to give sufficiently small changes.
To avoid Qo converging to an unwanted value, this learning scheme must
have only a single globally stable fixed point Qo = Q(o|H). This means that
U (Q) and D(Q) cannot both have fixed points at any Q. We therefore adjust Eqs. (8) and (9) to
    Qo − ǫU ≤ U (Qo ) < Qo   or   Qo < U (Qo ) ≤ Qo + ǫU ,    (11)
and
    Qo < D(Qo ) ≤ Qo + ǫD   or   Qo − ǫD ≤ D(Qo ) < Qo .      (12)
We choose between either the two left, or the two right, options in (11) and (12) by considering the stability of the fixed point Q(o|H) for each of these choices. Taking equalities in the above equations and using Eq. (10), the total change to Qo after N applications of D and an expected Q(o|H)N applications of U is
    ∆ ≈ ±N Qo ǫU ∓ Q(o|H)N ǫU .                               (13)
If Qo > Q(o|H) we require ∆ < 0, and if Qo < Q(o|H) we require ∆ > 0. This implies the following choice for our learning rule restrictions:
    Qo − ǫD ≤ D(Qo ) < Qo ,                                   (14)
    Qo < U (Qo ) ≤ Qo + ǫU ,                                  (15)
that is, D slightly decreases Qo and U slightly increases Qo .
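As a quick illustration of why these restrictions drive Qo towards Q(o|H), the following toy simulation (not from the paper) applies one admissible choice of D and U to a scalar estimate and checks that it settles near the target conditional rate; the specific update forms below are assumptions of this sketch, chosen to satisfy (10), (14) and (15).

# Toy check of the event-based scheme of Section 2 on a single scalar Q_o:
# D slightly decreases Q_o on every occurrence of H, U slightly increases it on
# every co-occurring supervised spike, and Q_o should hover around Q(o|H).
import random

def simulate_fixed_point(q_target=0.3, eps=0.01, n_events=200000, q0=1.0):
    q = q0
    for _ in range(n_events):           # each iteration = one occurrence of H
        if random.random() < q_target:  # supervised spike co-occurs with prob Q(o|H)
            q += eps                    # U: small increase, consistent with Eq. (15)
        q -= eps * q                    # D: decrease proportional to Q_o, consistent with Eq. (14)
    return q

# Example: simulate_fixed_point(0.3) returns a value close to 0.3, the target rate.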
3. Application to Learning Layers of Autoencoders
We now outline a demonstration of this learning scheme. A standard
method for training an unsupervised deep feedforward network is to train
each pair of layers successively as autoencoders [6] so that each layer encodes
the activity of the layer below it, see Fig. 3. The learning rules described in
Sec. 2 can be used to learn layers of autoencoders by replacing the supervised
output neuron o , with an input neuron i that self-supervises, and by reversing
the direction of connectivity so that i learns to output Q(i|H), where H is
now the future activity of the hidden neurons in the layer above i, since
causality is reversed from the previous case; the input layer causes activity
in the hidden layer above, see Fig. 2.
Using this method, the hidden layers learn so that by observing a period
of hidden layer activity, the activity of the layer below at the beginning of
Figure 3: (a) Architecture of the feedforward layered network. (b) Illustration of piecewise
constant connectivity between two neurons in Eq. (18). (c) Rectified linear unit activation
function for hidden neurons used in this network. (d) Illustration of the spiking activity of
a hidden neuron, vertical lines indicate the presence of a Dirac delta function, with height
corresponding to different normalizations of each individual Dirac delta function.
that observation can be inferred. The activity of the hidden layer and the
connectivity between layers acts like, and encodes a short term memory.
In this demonstration we also include a layer of prediction neurons that
predict the activity of the input layer at a specified time period in the future.
These neurons are supervised by the activity of the input layer with the
corresponding prediction time period delay, see Fig. 2. We also include a
layer of digit classification neurons that are trained as for a supervised output
neuron o, see Fig. 2.
So far we have not needed to specify the dynamics or activation function of
the hidden neurons in the network in order to develop this learning scheme.
Spiking neuron models in computational neuroscience are often dynamical
systems modeled using differential equations [18]. In contrast, neurons in
machine learning are typically characterized by an activation function of the
neuron’s input [6]. Any of these types of neuron models could be employed
here; however, we choose rectified linear units (ReLUs) that are commonly
used in deep learning networks [6]. The form we use here is
    A(I) = I,  I > 0;   A(I) = 0,  I ≤ 0,                     (16)
see Fig. 3c.
3.1. Weight Update Rules
We have so far developed rules for learning a value Qo to approximate
Q(o|H); however, we have not yet discussed rules for modifying W that
are necessary for implementation in a network. Before these rules can be
determined, the formula for calculating Qo from H and W needs to be chosen.
The most common choice is to use the product of H and W summed across
all neurons in the layer below and integrated across time in the case of time
delay connections. We use this same choice here
    Qo (t) = Σ_j ∫_0^t hj (t′ ) wj (t − t′ ) dt′ ,             (17)
where hj is a hidden neuron connected to o and wj is the corresponding
connection between them. Other choices are possible and may have advantages over this choice, though this is left for future investigation. We also
need to choose a parametrization for W . A wide variety of choices could
be made here such as sums of continuous functions, or convolution kernels
acting across different sets of j, as is done in convolutional neural networks
by modifying (17) to include a convolution across j as well as t. However as a
first demonstration of this learning scheme we make a simpler choice of using
a piecewise constant function (see Fig. 3b) that is easy to conceptualize and
produces simple learning rules for the weight parameter updates
    wj (t) = Σ_{k=1}^{K} ωk [S(t + (k − 1)τ ) − S(t + kτ )] ,  (18)
where S is the Heaviside step function, τ defines the width of each of the K
pieces of wj , and the ωk are modified by learning. We simplify this notation
to use
wjk = ωk [S(t + (k − 1)τ ) − S(t + kτ )] ,
(19)
where wjk are effectively the time delayed weights in the network. For time
delays greater than τ K, the connectivity weight is zero, meaning that only
activity histories H of length τ K are used in calculating Qo .
Assuming H is composed of spikes modeled as delta functions, Eq. (17)
becomes a sum of weights multiplied by the numbers of spikes
    Qo (t) = Σ_{j,k} wjk hjt′ .                                (20)
The following simple and fast weight update rules satisfy Eqs. (14) and (15),
though other choices are possible. A weight update rule d that implements
D when H occurs is
    d(w) = w − ǫhQ,                                            (21)
and a corresponding weight update rule u that implements U when supervision spikes o occur is
    u(w) = w + ǫho,                                            (22)
where ǫ is a hyperparameter of the learning rules and should be chosen to be appropriately small. These learning rules cause Qo to fluctuate within a small range of Q(o|H), and it may be useful to change ǫ with time to allow an initial period of fast convergence from the initialization point, and then a reduced fluctuation error once Qo ≈ Q(o|H). Again, these rules are not specific to the ReLUs that we demonstrate with; these neurons could be replaced with sigmoid units, for example, without changing these weight update rules.
We use the same weight update rules for learning to predict the activity of
the input layer from the activity of the hidden layers, where during learning
the prediction neurons are supervised by the future input, see Fig. 2.
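A minimal sketch of how Eqs. (20)-(22) can be implemented with the piecewise-constant delayed weights of Eq. (19) is given below; the array shapes and the way the spike-count history is buffered are implementation assumptions of this sketch, not prescriptions from the paper.

# Sketch of the event-based weight updates of Section 3.1 for one supervised
# output neuron o.  weights[j, k] corresponds to w_jk of Eq. (19); history[k][j]
# holds the spike count of hidden neuron j, k timesteps in the past.
import numpy as np

class OutputUnit:
    def __init__(self, n_hidden, n_delays, eps=1e-5):
        self.w = np.zeros((n_hidden, n_delays))    # piecewise-constant delayed weights
        self.eps = eps

    def q(self, history):
        """Eq. (20): Q_o = sum_{j,k} w_jk * (spike count of j at delay k)."""
        return float(np.sum(self.w * history.T))   # history has shape (n_delays, n_hidden)

    def on_H(self, history):
        """Rule D, Eq. (21): applied whenever the hidden activity pattern H occurs."""
        q = self.q(history)
        self.w -= self.eps * history.T * q         # d(w) = w - eps * h * Q
        return q

    def on_supervised_spike(self, history, n_spikes=1):
        """Rule U, Eq. (22): applied for each co-occurring supervised spike of o."""
        self.w += self.eps * history.T * n_spikes  # u(w) = w + eps * h * o

The same object can serve an input, prediction or classification neuron, since only the choice of which activity buffer plays the role of H and which spikes act as supervision changes.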
3.2. DVS MNIST Event Based Dataset
To demonstrate this learning scheme we use a dataset collected using
a Dynamic Vision Sensor (DVS) [16]. The DVS is a type of video camera
that collects event data, unlike conventional video cameras that collect frame
data. In the camera an event is triggered by the light intensity impinging
upon a pixel changing above a threshold amount. Upon such an event, the
camera outputs the pixel coordinates, a timestamp (in µs) and the polarity
of the change in intensity.
The MNIST database [15] has been used extensively in the development
of deep learning [6]. With the view of linking this work to previous work in
deep learning, we demonstrate this learning scheme using a DVS version of
the MNIST database [14, 15] in which the handwritten digits are displayed
and moved on an LCD screen that is being recorded by a DVS camera. In this
dataset the light intensity changes collected by the DVS camera are primarily
edges of the moving MNIST digits; however, in general the camera also
captures other scene changes such as changes in illumination. The resulting
dataset is noisy. Viewing the recorded data reveals that the edges are often
blurred, and the number of events captured is not uniform across a digit’s
edges. The dataset also appears to contain some events that are not related
to the movement of the MNIST digit on the LCD screen; however, these
events are relatively few in number. The dataset contains recordings of 1000
handwritten digits for each integer from 0 to 9. We use the first 900 entries
for training and the last 100 entries for testing. The DVS’s 128 × 128 array
of pixels is cropped down to 23 × 23 pixels with each of these pixels mapped
onto two input neurons, one for each polarity of light intensity change. The
input training sequence was formed from a random selection of the individual
MNIST digit sequences each separated by 15 timesteps or 75 ms of no input.
Each individual MNIST event sequence has a duration of about 77 timesteps
or about 2.3 s.
Each pair of neurons has five ωk parameters encoding weights for connection delays kτ of width 30 ms, corresponding to the network's execution timesteps of 30 ms. In this demonstration we predict 15 timesteps or 450 ms into the future. An additional ten output neurons are used to classify the current input as a digit from zero to nine. All connection weights ω between layers were initialized to small random values in the range [0, ǫ], where ǫ was initially set to 1 × 10−5 . The connection weights for the prediction and classification neurons were all initialized to zero and used an initial value of ǫ of 2.5 × 10−6 , corresponding to the value for the between-layer connections divided by the number of layers, since the prediction and classification neurons connect to all layers. The connections between hidden layers were trained one layer at a time for one pass through the training dataset, corresponding to 8.8 × 105 timesteps. After each pass through the hidden layers, ǫ was decreased by half for all connections and training was repeated, beginning at the first hidden layer. Note that the initial value, decay and decay period for ǫ are not heavily optimized. The prediction and classification weights from all hidden layers were trained at every timestep. To demonstrate that
many ideas used in deep learning are directly transferable to a spiking neural
network that learns using this scheme, during training we use 50% dropout
[10] for each hidden layer.
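For concreteness, the conversion from raw DVS events to the binned two-polarity input layer described above can be sketched as follows; the exact field layout of the recordings is not specified here, so the event tuple format (timestamp in µs, x, y, polarity) is an assumption of this sketch.

# Sketch of binning DVS events into 30 ms timesteps for the 23x23, two-polarity
# input layer.  Only events inside the 23x23 crop are kept.
import numpy as np

def bin_events(events, n_steps, dt_us=30000, crop=23):
    """events: iterable of (t_us, x, y, polarity) with polarity in {0, 1}.
    Returns an array of shape (n_steps, 2 * crop * crop) of spike counts."""
    frames = np.zeros((n_steps, 2, crop, crop))
    for t_us, x, y, pol in events:
        step = int(t_us // dt_us)
        if step < n_steps and 0 <= x < crop and 0 <= y < crop:
            frames[step, int(pol), y, x] += 1      # one input neuron per pixel and polarity
    return frames.reshape(n_steps, -1)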
The operation of the trained network is demonstrated in Fig. 4. The
hidden layers are very active since in this demonstration the neurons have
a threshold fixed at zero. Including a learnable threshold would produce
a more sparse representation whilst also reducing the required cpu time as
the network’s operation and learning are both dependent on the number of
events that occur. The inference of the noisy input is significantly better than
the prediction since the inference involves a memory of the input whereas
the prediction does not. However, a smoothed version of the future input
is usually identifiable in the prediction. The inference and predictions are
often poor when the digit changes direction as the edges at these points are
weak and the data are particularly noisy. The classification output correctly
classifies the input digit 87.41% of the time. The classification error as a
function of training time is shown in Fig. 5.
Figure 6 shows receptive fields of neurons from the first hidden layer and
predictive fields of neurons from all layers. Both positive (excitatory) and
negative (inhibitory) weights are learnt. Initially all neurons are active and
the small random weight vectors converge toward a time averaged input vector. Upon converging toward the time averaged input, the weight vectors are
nearly identical; however, differences due to the small random initialization
breaks their symmetry and the weights of different neurons begin to diverge
toward other more specific features of the input. This process continues
as these features themselves are further split into other even more specific
features. After learning is stopped, some of the receptive fields are tuned
toward responding to a small number of pixels, while others respond to a
distributed pattern of pixels. Predictive fields tend to be composed of larger
patches of the sensory field indicating that the encoding of the prediction is
distributed across many neurons. Without dropout, denoising autoencoding
or another regularization method, the connectivity between hidden layers forms an identity mapping, with each neuron connecting only to a single neuron in the previous layer.
Figure 4: A demonstration of the feedforward network described in the text applied to the DVS MNIST dataset. (a) The present input to each neuron. Vertical red lines divide layers. Neurons 1 to 1058 are input neurons, thereafter each 1000 neurons form successive hidden layers. The number of presently active neurons in each layer are indicated at the top of this frame. (b) The activity of the DVS input delayed by 5 timesteps (corresponding to the maximum connection delay). (c) The inferred activity of the input Q(i|H) from the recent activity of the first hidden layer. (d) The activity of the DVS input 15 timesteps into the future. (e) The network's prediction Q(p|H) of the activity of the input 15 timesteps into the future. (f) Classification of the present input Q(c|H). (g)-(j) As for (b)-(e) with polarity removed by summing the activity of both polarities. (k) Sum of squared errors normalized by the sum of squares of the data at each timestep for the inferences and predictions in (c) and (e). An additional file is available to view this figure as a movie.
Figure 5: Network classification error on the test set vs training time (timesteps). Connections between layers are trained in a sequence from lowest to highest; vertical dashed lines indicate the end of each pass through the network, and the points at which ǫ is halved. At the end of training the error on the test set is 12.59%.
Figure 6: (a) Example receptive fields from eight neurons in the first hidden layer. (b)-(e) Example predictive fields for neurons in the input layer and hidden layers one to three respectively. In all frames each pixel is the sum of all temporal connection weights ω for that pixel. All fields have been normalized to have equal maximums.
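The receptive and predictive fields of Figure 6 are obtained by summing each connection's temporal weight pieces; a sketch of that reduction is given below, with the per-neuron weight-array layout assumed to match the sketch after Section 3.1 and the summation over polarities being an additional assumption of this sketch.

# Sketch of the field visualization used for Figure 6: each pixel of a field is
# the sum over the temporal pieces omega_k of the corresponding connection,
# normalized so that all fields have equal maximums.
import numpy as np

def field_image(weights_to_neuron, crop=23):
    """weights_to_neuron: array of shape (2 * crop * crop, n_delays) for one neuron."""
    summed = weights_to_neuron.sum(axis=1)                  # sum over connection delays k
    field = summed.reshape(2, crop, crop).sum(axis=0)       # combine the two polarities
    m = np.abs(field).max()
    return field / m if m > 0 else field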
4. Summary
This paper introduces an event based learning scheme for neural networks. The scheme does not depend on the specific form of the neuronal
dynamics or activation function, and while this paper focuses on training
spiking neural networks, this scheme may also be used to train traditional
artificial neural networks, especially those that involve discontinuous activation functions that defeat gradient descent methods. The scheme may also
be applied to networks of neurons containing biologically inspired dynamics.
Future work in this direction may inform theories of dynamics and learning
in the brain. The broad applicability of this learning scheme provides an
avenue to directly apply ideas from both deep learning and computational
neuroscience and thus strengthen and inform the theoretical progress in both
fields.
References
[1] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma,
Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, L. Fei-Fei,
Imagenet large scale visual recognition challenge (2014). arXiv:1409.0575.
[2] G. Dahl, D. Yu, L. Deng, A. Acero, Context-dependent pre-trained deep
neural networks for large-vocabulary speech recognition, Audio, Speech,
and Language Processing, IEEE Transactions on 20 (1) (2012) 30–42.
[3] L. Deng, G. Hinton, B. Kingsbury, New types of deep neural network
learning for speech recognition and related applications: an overview,
in: Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, 2013, pp. 8599–8603.
[4] G. Hinton, L. Deng, D. Yu, G. Dahl, A. Mohamed, N. Jaitly, A. Senior,
V. Vanhoucke, P. Nguyen, T. Sainath, B. Kingsbury, Deep neural networks for acoustic modeling in speech recognition: The shared views of
four research groups, Signal Processing Magazine, IEEE 29 (6) (2012)
82–97.
[5] Y. Bengio, A. Courville, P. Vincent, Representation learning: A review
and new perspectives, Pattern Analysis and Machine Intelligence, IEEE
Transactions on 35 (8) (2013) 1798–1828.
[6] Y. Bengio, I. Goodfellow, A. Courville, Deep Learning, MIT Press
(preparation version 22/10/2014), 2014.
[7] J. Schmidhuber, Deep learning in neural networks: An overview, Neural
Networks 61 (0) (2015) 85 – 117.
[8] V. Nair, G. E. Hinton, Rectified linear units improve restricted boltzmann machines, in: Proceedings of the 27th International Conference
on Machine Learning (ICML-10), 2010, pp. 807–814.
[9] I. J. Goodfellow, D. Warde-Farley, M. Mirza, A. Courville, Y. Bengio,
Maxout networks, ICML 28 (3) (2013) 1319–1327.
[10] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov,
Dropout: A simple way to prevent neural networks from overfitting, J.
Mach. Learn. Res. 15 (1) (2014) 1929–1958.
[11] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, P.-A. Manzagol, Stacked
denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion, J. Mach. Learn. Res. 11 (2010)
3371–3408.
[12] D. Rumelhart, G. Hinton, R. Williams, Learning representations by
back-propagating errors, Nature 323 (6088) (1986) 533–536.
[13] G. E. Hinton, S. Osindero, Y.-W. Teh, A fast learning algorithm for
deep belief nets, Neural Comput. 18 (7) (2006) 1527–1554.
[14] T. Serrano-Gotarredona, B. Linares-Barranco, MNIST-DVS database,
accessed: 27th Jan. 2015.
URL http://www2.imse-cnm.csic.es/caviar/MNISTDVS.html
[15] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, Gradient-based learning
applied to document recognition, Proceedings of the IEEE 86 (11) (1998)
2278–2324.
[16] T. Serrano-Gotarredona, B. Linares-Barranco, A 128 × 128 1.5% contrast sensitivity 0.9% FPN 3 µs latency 4 mW asynchronous frame-free
dynamic vision sensor using transimpedance preamplifiers, Solid-State
Circuits, IEEE Journal of 48 (3) (2013) 827–838.
[17] A. Kumar, S. Rotter, A. Aertsen, Spiking activity propagation in neuronal networks: reconciling different perspectives on neural coding, Nat.
Rev. Neurosc. 11 (9) (2010) 615–627.
[18] E. Izhikevich, Dynamical Systems in Neuroscience, MIT Press, 2007.
| 9 |
arXiv:1711.02837v1 [stat.ML] 8 Nov 2017
Revealing structure components of the retina by deep
learning networks
Qi Yan, Zhaofei Yu, Feng Chen
Center for Brain-Inspired Computing Research, Department of Automation, Tsinghua University
{q-yan15,yuzf12}@mails.tsinghua.edu.cn
chenfeng@mail.tsinghua.edu.cn
Jian K. Liu
Institute for Theoretical Computer Science, Graz University of Technology
liu@igi.tugraz.at
Abstract
Deep convolutional neural networks (CNNs) have demonstrated impressive performance on visual object classification tasks. In addition, they are useful models for predicting neuronal responses recorded in the visual system. However, there is still no clear understanding of what CNNs learn in terms of visual neuronal circuits. Visualizing CNN features to obtain possible connections to neuroscience underpinnings is not easy due to the highly complex circuits from the retina to higher visual cortex. Here we address this issue by focusing on single retinal ganglion cells with a simple model and with electrophysiological recordings from salamanders. By training CNNs with white noise images to predict neural responses, we found that the convolutional filters learned in the end resemble biological components of the retinal circuit. The features represented by these filters tile the space of the conventional receptive field of retinal ganglion cells. These results suggest that CNNs could be used to reveal structure components of neuronal circuits.
1 Introduction
Deep convolutional neural networks (CNNs) have been a powerful model for numerous tasks related to system identification [1]. By training a CNN with a large set of targeted images, it can achieve human-level performance for visual object recognition. However, it remains a challenge to understand the relationship between computation and the underlying structure components learned within CNNs [2, 3]. Thus, visualizing and understanding CNNs is not trivial [4].
Inspired by neuroscience studies, a typical CNN model consists of a hierarchical structure of layers [5], where one of the most important properties of each convolutional (conv) layer is that a conv filter can be used as a feature detector to extract useful information from the images output by the previous layer [6, 7]. Therefore, after learning, conv filters are meaningful. The features captured by these filters can be represented in the original natural images [4]. Often, a typical feature shares some similarities with parts of natural images from the training set. These similarities are obtained by using a very large set of specific images. The benefit of this is that features are relatively universal for one category of objects, which is good for recognition. However, it also causes difficulty of visualization or interpretation due to the complex nature of natural images, i.e., the complex statistical structures of natural images [8]. As a result, the filters and features learned in CNNs are often not easy to interpret [9].
31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.
On the other hand, researchers have begun to adapt CNNs for studying target questions from neuroscience. For example, CNNs have been used to model the ventral visual pathway, which has been suggested as a route for visual object recognition starting from the retina, passing through visual cortex, and reaching inferior temporal (IT) cortex [10–12]. The prediction of neuronal responses recorded in monkeys is in this case surprisingly good. However, the final output of such a CNN model represents dense computations conducted in many previous conv layers, which may or may not be related to the neuroscience underpinnings of information processing in the brain. Understanding these network components of the CNN is difficult, given that IT cortex sits at a high level of our visual system, with abstract information, if any, encoded [13]. In principle, CNN models can also be applied to early sensory systems where the organization of the underlying neuronal circuitry is relatively clear and simple. Thus, one expects that knowledge of this neuronal circuitry could provide useful and important validation for such models. For instance, a recent study employs CNNs to predict neural responses of retinal ganglion cells to white noise and natural images [14].
Here we move a step further in this direction by relating CNNs to single retinal ganglion cells (RGCs). Specifically, we used CNNs to learn to predict the responses of single RGCs to white noise images. In contrast to the study by [14], where one single CNN model was used to model a population of RGCs, our main focus in the current study is on single RGCs, in order to reveal the network structure components learned by CNNs. Our aim is to study what kind of possible biological structure components in the retina can be learned by CNNs. This concerns the research focus of understanding, visualizing and interpreting the CNN components out of their black box.
To this end, using a minimal RGC model, we found that the conv filters learned by the CNN are essentially the subunit components of the RGC model. The features represented by these filters fall within the receptive field of the modeled RGC. Furthermore, we applied CNNs to analyze biological RGC data recorded in salamander. Surprisingly, the conv filters resemble the receptive fields of bipolar cells, which sit in the layer before the RGC and pool their computations onto a downstream single RGC.
2 Methods
2.1 RGC model and data
A simulated RGC is modeled as in Fig. 1, following previous work [15]. The model cell has five subunits; each subunit filter, similar to a conv filter in a CNN, convolves the incoming stimulus image and then applies a threshold-quadratic rectifying nonlinearity. The subunit signals are then pooled together by the RGC. A threshold-linear output nonlinearity with a positive threshold at unity is applied to the pooled signal to make spiking sparse.
The biological RGC data were recorded in salamander as described in [15]. Briefly, RGC spiking activity was obtained by multielectrode array recordings as in [16]. The retinas were optically stimulated with spatiotemporal white noise images, temporally updated at a rate of 30 Hz and spatially arranged in a checkerboard layout with stimulus pixels of 30 × 30 µm.
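As an illustration, a minimal NumPy sketch of this subunit model is given below; the filter shapes, pooling weights and stimulus size are placeholder values for illustration only, not the parameters of [15].

import numpy as np

def threshold_quadratic(x):
    # Subunit nonlinearity: zero below threshold, squared above it.
    return np.where(x > 0.0, x ** 2, 0.0)

def rgc_subunit_model(stimulus, subunit_filters, pooling_weights, theta=1.0):
    """Minimal LN-LN sketch of the simulated RGC.
    stimulus        : 2-D array, one white-noise frame
    subunit_filters : list of 2-D arrays, one linear filter per subunit
    pooling_weights : 1-D array, weight of each subunit at the RGC
    theta           : positive output threshold (unity in the text)
    """
    # Each subunit filters the frame and applies threshold-quadratic rectification.
    subunit_outputs = np.array([
        threshold_quadratic(np.sum(f * stimulus)) for f in subunit_filters
    ])
    # Subunit signals are pooled by the ganglion cell.
    pooled = np.dot(pooling_weights, subunit_outputs)
    # Threshold-linear output nonlinearity keeps spiking sparse.
    return max(pooled - theta, 0.0)

# Illustrative usage with random placeholder filters.
rng = np.random.default_rng(0)
filters = [rng.normal(size=(6, 6)) for _ in range(5)]   # five subunits
weights = np.ones(5) / 5.0
frame = rng.choice([-1.0, 1.0], size=(6, 6))            # one checkerboard frame
print(rgc_subunit_model(frame, filters, weights))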
2.2 CNN model
We adopt a CNN model containing two convolution layers and a dense layer as in [14]. Several sets of parameters of the convolution layers, including the number of layers and the number and size of convolution filters, were explored. The prediction performance is robust against these changes of parameters. Therefore we used a filter size of 15 × 15 to compare our results with those in [14]. The major difference between our model and that in [14] is that our CNN is used for studying single RGCs.
For the RGC model, we used a data set consisting of 600k training samples of white noise images, and an additional set of samples for testing. The training labels are a train of binary spikes, with 0 and 1, generated by the model. For the biological RGCs recorded in salamander, we used the same data sets as in [15]. Briefly, there are about 40k training samples and labels, with the number of spikes per image in the range [0, 5]. The test data have 300 samples, which are repeatedly presented to the retina for about 200 trials. The average firing rate over this test data is compared to the CNN output for the performance calculation.
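For concreteness, a minimal PyTorch sketch of such a two-convolution-layer network is given below. The filter counts, the second kernel size, the softplus output and the Poisson loss are illustrative assumptions, not necessarily the exact architecture or training objective used in [14] or in our implementation.

import torch
import torch.nn as nn

class SingleRGC_CNN(nn.Module):
    """Two convolution layers followed by a dense readout for one RGC."""
    def __init__(self, n_filters=8, kernel_size=15, image_size=30):
        super().__init__()
        self.conv1 = nn.Conv2d(1, n_filters, kernel_size)
        self.conv2 = nn.Conv2d(n_filters, n_filters, kernel_size=5)
        side = image_size - (kernel_size - 1) - (5 - 1)   # no padding, stride 1
        self.readout = nn.Linear(n_filters * side * side, 1)
        self.act = nn.ReLU()

    def forward(self, x):
        x = self.act(self.conv1(x))
        x = self.act(self.conv2(x))
        x = torch.flatten(x, start_dim=1)
        # Softplus keeps the predicted spike count non-negative.
        return nn.functional.softplus(self.readout(x))

# Training against recorded spike counts with a Poisson loss (one possible choice).
model = SingleRGC_CNN()
images = torch.randn(16, 1, 30, 30)            # a mini-batch of white-noise frames
spikes = torch.randint(0, 5, (16, 1)).float()  # placeholder spike-count labels
loss = nn.PoissonNLLLoss(log_input=False)(model(images), spikes)
loss.backward()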
[Figure 1 graphic: (A) subunit model (images, subunits, subunit nonlinearity, output nonlinearity, spikes); (B) CNN model (convolution, convolution, dense layers with a Bernoulli process over spike labels); (C) receptive fields (RF) of the subunit model and the CNN model; (D) conv filters and features.]
Figure 1: The CNN filters resemble the subunits in the RGC model. (A) An illustration of the RGC model structure. Note there are five subunits playing the role of conv filters. (B) An illustration of the CNN model that is trained on the same set of images to predict the labels, here spikes, of all images. (C) Receptive fields (RFs) of the modeled RGC and the prediction by the CNN model. (D) Visualization of CNN model components: both conv filters and the average features represented by each filter.
3 Results
Here we focus on single RGCs, which have the benefit of clarifying the network structure components of CNNs. Recently, a variation of non-negative matrix factorization was used to analyze the RGCs' responses to white noise images and identify a number of subunits, resembling the bipolar cells, within the receptive field of each RGC [15]. With this picture in mind, here we address the question of what types of network structure components can be revealed by CNNs when they are used to model the single RGC response.
A previous study [14] focused on predicting the neural response of RGCs at the population level with one CNN model, and claimed that the features represented by the conv filters resemble the receptive fields of bipolar cells (BCs). However, a careful examination reveals that this connection between the CNN feature maps and BCs is weak, since the number of conv filters in the CNN is much smaller than the number of BCs serving the RGC population in the retina. By using a CNN model, one expects to reveal a clearer picture of this connection between CNNs and the retina.
Here, we set up a single RGC model with conv subunits as in Fig. 1(A), which resembles a 2-layer CNN with one conv layer of subunits and one dense layer for the single RGC. By training a CNN as in Fig. 1(B) with a set of white noise images to predict the target labels, namely the simulated spikes generated by this RGC model, we found that the CNN model can predict the RGC model response well, with a Pearson correlation coefficient (CC) of up to 0.70, similar to the study by [14].
Interestingly, we also found that the CNN model can predict the receptive field (RF) well, as in Fig. 1(C). Furthermore, the conv filters learned by the CNN are the exact subunits employed in the RGC model, as shown in Fig. 1(C). A subset of the conv filters, which can be termed effective filters, starts from random shapes and converges to the exact subunits. Although we set the filter size to 15 × 15 pixels, the resulting effective filters are sparsely represented within a 6 × 6 pixel area. The rest of the filters remain random and close to zero. Therefore, these results show that CNN parameters are highly redundant. Such a redundancy of parameters, including conv filters, units/neurons and connections of conv and dense layers, is widely observed for deep learning models [17–19]. Altogether, these results suggest that the CNN model can identify the underlying hidden network structure components within the RGC model by only looking at the input stimulus images and the output response in terms of the number of spikes.
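One simple way to obtain receptive-field maps such as those compared in Fig. 1(C), for either the model cell or the trained CNN, is reverse correlation (spike-triggered averaging) of the white noise frames against the simulated or predicted responses. A short sketch is given below; it assumes the frames and responses are already available as arrays, and the names are placeholders.

import numpy as np

def spike_triggered_average(frames, responses):
    """Estimate a receptive field by response-weighted averaging.
    frames    : array of shape (n_samples, height, width), white-noise stimuli
    responses : array of shape (n_samples,), spike counts or predicted rates
    """
    weights = responses / (responses.sum() + 1e-12)
    return np.tensordot(weights, frames, axes=1)

# Usage sketch: compare the model cell and the CNN on the same stimulus set.
# rf_model = spike_triggered_average(frames, simulated_spike_counts)
# rf_cnn   = spike_triggered_average(frames, cnn_predicted_rates)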
[Figure 2 graphic: (A) receptive fields (RF) of Data and CNN, scale bars 100 µm; (B) conv filters and features, scale bar 100 µm; (C) Data and CNN spike rasters with firing rate (Hz, 0 to 40) over time (0 to 10 sec.).]
Figure 2: The CNN reveals subunit structures in biological RGC data. (A) Receptive fields of the sample cell and of the CNN prediction. (B) Visualization of CNN model components: both conv filters and the average features represented by each filter. (C) Neural response predicted well by the CNN model, visualized by RGC data spike rasters (upper), CNN spike rasters (middle), and their average firing rates.
To further characterize these structure components in detail, we use a CNN to learn the biological RGC data from similar white noise images and the corresponding spiking responses. Similar to the results of the RGC model above, the outputs of the CNN model can recover the receptive field of the data very well, as in Fig. 2(A). We also found that the learned conv filters converge to a set of localized subunits, whereas the rest of the filters are noisy and close to zero, as in Fig. 2(B). The size of these localized filters, around 100 µm, is comparable to that of bipolar cells [15].
In addition, the features represented by these localized conv filters are also localized. Given that the example RGC is an OFF-type cell that responds strongly to the dark parts of images, most features have similar OFF peaks resulting from the OFF BC-like filters. These OFF features tile the space of the receptive field of the RGC. Interestingly, there are some features with ON peaks, which play a role as inhibition in the retinal circuit. A few features have some complex structures mixing OFF and ON peaks, which mostly result from the less localized filters. However, if the filters are pure noise, the resulting features are pure noise without any structure embedded. Besides filters and features, the CNN model generates a good prediction of the RGC response, as in Fig. 2(C), with a CC of up to 0.75. These observations are similar across the different RGCs recorded.
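The CC values quoted here and above are Pearson correlation coefficients between the trial-averaged firing rate of the test data and the CNN output; a minimal sketch of this evaluation is shown below (array names are placeholders).

import numpy as np

def pearson_cc(prediction, rate):
    """Pearson correlation coefficient between predicted and measured rates."""
    prediction = np.asarray(prediction, dtype=float)
    rate = np.asarray(rate, dtype=float)
    return np.corrcoef(prediction, rate)[0, 1]

# Usage sketch: `trial_spike_counts` has shape (n_trials, n_test_frames).
# mean_rate = trial_spike_counts.mean(axis=0)
# print(pearson_cc(cnn_output_on_test_frames, mean_rate))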
4 Discussion
Here, by focusing on single RGCs, we have shown that CNNs can learn their parameters in an interpretable fashion. Both filters and features are close to the biological underpinnings within the retinal circuit. With the benefit of the relatively well-understood neuronal circuitry of retinal ganglion cells, our preliminary results give strong evidence that the building blocks of CNNs are meaningful when they are applied to neuroscience for revealing network structure components. Our results extend the previous studies [11, 14] that focus on prediction of neural responses. Furthermore, our approach is suitable for addressing other difficult issues of deep learning, such as transfer learning, since the domain of images seen by single RGCs is local and less complicated than the global structures of entire natural images.
5 Acknowledgements
Q.Y., Z.Y. and F.C. were supported in part by the National Natural Science Foundation of China
under Grant 61671266, 61327902, in part by the Research Project of Tsinghua University under
Grant 20161080084, in part by National High-tech Research and Development Plan under Grant
2015AA042306. J.K.L. was partially supported by the Human Brain Project of the European Union
#604102 and #720270.
References
[1] Yann Lecun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[2] Leslie N. Smith and Nicholay Topin. Deep convolutional neural network design patterns. arXiv:1611.00847v3, 2016.
[3] Adam H. Marblestone, Greg Wayne, and Konrad P. Kording. Toward an integration of deep learning and
neuroscience. Frontiers in Computational Neuroscience, 10, sep 2016.
[4] Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European
Conference on Computer Vision, pages 818–833, 2014.
[5] Y Lecun, K Kavukcuoglu, and C Farabet. Convolutional networks and applications in vision. In IEEE
International Symposium on Circuits and Systems, pages 253–256, 2010.
[6] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. Computer Science, 2014.
[7] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional
neural networks. In International Conference on Neural Information Processing Systems, pages 1097–
1105, 2012.
[8] E. P. Simoncelli and B. A. Olshausen. Natural image statistics and neural representation. Annual Review
of Neuroscience, 24(24):1193, 2001.
[9] Matthew D. Zeiler, Graham W. Taylor, and Rob Fergus. Adaptive deconvolutional networks for mid and
high level feature learning. In International Conference on Computer Vision, pages 2018–2025, 2011.
[10] Daniel Yamins, Ha Hong, Charles Cadieu, and James J. Dicarlo. Hierarchical modular optimization of
convolutional networks achieves representations similar to macaque it and human ventral stream. Advances in Neural Information Processing Systems, pages 3093–3101, 2013.
[11] D. L. K. Yamins, H. Hong, C. F. Cadieu, E. A. Solomon, D. Seibert, and J. J. DiCarlo. Performanceoptimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619–8624, may 2014.
[12] Charles F. Cadieu, Ha Hong, Daniel L. K. Yamins, Nicolas Pinto, Diego Ardila, Ethan A. Solomon,
Najib J. Majaj, and James J. Dicarlo. Deep neural networks rival the representation of primate it cortex
for core visual object recognition. Plos Computational Biology, 10(12):e1003963, 2014.
[13] Daniel L K Yamins and James J DiCarlo. Using goal-driven deep learning models to understand sensory
cortex. Nature Neuroscience, 19(3):356–365, feb 2016.
[14] Lane McIntosh, Niru Maheswaranathan, Aran Nayebi, Surya Ganguli, and Stephen Baccus. Deep learning
models of the retinal response to natural scenes. In Advances in Neural Information Processing Systems
29. 2016.
[15] Jian K. Liu, Helene M. Schreyer, Arno Onken, Fernando Rozenblit, Mohammad H. Khani, Vidhyasankar
Krishnamoorthy, Stefano Panzeri, and Tim Gollisch. Inference of neuronal functional circuitry with spiketriggered non-negative matrix factorization. Nature Communications, 8(1), jul 2017.
[16] Jian K. Liu and Tim Gollisch. Spike-triggered covariance analysis reveals phenomenological diversity of
contrast adaptation in the retina. PLOS Computational Biology, 11(7):e1004425, jul 2015.
[17] Misha Denil, Babak Shakibi, Laurent Dinh, MarcAurelio Ranzato, and Nando de Freitas. Predicting
parameters in deep learning. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q.
Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 2148–2156. Curran
Associates, Inc., 2013.
[18] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with
pruning, trained quantization and huffman coding. Fiber, 56(4):3–7, 2015.
[19] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient
neural network. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances
in Neural Information Processing Systems 28, pages 1135–1143. Curran Associates, Inc., 2015.
| 1 |
arXiv:1711.06763v1 [] 17 Nov 2017
Addressing Expensive Multi-objective Games with Postponed Preference Articulation via Memetic Co-evolution
Adam Żychowski a,∗, Abhishek Gupta b, Jacek Mańdziuk a, Yew Soon Ong b
a Faculty of Mathematics and Information Science, Warsaw University of Technology, Koszykowa 75, 00-662 Warsaw, Poland
b School of Computer Science and Engineering, Nanyang Technological University, Block N4, Nanyang Avenue, Singapore 639798
Abstract
This paper presents algorithmic and empirical contributions demonstrating that the convergence characteristics of a co-evolutionary approach to tackle Multi-Objective Games (MOGs)
with postponed preference articulation can often be hampered due to the possible emergence
of the so-called Red Queen effect. Accordingly, it is hypothesized that the convergence
characteristics can be significantly improved through the incorporation of memetics (local
solution refinements as a form of lifelong learning), as a promising means of mitigating
(or at least suppressing) the Red Queen phenomenon by providing a guiding hand to the
purely genetic mechanisms of co-evolution. Our practical motivation is to address MOGs
of a time-sensitive nature that are characterized by computationally expensive evaluations,
wherein there is a natural need to reduce the total number of true function evaluations
consumed in achieving good quality solutions. To this end, we propose novel enhancements
to co-evolutionary approaches for tackling MOGs, such that memetic local refinements can
be efficiently applied on evolved candidate strategies by searching on computationally cheap
surrogate payoff landscapes (that preserve postponed preference conditions). The efficacy
of the proposal is demonstrated on a suite of test MOGs that have been designed.
Keywords: multi-objective games, Red Queen effect, surrogate-assisted memetic algorithm
1. Introduction
Many practical problems can be modeled and resolved with game-theoretic methods. In real-world problems, decisions are usually made with multiple objectives or lists of payoffs.
The notion of vector payoffs for games was originally introduced by Blackwell [1] and later
extended by Contini [2]. Such games are named Multi-Objective Games (MOGs). MOGs
may have many practical applications in engineering, economics, cybersecurity [3] or Security
∗ Corresponding author
Email address: a.zychowski@mini.pw.edu.pl (Adam Żychowski)
Preprint submitted to Elsevier
November 21, 2017
Games [4], where real-life situations can be easily modeled as a game, and their solutions
help decision-makers make the right choice in multi-objective environments.
Most existing studies about MOGs concentrate on differential games and defining the
equilibrium for them. The most common approach is the Pareto-Nash equilibrium proposed
in [5]. The Pareto-Nash equilibrium uses the concept of cooperative games, because sub-players within the same coalition should, according to the Pareto notion, optimize their vector functions on a set of strategies. This notion also takes into account the concept of non-cooperative games, because coalitions are interested in preserving the Nash equilibrium between coalitions. First attempts at solving MOGs used multi-parametric criteria linear programming, for example in [6] (for zero-sum games) and [2] (for non-zero-sum variants). Artificial intelligence-based approaches, such as fuzzy functions, have also been applied to MOGs. For example, in [7], the objectives are aggregated into one artificial objective for which the weights (players' preferences towards the objectives) are modeled with fuzzy functions.
Generally speaking, the most popular method of solving MOGs is to specify the players' objective preferences and define a utility function, for example a weighted sum, to transform the MOG into a surrogate Single Objective Game (SOG) [8, 6]. However, in real-life applications such an approach may not be sufficient, because preferences are often postponed until tradeoffs are revealed. Furthermore, in many cases the decision-makers' objectives are conflicting, which makes specifying the utility function a priori more difficult.
There is a lack of literature on the topic of MOGs, especially on games that involve players with postponed preference articulation. Thus, there is a significant gap in the availability
of algorithms for tackling real-world MOGs. First of all, an efficient numerical scheme is
needed that is able to present decision-makers with the optimal tradeoffs in competitive
game settings comprising multiple conflicting objectives (somewhat similarly to the case of
standard multi-objective optimization [9, 10, 11]). Only then can an informed postponed
choice be made with regard to ascertaining the most preferred strategy to implement. The
present paper takes a step towards filling this algorithmic void. The first formalization of such
MOGs can be found in [12]. In [13], the definition of rationalizable strategies in such games is
provided together with a suggestion about how these strategies could be found (preliminary
discussions are provided in Section 2 of this paper), analyzed, and used for choosing the
preferred strategy.
Co-evolutionary adaptation is a viable method for solving game theory problems and
is successfully used in traditional SOGs [14, 15]. Preliminary work in [16] showed that, in principle, a co-evolutionary algorithm may be applied to MOGs as well, with population-based evolutionary algorithms being particularly well-suited for handling multiple objectives
simultaneously (as a consequence of the implicit parallelism of population-based search).
However, a canonical co-evolutionary approach to solving MOGs has certain drawbacks.
One of them is the emergence of a phenomenon named the Red Queen effect which often
hampers the convergence characteristics of the algorithm. In many time-sensitive applications involving computationally expensive evaluations, such a slowdown must be avoided.
The Red Queen principle was first proposed by the evolutionary biologist L. van Valen in
1973 [17]. It states that populations must continuously adapt to survive against ever-evolving
competitors. It is based on a biologically grounded hypothesis that species continually need
to change to keep up with the competitors (because when species stop changing, they will
lose to the other species that do continue to change). The Red Queen effect could have
positive as well as negative consequences. Consider, for example, co-evolution between predators and prey, where the only way the predator can compensate for a better defense by the prey (e.g. rabbits running faster) is by developing a better offense (e.g. foxes running faster). This
leads to improvement of the skills (running faster) of both species. In another example,
consider trees in a forest which compete for access to sunlight. If one tree grows taller than
its neighbours it can capture part of their sunlight. This causes the other trees to grow taller,
in order not to be overshadowed. The effect is that all trees tend to become taller and taller,
while still receiving on average the same amount of sunlight. Optimizing access to sunlight for
each individual tree does not lead to optimal performance for the forest as a whole [18].
Notably, such continuous adaptation as a result of the Red Queen effect occurs not only in
species co-evolution, but also in disease mutations, business competitors or macroeconomics
changes.
In a co-evolutionary algorithm for solving MOGs, the emergence of the Red Queen effect
may hamper the convergence characteristics of the algorithm, since many function evaluations are needed to overcome the continuous adaptations and drive the population to a
more-or-less steady state of reasonably good solutions. It must be observed, however, that
the presence of the Red Queen effect in a given co-evolutionary approach is, in general, only hypothetical, as tracking the Red Queen phenomenon is usually a complex and non-obvious process [19]. Regardless of the detailed reasons, the decelerated convergence is often only an artefact of the co-evolutionary method, and may not be at all related to the
standpoint. Thus, in this paper, an approach to mitigating (or at least suppressing) the
deleterious consequence of the Red Queen effect is proposed. It is achieved by applying a
local solution refinement technique (alternatively known as lifelong learning of an individual
in a population) - in the spirit of memetic algorithms.
Canonical memetic algorithms [20, 21, 22] enhance population-based Evolutionary Algorithms (EA) by means of adding a distinctive local optimization phase. The underlying
idea of memetics is to use local optimization techniques or domain knowledge to improve
potential solutions (represented by individuals in a population) between consecutive EA
generations. Drawing from their sociological interpretation, memes are seen as basic units
of knowledge that are passed down from parents to offspring in the form of ideas (procedures) that serve as guidelines for superior performance. Thus, while genes leap from body
to body as they propagate in the gene pool, memes are thought of as leaping from brain
to brain as they propagate in the meme pool. A synergetic combination of simple local
improvement schemes with evolutionary operators leads to complex and powerful solving
methods which are applicable to a wide range of problems [23], and currently serve as one
of the fastest growing subfields of Evolutionary Computation research. Local improvement
of temporary solutions represented by genes is deemed to be of paramount importance in
the context of the presumed existence of the Red Queen effect as it should strongly mit3
igate the rolling-horizon of the individual fitness evaluation. The main rationale for such
a claim is that the lifelong learning module introduced by memetics can provide a guiding
hand to the purely genetic mechanisms of co-evolutionary algorithms, thereby potentially
suppressing the intensity of the Red Queen effect in our search for equilibrium solutions to
the underlying MOG. Due to the complexity of rigorous theoretical analysis, we attempt to
substantiate our claims experimentally in this paper.
To summarize, the main contribution of this paper is a novel enhancement of co-evolutionary
algorithms for MOGs. In particular, memetic local refinements are proposed on evolved candidate strategies, as a means of improving convergence behavior. Importantly, in order to
make such local refinements computationally viable in competitive multi-objective game settings, we incorporate an approach that reduces sets of payoff vectors in objective space to
a single representative point that preserves the postponed preference articulation condition
(details are provided in Section 3). Thereafter, a surrogate model of the representative point
can be built, which allows the local improvements to be carried out efficiently by searching on
the surrogate landscape [24, 25]. As a result of the proposal, it is considered that MOGs of
a computationally expensive nature can be effectively handled, at negligible computational
overhead involved in surrogate modeling.
The remainder of this paper is arranged as follows. Section 2 presents the general formulation and fundamentals of MOGs. An overview of the baseline co-evolutionary approach
for solving MOGs based on [16] is presented in Section 3. Section 4 provides a more detailed
description of the proposed surrogate-assisted memetic algorithm tailored for computationally expensive MOGs. In Section 5 experimental studies for both algorithms are carried out
on a suite of test MOGs of varying degrees of complexity that have been designed based on
an intuitively visualizable differential game. Results are discussed in the context of convergence and suppression of the Red Queen effect. The last section is devoted to conclusions
and directions for future research.
2. Preliminaries on MOGs
2.1. Problem definition
Single-act multi-objective games considered in this paper can be formally described as follows. Let P1 and P2 be the players competing against each other, and let $S_1 = \{s_1^1, s_1^2, \ldots, s_1^I\}$ and $S_2 = \{s_2^1, s_2^2, \ldots, s_2^J\}$ be the complete sets of their possible strategies, respectively. Each candidate strategy is a vector of decision parameters: $s_1^i = [s_1^{i(1)}, s_1^{i(2)}, \ldots, s_1^{i(N_1)}] \in \mathbb{R}^{N_1}$ and $s_2^j = [s_2^{j(1)}, s_2^{j(2)}, \ldots, s_2^{j(N_2)}] \in \mathbb{R}^{N_2}$, where $N_1$ and $N_2$ are the numbers of the players' decision parameters. $\bar{f}_{i,j}$ represents the payoff vector corresponding to a game evaluated using strategies $s_1^i$ and $s_2^j$. Without loss of generality, we assume that the goal of P1 is to minimize the payoff, while the goal of P2 is to maximize the payoff.
Since both players do not know how the opponent evaluates their objectives in a postponed preference articulation setting, each player takes a worst-case approach. Thus, players must somehow take into account all possible moves available to the opponent. From P1's perspective, the goal may be modeled as minimizing the objective function vector assuming the best possible opponent strategies: $\min_{s_1^i \in S_1} \max_{s_2^j \in S_2} \bar{f}_{i,j}$. In contrast, player P2 aims at maximizing the objectives while considering the best strategies for the minimizer: $\max_{s_2^j \in S_2} \min_{s_1^i \in S_1} \bar{f}_{i,j}$. Note that the above max and min operators applied to vector-valued payoffs are ill-defined. As one possible alternative, their meaning can be formalized by means of domination relations between payoff vectors, as described in subsection 2.3 below.
2.2. Solution approach
Contrary to SOGs, in games with multiple objectives, a universally optimal strategy
usually does not exist. For this reason, the notion of Pareto optimality is used based on
domination relations between individual vectors as well as sets of vectors. Most importantly,
such deductions can be found without the need to specify objective preferences, which aligns
well with our basic premise of MOGs with postponed preference articulation.
Definitions presented in the remainder of this section follow the discussions in [13]. Herein,
we only provide a brief overview of the main ideas for the sake of brevity.
2.3. Domination relations
To resolve domination relation between sets of vectors, first the domination relation
between individual vectors must be defined.
Definition 1. Domination relation between vectors
Let $f = [f_1, f_2, \ldots, f_K] \in \mathbb{R}^K$ and $h = [h_1, h_2, \ldots, h_K] \in \mathbb{R}^K$ be two vectors in the objective space. A vector f dominates vector h in a maximization problem ($f \overset{\max}{\succ} h$) if $f_k \geq h_k$ for all $k \in \{1, \ldots, K\}$ and there exists $k \in \{1, \ldots, K\}$ for which $f_k > h_k$.
With this, the domination relation between sets is defined as follows.
Definition 2. Domination relation between sets
Let F and H be sets of vectors from the objective space. Set F dominates set H in a maximization problem, $F \overset{\max}{\succ} H$, if $\forall h \in H$, $\exists f \in F$ such that $f \overset{\max}{\succ} h$.
Analogous definitions to the above are used for the domination relation $\overset{\min}{\succ}$ in minimization problems.
In the context of MOGs, the notion of ’worst case domination’ emerges in addition to the
simple domination relation between sets, because, while assessing the payoff of a particular
strategy for P1 (or P2 ) in a competing game, the set of optimal strategies for the opponent
must be taken into account. Accordingly, we define the worst-case domination relation.
Definition 3. Worst-case domination
Set F worst-case dominates set H ($F \overset{w.c.}{\succ} H$) in a maximization problem when $H \overset{\min}{\succ} F$, and set F worst-case dominates set H in a minimization problem when $H \overset{\max}{\succ} F$.
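For illustration, the three relations above can be coded directly as below (shown for the maximization case, with a flag for the minimization case); the payoff vectors are plain arrays and the function names are ours, not part of any cited implementation.

import numpy as np

def dominates(f, h, maximize=True):
    """Definition 1: vector f dominates vector h (flip the inequalities for minimization)."""
    f, h = np.asarray(f, float), np.asarray(h, float)
    if maximize:
        return bool(np.all(f >= h) and np.any(f > h))
    return bool(np.all(f <= h) and np.any(f < h))

def set_dominates(F, H, maximize=True):
    """Definition 2: every vector in H is dominated by some vector in F."""
    return all(any(dominates(f, h, maximize) for f in F) for h in H)

def worst_case_dominates(F, H, maximize=True):
    """Definition 3: in a maximization problem F w.c.-dominates H when H
    minimization-dominates F (and symmetrically for minimization)."""
    return set_dominates(H, F, maximize=not maximize)

# Small usage check in the maximization sense.
F = [np.array([2.0, 1.0]), np.array([1.0, 2.0])]
H = [np.array([0.5, 0.5])]
print(set_dominates(F, H, maximize=True))   # True: F dominates H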
2.4. Rationalizable strategies
Given the worst-case domination relation, Pareto-optimality in a MOG is defined as
follows.
Definition 4. Pareto-optimality in MOGs
A set $Z^*$ is Pareto-optimal in a MOG if no other set exists that worst-case dominates $Z^*$. The set of all worst-case non-dominated sets constitutes the Pareto set of sets $P^*$ (also referred to as the Pareto Layer [26]):
$P^* = \{ F : \neg\exists H \ \text{s.t.} \ H \overset{w.c.}{\succ} F \}$
To elaborate from the point of view of the players in the MOG, when evaluating the i-th strategy of player P1 ($s_1^i$), there are J possible strategies $[s_2^1, s_2^2, \ldots, s_2^J]$ of P2 to consider. If the objective preferences of P2 are not defined, then there is a set of non-dominated payoff vectors, which are in fact all possible best responses of P2 to the strategy $s_1^i$. This set is named the anti-optimal front ($F^{-*}_{s_1^i}$) corresponding to the i-th strategy of P1.
Definition 5. Irrational strategies
A set of irrational strategies ($S_1^{irr}$) of player P1 is defined as follows:
$S_1^{irr} = \{ s_1^i \in S_1 \mid \exists\, s_1^{i'} \in S_1 \ \text{s.t.} \ F^{-*}_{s_1^{i'}} \overset{w.c.}{\succ} F^{-*}_{s_1^i}, \ \forall i \in \{1, \ldots, I\} \}$
All strategies which are not irrational are called rationalizable.
Definition 6. Rationalizable strategies
A set of rationalizable strategies of player P1 is defined as $S_1^R = S_1 - S_1^{irr}$.
A detailed discussion with examples concerning domination relations and preferable outcomes can be found in the Appendix of [13]. The main conclusion stemming from that
discussion is that, for a particular player in a MOG, if the anti-optimal front corresponding to strategy si worst-case dominates the anti-optimal front corresponding to sj , then
si always produces a preferable outcome for that player. On the other hand, when two
anti-optimal fronts are worst-case non-dominated, one of the strategies could be a better
choice for a certain objective preference articulation, while the other strategy may be better
under some other preferences. Therefore, in the latter case, their direct comparison is not
possible.
In this paper, the described worst-case domination approach is used as the basic tool
to solve MOGs. All considered MOGs are assumed to have postponed preferences, and
candidate strategies are accordingly analyzed from both players’ perspectives in the proposed
co-evolutionary algorithms (described next).
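For a finite MOG stored as a payoff tensor, the anti-optimal fronts and the irrational strategies of the minimizer P1 can be computed directly from the definitions above. The sketch below is only illustrative: the tensor layout and the helper names are our own assumptions, and it re-declares a small domination helper so as to be self-contained.

import numpy as np

def dominates(f, h, maximize=True):
    f, h = np.asarray(f, float), np.asarray(h, float)
    ge, gt = (f >= h, f > h) if maximize else (f <= h, f < h)
    return bool(np.all(ge) and np.any(gt))

def non_dominated(vectors, maximize=True):
    """Keep only vectors not dominated by any other vector in the list."""
    return [v for v in vectors
            if not any(dominates(w, v, maximize) for w in vectors if w is not v)]

def anti_optimal_front(payoffs, i):
    """Best responses of the maximizer P2 to strategy i of the minimizer P1.
    payoffs[i, j] is the K-dimensional payoff vector f_bar_{i,j}."""
    return non_dominated(list(payoffs[i]), maximize=True)

def irrational_strategies_P1(payoffs):
    """Definition 5: strategies whose anti-optimal front is worst-case dominated."""
    fronts = [anti_optimal_front(payoffs, i) for i in range(payoffs.shape[0])]
    def wc_dominates(F, H):
        # For the minimization player: front F w.c.-dominates front H when
        # H maximization-dominates F, i.e. every vector of F is max-dominated by some vector of H.
        return all(any(dominates(h, f, maximize=True) for h in H) for f in F)
    return [i for i, Fi in enumerate(fronts)
            if any(wc_dominates(Fj, Fi) for j, Fj in enumerate(fronts) if j != i)]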
Figure 1: Example of payoff vectors in two-dimensional objective space. The white circles representing the
anti-optimal front corresponding to si1 worst-case dominate the white squares representing the anti-optimal
front corresponding to sj1 in minimization sense. Black figures represent the opponent’s ideal points for the
respective sets.
3. The Canonical Co-evolutionary Algorithm for MOGs
Our implementation of the canonical co-evolutionary algorithm for MOGs (Canonical
CoEvoMOG) is designed based on [16]. Each subpopulation in the algorithm caters to a
unique player in the MOG. The key difference between our implementation and that of [16]
is that, instead of considering the entire anti-optimal set of vectors while determining worst-case domination relations, we only consider the ideal point of the anti-optimal set as a
single representative vector. Note that the term ”ideal” is used from the point of view
of the opponent. Thus, without loss of generality, if the anti-optimal front corresponding to strategy $s_1^i$ of the minimization player P1 is $F^{-*}_{s_1^i}$, then the ideal point is defined by the maximum (extreme) individual objective values that occur in $F^{-*}_{s_1^i}$. An illustration is depicted in Figure 1, where the set of white circles (representing the anti-optimal front corresponding to $s_1^i$) worst-case dominate the set of white squares (representing the anti-optimal front corresponding to $s_1^j$) in the minimization sense. The maximizing opponent's ideal points, given $s_1^i$ or $s_1^j$, are also shown in the figure in black. From Figure 1, we observe that
while ascertaining the expected payoff of a particular strategy, the ideal point of the antioptimal front can be seen as a meaningful representative encompassing all possible moves
of the opponent. It is worth mentioning that as the ideal point is composed of the extreme
values of all objectives, no specific objective preference is assumed for the opponent. In
that sense, the proposed simplification can be seen as preserving the postponed preference
articulation condition of the MOG.
We emphasize the rationale behind using only a single representative vector (instead of
the entire anti-optimal set) following the observation presented in [27], which can also be
stated through the theorem below.
Theorem 1. If strategy si worst-case dominates sj , then the ideal point of the anti-optimal
front of the former either dominates or is at least equal to the ideal point of the anti-optimal
front of the latter strategy.
Proof. It follows from the definition of worst-case domination presented in the previous section. For brevity, we consider the strategies si and sj of the minimization player. Similar
arguments can be applied to the maximization player as well. Thus, the antecedent statement of the theorem implies that the anti-optimal front of si is maximization dominated by
the anti-optimal front of sj . Accordingly, there exist vector(s) in the anti-optimal set of sj
that maximization dominate the extreme vectors of the anti-optimal front of si . As a result,
if we assume that the opponent’s ideal point, given si , maximization dominates the opponent’s ideal point given sj , then we have a contradiction. In other words, the opponent’s
ideal point, given si , must minimization dominate or be equal to the opponent’s ideal point
given sj .
From an algorithmic point of view, the first advantage of using the representative ideal
point vector corresponding to a particular strategy is that it allows us to directly employ
standard non-domination relations between vectors (as in Definition 1), bypassing the cumbersome process of comparing sets of vectors to determine worst-case domination relations.
In other words, from minimization player P1 ’s standpoint, strategy si1 is preferred over sj1
simply if the ideal point of the anti-optimal front of si1 dominates that of sj1 in the minimization sense. Furthermore, the reduction of a set of vectors to a single point implies that
simple diversity measures (such as the crowding distance [28]) may also be directly incorporated to facilitate a good distribution of alternative payoff vectors in the objective space
of the MOG. Based on these basic ingredients, Figure 2 outlines the schematic workflow of
the Canonical CoEvoMOG algorithm.
In the Canonical CoEvoMOG algorithm, there are two subpopulations catering to the
two players in the MOG. The method proceeds as in standard co-evolution for SOGs, where
interactions are considered between all candidate strategies in the two subpopulations (forming a complete bipartite evaluation framework). The outcomes of the interactions are used
to ”approximate” the ideal point of the anti-optimal front corresponding to every candidate
strategy of both players. The approximated ideal point vectors are then used to calculate non-domination ranks and crowding distances of strategies within each subpopulation
separately, similarly to the case of evolutionary multi-objective optimization [28]. The nondomination ranks and the crowding distances are considered lexicographically for selecting
the most promising candidate strategies in each subpopulation that progress the search to
the next generation through genetic operations of crossover and mutation.
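A simplified sketch of the evaluation step of this algorithm is given below: all pairwise interactions are played, the opponent's ideal point is approximated for every candidate strategy by taking componentwise extremes over the opponent's current population, and a naive non-domination ranking is computed from those ideal points. The crowding-distance computation and the genetic operators are omitted for brevity, and all names are ours.

import numpy as np

def ideal_points(payoff_fn, S1, S2):
    """Play every interaction and approximate both players' ideal points.
    payoff_fn(s1, s2) returns the K-dimensional payoff vector."""
    payoffs = np.array([[payoff_fn(s1, s2) for s2 in S2] for s1 in S1])  # (|S1|, |S2|, K)
    # For the minimizer P1, the worst case over P2's moves is the componentwise maximum.
    ideal_P1 = payoffs.max(axis=1)     # shape (|S1|, K)
    # For the maximizer P2, the worst case over P1's moves is the componentwise minimum.
    ideal_P2 = payoffs.min(axis=0)     # shape (|S2|, K)
    return ideal_P1, ideal_P2

def non_domination_rank(points, minimize=True):
    """Rank 0 = non-dominated front, rank 1 = next front, and so on (naive O(n^2) per front)."""
    points = np.asarray(points, float)
    pts = points if minimize else -points
    rank = np.full(len(pts), -1)
    current, remaining = 0, set(range(len(pts)))
    while remaining:
        front = [i for i in remaining
                 if not any(np.all(pts[j] <= pts[i]) and np.any(pts[j] < pts[i])
                            for j in remaining if j != i)]
        for i in front:
            rank[i] = current
        remaining -= set(front)
        current += 1
    return rank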
4. The Memetic Co-evolutionary Algorithm for MOGs
One of the drawbacks faced by the Canonical CoEvoMOG algorithm is the possible
emergence of the Red Queen effect. This suggests that the convergence to the desired
equilibrium strategies is impeded due to the continuously adapting subpopulations that
endlessly try to keep pace with the changes in the opponent’s strategies. Notably, the
slowdown is unlikely to be related to the underlying MOG, but is often an artefact of the
co-evolutionary method itself. It is regarded that in MOG applications of a time-sensitive
nature that may even involve computationally expensive evaluations, such a slowdown is not
Initialize randomly population S1 for player P1 and population S2 for player P2 .
for all generations do
Step 1: Create offspring populations S10 and S20 via crossover and mutation of parent individuals from S1 and S2 , respectively.
Step 2: Set S100 as S1 ∪ S10 , and set S200 as S2 ∪ S20 .
Step 3: Evaluate populations S100 and S200 by performing all interactions between candidate
strategies in S100 and S200 (keep track of evaluations to prevent repetitions).
Step 4: Obtain the ”approximate” ideal point corresponding to each candidate strategy in
S100 and S200 based on the outcomes of all possible interactions.
Step 5: Obtain non-domination rank and crowding distance of each strategy in S100 and S200
based on the approximated ideal point vectors.
Step 6: Consider the non-domination ranks and crowding distances lexicographically to select
the most promising candidate strategies from S100 and S200 to form S1 and S2 in the next
generation.
end for
Figure 2: Pseudo-code of the Canonical Co-evolutionary MOG Algorithm.
affordable. Therefore, in this section, we propose memetic local strategy refinements as a way
of enhancing the purely genetic mechanisms of the Canonical CoEvoMOG algorithm, thereby
potentially speeding up the convergence characteristics. Further, in order to maintain the
computational feasibility of the method, a surrogate modeling of the representative payoff
vector is proposed, which allows the local refinements to be carried out efficiently on the
surrogate landscape. Our proposal is labeled as a Memetic CoEvoMOG algorithm, and
involves a simple but important modification to the pseudo-code in Figure 2. Details of the
modified procedure are presented in Figure 3.
4.1. Overview of Surrogate Modeling in MOGs
A surrogate model is essentially a computationally cheap approximation of the underlying
(expensive) function to be evaluated. By searching on the surrogate landscape instead of
the original function, significant savings in computational effort can be achieved [29, 30].
However, before building the surrogate model, the function(s) to be approximated must first
be ascertained. For a MOG, this task is in general unclear, as corresponding to a particular
strategy, a set of optimal opponent strategies usually exist that constitute the anti-optimal
front.
It is at this juncture that the second, and perhaps most relevant, implication of using
the representative ideal point vector (instead of the entire anti-optimal front) is revealed.
Without the proposed modification, it is difficult to imagine an approach for incorporating
memetics into the canonical co-evolutionary algorithm for MOGs. To elaborate, while creating a surrogate of an entire set of vectors is indeed prohibitive, surrogate models that map
an individual strategy to its corresponding ideal point vector (of the anti-optimal set) can
presumably be learned with relative ease.
In the Memetic CoEvoMOG algorithm, the data generated for S1 and S2 at Step 6
of Figure 2 is used for iterative surrogate modeling. Candidate strategies in S1 and S2
Randomly generate initial population S1 for player P1 and S2 for P2 .
Evaluate strategies in S1 and S2 considering all interactions possible.
Train FNNs mapping candidate strategies to the corresponding ideal point approximations.
for all generations do
Step 1: Create offspring populations S10 and S20 via crossover and mutation of parent individuals from S1 and S2 , respectively.
Step 2: Apply local refinements using the surrogate landscape on a subset of randomly chosen
individuals from S10 and S20 (see Figure 4 for details).
Step 3: Set S100 as S1 ∪ S10 , and set S200 as S2 ∪ S20 .
Step 4: Evaluate populations S100 and S200 by performing all interactions between candidate
strategies in S100 and S200 (keep track of evaluations to prevent repetitions).
Step 5: Obtain the ”approximate” ideal point corresponding to each candidate strategy in
S100 and S200 based on the outcomes of all possible interactions.
Step 6: Obtain non-domination rank and crowding distance of each strategy in S100 and S200
based on the approximated ideal point vectors.
Step 7: Consider the non-domination ranks and crowding distances lexicographically to select
the most promising candidate strategies from S100 and S200 to form S1 and S2 in the next
generation.
Step 8: Retrain FNNs based on S1 , S2 and the corresponding ideal point approximations.
end for
Figure 3: Pseudo-code of the Memetic Co-evolutionary MOG Algorithm.
serve as the inputs to the surrogate model, while the corresponding approximate ideal point
objectives serve as outputs of interest. Note that separate surrogate models are learned
for each player. Further, the models are learned repeatedly at every generation of the
Memetic CoEvoMOG algorithm based on only the data generated during that generation
(i.e., data is not accumulated across generations). The rationale behind this step is that the
approximated ideal point vector tends to continuously adapt in a co-evolutionary algorithm
in conjunction with the evolving strategies of the opponent, such that there may be little
correlation in the data across generations. Finally, it must be mentioned that a simple
feedforward neural network (FNN) is used for surrogate modeling in this paper, although
any other preferred model type may also be incorporated with minimal change to the overall
algorithmic framework.
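As one possible realization, the per-generation surrogate can be fitted with any off-the-shelf multi-output regressor; the sketch below uses scikit-learn's MLPRegressor as the FNN. The layer sizes and training settings are placeholders, not the ones used in our experiments.

import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_ideal_point_surrogate(strategies, ideal_points, hidden=(32, 32), seed=0):
    """Map a candidate strategy (decision vector) to its approximated ideal point.
    A separate surrogate of this kind is fitted for each player, every generation."""
    X = np.asarray(strategies, dtype=float)        # shape (pop_size, n_decision_vars)
    Y = np.asarray(ideal_points, dtype=float)      # shape (pop_size, K objectives)
    fnn = MLPRegressor(hidden_layer_sizes=hidden, max_iter=2000, random_state=seed)
    fnn.fit(X, Y)
    return fnn

# Usage sketch (names are placeholders):
# surrogate_P1 = fit_ideal_point_surrogate(S1_strategies, ideal_points_P1)
# cheap_prediction = surrogate_P1.predict(candidate_strategy.reshape(1, -1))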
4.2. Memetics via Local Refinement
Memetics in stochastic optimization algorithms (such as EAs) are generally realized via a
local solution refinement step as a form of lifelong learning of individuals. Since the original
functions are assumed to be computationally expensive, herein, the local refinements are
carried out in the surrogate landscape. Notably, since the ideal point is itself vector-valued,
the local search is performed by first reducing the vector to a scalar value via a simple random
weighting of objectives; we ensure that the randomly generated weights satisfy the partition
of unity condition. It is important to mention here that the random weights are sampled
from a uniform probability density function, such that no biased preference information
Let probability of local search be pls .
for all individual in S1 do
Step 1: Select the individual with probability pls . If not selected, then continue to next
individual.
Step 2: Generate a random weight vector satisfying partition of unity.
Step 3: Combine the FNN surrogates using the random weights for scalarization. Minimize
the resultant objective via the Nelder-Mead simplex algorithm where the individual’s strategy
is taken as the starting point for local search.
Step 4: Update the individual with the best solution found after the local search procedure.
end for
Figure 4: Pseudo-code of Memetics via Local Refinement. Steps are shown herein from the perspective of
the minimization player P1 . The procedure is trivially generalized to the case of the maximization player
P2 as well.
is imposed (which preserves the postponed preference articulation condition of the MOG).
The local search method used in this study is the popular derivative-free (bounded) Nelder-Mead simplex algorithm, although any other algorithm may also be used. Thus, for the
minimization player P1 , the simplex algorithm locally minimizes the randomly scalarized
objectives, while for the maximization player P2 , the simplex algorithm locally maximizes
the scalarized objectives.
After offspring creation through genetic operations, a subset of them from both subpopulations of the Memetic CoEvoMOG algorithm are randomly selected with some user defined
probability for local search. Once the local refinement is completed, i.e., the Nelder-Mead
simplex algorithm converges to a point within the specified search space bounds, the improved solution (or strategy) is directly injected into the offspring population in the spirit
of Lamarckian learning [31]. A brief overview of the steps involved in the memetic local
refinement procedure is presented in Figure 4.
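A sketch of one such refinement step is given below, using SciPy's Nelder-Mead routine on a randomly scalarized surrogate prediction (shown for the minimization player; the maximizer negates the scalarized objective). Note that passing box bounds to Nelder-Mead requires a relatively recent SciPy; with older versions the refined solution would have to be clipped to the box manually. The function and argument names are ours.

import numpy as np
from scipy.optimize import minimize

def refine_strategy(strategy, surrogate, bounds, maximize=False, rng=None):
    """Lamarckian local refinement of one candidate strategy on the surrogate landscape.
    The surrogate is assumed to return a (1, K) array of predicted ideal-point objectives."""
    rng = np.random.default_rng() if rng is None else rng
    k = surrogate.predict(np.atleast_2d(strategy)).shape[1]   # number of objectives
    w = rng.random(k)
    w /= w.sum()                        # random weights satisfying partition of unity

    def scalarized(x):
        y = surrogate.predict(np.atleast_2d(x))[0]            # predicted ideal point
        value = float(np.dot(w, y))
        return -value if maximize else value

    result = minimize(scalarized, np.asarray(strategy, float),
                      method="Nelder-Mead", bounds=bounds)
    return result.x                      # injected back into the offspring population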
5. Numerical Experiments
The proposed method was tested on a simple differential MOG named tug-of-war. The
basic form of the game consists of a point with mass m placed at coordinates (0, 0). Two
players P1 and P2 choose angles θ1 and θ2 , respectively, at which the respective forces with
magnitudes F1 , F2 are applied (see Figure 5). The game outcome is the position (given by
coordinates (x1 , x2 )) of the mass m after particular time tf . The objectives of player P1 are
to minimize x1 and x2 and the objectives of player P2 are to maximize these two coordinates.
The final coordinates can be computed with formulas: x1 = (F1 cos(θ1 ) + F2 cos(θ2 )) · 12 t2f ,
x2 = (F1 sin(θ1 ) + F2 sin(θ2 )) · 12 t2f . For simplification, the following assumptions are made:
√
F1 = F2 = 1, tf = 2, and thus x1 = cos(θ1 ) + cos(θ2 ) and x2 = sin(θ1 ) + sin(θ2 ). In
this game, the strategic decision is to choose the angles θ1 and θ2 , so the space of possible
strategies is infinite, since θ1 , θ2 ∈ [0; 2π]. Accordingly, observe that the continuous search
space of θ1 corresponds to the set S1 , and that of θ2 corresponds to the set S2 .
Figure 5: The tug-of-war game setting.
Figure 6: Representative set of all points in the objective space that reflect the rationalizable strategies of
the tug-of-war MOG.
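For reference, the vector payoff of this basic game reduces to a few lines of code; the sketch below already applies the simplifying assumptions F1 = F2 = 1 and tf = √2.

import numpy as np

def tug_of_war_payoff(theta1, theta2):
    """Final position (x1, x2) of the mass; P1 minimizes both coordinates, P2 maximizes them."""
    x1 = np.cos(theta1) + np.cos(theta2)
    x2 = np.sin(theta1) + np.sin(theta2)
    return np.array([x1, x2])

# Example: both players pick from their rationalizable ranges; the forces nearly cancel.
print(tug_of_war_payoff(theta1=5 * np.pi / 4, theta2=np.pi / 4))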
Rationalizable strategies for both players can be intuitively ascertained. Player P1 (minimizer) aims at having the mass in a position with negative coordinates (third quadrant),
and therefore rationalizable strategies are in the range $\pi \leq \theta_1 \leq \frac{3}{2}\pi$. Similarly, player P2 (maximizer) wants the mass to be located in the first quadrant, thus the rationalizable strategies are in the range $0 \leq \theta_2 \leq \frac{1}{2}\pi$. Due to postponed preference articulation, any move
in the above ranges is a valid selection. Figure 6 shows all possible end game positions in
the case of optimal performance, i.e. when both players select from the rationalizable range
of strategies mentioned above.
Due to the a priori known optimal performances of this MOG, the results of the algorithms applied to the tug-of-war game can be easily compared based on their approximation
quality. Nevertheless, doing so is usually not possible in arbitrary MOGs where exact results
are often too hard to compute in general practical settings. A more detailed description of
the tug-of-war game can be found in [16] where it was first used to test a version of the
Canonical Co-evolutionary Algorithm similar to the one described in Section 3.
5.1. Experimental setup
To make the tug-of-war MOG even more complex for the purpose of rigorous experimental testing, several synthetic functions have been artificially incorporated into the game
formulation to create a number of alternate benchmarks. To elaborate, we define:
$x_1 = \frac{F_1}{1 + \phi(z_1)}\cos(\theta_1) + \frac{F_2}{1 + \phi(z_2)}\cos(\theta_2), \qquad x_2 = \frac{F_1}{1 + \phi(z_1)}\sin(\theta_1) + \frac{F_2}{1 + \phi(z_2)}\sin(\theta_2),$
where φ is the incorporated function, z1 and z2 are additional decision parameters introduced
for P1 and P2 , respectively, while F1 , F2 ∈ [0, 1] are force magnitudes (which are now also
treated as decision parameters).
In this way, several tug-of-war variants can be created. Tested functions were chosen to
check the efficacy of the proposed methods under varying conditions. Note that the selected
functions are widely used in the literature to evaluate global optimization methods, including
evolutionary techniques. In particular, the following 9 widely-known optimization functions
were considered to serve as φ: Rosenbrock 2D, Rosenbrock 3D, Rastrigin 1D, Rastrigin 2D,
Rastrigin 3D, Griewank 1D, Griewank 2D, Griewank 3D and Ackley 2D. Their plots and
detailed description of properties can be found in [32]. It is worth mentioning that as all
the selected functions have minimum value 0, the representation of rationalizable strategies
of all MOG variants is the same as shown in Figure 6.
1. Rosenbrock nD
$\phi_1(z) = \sum_{i=1}^{n-1} \left[ 100\,(z(i+1) - z(i)^2)^2 + (z(i) - 1)^2 \right]$
Global minimum: φ1 (1, . . . , 1) = 0.
2. Rastrigin nD
φ2(z) = 10n + Σ_{i=1}^{n} [z(i)^2 − 10 cos(2πz(i))]
Global minimum: φ2 (0, . . . , 0) = 0.
3. Griewank nD
φ3(z) = Σ_{i=1}^{n} z(i)^2/4000 − Π_{i=1}^{n} cos(z(i)/√i) + 1
Global minimum: φ3 (0, . . . , 0) = 0.
4. Ackley nD
φ4(z) = −20 · exp[−0.2 · √((1/n) Σ_{i=1}^{n} z(i)^2)] − exp[(1/n) Σ_{i=1}^{n} cos(2πz(i))] + 20 + e
Global minimum: φ4 (0, . . . , 0) = 0.
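The sketch below implements the four synthetic functions listed above and shows how a chosen φ enters the extended formulation; the function names and the `extended_position` helper are our own illustration, not part of the benchmark definitions:

```python
import math

def rosenbrock(z):
    return sum(100 * (z[i + 1] - z[i] ** 2) ** 2 + (z[i] - 1) ** 2 for i in range(len(z) - 1))

def rastrigin(z):
    return 10 * len(z) + sum(x ** 2 - 10 * math.cos(2 * math.pi * x) for x in z)

def griewank(z):
    s = sum(x ** 2 for x in z) / 4000.0
    p = math.prod(math.cos(x / math.sqrt(i + 1)) for i, x in enumerate(z))
    return s - p + 1

def ackley(z):
    n = len(z)
    term1 = -20 * math.exp(-0.2 * math.sqrt(sum(x ** 2 for x in z) / n))
    term2 = -math.exp(sum(math.cos(2 * math.pi * x) for x in z) / n)
    return term1 + term2 + 20 + math.e

def extended_position(theta1, theta2, F1, F2, z1, z2, phi):
    """Extended tug-of-war payoff: each force magnitude is damped by 1 + phi(z)."""
    a, b = F1 / (1 + phi(z1)), F2 / (1 + phi(z2))
    return (a * math.cos(theta1) + b * math.cos(theta2),
            a * math.sin(theta1) + b * math.sin(theta2))

# Each function attains its global minimum 0 at the listed optimum, e.g.:
print(rastrigin([0.0, 0.0]), rosenbrock([1.0, 1.0]))  # 0.0 0.0
```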
In the experimental study, the canonical as well as the memetic co-evolutionary algorithms
were run with the same hyperparameter settings in order to ensure that any performance
differences are indeed a consequence of the proposed memetics module. The size of each
subpopulation in the co-evolutionary algorithms was taken as 50, and the methods were run
for 100 generations. For recombination operations, simulated binary crossover (SBX) [33]
was used with distribution index of 20, and mutations were applied using the polynomial
mutation operator [34] also with distribution index 20. In the Memetic CoEvoMOG algorithm, the probability of local search on the surrogate landscape (pls ) was set to 20%
throughout. The test problems were assumed to be computationally expensive, such that
the extra computational effort spent on building and searching the surrogate model was
considered negligible in comparison to the cost of evaluations of the true underlying problem. For many real-world settings, such an assumption on the cost of surrogate assistance
is reasonable, and is commonly used in most surrogate-assisted optimization studies. For
this reason, the comparison plots presented in the next subsection are drawn on the basis of
the results achieved over a certain number of generations in the co-evolutionary algorithms,
rather than explicitly taking computational time into account.
5.2. Experimental Results
The Inverse Generational Distance (IGD) metric was used to measure the convergence
characteristics (performance) achieved by the algorithms. IGD combines information about
convergence and diversity of the obtained solutions. If P ∗ is a set of uniformly distributed
points constituting the Pareto layer, and F is an approximation set of the Pareto layer
obtained from the co-evolutionary algorithms, then
IGD = (Σ_{v∈P*} d(v, F)) / |P*|
where d(v, F ) denotes minimum Euclidean distance between v and points in F , as measured
in the objective space. Clearly, the lower the IGD values the better.
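A minimal sketch of the IGD computation defined above; the variable names (`reference_set` for P* and `approximation` for F) are ours, for illustration only:

```python
import math

def igd(reference_set, approximation):
    """Inverse Generational Distance: average Euclidean distance from each
    reference point in P* to its nearest point in the approximation set F."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    total = sum(min(dist(v, f) for f in approximation) for v in reference_set)
    return total / len(reference_set)

# Lower values indicate an approximation set that is closer to (and covers) P*.
p_star = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
f_approx = [(0.1, 0.0), (2.0, 2.1)]
print(igd(p_star, f_approx))
```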
Figures 7 and 8 present convergence comparison between Canonical CoEvoMOG and
Memetic CoEvoMOG for 9 tested functions. Plots show the IGD convergence trends averaged over 20 independent runs. In all cases the Memetic CoEvoMOG algorithm's convergence (dashed line) is noticeably faster than Canonical CoEvoMOG (dotted line).
Figures 9 and 10 present examples of the algorithms’ performance, in the objective space,
as the search progresses through the generations. In each plot, each point represents the
position of the mass as an outcome of the current strategies contained in the co-evolutionary
subpopulations. Refer to Figure 6 for comparing these plots with the a priori known optimal
solution (defined by the set of interactions between rationalizable strategies of both players).
From generations 1 to 5, both algorithms produce mostly random solutions. The main
difference can be noticed to emerge after 10-25 generations when plots of the memetic
algorithm are closer to the optimal solution than those of the canonical approach, which
agrees with the claim of faster Memetic CoEvoMOG convergence.
Detailed numerical results after every 5 generations for each of the tested MOGs, in
terms of average, minimum, maximum and standard deviation of IGD values are presented
[Figure 7 contains eight convergence plots, one per benchmark: Rosenbrock 2D, Rosenbrock 3D, Rastrigin 1D, Rastrigin 2D, Rastrigin 3D, Griewank 1D, Griewank 2D and Griewank 3D.]
Figure 7: Convergence comparison between Canonical and Memetic Co-evolutionary Algorithms for tug-of-war MOG variants with different synthetic functions φ.
in the Appendix (Tables 1–9). It can be observed that not only is the convergence faster,
but also the final results obtained after 100 generations are better in the case of Memetic
Ackley 2D
Figure 8: Convergence comparison between Canonical and Memetic Co-evolutionary Algorithms for the
tug-of-war MOG variant where φ = Ackley 2D.
CoEvoMOG. Moreover, the Memetic CoEvoMOG algorithm appears to be more stable: standard deviation values are lower in most of the cases. All results are confirmed as
statistically significant by the Wilcoxon Signed-Rank Test at the 0.05 significance level. The numbers
of exact function evaluations were identical for both methods.
Slower convergence of the Canonical CoEvoMOG algorithm is hypothesized to be caused
by the existence of the Red Queen effect described in Section 1. In most cases, after a few
generations of steep decrease of the IGD value, it is observed that the IGD value tends to
rise for a brief period of time in the canonical case (as revealed in Figures 7 and 8). This
surprising observation may be due to the continuous adaptations of subpopulations to the
evolutions of each other, even though the overall performance may be far from the optimum
(much like the trees in the forest as discussed in the introduction). In this respect, the local
search steps included in the Memetic CoEvoMOG algorithm are seen to provide a guiding
hand to the purely genetic mechanisms, thereby suppressing the Red Queen effect to a large
extent and accelerating the convergence characteristics of the proposed algorithm.
The above promising experimental results form a strong basis for attempts of solving
more complex, real-life problems which can be represented as MOGs. In particular, multistep decision-making problems and problems characterized by payoffs changing over time
seem to be perfect candidates for further evaluation of the Memetic CoEvoMOG algorithm.
Such problems appear in various practical domains, including planning and decision-making
under uncertainty or in adversarial environments, e.g. in the area of cyber security or
homeland security (notably Security Games [35, 36]).
6. Conclusions
This paper presents a new memetic co-evolutionary approach (Memetic CoEvoMOG) to
finding strategies for multi-objective games under postponed objective preference articulation. The proposed method improves the canonical co-evolutionary model described in [16]
by suppressing the Red Queen effect via the guiding light of lifelong learning. In particular,
for ensuring the computational viability of lifelong learning in competitive multi-objective
[Figure 9 contains twelve snapshot plots: Canonical CoEvoMOG at generations 1, 10, 15, 20, 25 and 50, and Memetic CoEvoMOG at generations 1, 10, 15, 20, 25 and 50.]
Figure 9: The performance of Canonical CoEvoMOG and Memetic CoEvoMOG algorithms on the tug-of-war
MOG variant with φ = Rastrigin 1D, after 1, 10, 15, 20, 25 and 50 generations.
[Figure 10 contains twelve snapshot plots: Canonical CoEvoMOG at generations 1, 10, 15, 20, 25 and 50, and Memetic CoEvoMOG at generations 1, 10, 15, 20, 25 and 50.]
Figure 10: The performance of Canonical CoEvoMOG and Memetic CoEvoMOG algorithms on the tug-of-war MOG variant with φ = Ackley 2D, after 1, 10, 15, 20, 25 and 50 generations.
game settings, we incorporate an approach that reduces sets of payoff vectors in objective
space to a single representative point without disrupting the postponed preference articulation condition. Thereafter, a surrogate model of the representative point is built, which
allows the local improvements to be carried out efficiently by searching on the surrogate landscape. The reliability and effectiveness of our method are experimentally validated on a suite
of testing MOG variants. It is demonstrated in the paper that incorporation of memetics
improves the convergence characteristics and leads to better solutions in comparison with
the canonical co-evolutionary algorithm. Consequently, in the proposed method the total
number of function evaluations can be reduced with no harm to the overall quality of resultant strategies. Such time savings are of special importance in the case of time-sensitive
and/or computationally expensive MOGs appearing in real-life applications.
Our future research activities shall be concentrated on building upon the current foundations of the Memetic CoEvoMOG algorithms, and extending to several complex multi-step
decision-making problems of practical relevance, with particular focus on domains of cyber
security and Security Games.
7. Acknowledgment
The second author of the paper, Dr. Abhishek Gupta, would like to extend his sincere
gratitude to Prof. Amiram Moshaiov, Tel-Aviv University, for valuable discussions on the
topic of MOGs that set the foundation for the present work. This work was partly developed
while the third author, Prof. Jacek Mańdziuk, was on leave at the School of Computer
Science and Engineering, Nanyang Technological University, Singapore.
References
[1] D. Blackwell, An analog of the minimax theorem for vector payoffs., Pacific J. Math. 6 (1) (1956) 1–8.
[2] B. Contini, A decision model under uncertainty with multiple objectives, Theory of Games: Techniques
and Applications. Elsevier, New York.
[3] E. Eisenstadt, A. Moshaiov, Novel solution approach for multi-objective attack-defense cyber games
with unknown utilities of the opponent, IEEE Transactions on Emerging Topics in Computational
Intelligence 1 (1) (2017) 16–26.
[4] M. Brown, B. An, C. Kiekintveld, F. Ordóñez, M. Tambe, An extended study on multi-objective
security games, Autonomous Agents and Multi-Agent Systems 28 (1) (2014) 31–71.
[5] D. Lozovanu, D. Solomon, A. Zelikovsky, Multiobjective games and determining pareto-nash equilibria,
Buletinul Academiei de Ştiinţe a Republicii Moldova. Matematica 3 (2005) 115–122.
[6] M. Zeleny, Games with multiple payoffs, International Journal of game theory 4 (4) (1975) 179–191.
[7] M. Sakawa, Genetic Algorithms and Fuzzy Multiobjective Optimization, Kluwer Academic Publishers,
Norwell, MA, USA, 2002.
[8] L. S. Shapley, F. D. Rigby, Equilibrium points in games with vector payoffs, Naval Research Logistics
(NRL) 6 (1) (1959) 57–61.
[9] T. Liu, L. Jiao, W. Ma, J. Ma, R. Shang, A new quantum-behaved particle swarm optimization based on
cultural evolution mechanism for multiobjective problems, Knowledge-Based Systems 101 (Supplement
C) (2016) 90 – 99. doi:https://doi.org/10.1016/j.knosys.2016.03.009.
[10] S. Mirjalili, P. Jangir, S. Z. Mirjalili, S. Saremi, I. N. Trivedi, Optimization of problems with multiple
objectives using the multi-verse optimization algorithm, Knowledge-Based Systems 134 (Supplement
C) (2017) 50 – 71. doi:https://doi.org/10.1016/j.knosys.2017.07.018.
[11] M. Gholamian, S. F. Ghomi, M. Ghazanfari, A hybrid system for multiobjective problems A case
study in NP-hard problems, Knowledge-Based Systems 20 (4) (2007) 426 – 436. doi:https://doi.
org/10.1016/j.knosys.2006.06.007.
[12] E. Eisenstadt, G. Avigad, A. Moshaiov, J. Branke, Multi-objective zero-sum games with postponed
objective preferences, Tech. rep., School of Mechanical Engineering, Tel-Aviv University (2014).
[13] E. Eisenstadt, A. Moshaiov, G. Avigad, J. Branke, Rationalizable strategies in multi-objective games
under undecided objective preferences, Tech. rep., School of Mechanical Engineering, Tel-Aviv University (2016).
[14] Y. Shi, R. A. Krohling, Co-evolutionary particle swarm optimization to solve min-max problems, in:
Evolutionary Computation, 2002. CEC ’02. Proceedings of the 2002 Congress on, Vol. 2, 2002, pp.
1682–1687.
[15] F. Fabris, R. A. Krohling, A co-evolutionary differential evolution algorithm for solving min-max optimization problems implemented on gpu using c-cuda, Expert Syst. Appl. 39 (12) (2012) 10324–10333.
[16] E. Eisenstadt, A. Moshaiov, G. Avigad, Co-evolution of strategies for multi-objective games under
postponed objective preferences, in: 2015 IEEE Conference on Computational Intelligence and Games
(CIG), 2015, pp. 461–468.
[17] L. Van Valen, A new evolutionary law, Evolutionary theory 1 (1973) 1–30.
[18] F. Heylighen, The red queen principle (1993).
URL http://cleamc11.vub.ac.be/REDQUEEN.html
[19] D. Cliff, G. F. Miller, Tracking the red queen: Measurements of adaptive progress in co-evolutionary
simulations, Springer Berlin Heidelberg, Berlin, Heidelberg, 1995.
[20] Y. S. Ong, M. H. Lim, X. S. Chen, Research frontier: Memetic computation - past, present & future,
IEEE Computational Intelligence Magazine 5 (2) (2010) 24–36.
[21] X. S. Chen, Y. S. Ong, M. H. Lim, K. C. Tan, A multi-facet survey on memetic computation, IEEE
Transactions on Evolutionary Computation 15 (5) (2011) 591–607.
[22] F. Neri, C. Cotta, P. Moscato (Eds.), Handbook of Memetic Algorithms, Vol. 379 of Studies in Computational Intelligence, Springer, 2012.
[23] F. Neri, C. Cotta, Memetic algorithms and memetic computing optimization: A literature review,
Swarm and Evolutionary Computation 2 (2012) 1–14.
[24] C. K. Goh, D. Lim, L. Ma, Y.-S. Ong, P. S. Dutta, A surrogate-assisted memetic co-evolutionary
algorithm for expensive constrained optimization problems, in: Evolutionary Computation (CEC),
2011 IEEE Congress on, IEEE, 2011, pp. 744–749.
[25] D. Lim, Y. Jin, Y.-S. Ong, B. Sendhoff, Generalizing surrogate-assisted evolutionary computation,
IEEE Transactions on Evolutionary Computation 14 (3) (2010) 329–355.
[26] G. Avigad, E. Eisenstadt, A. Goldvard, Pareto layer: Its formulation and search by way of evolutionary
multi-objective optimization, Engineering Optimization 42 (5) (2010) 453–470.
[27] G. Avigad, E. Eisenstadt, V. Y. Glizer, Evolving a pareto front for an optimal bi-objective robust
interception problem with imperfect information, EVOLVE-A Bridge between Probability, Set Oriented
Numerics, and Evolutionary Computation II (2013) 121–135.
[28] K. Deb, D. Kalyanmoy, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley &
Sons, Inc., New York, NY, USA, 2001.
[29] Y. Jin, Surrogate-assisted evolutionary computation: Recent advances and future challenges, Swarm
and Evolutionary Computation 1 (2) (2011) 61–70.
[30] I. Loshchilov, Surrogate-assisted evolutionary algorithms, Ph.D. thesis, Université Paris Sud-Paris XI;
Institut national de recherche en informatique et en automatique-INRIA (2013).
[31] Y. S. Ong, A. J. Keane, Meta-lamarckian learning in memetic algorithms, IEEE Transactions on
Evolutionary Computation 8 (2) (2004) 99–110.
[32] M. Molga, C. Smutnicki, Test functions for optimization needs, Tech. rep., Wroclaw university of science
and technology (2013).
[33] R. B. Agrawal, K. Deb, R. Agrawal, Simulated binary crossover for continuous search space, Complex
systems 9 (2) (1995) 115–148.
[34] K. Deb, S. Agrawal, A niched-penalty approach for constraint handling in genetic algorithms, in:
Proceedings of the international conference on artificial neural networks and genetic algorithms
(ICANNGA-99), 1999, pp. 235–243.
[35] J. Karwowski, J. Mańdziuk, A new approach to security games, in: ICAISC’2015, Vol. 9120 of Lecture
Notes in Artificial Intelligence, Springer-Verlag, 2015, pp. 402–411.
[36] J. Karwowski, J. Mańdziuk, Mixed strategy extraction from UCT tree in security games, in: ECAI
2016, IOS Press, 2016, pp. 1746–1747.
Appendix
Generation
5
10
15
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
Avg
65.28
108.77
89.17
72.94
65.48
57.10
50.49
48.01
38.29
37.67
33.65
31.71
31.44
32.01
30.76
30.32
31.04
30.44
29.24
29.45
CanonicalCoEvoMOG
Min
Max Std dev
39.39 133.60
30.45
35.07 249.93
81.80
31.81 262.70
68.31
27.08 244.94
63.69
27.66 208.32
53.05
22.71 203.50
52.95
22.95 198.35
52.39
21.29 176.34
45.65
21.80 128.73
32.03
20.70 102.80
24.04
20.57
76.62
16.01
21.75
74.82
15.75
22.37
65.54
13.45
21.74
68.39
14.49
22.99
64.73
13.26
21.63
64.61
13.16
21.88
58.89
11.87
21.64
57.39
11.35
20.99
60.32
11.61
20.82
58.14
11.36
Avg
54.37
49.92
50.89
43.86
36.77
30.68
28.29
25.72
24.13
23.95
22.75
22.57
22.09
21.58
21.26
20.92
21.09
21.07
20.89
20.96
MemeticCoEvoMOG
Min
Max
Std dev
43.16 110.30
20.66
34.45
58.46
9.44
33.31
77.97
15.41
28.24
95.44
19.50
24.22
68.52
13.28
22.28
38.74
6.32
23.05
36.67
4.88
20.58
41.32
6.01
19.85
39.28
5.72
20.75
36.44
4.75
20.39
32.17
3.59
20.48
30.74
3.41
20.30
26.64
2.13
19.59
24.10
1.40
19.20
23.57
1.13
18.62
22.70
1.25
19.54
22.71
0.98
20.11
22.79
0.76
20.00
21.73
0.55
19.72
22.89
0.91
Table 1: Comparison between results (in terms of IGD) obtained by Canonical and Memetic Co-evolutionary Algorithms based on 20 independent runs of the tug-of-war MOG variant with φ = Rosenbrock 2D.
21
Generation
5
10
15
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
Avg
221.43
139.15
163.41
142.60
156.23
156.85
148.23
133.30
140.77
130.92
131.97
123.37
118.80
121.05
108.20
112.03
110.11
109.76
116.73
112.51
CanonicalCoEvoMOG
Min
Max Std dev
62.10 422.71
121.20
94.28 235.24
40.96
85.68 348.19
77.72
40.23 333.12
85.41
34.67 325.21
83.47
37.83 392.68
99.68
31.43 388.38
98.04
33.33 334.02
91.26
32.21 356.72
97.36
30.93 356.50
95.12
27.86 359.18
98.65
26.57 351.25
95.70
26.21 345.35
93.30
26.22 401.82
110.77
27.95 292.76
81.13
26.12 302.42
82.19
25.34 302.54
81.86
24.38 325.80
85.75
23.13 319.10
84.37
22.08 318.41
84.95
Avg
159.70
127.09
96.67
107.87
88.77
91.32
81.51
74.29
67.55
53.79
47.50
45.11
41.47
39.08
38.25
38.89
36.36
32.38
32.46
31.13
MemeticCoEvoMOG
Min
Max
Std dev
98.87 210.68
30.68
65.20 344.80
80.01
40.44 177.86
41.80
55.31 232.53
56.76
44.01 169.74
42.91
49.53 240.98
55.93
37.89 206.72
50.32
38.13 193.34
44.43
34.48 164.75
37.58
32.19 126.93
27.58
33.15 108.08
23.51
28.85
86.73
16.29
26.04
84.47
16.83
24.33
73.32
15.01
26.62
86.44
17.80
22.26
88.13
20.71
22.45
79.01
17.17
22.31
61.75
12.70
22.90
59.00
11.87
22.93
47.49
7.96
Table 2: Comparison between results (in terms of IGD) obtained by Canonical and Memetic Co-evolutionary Algorithms based on 20 independent runs of the tug-of-war MOG variant with φ = Rosenbrock 3D.
Generation
5
10
15
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
Avg
82.69
115.71
138.51
131.23
124.50
114.05
112.73
109.67
106.52
104.97
104.66
103.91
103.19
103.65
102.98
102.24
102.15
102.29
101.57
101.78
CanonicalCoEvoMOG
Min
Max Std dev
35.23 182.31
51.71
40.70 532.32
149.59
31.11 822.00
241.86
24.76 858.55
257.37
25.75 843.10
253.38
24.59 820.08
248.31
22.83 821.77
249.36
21.45 821.12
250.10
22.63 820.33
250.87
20.45 819.34
251.06
20.80 819.51
251.20
21.29 820.20
251.69
20.91 819.72
251.77
22.21 819.06
251.37
20.52 819.22
251.66
19.00 817.93
251.48
18.99 819.12
251.93
21.44 818.64
251.70
20.84 818.62
251.95
21.07 818.19
251.72
Avg
65.59
57.21
25.37
22.61
21.04
21.37
21.09
21.43
21.20
21.74
20.88
20.99
21.34
20.97
21.38
21.18
20.72
20.95
21.11
21.00
MemeticCoEvoMOG
Min
Max
Std dev
34.88 128.11
36.63
28.04 100.62
26.31
19.83
37.58
5.31
20.40
30.55
3.02
19.59
22.69
1.11
20.13
22.45
0.76
20.27
22.50
0.80
20.36
22.84
0.75
19.02
22.45
1.02
19.67
23.46
0.96
19.91
22.35
0.64
19.35
22.76
1.05
19.87
22.10
0.64
19.91
21.90
0.75
20.20
22.76
0.85
19.12
22.76
1.08
19.22
21.88
0.95
19.94
22.48
0.79
19.84
23.26
1.00
18.74
22.56
1.26
Table 3: Comparison between results (in terms of IGD) obtained by Canonical and Memetic Co-evolutionary Algorithms based on 20 independent runs of the tug-of-war MOG variant with φ = Rastrigin 1D.
22
Generation
5
10
15
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
Avg
267.25
112.35
129.75
130.90
108.05
81.50
64.85
48.32
44.62
36.43
32.11
29.49
28.88
26.92
26.61
26.55
25.01
24.29
23.86
23.72
CanonicalCoEvoMOG
Min
Max Std dev
105.65 401.83
97.01
30.70 320.32
96.15
37.76 424.67
126.87
35.54 492.63
142.76
24.44 374.31
117.24
21.98 306.10
87.68
21.36 195.60
58.29
20.80 121.05
33.20
21.74
97.24
28.69
21.50
76.02
17.43
21.30
60.89
13.70
20.61
50.33
10.26
21.08
46.79
9.63
21.45
39.10
6.44
20.68
40.84
7.53
21.64
42.22
6.31
21.43
33.32
3.94
21.40
28.88
2.93
19.51
30.49
3.28
20.77
29.40
3.01
Avg
167.62
72.66
47.83
58.24
42.87
36.85
44.76
51.40
33.58
41.13
52.31
42.92
50.44
52.59
49.01
36.21
48.01
49.40
46.31
47.22
MemeticCoEvoMOG
Min
Max Std dev
62.95 347.25
79.07
31.69 240.47
69.85
21.72 194.51
52.01
20.34 369.19
109.29
20.19 230.27
65.86
21.13 170.69
47.03
19.40 251.54
72.67
20.40 322.40
95.23
20.11 141.78
38.03
19.03 219.55
62.71
20.09 330.97
97.91
20.11 237.23
68.28
20.18 312.75
92.17
20.12 336.27
99.68
19.74 299.81
88.13
20.23 168.79
46.59
20.05 281.77
82.14
20.32 302.58
88.96
20.26 267.55
77.74
19.79 283.01
82.86
Table 4: Comparison between results (in terms of IGD) obtained by Canonical and Memetic Co-evolutionary Algorithms based on 20 independent runs of the tug-of-war MOG variant with φ = Rastrigin 2D.
Generation
5
10
15
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
Avg
539.35
297.53
246.47
253.62
253.72
239.35
226.37
200.78
208.77
189.19
171.68
161.08
170.20
161.04
170.60
183.36
156.82
170.47
158.73
162.51
CanonicalCoEvoMOG
Min
Max Std dev
264.70 793.37
170.54
109.05 497.63
122.76
47.36 574.07
177.01
45.83 514.96
172.03
43.73 559.89
182.96
32.67 555.56
185.00
24.05 520.98
187.45
25.86 570.16
185.05
24.15 487.22
185.19
23.75 483.84
180.69
23.93 431.40
170.07
23.21 460.39
161.00
22.22 499.57
182.10
23.58 464.79
171.77
23.12 455.10
183.43
22.01 549.62
209.91
22.85 400.55
173.16
21.48 450.96
192.60
21.84 434.21
176.66
20.60 492.29
188.31
Avg
473.73
195.57
164.72
128.32
115.86
103.92
140.85
100.02
101.88
121.58
106.75
119.49
100.48
131.46
129.83
125.35
130.58
118.76
99.18
121.98
MemeticCoEvoMOG
Min
Max Std dev
257.24 708.45
122.06
28.67 559.76
172.42
42.34 483.88
175.39
29.95 538.47
168.63
22.74 404.96
146.92
20.35 362.89
135.21
19.90 585.08
205.77
19.67 359.16
133.79
19.90 381.89
134.46
20.75 616.96
193.50
20.52 350.30
138.17
20.42 568.65
182.07
20.56 452.16
144.64
20.61 488.36
184.60
20.16 427.89
178.37
19.79 560.23
185.42
21.33 566.23
190.19
20.29 446.60
161.69
20.16 340.70
127.85
20.07 433.02
165.82
Table 5: Comparison between results (in terms of IGD) obtained by Canonical and Memetic Co-evolutionary Algorithms based on 20 independent runs of the tug-of-war MOG variant with φ = Rastrigin 3D.
23
Generation
5
10
15
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
Avg
61.50
45.79
35.56
30.16
27.06
25.77
24.12
23.20
23.05
23.39
22.20
22.57
21.98
22.25
21.98
21.94
21.94
22.08
21.66
21.65
CanonicalCoEvoMOG
Min
Max Std dev
28.44 145.25
35.46
22.07 110.52
25.45
21.10
85.59
20.25
22.15
62.02
11.87
21.44
47.33
7.92
21.85
40.16
5.61
20.17
35.10
4.53
20.67
31.75
3.34
20.51
30.91
2.90
21.29
28.48
2.18
20.43
25.15
1.57
21.01
25.76
1.68
20.64
24.91
1.38
20.61
23.89
1.19
20.42
23.34
0.83
19.51
23.72
1.20
20.63
23.16
0.79
20.66
24.07
0.94
20.02
23.56
1.06
20.10
23.28
1.12
Avg
38.47
34.29
22.36
20.61
20.93
20.85
20.38
20.99
20.51
21.15
20.55
20.92
21.93
20.33
21.00
20.96
21.02
20.84
20.39
20.75
MemeticCoEvoMOG
Min
Max Std dev
30.15 83.04
15.51
20.95 46.41
7.90
20.52 28.91
2.58
20.21 23.69
1.02
19.66 22.52
1.03
19.59 22.33
0.83
20.12 22.01
0.67
19.47 22.19
0.92
19.69 23.65
1.36
19.62 21.89
0.89
19.51 22.29
0.93
19.79 22.49
0.88
20.73 22.90
0.75
19.75 23.45
1.16
19.38 22.49
1.00
19.72 21.81
0.71
20.18 22.81
0.85
19.90 21.68
0.59
19.71 22.63
0.98
19.86 21.80
0.69
Table 6: Comparison between results (in terms of IGD) obtained by Canonical and Memetic Co-evolutionary Algorithms based on 20 independent runs of the tug-of-war MOG variant with φ = Griewank 1D.
Generation
5
10
15
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
Avg
44.79
52.15
43.86
38.64
35.14
31.45
28.89
27.76
26.20
24.90
23.89
23.57
23.06
23.00
22.66
22.75
22.56
22.22
23.08
22.79
CanonicalCoEvoMOG
Min
Max Std dev
29.18 72.61
14.84
42.66 69.53
10.57
25.76 74.00
15.32
24.60 64.01
13.00
23.91 58.30
10.39
22.79 47.51
7.35
22.49 44.95
6.76
21.46 45.63
7.11
22.20 35.58
3.93
21.05 33.30
3.43
20.63 32.65
3.35
21.02 31.07
2.94
21.07 27.99
2.06
20.68 27.28
1.89
20.98 26.23
1.64
21.29 23.93
0.79
20.31 25.34
1.61
20.94 23.16
0.84
21.81 24.83
1.23
20.68 24.22
1.13
Avg
44.64
41.19
26.42
22.03
21.33
21.30
21.28
21.42
20.88
21.11
20.72
21.17
21.17
21.05
21.10
21.25
21.01
21.47
20.83
20.78
MemeticCoEvoMOG
Min
Max Std dev
28.02 72.71
13.84
30.52 50.83
6.56
21.14 37.49
4.74
20.17 24.37
1.41
20.24 24.37
1.14
19.44 22.20
0.84
19.82 21.95
0.69
19.47 22.31
0.94
19.87 22.15
0.73
19.47 24.04
1.27
19.76 22.16
0.67
20.11 22.59
0.76
19.45 22.16
0.79
20.10 21.96
0.60
20.27 22.10
0.66
20.72 22.08
0.45
18.76 22.23
1.17
20.02 22.76
0.92
20.19 22.05
0.69
19.84 22.29
0.71
Table 7: Comparison between results (in terms of IGD) obtained by Canonical and Memetic Co-evolutionary Algorithms based on 20 independent runs of the tug-of-war MOG variant with φ = Griewank 2D.
24
Generation
5
10
15
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
Avg
50.06
64.26
44.51
32.81
28.11
25.05
23.85
23.47
23.81
22.87
22.58
22.79
23.85
22.80
22.61
22.06
22.65
22.42
22.36
22.00
CanonicalCoEvoMOG
Min
Max Std dev
31.19
65.34
11.21
34.18 117.82
25.48
26.98
70.69
12.33
26.98
66.00
11.97
21.81
52.40
8.96
21.83
34.17
3.71
20.58
31.20
2.89
20.72
33.44
3.76
20.25
29.19
2.57
20.80
25.21
1.48
21.18
25.99
1.37
21.33
26.38
1.63
21.12
27.83
2.28
20.89
27.65
1.96
20.35
25.07
1.47
19.95
24.24
1.17
20.03
24.90
1.64
21.07
23.86
0.90
20.57
25.39
1.47
20.92
24.34
1.06
Avg
45.12
38.02
25.65
22.74
21.88
21.42
21.62
21.79
21.31
21.97
21.07
21.62
21.28
20.53
21.77
21.18
21.34
21.32
21.39
21.04
MemeticCoEvoMOG
Min
Max Std dev
33.66 75.94
13.48
27.80 55.44
9.10
22.56 34.38
4.38
20.08 26.94
2.03
20.92 25.94
1.50
20.35 23.67
1.16
19.97 22.98
0.92
19.97 23.09
0.96
19.70 21.95
0.74
19.46 23.69
1.11
19.83 21.68
0.62
20.11 22.02
0.72
19.97 22.70
0.82
19.51 22.65
1.02
19.85 22.99
1.03
19.84 23.27
0.98
20.16 22.88
0.86
19.43 23.19
1.19
20.46 22.68
0.67
19.45 22.91
1.09
Table 8: Comparison between results (in terms of IGD) obtained by Canonical and Memetic Co-evolutionary Algorithms based on 20 independent runs of the tug-of-war MOG variant with φ = Griewank 3D.
Generation
5
10
15
20
25
30
35
40
45
50
55
60
65
70
75
80
85
90
95
100
Avg
226.76
199.62
182.56
159.47
137.04
114.79
98.50
94.67
68.56
55.32
52.46
51.33
43.51
39.83
37.38
32.96
33.67
29.98
29.84
27.13
CanonicalCoEvoMOG
Min
Max Std dev
99.59 383.74
95.09
42.24 409.33
129.43
48.71 398.80
106.92
39.99 427.36
116.25
32.40 336.20
93.39
30.98 283.01
83.79
27.48 203.53
63.03
25.11 261.62
72.15
23.42 145.00
41.81
22.41 122.20
32.63
21.91
93.02
26.93
20.29 118.80
30.57
21.67
76.99
20.07
22.65
82.61
19.24
21.74
96.71
23.09
21.08
71.85
15.29
21.33
67.30
15.18
21.45
55.10
10.64
21.26
64.97
13.10
22.15
44.12
6.91
Avg
126.66
48.78
31.29
23.55
21.37
21.12
21.16
20.87
20.83
20.62
21.14
21.10
21.07
20.86
20.92
20.63
21.08
21.17
20.82
21.00
MemeticCoEvoMOG
Min
Max
Std dev
37.18 322.69
85.99
31.50
68.11
12.33
22.81
41.76
5.78
20.76
27.76
2.19
19.29
22.29
0.89
20.38
23.07
0.81
20.16
22.22
0.79
19.95
22.16
0.68
19.03
22.55
1.29
19.56
22.34
0.96
20.06
22.37
0.83
19.50
22.20
0.95
19.64
22.31
0.85
19.17
21.67
0.88
20.09
22.59
0.75
18.94
21.56
0.98
19.11
22.42
0.98
19.63
23.59
1.36
18.95
22.32
1.00
19.98
22.07
0.61
Table 9: Comparison between results (in terms of IGD) obtained by Canonical and Memetic Co-evolutionary Algorithms based on 20 independent runs of the tug-of-war MOG variant with φ = Ackley 2D.
Algorithms for Scheduling Malleable Tasks
arXiv:1501.04343v8 [cs.DC] 4 Feb 2018
Xiaohu Wu, and Patrick Loiseau
Abstract—Due to the ubiquity of batch data processing in cloud computing, the related problems of scheduling malleable batch tasks
have received significant attention recently. In this paper, we consider a fundamental model where a set of n tasks is to be processed
on C identical machines and each task is specified by a value, a workload, a deadline and a parallelism bound. Within the parallelism
bound, the number of machines assigned to a task can vary over time without affecting its workload. For this model, we first give two
core results: the definition of an optimal state under which multiple machines could be utilized by a set of tasks with hard deadlines,
and, an algorithm achieving such a state. The optimal utilization state plays a key role in the design and analysis of scheduling
algorithms (i) when several typical objectives are considered, such as social welfare maximization, machine minimization, and
minimizing the maximum weighted completion time, and, (ii) when the algorithmic design techniques such as greedy and dynamic
programming are applied to the social welfare maximization problem. As a result, we give four new or improved algorithms for the
above problems.
1 INTRODUCTION
Cloud computing has become the norm for a wide range
of applications and batch processing constitutes the most
significant computing paradigm [1]. Applications such as
web search index update, Monte Carlo simulations and big-data analytics require executing a new type of parallel tasks
on clusters, termed malleable tasks. Two basic features of
malleable tasks are about workload and parallelism bound.
There are multiple machines, and, throughout the execution,
the number of machines assigned to a task can vary over
time within the parallelism bound but its workload is not
affected by the number of used machines [2], [3]. Beyond
understanding how to schedule the fundamental batch task
model, many efforts are also devoted to its online version
[4], [5], [6] and its extension in which each task contains
several subtasks with precedence constraints [7], [8]. In
practice, for better efficiency, companies such as IBM have
integrated these smarter scheduling algorithms for various
time metrics [8] (than the popular dominant resource fairness strategy) into their batch processing platforms [9].
In scheduling theory, the above malleable task model can
be viewed as an extension of the classic model of scheduling
preemptive tasks on a single or multiple machines where
the parallelism bound is one [10], [11]. When each task
has to be completed by some deadline, the results from
the special single machine case have already implied that
the state of optimally utilizing machines plays a key role in
the design and analysis of scheduling algorithms under several objectives [11]. In particular, the famous EDF (Earliest
Deadline First) rule can achieve an optimal schedule for the
single machine case. It is initially designed so as to find an
exact algorithm for scheduling batch tasks to minimize the
maximum task lateness (i.e., task’s completion time minus
due date) [12]. So far, numerous applications of this rule
•
•
Xiaohu Wu is with Fondazione Bruno Kessler, Trento, Italy.
E-mail: xiaohuwu@fbk.eu
Patrick Loiseau is with Univ. Grenoble Alpes, LIG, France and MPI-SWS,
Germany. E-mail: patrick.loiseau@univ-grenoble-alpes.fr
Manuscript received April 19, 2005; revised August 26, 2015.
have been found, e.g., (i) to design exact algorithms for the
extended model with release times [13] and for scheduling
tasks with deadlines (and release times) to minimize the
total weighted number of tardy tasks [14], and (ii) as a
significant principle in the analysis of scheduling feasibility
for real-time systems [15].
Similarly, we are convinced that, as far as malleable
tasks are concerned, achieving such an optimal resource
utilization state is also very important for designing and
analyzing scheduling algorithms (i) under various objectives, or (ii) when different algorithmic design techniques
such as greedy and dynamic programming are applied. The
intuition for this is that, if the utilization state was not
optimal in an algorithm, its performance could be improved
by utilizing the machines optimally to allow more tasks
to be completed. All these considerations motivate us to
develop the theoretical framework proposed in this paper.
Before this paper, a greedy algorithm was proposed in [3] that achieves a performance guarantee (C − k)/C · (s − 1)/s; here,
C is the number of machines, k is the maximum parallelism
bound of all tasks, s is the minimum slackness of all tasks
where each task’s slackness is defined to be the ratio of
its deadline to its minimum execution time, which is the
time when a task is always allocated the maximum number
of machines during the execution. k is a system parameter
and is assumed to be finite [16]. Intuitively, s characterizes
the resource allocation urgency (e.g., s = 1 means that the
maximum amount of machines have to be allocated to a task
at every time slot to meet its deadline).
1.1 Our Results
Core result (Section 3). The core result of this paper is to
identify a sufficient and necessary condition under which a
set of independent malleable tasks could be all completed by
their deadlines on C machines, also referred to as boundary
condition in this paper.
In particular, by understanding the basic constraints of
malleable tasks, we first identify and formally define a state
in which C machines can be said to be optimally utilized by
a set of tasks with deadlines in terms of resource utilization.
Then, we propose an optimal scheduling algorithm LDF(S )
(Latest Deadline First) that achieves such an optimal state.
The LDF(S ) algorithm has a polynomial time complexity
of O(n2 ) and is different from the EDF algorithm that
gives an optimal schedule in the single-machine case. Here,
the maximum deadline of tasks is assumed to be finitely
bounded by a constant.
Applications (Sections 4 and 5). The above core results
have several applications to propose new or improved
algorithmic design and analysis for scheduling malleable
tasks under different objectives. The scheduling objectives
considered in this paper include:
(a) social welfare maximization: maximize the sum of values of tasks completed by their deadlines;
(b) machine minimization: minimize the number of machines needed to produce a feasible schedule for a set of tasks such that each task is completed by its deadline;
(c) maximum weighted completion time minimization: minimize the maximum weighted completion time of tasks.
Here, the first and second objectives above have been considered in [2], [3], [8]. The second objective that concerns
the optimal utilization of machines has been considered for
other types of tasks [17] but we are the first to consider it for
malleable tasks. After applying the core results above, we
obtain the following algorithmic results:
(i) an improved greedy algorithm GreedyRLM with a performance guarantee (s − 1)/s for social welfare maximization, with a time complexity of O(n^2);
(ii) the first exact dynamic programming algorithm for social welfare maximization, with a pseudo-polynomial time complexity of O(max{n·d^L·C^L, n^2}), where L is the number of deadlines, and D and d are the maximum workload and deadline of tasks;
(iii) the first exact algorithm for machine minimization, with a time complexity of O(n^2, Ln log n);
(iv) a polynomial time (1 + ε)-approximation algorithm for maximum weighted completion time minimization.
In the greedy algorithm of [3], the tasks are considered
in the non-decreasing order of their marginal values of tasks
(i.e., the ratio of a task’s value to its size), and only if a
task could be fully completed by its deadline according to
the currently remaining resource, it will be accepted and
allocated possibly different number of machines over time
according to an allocation algorithm; otherwise, it will be
rejected. In this paper, we also show that
• for social welfare maximization, (s − 1)/s is the best possible performance guarantee that a class of greedy algorithms could achieve where they consider tasks in the non-increasing order of their marginal values;
• as a result, the proposed greedy algorithm of this paper is the best possible among this kind of greedy algorithms.
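The acceptance scheme of this class of greedy algorithms can be sketched as follows; `is_feasible` stands for an abstract feasibility test (e.g., the boundary condition developed in Section 3), and the code is our own illustration rather than the exact GreedyRTL/GreedyRLM pseudocode:

```python
def greedy_by_marginal_value(tasks, is_feasible):
    """Consider tasks in non-increasing order of marginal value v_i / D_i; accept a
    task only if the accepted set plus this task still admits a feasible schedule
    (all accepted tasks completed by their deadlines), otherwise reject it."""
    accepted = []
    for task in sorted(tasks, key=lambda t: t["v"] / t["D"], reverse=True):
        if is_feasible(accepted + [task]):
            accepted.append(task)
    return accepted   # the social welfare obtained is sum(t["v"] for t in accepted)
```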
The second algorithm for social welfare maximization
can work efficiently when L is small since its time complexity is exponential in L. However, this may be reasonable
in a machine scheduling context. In scenarios like [7], tasks
are often scheduled periodically, e.g., on an hourly or daily
basis, and many tasks have a relatively soft deadline (e.g.,
finishing after four hours instead of three will not trigger a financial penalty). Then, the scheduler can negotiate
with the tasks and select an appropriate set of deadlines
{τ1 , τ2 , · · · , τL }, thereafter rounding the deadline of a task
down to the closest τi (1 ≤ i ≤ L). By reducing L, this could
permit to use the dynamic programming (DP) algorithm
rather than GreedyRLM in the case where the slackness s
is close to 1. With s close to 1, the approximation ratio of
GreedyRLM approaches 0 and possibly little social welfare
is obtained by adopting GreedyRLM while the DP algorithm
can still obtain the almost optimal social welfare.
Technical Difference. The second algorithm can be viewed
as an extension of the pseudo-polynomial time exact algorithm in the single machine case [10] that is also designed
via the generic dynamic programming procedure. However,
before our work, how to enable this extension to malleable
tasks was not clear as indicated in [2], [3]. This is mainly due
to the lack of a notion of the optimal state of machines being
utilized by malleable tasks with deadlines and the lack of an
algorithm that achieves such a state. In contrast, the optimal
state in the single machine case can be defined much more
easily and achieved by the EDF algorithm. The core results
of this paper are the enabler of a DP algorithm.
The way of applying the core results to a greedy algorithm is less obvious since in the single machine case there
is no corresponding algorithm to hint its role in the algorithmic design. For the above class of greedy algorithms,
we manage to give a new algorithm analysis, figuring out
what resource allocation features of tasks can benefit and
determine the algorithm’s performance. This analysis is an
extended analysis of the greedy algorithm for the standard
knapsack problem [22] and it does not rely on the dualfitting technique, on which the algorithm in [3] is built.
Here, the problem could be viewed as an extension of
the knapsack problem where each item has two additional
constraints in a two-dimensional space: a (time) window
in which an item could be placed and a maximum width
of the space that it could utilize at every moment. Two of
the most important algorithms there are either based on the
DP technique or of greedy type, that also considers items
by their marginal values [22]; we give in this paper their
counterparts in the scenario of malleable tasks.
In the construction of the greedy and optimal scheduling
algorithms, we are inspired by the algorithm in [3]. After
our definition of the optimal state and a new analysis of the
above class of greedy algorithms, we found that the algorithm in [3] could achieve an optimal resource utilization
state from the maximum deadline of tasks d to some earlier
time slot t. However, this is achieved by guaranteeing the
existence of a time slot t0 earlier than t such that the number
of available machines at t0 is ≥ k, which leads to a suboptimal
utilization of resources. In our algorithm, we only require t0
to be such that the number of available machines at t0 is ≥ 1,
which leads to an optimal resource utilization. More details
could be found in the remarks of Section 3.2.
The above third and fourth algorithms are obtained
by respectively applying the above core result to a binary
search procedure, and the related results in [8].
1.2 Related works
Now, we introduce the related works. The linear programming approaches to designing and analyzing algorithms for the task model of this paper [2], [3] and its
variants [4], [5], [7] have been well studied1 . All these
works consider the same objective of maximizing the social welfare. In [2], Jain et al. proposed an algorithm with an approximation ratio of (1 + C/(C − k))(1 + ε) via deterministic rounding of linear programming. Subsequently, Jain et al. [3] proposed a greedy algorithm GreedyRTL and used the dual-fitting technique to derive an approximation ratio (C − k)/C · (s − 1)/s. In [7], Bodik et al. considered an extension of
our task model, i.e., DAG-structured malleable tasks, and, based on randomized rounding of linear programming, they proposed an algorithm with an expected approximation ratio of α(λ) for every λ > 0, where α(λ) = (1/λ) · e^(−1/λ) · [1 − e^(−((1−1/λ)C−k) · ln λ · (1−k/C)/(2ωκ))]. The online version of our task model is considered in [4], [5]; again based on the dual-fitting technique, two weighted greedy algorithms are proposed respectively for non-committed and committed scheduling and achieve the competitive ratios of cr_A = 2 + O(1/(∛s − 1)^2) where s > 1 [3], and cr_A(s·ω(1−ω))/(ω(1−ω)) where ω ∈ (0, 1) and s > 1/(ω(1−ω)).
In addition, Nagarajan et al. [8] considered DAG-structured malleable tasks and propose two algorithms with
approximation ratios of 6 and 2 respectively for the objectives of minimizing the total weighted completion time and
the maximum weighted lateness of tasks. Nagarajan et al.
showed that optimally scheduling deadline-sensitive malleable
tasks in terms of resource utilization is a key to the solutions to
scheduling for their objectives. In particular, seeking a schedule
for DAG tasks can be transformed into seeking a schedule
for tasks with simpler chain-precedence constraints; then
whenever there is a feasible schedule to complete a set of
tasks by their deadlines, Nagarajan et al. proposed a nonoptimal algorithm where each task is completed by at most
2 times its deadline and give two procedures to obtain nearoptimal completion times of tasks in terms of the above two
objectives.
Technically, the works [2], [3], [4], [5], [7] formulate
their problem as an Integer Program (IP) and relax the IP
to a relaxed linear program (LP). The techniques in [2],
[7] require to solve the LP to obtain a fractional optimal
solution and then manage to round the fractional solution
to an integer solution of the IP that corresponds to an
approximate solution to their original problem. In [3], [4],
[5], the dual fitting technique first finds the dual of the LP
and then construct a feasible algorithmic solution X to the
dual in some greedy way. This solution corresponds to a
feasible solution Y to their original problems, and, due to
the weak duality, the value of the dual under the solution X
(expressed in the form of the value under Y multiplied by
a parameter α ≥ 1) will be an upper bound of the optimal
value of the IP, i.e., the optimal value that can be achieved
in the original problem. Therefore, the approximation ratio of the algorithm involved in the dual becomes clearly 1/α. Here, the approximation ratio is a lower bound of the ratio of the actual value obtained by the algorithm to the optimal value.
1. We refer readers to [11], [18] for more details on the general techniques to design scheduling algorithms.
TABLE 1
Main Notation
C: the total number of machines
T: a set of tasks to be scheduled on C machines
Ti: a task in T
Di, di, vi: the workload, deadline, and value of a task Ti
ki: the parallelism bound of Ti, i.e., the maximum number of machines that can be allocated to and utilized by Ti simultaneously
yi(t): the number of machines allocated to Ti at a time slot t, where yi(t) ∈ {0, 1, · · · , ki}; all yi(t) are set to 0 initially
W(t): the total number of machines that are allocated out to the tasks at t, i.e., W(t) = Σ_{Ti∈T} yi(t)
W̄(t): the total number of machines idle at t, i.e., W̄(t) = C − W(t)
leni: the minimum execution time of Ti when Ti is allocated ki machines in the entire execution process, i.e., leni = ⌈Di/ki⌉
si: the slackness of a task, i.e., di/leni, measuring the urgency of machine allocation to complete Ti by the deadline
s: the minimum slackness of all tasks of T, i.e., min_{Ti∈T} si
d, D: the maximum deadline and workload of all tasks of T, i.e., d = max_{Ti∈T} di and D = max_{Ti∈T} Di
vi′: the marginal value of Ti, i.e., vi′ = vi/Di
{τ1, · · · , τL}: the set of the deadlines di of all tasks Ti of T, where 0 = τ0 < τ1 < · · · < τL = d
Di: all the tasks {Ti,1, Ti,2, · · · , Ti,ni} of T that have a deadline τi, 1 ≤ i ≤ L
A part of results of this paper appeared at the Allerton
conference in the year 2015 [19], [20]. Following [19], [20],
a recent work also gave a similar (sufficient and necessary)
feasibility condition to determine whether a set of malleable
tasks could be completed by their deadlines and showed
that such a condition is central to the application of the
LP technique to the three problems of this paper: greedy
and exact algorithms for social welfare maximization and an
exact algorithm for machine minimization. Guo & Shen first
used the LP technique to give a new proof of this feasibility
condition in the core result. Based on this condition, the
authors gave a new formulation of the original problems as
IP programs, different from the ones in [2], [3]. This new
formulation enables from a different perspective proposing
almost the same algorithmic results as this paper, e.g., for
the machine minimization problem an exact algorithm with
a time complexity O((n + d)^3.5 · Ls · (log n + log k)), and for the social welfare maximization problem an exact algorithm with a complexity O(n · (C·d)^d), where Ls is the length of the LP's input. In addition, we have shown that the best performance guarantee is (s − 1)/s when a greedy algorithm considers tasks in the non-increasing order of their marginal values. Guo & Shen also considered another standard to determine the order of tasks, and proposed a greedy algorithm with a performance guarantee (C − k)/C and a complexity O(n^2 + nd).
2 MODEL AND PROBLEM DESCRIPTION
There are C identical machines and a set of n tasks
T = {T1 , T2 , · · · , Tn }. The task Ti is specified by several
characteristics: (1) value vi , (2) demand (or workload) Di , (3)
deadline di , and (4) parallelism bound ki . Time is discrete and
the time horizon is divided into d time slots: {1, 2, · · · , d},
where d = maxTi ∈T di and the length of each slot may
be a fixed number of minutes. A task Ti can only utilize
the machines located in time slot interval [1, di ]. The parallelism bound ki limits that, at any time slot t, Ti can
be executed on at most ki machines simultaneously. Let
k = maxTi ∈T ki be the maximum parallelism bound; here,
ki is a system parameter and k is therefore assumed to be
finite [16]. An allocation of machines to a task Ti is a function
yi : [1, di ] → {0, 1, 2, · · · , ki }, where yi (t) is the number of
machines allocated to task Ti at a time slot t ∈ [1, di ]. In this
model, Di , di ∈ Z + for all Ti ∈ T .
For the system of C machines, denote by W(t) = Σ_{Ti∈T} yi(t) the system's workload at time slot t; and by W̄(t) = C − W(t) its complementary, i.e., the amount of available machines at time t. We say that time t is fully utilized if W̄(t) = 0, and is not fully utilized if W̄(t) > 0.
In addition, we assume that the maximum deadline of
tasks is bounded. Given the model above, the following
three scheduling objectives are considered separately in this
paper:
• The first objective is social welfare maximization: it aims to choose a subset S ⊆ T and produce a feasible schedule for S so as to maximize the social welfare Σ_{Ti∈S} vi (i.e., the sum of values of tasks completed by deadlines); here, the value vi of a task Ti is gained if and only if it is fully allocated by the deadline, i.e., Σ_{t≤di} yi(t) ≥ Di, and partial execution of a task yields no value.
• The second objective is machine minimization, i.e., seeking the minimum number of machines needed to produce a feasible schedule of T such that the tasks' parallelism bound and deadline constraints are not violated.
• The third objective is to minimize the maximum weighted lateness of tasks, i.e., minimize max_{Ti∈T} {vi · (ti − di)}, where ti is the completion time of a task Ti.
Furthermore, we denote by [l] and [l]+ the sets {0, 1, · · · , l}
and {1, 2, · · · , l} for a positive integer l. Let leni = ⌈Di/ki⌉ denote the minimum execution time of Ti. Define by si = di/leni the slackness of Ti, measuring the urgency of machine allocation (e.g., si = 1 may mean that Ti should be allocated the maximum amount of machines ki at every t ∈ [1, di]) and let s = min_{Ti∈T} si be the slackness of the least flexible task (s ≥ 1). Denote by vi′ = vi/Di the marginal value of task Ti, i.e., the value obtained by the system per unit of demand. We assume that the demand of each task is an integer. Let D = max_{Ti∈T} {Di} be the demand of the largest task. Given a set of tasks T, the deadlines di of all tasks Ti ∈ T constitute a finite set {τ1, τ2, · · · , τL}, where L ≤ n, τ1, · · · , τL ∈ Z+, and 0 = τ0 < · · · < τL = d. Let Di = {Ti,1, Ti,2, · · · , Ti,ni} denote the set of tasks with deadline τi, where Σ_{i=1}^{L} ni = n (i ∈ [L]+).
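As an illustration of this notation, the derived per-task quantities can be computed as in the sketch below; the `Task` class and field names are ours and not part of the paper's formalism:

```python
import math
from dataclasses import dataclass

@dataclass
class Task:
    v: float   # value v_i
    D: int     # workload (demand) D_i
    d: int     # deadline d_i
    k: int     # parallelism bound k_i

    @property
    def len_min(self) -> int:
        # minimum execution time len_i = ceil(D_i / k_i)
        return math.ceil(self.D / self.k)

    @property
    def slackness(self) -> float:
        # s_i = d_i / len_i
        return self.d / self.len_min

    @property
    def marginal_value(self) -> float:
        # v_i' = v_i / D_i
        return self.v / self.D

tasks = [Task(v=10, D=6, d=4, k=2), Task(v=3, D=2, d=3, k=1)]
s = min(t.slackness for t in tasks)   # minimum slackness of the instance (here 4/3)
```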
The notation of this section is used in the entire paper
and summarized in Table 1. Throughout this paper, we use
i, j , m, l, or m0 as subscripts to index the element of different
sets such as tasks and use t or t to index a time slot.
3 OPTIMAL SCHEDULE
In this section, we identify a state under which C machines can be said to be optimally utilized by a set of tasks.
We then propose a scheduling algorithm that achieves such
an optimal state. Besides Table 1, the additional notation to
be used in this section is summarized in Table 2.
3.1 Optimal Resource Utilization State
In this paper, all tasks are denoted by a set T , and we
denote by S ⊆ T an arbitrary subset of T ; all tasks of T
with a deadline τl are denoted by Dl and we denote by
Sl = S ∩Dl all tasks of S with a deadline τl (l ∈ [L]+ ). In this
subsection, we define the maximum amount of workload of
S that could be processed in a fixed time interval [τm +1, τL ]
on C machines for all m ∈ [L − 1], where τL = d, i.e., the
maximum deadline of tasks.
Fig. 1. The green areas denote the maximum demand of Ti that need
or could be processed in [τL−m + 1, τL ].
We first define the maximum amount of resource, denoted
by λm (S), that could be utilized by S in [τL−m + 1, τL ] in
an idealized case where there is an indefinite number of
machines, i.e., C = ∞, for all m ∈ [L]+ . To define this, we
clarify the maximum amount of resource that an individual
task Ti can utilize in [τL−m + 1, τL ]. The basic constraints of
malleable tasks with deadlines imply that:
•
•
the deadline of Ti limits that Ti can only utilize the
machines in [1, di ], and
the parallelism bound limits that Ti can only utilize
at most ki machines simultaneously at every time
slot.
The tasks with di ≤ τL−m cannot be executed in the interval
[τL−m + 1, τL ]. Let us consider a task Ti with di ∈ [τL−m +
1, τL ]. The number of time slots available in [τL−m + 1, di ] is
di − τL−m in the discrete case, and, also recall that leni the
(minimum) execution time of Ti when it always utilizes the
maximum number ki of machines throughout the execution.
In the illustrative Fig. 1, the green area in the left (resp. right)
subfigure denotes the maximum demand of a task, i.e., Di
(resp. ki · (di − τL−m )), that could or need be processed in
[τL−m + 1, τL ] in the case where the minimum execution
time is such that leni ≤ di − τL−m (resp. leni > di − τL−m ).
As a consequence of the observation above, λm (S)
equals the sum of the maximum workload of every task
in S that could be executed in [τL−m + 1, τL] and is defined as
follows.
Definition 1. Initially, set λm(S) to zero for all m ∈ [L]. In the case where C = ∞ (i.e., the capacity constraint is ignored), for all m ∈ [L]+, λm(S) is defined as follows:
λm(S) ← λm(S) + βi, for every task Ti ∈ S,
where βi is such that
• if Ti ∈ S1 ∪ · · · ∪ SL−m where di ≤ τL−m, βi ← 0;
• if Ti ∈ SL−m+1 ∪ · · · ∪ SL where di ≥ τL−m + 1, as illustrated in Figure 1,
  – in the case that leni ≤ di − τL−m, βi ← Di;
  – otherwise, βi ← ki · (di − τL−m).
Here, βi represents the maximum workload of a task Ti that could be executed in [τL−m + 1, τL].
Built on Definition 1, we move to the case where C is finite and define the maximum amount of resource λ_m^C(S) that can be utilized by S on C machines in every [τL−m + 1, τL], m ∈ [1, L].
Definition 2. In the case where C is finite (i.e., with the capacity constraint), for all m ∈ [L], the maximum amount of resource λ_m^C(S) that could be utilized by S in [τL−m + 1, τL] is defined by the following recursive procedure:
• set λ_0^C(S) to zero trivially;
• set λ_m^C(S) to the sum of λ_{m−1}^C(S) and min{λm(S) − λ_{m−1}^C(S), C · (τL−m+1 − τL−m)}.
We finally state our definition that formalizes the concept of optimal utilization of C machines by a set S of malleable tasks with deadlines:
Definition 3 (Optimal Resource Utilization State). We say that C machines are optimally utilized by a set of tasks S, if, for all m ∈ [L]+, S utilizes λ_m^C(S) resources in [τL−m + 1, d] on C machines.
We define µ_m^C(S) = Σ_{Ti∈S} Di − λ_{L−m}^C(S) as the remaining (minimum) workload of S that needs to be processed after S has maximally utilized C machines in [τm + 1, τL], for all m ∈ [L − 1].
Lemma 1 (Boundary Condition). If there exists a feasible schedule for S, the following inequality holds for all m ∈ [L − 1]:
µ_m^C(S) ≤ C · τm,
which is referred to as the boundary condition in this paper.
Fig. 2. Derivation from the definition λm(S) to λ_m^C(S).
To help readers grasp the underlying intuition in the process of deriving λ_m^C(S) from λm(S), we first illustrate this process in the case where L = 2 with the help of Fig. 2. Fig. 2 (left) illustrates the parameter λm(S) in Definition 1, where the green area denotes λ1(S) and the green and blue areas together denote λ2(S). As illustrated in Fig. 2 (right), due to the capacity constraint that C is finite, we have that
(i) C · (τ2 − τ1) is the maximum possible workload that could be processed in [τ1 + 1, τ2] due to the capacity constraint, and λ1(S) is the maximum available workload of S that needs to be processed in [τ1 + 1, τ2] due to the deadline and parallelism constraints. As a result, on C machines, the maximum workload λ_1^C(S) of S that can be processed in [τ1 + 1, τ2] is the size of the green area in [τ1 + 1, τ2], i.e., λ_1^C(S) = min{C · (τ2 − τ1), λ1(S)} = C · (τ2 − τ1).
(ii) After λ_1^C(S) workload of S has been processed in [τ1 + 1, τ2], the remaining workload of S that needs to be processed in [1, τ1] is λ2(S) − λ_1^C(S); the maximum workload that could be processed in [1, τ1] is C · τ1 due to the capacity constraint. As a result, λ_2^C(S) is defined as follows: λ_2^C(S) = λ_1^C(S) + min{C · (τ1 − τ0), λ2(S) − λ_1^C(S)} = min{C · (τ2 − τ0), λ2(S)} = λ2(S), i.e., the size of all the colored areas in [τ0 + 1, τ2].
Generalizing the above process, we derived the recursive definition of λ_m^C(S) given in Definition 2.
Proof. Recall the definition of λ_{L−m}^C(S) in Definition 2. After S has maximally utilized the machines in [τm + 1, d] and been allocated the maximum amount of resource, i.e., λ_{L−m}^C(S), if there exists a feasible schedule for S, the total amount of the remaining demands of S to be processed should be no more than the capacity C · τm in [1, τm].
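The quantities λm(S), λ_m^C(S) and µ_m^C(S), and hence the boundary condition of Lemma 1, follow directly from Definitions 1 and 2. The sketch below (our own code, with tasks given as plain dictionaries) computes them and checks the condition; it is only a feasibility check, not the scheduling algorithm itself:

```python
import math

def lambda_ideal(S, taus, m):
    """Definition 1: maximum workload of S executable in [tau_{L-m}+1, tau_L]
    when the number of machines is unlimited (C = infinity)."""
    L = len(taus) - 1                 # taus = [tau_0, tau_1, ..., tau_L], tau_0 = 0
    t_start = taus[L - m]
    total = 0
    for task in S:                    # task = {"D": workload, "d": deadline, "k": parallelism bound}
        if task["d"] <= t_start:
            continue                  # the task cannot run after tau_{L-m}
        window = task["d"] - t_start
        len_min = math.ceil(task["D"] / task["k"])
        total += task["D"] if len_min <= window else task["k"] * window
    return total

def lambda_capped(S, taus, C, m):
    """Definition 2: the same quantity on C machines, built interval by interval from the right."""
    L = len(taus) - 1
    value = 0
    for j in range(1, m + 1):
        room = C * (taus[L - j + 1] - taus[L - j])
        value += min(lambda_ideal(S, taus, j) - value, room)
    return value

def satisfies_boundary_condition(S, taus, C):
    """Lemma 1: mu^C_m(S) = sum_i D_i - lambda^C_{L-m}(S) <= C * tau_m for all m in [L-1]."""
    L = len(taus) - 1
    total_demand = sum(task["D"] for task in S)
    return all(total_demand - lambda_capped(S, taus, C, L - m) <= C * taus[m]
               for m in range(L))
```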
3.2 Scheduling Algorithm
In this section, we assume that S satisfies the boundary
condition above, and, propose an algorithm LDF(S ) that
achieves the optimal resource utilization state, producing
a feasible schedule for S .
3.2.1 Overview of LDF(S)
Initially, for all Ti ∈ S and t ∈ [1, d], we set the allocation
yi (t) to zero and LDF(S ) runs as follows:
1) the tasks in S are considered in the non-increasing order of the deadlines, i.e., in the order of SL, SL−1, · · · , S1;
2) for a task Ti being considered, the algorithm Allocate-B(i), presented as Algorithm 2, is called to allocate Di resource to Ti under the constraints of deadline and parallelism bound.
At a high level, we show in the following that, only if S
satisfies the boundary condition and the resource utilization
satisfies some properties upon every completion of Allocate-B(·), all tasks in S will be fully allocated.
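At this level of abstraction, LDF(S) can be sketched as follows; `allocate_b` stands for the Allocate-B(i) subroutine developed in the rest of this section and is deliberately left abstract (the code and naming are ours, not the paper's pseudocode):

```python
def ldf(S, C, d_max, allocate_b):
    """LDF(S): consider tasks by non-increasing deadline (latest deadline first)
    and delegate the actual machine assignment of each task to Allocate-B."""
    # y[i][t] = number of machines given to task i at slot t (slots 1..d_max), all zero initially
    y = {i: [0] * (d_max + 1) for i in range(len(S))}
    order = sorted(range(len(S)), key=lambda i: S[i]["d"], reverse=True)
    for i in order:
        allocate_b(i, S[i], y, C)   # fully allocates task i within its deadline and parallelism bound
    return y
```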
Fig. 3. The resource allocation state of Ti and the previous tasks S 0 respectively upon completion of Fully-Utilize(i), Fully-Allocate(i), and
AllocateRLM(i, 1, t2 + 1) where L = m = 3: the blue area in the rectangle denotes the allocation to the previous tasks that satisfies Property 1 and
Property 2 before executing Allocate-B(i) while the green area in the interval [1, τ3 ] denotes the allocation to Ti at every time slot.
TABLE 2
Main Notation for the algorithms LDF(S), Fully-Utilize(i), Fully-Allocate(i), and AllocateRLM(i, θ1, x)
S: a set of tasks to be allocated by LDF(S), with S ⊆ T
Si: the tasks of S with a deadline τi
λm(S): the maximum amount of resource that could be utilized by S in [τL−m + 1, τL] in an idealized case where there is an indefinite number of machines, m ∈ [L]+
λ_m^C(S): the maximum amount of resource that can be utilized by S on C machines in every [τL−m + 1, τL], m ∈ [L]+
µ_m^C(S): the remaining workload of S that needs to be processed after S has optimally utilized C machines in [τm + 1, τL], i.e., µ_m^C(S) = Σ_{Ti∈S} Di − λ_{L−m}^C(S), m ∈ [L − 1]
Ti: a task that is being allocated by the algorithm LDF(S); the actual allocation is done by Allocate-B(i)
S′: so far, all tasks that have been fully allocated by LDF(S) and are considered before Ti
S′′: S′′ = S′ ∪ {Ti}
t0: a turning point defined in Property 2, with time slots respectively later than and no later than t0 having different resource utilization states
t1: similar to t0, a turning point defined in Lemma 3 upon completion of Fully-Utilize(i)
t2: similar to t0, a turning point defined in Lemma 6 upon completion of Fully-Allocate(i)
t′: the latest time slot in [1, τm] with W̄(t′) > 0
t′′, t′′′: time slots that satisfy some properties defined and only used in Section 3.2.3
Now, we begin to elaborate this high-level idea. In LDF(S), when a task Ti is being considered, suppose that Ti belongs to Sm, and denote by S′ ⊆ SL ∪ · · · ∪ Sm the tasks that have been fully allocated so far and are considered before Ti. Here, S satisfies the boundary condition, and so do all its subsets, including S′ and S′ ∪ {Ti}. Before the execution of Allocate-B(i), we assume that the resource utilization satisfies the following two properties. Recall the optimal resource utilization state in Definition 3; the first property is that such an optimal resource utilization state on C machines is achieved by the current allocation to S′.
Property 1. For all l ∈ [L]+, S′ is allocated λ^C_l(S′) resource in [τ_{L−l} + 1, d], where λ^C_l(S′) is defined in Definition 2.
The second property is that a stepped-shape resource utilization state is achieved in [1, τm] by the current allocation to S′.
Property 2. If there exists a time slot t ∈ [1, τm] such that W(t) > 0, let t0 be the latest slot in [1, τm] such that W(t0) > 0; then we have W(1) ≥ W(2) ≥ · · · ≥ W(t0).
If Property 1 and Property 2 hold, we will show in Sections 3.2.2 and 3.2.3 that there exists an algorithm Allocate-B(i) such that, upon completion of Allocate-B(i), the following two properties are satisfied:
Property 3. Ti is fully allocated.
Property 4. The resource allocation to S′ ∪ {Ti} still satisfies Property 1 and Property 2.
Due to the existence of the above Allocate-B(i), as long as S satisfies the boundary condition, S can be fully allocated by LDF(S). The reason can be explained by induction. When the first task Ti in S is considered, S′ is empty, and, before the execution of Allocate-B(i), Property 1 and Property 2 hold trivially. Further, upon completion of Allocate-B(i), Ti will be fully allocated by Allocate-B(i) due to Property 3, and Property 4 still holds. Then, assume that S′, which denotes the currently fully allocated tasks, is nonempty and that Property 1 and Property 2 hold; the task Ti being considered by LDF(S) will still be fully allocated, and Property 3 and Property 4 hold upon completion of Allocate-B(i). Hence, all tasks in S will finally be fully allocated upon completion of LDF(S).
In the rest of this subsection, we propose an algorithm Allocate-B(i) such that, upon completion of Allocate-B(i), Property 3 and Property 4 hold, provided that, before the execution of Allocate-B(i), the resource allocation to S′ satisfies Property 1 and Property 2. Then, we immediately have the following proposition:
Proposition 1. If S satisfies the boundary condition, LDF(S) will produce a feasible schedule of S on C machines.
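Before turning to Allocate-B(i), the control flow of LDF(S) itself is short enough to sketch in Python; allocate_b below is a placeholder for the Allocate-B(i) procedure constructed in the rest of this subsection, and the task representation (an object with a deadline attribute) is our own illustrative assumption.

def ldf(S, allocate_b):
    """Sketch of LDF(S): consider tasks in non-increasing order of deadline and
    let Allocate-B fully allocate each one while Properties 1 and 2 are maintained."""
    for task in sorted(S, key=lambda T: T.deadline, reverse=True):
        allocate_b(task)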
Overview of Allocate-B(i). The construction of Allocate-B(i) proceeds in two phases. In the first phase, we introduce the operations that are feasible for making Ti fully allocated Di resource under Property 1 and Property 2. We use two algorithms, Fully-Utilize(i) and Fully-Allocate(i), to describe them; the sketch of this phase is as follows:
• From the deadline di towards earlier time slots, Fully-Utilize(i) makes Ti fully utilize the maximum amount of machines available at every slot. Upon its completion, the resource allocation state is illustrated in Fig. 3 (left) and will be described in Lemma 3.
• If Ti is not fully allocated yet, as illustrated in Fig. 3 (middle), Fully-Allocate(i) transfers the allocation of the previous tasks S′ at the time slots closest to di to the latest slots in [1, di] that have idle machines, so that ki machines are finally allocated to Ti at each of these slots closest to di; as a result, Ti is fully allocated.
Upon completion of Fully-Allocate(i), the resource allocation state may not satisfy Property 1 and Property 2, as illustrated by Fig. 3 (middle). We therefore propose an algorithm AllocateRLM(i, η1, x) in the second phase:
• the allocation of the previous tasks at every slot t closest to the deadline is again transferred to the latest slots that have idle machines, and the allocation of Ti in the earliest slots is transferred to t; the final resource allocation state is illustrated in Fig. 3 (right).
Following the above high-level ideas, the details of the first and second phases are presented in Section 3.2.2 and Section 3.2.3, respectively.
3.2.2 Phase 1
Now, we introduce Fully-Utilize(i) and Fully-Allocate(i) formally. Before their execution, recall that we assumed in the last subsection that Ti ∈ Sm; the allocation to the previously allocated tasks in S′ satisfies Properties 1 and 2, and the whole set of tasks S to be scheduled satisfies the boundary condition, where S′ ⊊ S.
Initially, set yi(t) to zero for all time slots; Fully-Utilize(i) then operates as follows: for every time slot t from the deadline di to 1, set
yi(t) ← min{ki, Di − Σ_{t̄=t+1}^{di} yi(t̄), W(t)}.
Here, ki is the parallelism bound, Di − Σ_{t̄=t+1}^{di} yi(t̄) is the remaining workload to be processed upon completion of its allocations at slots t + 1, · · · , di, and W(t) is the number of machines idle at t; in particular, Σ_{t̄=di+1}^{di} yi(t̄) is set to 0, representing that the allocation to Ti is zero before the allocation begins. Their minimum is the maximum number of machines that Ti can or needs to utilize at t after the allocation to Ti at slots t + 1, · · · , di.
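The allocation rule above can be written as the short Python sketch below. The data structures are illustrative assumptions on our part: y is a dict mapping each task index to a list of per-slot allocations (slot 0 unused), and D, k, d map task indices to demand, parallelism bound and deadline; W(t) is computed as the number of machines not yet allocated at t.

def fully_utilize(i, y, D, k, d, C):
    """Sketch of Fully-Utilize(i): for t = d_i down to 1, set
    y_i(t) = min{k_i, remaining demand of T_i, idle machines W(t)}."""
    def idle(t):                              # W(t): machines with no allocation at slot t
        return C - sum(alloc[t] for alloc in y.values())
    remaining = D[i]                          # T_i has no allocation before this call
    for t in range(d[i], 0, -1):
        y[i][t] = min(k[i], remaining, idle(t))
        remaining -= y[i][t]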
Before executing Fully-Utilize(i), the resource allocation to the previous tasks S′ satisfies Property 1, and its execution does not change the previous allocation to S′. Let S′′ = S′ ∪ {Ti}. Since di = τm, the workload of Ti can only be processed in [1, τm]; the maximum workload of S′′ that could be processed in [τm + 1, τL] still equals its counterpart when S′ is considered. In order not to violate the boundary condition, we come to the following conclusion:
Lemma 2. Upon completion of Fully-Utilize(i), all tasks of S would have been fully allocated in the case that the total allocation to S′′ in [1, τm] is C · τm, i.e., C · τm = Σ_{Tj∈S′′} Σ_{t=1}^{τm} yj(t).
Proof. See the appendix for the detailed proof.
Upon completion of Fully-Utilize(i), in the other case that the total allocation to S′′ is < C · τm, even Ti may not be fully allocated. In this case, there exists a slot t ∈ [1, τm] such that W(t) > 0; let t1 denote the latest such time slot in [1, τm]. In Fully-Utilize(i), upon completion of the allocation to Ti at t ∈ [1, t1], if Ti has not been fully allocated yet, it is allocated ki machines at t, i.e., yi(t) = ki, and these allocations in [1, t1] are non-decreasing, i.e., yi(1) ≤ yi(2) ≤ · · · ≤ yi(t1).
Before executing Fully-Utilize(i), the numbers of idle machines have a stepped shape, i.e., W(1) ≥ · · · ≥ W(t0), by Property 2, where W(t) = C − Σ_{Tj∈S′} yj(t). Upon its completion, with yi(t) machines occupied by Ti, we conclude that
Lemma 3. Upon completion of Fully-Utilize(i), in the case that the total allocation to S′′ is < C · τm,
• for all t ∈ [1, t1], if the total allocation of Ti in [t, di] is less than the workload of Ti, i.e., Di − Σ_{t̄=t}^{di} yi(t̄) > 0, we have yi(t) = ki;
• the numbers of idle (unallocated) machines in [1, t1] have a stepped shape, i.e., W(1) ≥ · · · ≥ W(t1) > 0.
With the resource allocation state shown in Lemma 3, we are able to propose the algorithm Fully-Allocate(i) to make Ti fully allocated. Deducting the resource currently allocated to Ti, let Ω denote the remaining workload of Ti that still needs to be allocated, i.e., Ω = Di − Σ_{t≤di} yi(t). For every slot t ∈ [1, t1], the number yi(t) of machines allocated to Ti at t is ki in the case that Ω > 0, by Lemma 3. The total workload Di is ≤ ki · di and, with the parallelism bound, Fully-Allocate(i) considers each slot t from di towards t1 + 1 and operates as follows repeatedly at each t until Ω = 0:
1) ∆ ← min{ki − yi(t), Ω}.
Notes. ki − yi(t) is the maximum number of additional machines that could be utilized at t given its previous allocation yi(t).
2) Call Routine(∆, 1, 0, t), presented as Algorithm 1.
Notes. Routine(·) aims to increase the number of available machines W(t) at t to ∆ by transferring the allocation of other tasks to an earlier time slot.
3) Allocate W(t) more machines to Ti at t: yi(t) ← yi(t) + W(t), and Ω ← Ω − W(t).
Notes. Ω denotes the currently remaining workload to be processed; in this iteration, if Ω > 0 currently, then ∆ = ki − yi(t) and the allocation yi(t) of Ti at t becomes ki.
4) t ← t − 1.
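Steps 1)-4) translate into the loop below. It is a sketch under the same illustrative data structures as before; routine stands for Routine(∆, 1, 0, t) of Algorithm 1 (a Python rendering is given after that algorithm), and idle(t) returns W(t).

def fully_allocate(i, y, D, k, d, t1, idle, routine):
    """Sketch of Fully-Allocate(i): walk from d_i down to t1 + 1 and, at each slot t,
    free up to Delta machines with Routine(Delta, 1, 0, t) and hand them to T_i."""
    omega = D[i] - sum(y[i][t] for t in range(1, d[i] + 1))  # remaining workload of T_i
    t = d[i]
    while omega > 0 and t >= t1 + 1:
        delta = min(k[i] - y[i][t], omega)    # step 1
        routine(delta, 1, 0, t)               # step 2: make up to delta machines idle at t
        freed = idle(t)                       # step 3: give the idle machines to T_i
        y[i][t] += freed
        omega -= freed
        t -= 1                                # step 4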
Now, we explain the existence of Ti′ in line 12 of Routine(·) and the reason why Ti will finally be fully allocated by Fully-Allocate(i). The only operation that changes the allocation to Ti occurs at the third step of Fully-Allocate(i). Hence, we have
Algorithm 1: Routine(∆, η1, η2, t)
1  while W(t) < ∆ do
2    t′ ← the current time slot earlier than and closest to t such that W(t′) > 0;
3    if η1 = 1 then
4      if there exists no such t′ then
5        flag ← 1, break;
6    else
7      if t′ ≤ t^th_m, or there exists no such t′ then
8        flag ← 1, break;
9    if η2 = 1 then
10     if Σ_{t̄=1}^{t′−1} yi(t̄) ≤ W(t) then
11       flag ← 1, break;
12   let i′ be a task such that yi′(t) > yi′(t′);
13   yi′(t) ← yi′(t) − 1, yi′(t′) ← yi′(t′) + 1;
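For reference, Algorithm 1 can be phrased in Python as below. The representation (the allocation dict y, an idle(t) helper for W(t), and the threshold t_th_m) is our own illustrative assumption, and the flag is returned instead of being set globally; by Lemma 5, the task i′ selected in the last step exists whenever that point is reached.

def routine(delta, eta1, eta2, t, i, y, idle, t_th_m):
    """Sketch of Routine(delta, eta1, eta2, t): move allocation one unit at a time
    from slot t to an earlier, not fully utilized slot t' until W(t) >= delta."""
    while idle(t) < delta:
        earlier = [s for s in range(1, t) if idle(s) > 0]
        t_prime = max(earlier) if earlier else None             # line 2
        if eta1 == 1:
            if t_prime is None:                                 # line 4
                return 1                                        # flag <- 1
        elif t_prime is None or t_prime <= t_th_m:              # line 7
            return 1
        if eta2 == 1 and sum(y[i][s] for s in range(1, t_prime)) <= idle(t):  # line 10
            return 1
        i_prime = next(j for j in y if y[j][t] > y[j][t_prime])  # line 12 (exists by Lemma 5)
        y[i_prime][t] -= 1                                       # line 13
        y[i_prime][t_prime] += 1
    return 0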
Lemma 4. Fully-Allocate(i) never decreases the allocation yi(t) to Ti at any time slot t ∈ [1, di] during its execution, compared with the yi(t) just before executing Fully-Allocate(i).
We can also prove by contradiction that
Lemma 5. When Routine(∆, 1, 0, t) is called, the task Ti′ in line 12 always exists if (i) the condition in line 4 is false, (ii) yi(t′) = ki, and (iii) yi(t) < ki and W(t) = 0.
Proof. See the Appendix for the detailed proof.
At each iteration of Fully-Allocate(i), if there exists a t′ such that W(t′) > 0 in the loop of Routine(·), then, by Lemmas 3 and 4, we have yi(t′) = ki. Since Ω > 0 and yi(t) < ki when Routine(·) is called, we have W(t) = 0; otherwise, this would contradict Lemma 3. With Lemma 5, we conclude that the task Ti′ in line 12 exists when Routine(·) is called by Fully-Allocate(i). In addition, the operation at lines 12 and 13 of Routine(·) does not change the total allocation to Ti′, nor does it violate the parallelism bound ki′ of Ti′, since the current yi′(t′) is no more than the initial yi′(t).
Proposition 2. Upon completion of Fully-Allocate(i), the task Ti is fully allocated.
Proof. Fully-Allocate(i) ends up with one of the following three events. The first is that the condition in line 4 of Routine(·) is true; then, by Lemma 2, all tasks in S have been fully allocated. If the first event does not happen, the second is that Ω = 0 and Ti has been fully allocated. If the first and second events do not happen, the third occurs after finishing the iteration of Fully-Allocate(i) at time slot t1 + 1; then, there is a slot t′ in [1, t1 + 1] that is not fully utilized. As a result, Ti has been fully allocated; otherwise, Ω > 0, which implies yi(t1 + 1) = ki, and we have yi(t) = ki for all t ∈ [di]+ due to Lemma 3, which contradicts Ω > 0. Hence, the proposition holds.
Upon completion of Fully-Utilize(i), the resource allocation feature is described in Lemma 3 and illustrated in Fig. 3 (left). Building on this, Fully-Allocate(i) considers every slot from di to t1 + 1; as illustrated in Fig. 3 (middle) and roughly explained there, upon completion of Fully-Allocate(i), the resource allocation feature is described as follows.
Lemma 6. Upon completion of Fully-Allocate(i), if there exists a t ∈ [1, τm] such that W(t) > 0, let t2 be the latest such slot:
• for all t ∈ [1, t2], if the total allocation of Ti in [t, di] is less than Di (i.e., Di − Σ_{t̄=t}^{di} yi(t̄) > 0), we have yi(t) = ki;
• the numbers of available machines in [1, t2] have a stepped shape, i.e., W(1) ≥ · · · ≥ W(t2) > 0.
Here t2 ≥ t1.
Proof. See the Appendix for the formal proof.
3.2.3 Phase 2
Now, we introduce AllocateRLM(i, η1, x). Recall that t′ always denotes the slot closest to but no later than τm (i.e., the latest slot in [1, τm]) such that W(t′) > 0 and that, before executing AllocateRLM(·), t′ = t2 due to Lemma 6. The resource allocation feature before executing AllocateRLM(i, η1, x) is described in Lemma 6 and illustrated in Fig. 3 (middle); the underlying intuition of AllocateRLM(i, η1, x) is described in Section 3.2.1 and, upon its completion, the resource allocation feature is illustrated in Fig. 3 (right).
Formally, AllocateRLM(i, η1, x) considers each slot t from di to x and operates as follows repeatedly at each t until the total allocation of Ti in [1, t − 1], i.e., Σ_{t̄=1}^{t−1} yi(t̄), equals zero, where η1 = 1 and x = t2 + 1 in this section:
1) ∆ ← min{ki − yi(t), Σ_{t̄=1}^{t−1} yi(t̄)}.
Notes. ∆ denotes the maximum allocation of Ti before t that can be transferred to t under the parallelism constraint.
2) If ∆ = 0, go to step 5; otherwise, execute steps 3-5.
3) Set flag ← 0 and call Routine(∆, η1, 1, t).
Notes. Routine(·) aims to increase the number W(t) of available machines at t to ∆. With Lemma 6, the slots t′ earlier than but closest to t2 + 1 in Routine(·) become fully utilized one by one and, together with the next step 4, upon completion of the iteration at t, W(t̄) = 0 for all t̄ ∈ [t′ + 1, di].
4) Set θ ← W(t) and allocate θ more machines to Ti at t: yi(t) ← yi(t) + W(t); then reduce the allocations of Ti at the earliest slots by θ: in particular, let t′′ be the slot such that Σ_{t̄=1}^{t′′−1} yi(t̄) < θ and Σ_{t̄=1}^{t′′} yi(t̄) ≥ θ, and execute the following operations:
a) set θ ← θ − Σ_{t̄=1}^{t′′−1} yi(t̄), and, for every t̄ ∈ [1, t′′ − 1], yi(t̄) ← 0;
b) yi(t′′) ← yi(t′′) − θ.
Notes. The number of idle machines at t becomes zero again, i.e., W(t) = 0, and the allocation yi(t̄) of Ti at every t̄ ∈ [1, t′′ − 1] is zero.
5) If Routine(∆, η1, 1, t) does not change the value of flag, i.e., flag = 0, then t ← t − 1; otherwise, exit AllocateRLM(i, η1, x).
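Step 4 is the only slightly intricate bookkeeping in AllocateRLM(·): θ machines are added to Ti at t and the same amount is removed from its earliest allocated slots. A direct Python rendering of just this step, under the same illustrative data structures as before, is:

def shift_theta_to_t(i, t, theta, y):
    """Sketch of step 4 of AllocateRLM: add theta to y_i(t) and remove theta from the
    earliest slots of T_i (steps 4a and 4b); theta never exceeds T_i's allocation in
    [1, t-1] by the choice of Delta."""
    y[i][t] += theta
    removed, s = 0, 1
    while removed < theta:
        take = min(y[i][s], theta - removed)  # zero out whole slots first (4a), then part of t'' (4b)
        y[i][s] -= take
        removed += take
        s += 1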
Here, at each slot t, when Routine(·) is called, we have ∆ > 0 and yi(t) < ki. Further, we have W(t) = 0; otherwise, this would contradict Lemma 6. Hence, with Lemma 5, we conclude that the task Ti′ in line 12 of Routine(·) exists.
Based on our notes in the description of AllocateRLM(·), we conclude that
Proposition 3. Upon completion of AllocateRLM(i, 1, x), where x = t2 + 1, the final allocation to S′′ guarantees that Property 4 holds, where S′′ = S′ ∪ {Ti}.
Proof. Fully-Utilize(i), Fully-Allocate(i) and AllocateRLM(i, η1, x) never change the allocation at any slot in [τm + 1, d]. AllocateRLM(i, 1, x) ends up with one of the following four events. The first event occurs when the condition in line 4 of Routine(·) is true; then, the proposition holds trivially since all the slots t ∈ [1, di] have been fully utilized, i.e., W(t) = 0. If the first event does not occur, the second event is that, for the first time, at some t ∈ [t2 + 1, di], Σ_{t̄=1}^{t−1} yi(t̄) = 0; then, Ti is fully allocated Di resource in [t, di] ⊆ [t2 + 1, di]. The third event occurs when the condition in line 10 of Routine(·) is true. In the following, we analyze the resource utilization state when either the second or the third event occurs.
Recall that t′ is defined in line 2 of Routine(·), where each slot in [t′ + 1, di] will be fully utilized; when the second or third event occurs, all the slots in [t′ + 1, di] are fully utilized, i.e., W(t) = 0 for all t ∈ [t′ + 1, di]. Upon completion of the iteration of AllocateRLM(·) at t when the third event occurs, or at t + 1 when the second event occurs, we have the following four points, in contrast to the allocation achieved just before executing Allocate-B(i):
(i) t′ ∈ [1, t2] and the allocation to the previous tasks S′ at every t ∈ [1, t′ − 1] is still the allocation achieved before executing Allocate-B(i);
(ii) Σ_{t̄=1}^{t′−1} yi(t̄) = 0, i.e., the allocation to Ti in [1, t′ − 1] is zero and Ti is fully allocated Di resource in [t′, di];
(iii) the allocation to S′ at t′ is not decreased;
(iv) the allocation to Ti at t′ does not change.
Noticing the above resource allocation state in [1, di], where di = τm, and since Property 2 holds before executing Allocate-B(i), we conclude that Property 2 still holds upon its completion, where the new turning point t0 equals t′. Without loss of generality, assume that t′ ∈ [τ_{m′−1} + 1, τ_{m′}] for some m′ ∈ [m]+. Then, all the slots in [τ_{m′} + 1, di] have been fully utilized and the allocation in [τm + 1, d] does not change at all; hence, every interval [τl + 1, d], where m′ ≤ l ≤ L, is optimally utilized by S′ ∪ {Ti} due to Property 1. Since the total allocation to S′ in [1, τ_{m′−1}] is not changed by Allocate-B(i) if m′ − 1 > 0, due to Property 1, the interval [τ_{m′−1} + 1, d] is still optimally utilized by S′ and the task Ti is fully allocated Di resource in this interval; hence, it is still optimally utilized by S′ ∪ {Ti}. Further, every interval [τl + 1, d] with 1 ≤ l ≤ m′ − 1 is also optimally utilized. Hence, the proposition holds for these events.
If the first three events do not occur, the fourth event occurs upon completion of the iteration of AllocateRLM(·) at t = t2 + 1, i.e., the last iteration. In this case, the conditions in lines 4 and 10 of Routine(·) are always false and, at each iteration of AllocateRLM(·), there always exists a slot t′ (defined in line 2 of Routine(·)) with W(t′) > 0; due to the current resource allocation state, we conclude that, at each of the slots in [t2 + 1, di], Ti is allocated ki machines. Upon completion of AllocateRLM(·), there exists a t′ defined in line 2 of Routine(·); let t′′′ denote the earliest slot at which yi(t′′′) ≠ 0, where t′′′ ≤ t′. Then, similarly to our conclusions for the second and third events, we have that
(i) the first point here is the same as the first and third points in the last paragraph;
(ii) Ti is fully allocated Di resource in [t′′′, di];
(iii) if t′′′ < t′, the allocation to Ti at each t ∈ [t′′′ + 1, t′] does not change and yi(t) = ki due to Lemma 6, and the allocation to Ti at t′′′ is greater than zero.
Similarly to our analysis for the other events in the last paragraph, we conclude that the proposition holds.
Proposition 2 and Proposition 3 show that Allocate-B(i) satisfies Property 3 and Property 4, and hence complete the proof of Proposition 1. We finally analyze the time complexity of Allocate-B(i).
Lemma 7. The time complexity of Allocate-B(·) is O(n).
Proof. See the Appendix for the proof.
Algorithm 2: Allocate-B(i)
1 Fully-Utilize(i);
2 Fully-Allocate(i);
3 AllocateRLM(i, 1, t2 + 1);
Since LDF(S) considers a total of n tasks, its complexity is O(n²) by Lemma 7. Finally, we draw the main conclusion of this section from Lemma 1 and Proposition 1:
Theorem 1. A set of tasks S can be feasibly scheduled and completed by their deadlines on C machines if and only if the boundary condition holds; the feasible schedule of S can be produced by LDF(S) with a time complexity of O(n²).
In other words, if LDF(S) cannot produce a feasible schedule for S on C machines, then S cannot be successfully scheduled by any algorithm; as a result, LDF(S) is optimal.
The relationships between the various algorithms of this
paper are illustrated in Fig. 4 where GreedyRLM will be
introduced in the next section.
Remarks. We are inspired by the GreedyRTL algorithm [3] in the construction of LDF(·). In terms of the two algorithms themselves, LDF(·) considers tasks in decreasing order of deadlines, while in GreedyRTL(·) the order is determined by the marginal values. In both algorithms, the allocation to a task Ti is considered from di to 1 (once in GreedyRTL, and possibly three times in LDF(·)); to make the time slots t closest to the deadline of a task Ti being considered fully utilized, the key operations are finding a time slot t′ earlier than t such that there exists a task Ti′ with yi′(t) > yi′(t′) when the number of available machines W(t) at t is insufficient, and transferring a part of the allocation of Ti′ at t to t′. In GreedyRTL(·), the existence of Ti′ requires that (i) the number W(t′) of available machines at t′ is ≥ k and (ii)² W(t) < ki; as a result, before doing any allocation to Ti at t, the existence can be proved by contradiction. In LDF(·), to achieve the optimality of resource utilization, one requirement for such existence is relaxed to be that the number of available machines at t′ is ≥ 1. The existence is guaranteed by (i) first making every time slot from di to 1 fully utilized, as Fully-Utilize(i) does, and (ii) a stepped-shape resource utilization state in [1, di] upon completion of the allocation to the last task, as described in Property 2.
2. The particular condition there is W(t) < min{ki, Di − Σ_{t̄=t+1}^{di} yi(t̄)}.
Fig. 4. Relationship among algorithms: for A → B, the blue and green arrows denote the relations that algorithm A calls B and that algorithm B is executed upon completion of A.
4 APPLICATIONS: PART I
In this section, we illustrate the application of the results in Section 3 to the greedy algorithm for social welfare maximization.
In terms of the maximization problem, the general form of a greedy algorithm is as follows [22], [23]: it tries to build a solution by iteratively executing the following steps until no item remains to be considered in a set of items: (1) selection standard: in a greedy way, choose and consider an item that is locally optimal according to a simple criterion at the current stage; (2) feasibility condition: for the item being considered, accept it if it satisfies a certain condition such that this item constitutes a feasible solution together with the tasks that have been accepted so far under the constraints of this problem, and reject it otherwise. Here, an item that has been considered and rejected will never be considered again. The selection criterion is related to the objective function and constraints, and is usually the ratio of 'advantage' to 'cost', measuring the efficiency of an item. In the problem of this paper, the constraint comes from the capacity to hold the chosen tasks and the objective is to maximize the social welfare; therefore, the selection criterion here is the ratio of the value of a task to its demand, which we will refer to as the marginal value of this task.
Given the general form of a greedy algorithm, we define a class GREEDY of algorithms that operate as follows:
1) consider the tasks in non-increasing order of the marginal value; assume without loss of generality that v′1 ≥ v′2 ≥ · · · ≥ v′n;
2) denoting by A the set of tasks that have been accepted so far, a task Ti being considered is accepted and fully allocated iff there exists a feasible schedule for A ∪ {Ti}.
In the following, we refer to the generic algorithm in GREEDY as Greedy.
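The generic algorithm in GREEDY can be summarized by the following Python sketch, where feasible(A) stands for the test "there exists a feasible schedule for A", i.e., the boundary condition of Lemma 1 as checked by LDF; the function and attribute names are illustrative only.

def greedy(tasks, feasible):
    """Sketch of a generic algorithm in GREEDY: scan tasks by non-increasing marginal
    value v_i / D_i and accept a task iff the accepted set remains feasible."""
    order = sorted(tasks, key=lambda T: T.value / T.demand, reverse=True)
    accepted = []
    for T in order:
        if feasible(accepted + [T]):   # feasibility condition
            accepted.append(T)         # T is accepted and will be fully allocated
        # a rejected task is never considered again
    return accepted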
Proposition 4. The best performance guarantee that a greedy algorithm in GREEDY can achieve is (s − 1)/s.
Proof. Let us consider a special instance: (i) let Di = {Tj ∈ T | dj = d′i}, where i ∈ [2]+, d′1, d′2 ∈ Z+, and d′2 > d′1; (ii) for all Tj ∈ D1, v′j = 1 + ε, Dj = 1, kj = 1, and there is a total of C · d′1 such tasks, where ε ∈ (0, 1) is small enough; (iii) for all Tj ∈ D2, v′j = 1, kj = 1 and Dj = d′2 − d′1 + 1. Greedy will always fully allocate resource to the tasks in D1, with all the tasks in D2 rejected without being allocated any resource. The performance guarantee of Greedy will therefore be no more than C·d′1 / (C·[(1 + ε)(d′1 − 1) + 1·(d′2 − d′1 + 1)]). Further, with ε → 0, this performance guarantee approaches d′1/d′2. In this instance, s = d′2/(d′2 − d′1 + 1) and (s − 1)/s = (d′1 − 1)/d′2. When d′2 → +∞, d′1/d′2 → (s − 1)/s. Hence, the proposition holds.
4.1 Notation
Greedy considers tasks sequentially. The first considered task will definitely be accepted, and then Greedy uses the feasibility condition to determine whether to accept or reject the next task according to the currently available resource and the characteristics of this task. To describe the process under which Greedy accepts or rejects tasks, we define the sets of consecutively accepted (i.e., fully allocated) and rejected tasks A1, R1, A2, · · · . Specifically, let Am = {T_{i_m}, T_{i_m+1}, · · · , T_{j_m}} be the m-th set of adjacent tasks that are accepted by Greedy, where i1 = 1, while Rm = {T_{j_m+1}, · · · , T_{i_{m+1}−1}} is the m-th set of adjacent tasks that are rejected, following the set Am, where m ∈ [K]+ for some integer K. The integer K represents the last step: in the K-th step, AK ≠ ∅ and RK can be empty or non-empty. We also denote by cm the maximum deadline of all rejected tasks in ∪_{l=1}^m Rl, i.e., cm = max_{Ti ∈ ∪_{l=1}^m Rl} {di}, and by c′m the maximum deadline of ∪_{l=1}^m Al, i.e., c′m = max_{Ti ∈ ∪_{l=1}^m Al} {di}.
While the tasks in Am ∪ Rm are being considered, we refer to Greedy as being in the m-th phase. Before the execution of Greedy, we refer to it as being in the 0-th phase. Upon completion of the m-th phase of Greedy, we define a threshold parameter t^th_m such that
(i) if cm ≥ c′m, set t^th_m = cm, and
(ii) if cm < c′m, set t^th_m to any time slot in [cm, c′m].
Here, di ≤ t^th_m for all Ti ∈ ∪_{j=1}^m Rj. For ease of the subsequent exposition, we let t^th_0 = 0 and t^th_{K+1} = d. We also add a dummy time slot 0 at which a task Ti ∈ T cannot get any resource, that is, yi(0) = 0 forever. We also let A0 = R0 = A_{K+1} = R_{K+1} = ∅. Besides the notation in Section 2, the additional key notation used in this section is summarized in Table 3.
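The bookkeeping of cm, c′m and t^th_m can be made concrete with the small Python sketch below; it assumes the groups A_1, ..., A_m and R_1, ..., R_m are available as lists of task objects with a deadline attribute, which is our own illustrative representation, and it simply picks cm whenever any slot in [cm, c′m] would do (GreedyRLM later fixes a specific choice).

def thresholds(A_groups, R_groups):
    """Sketch: compute c_m, c'_m and one admissible t^th_m for each phase m."""
    t_th = []
    for m in range(1, len(A_groups) + 1):
        rejected = [T for grp in R_groups[:m] for T in grp]
        accepted = [T for grp in A_groups[:m] for T in grp]
        c_m = max((T.deadline for T in rejected), default=0)   # c_m (0 if nothing rejected yet)
        c_m_prime = max(T.deadline for T in accepted)          # c'_m (A_m is non-empty)
        # c_m is admissible in both cases of the definition, since c_m lies in [c_m, c'_m]
        t_th.append(c_m)
    return t_th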
TABLE 3
Main Notation for Section 4

A1, R1, A2, · · · , RK: the sets of consecutive accepted (i.e., fully allocated) and rejected tasks by Greedy, where ∪_{m=1}^K (Am ∪ Rm) = T
cm: the maximum deadline of all rejected tasks of ∪_{l=1}^m Rl
c′m: the maximum deadline of ∪_{l=1}^m Al
t^th_m: a threshold parameter such that (i) if cm ≥ c′m, set t^th_m = cm, and (ii) if cm < c′m, set t^th_m to any time slot in [cm, c′m]; when introducing GreedyRLM, it will be set to a specific value

4.2 A New Algorithmic Analysis
We will show that, as soon as the resource allocation done by Greedy satisfies some features, its performance guarantee can be deduced immediately; the main result of this subsection is Theorem 2.
For all m ∈ [K]+, upon completion of Greedy, we define the following two features that we want the allocation to ∪_{j=1}^m Aj to satisfy:
Feature 1. The total allocation to ∪_{j=1}^m Aj in [1, t^th_m] is at least r · C · t^th_m, where r ∈ [0, 1].
Feature 2. For each task Ti ∈ ∪_{j=1}^m Aj, its maximum amount of demand that can be processed in each [t^th_l + 1, d] is processed, where m ≤ l ≤ K, i.e., Σ_{t=t^th_l+1}^{di} yi(t) = min{Di, ki · (di − t^th_l)}.
Theorem 2. If Greedy achieves a resource allocation structure that satisfies Feature 1 and Feature 2 for all m ∈ [K]+, it gives an r-approximation to the optimal social welfare.
In the rest of Section 4.2, we prove Theorem 2; we first provide an upper bound of the optimal social welfare.
Proof Overview. We refer to the original problem of scheduling A1, R1, · · · , AK, RK on C machines to maximize the social welfare as the MSW-I problem.
In the following, we define a relaxed version of the MSW-I problem. Assume that R′m consists of a single task T′m whose deadline is t^th_m, whose size is infinite, and whose marginal value is the largest one among the tasks in Rm, denoted by v̄′m; here, different from the tasks in Rm, we assume that there is no parallelism constraint on T′m, whose bound is C. In addition, partial execution of the task T′m and of the tasks of A1, · · · , AK can yield linearly proportional value, e.g., if a task Ti ∈ Al is allocated Σ_{t=1}^{di} yi(t) < Di resource by its deadline, a value (Σ_{t=1}^{di} yi(t)/Di) · vi will still be added to the social welfare. We refer to the problem of scheduling A1, R′1, · · · , AK, R′K on C machines as the MSW-II problem.
Lemma 8. The optimal social welfare of the MSW-II problem is an upper bound of the optimal social welfare of the MSW-I problem.
Proof. See the appendix for the detailed proof.
Due to Feature 1, Feature 2, and the fact that the marginal value of T′m is no larger than the ones of the tasks of ∪_{l=1}^m Al, we derive the following two lemmas:
Lemma 9. The following schedule achieves an upper bound of the optimal social welfare of the MSW-II problem, ignoring the capacity constraint:
1) for all tasks of A1, · · · , AK, their allocation is the same as the one achieved by Greedy with Features 1 and 2 satisfied;
2) for all m ∈ [K]+, execute a part of task T′m such that the amount of processed workload in [t^th_{m−1} + 1, t^th_m] is (1 − r) · (t^th_m − t^th_{m−1}) · C.
Proof. See the appendix for the detailed proof.
Lemma 10. For all m ∈ [K]+, the total value generated by executing the allocation to T′1, · · · , T′m is no larger than (1 − r)/r times the total value generated by the allocation to ∪_{l=1}^m Al in [1, t^th_m].
Proof. See the appendix for the detailed proof.
In the case that m = K, the total value from T′1, · · · , T′K is no larger than (1 − r)/r times the total value from the allocation to ∪_{l=1}^K Al in [1, t^th_K]. Hence, the total value generated by the schedule in Lemma 9 is no larger than 1 + (1 − r)/r = 1/r times the total value generated by the allocation to all tasks of A1, · · · , AK. By Lemmas 8 and 9, Theorem 2 holds.
4.3 Optimal Algorithm Design
We now introduce the executing process of the optimal greedy algorithm GreedyRLM, presented as Algorithm 3:
(1) it considers the tasks in non-increasing order of the marginal value;
(2) in the m-th phase, for a task Ti being considered, if Σ_{t≤di} min{W(t), ki} ≥ Di, it calls Allocate-A(i), presented as Algorithm 4, where the details of Fully-Utilize(i) and AllocateRLM(i, 0, t^th_m + 2) can be found in Section 3.2.2 and Section 3.2.3;
(3) if the allocation condition is not satisfied, it sets the threshold parameter t^th_m of the m-th phase as defined by lines 8-15 of Algorithm 3.
When the condition in line 5 of GreedyRLM is true, every accepted task can be fully allocated Di resource using Fully-Utilize(i). The reason for the existence of Ti′ in Routine(·) is the same as when introducing LDF(S), since W(t′) > 0.
Proposition 5. GreedyRLM gives an (s − 1)/s-approximation to the optimal social welfare with a time complexity of O(n²).
Now, we begin to prove Proposition 5. The time complexity of Allocate-A(i) depends on AllocateRLM(·). Using the time complexity analysis of AllocateRLM(·) in Lemma 7, we get that AllocateRLM(·) has a time complexity of O(n), and the time complexity of GreedyRLM is O(n²). Due to Theorem 2, in the following we only need to prove that Features 1 and 2 hold in GreedyRLM with r = (s − 1)/s, which is given in Propositions 6 and 7.
The utilization of GreedyRLM is derived mainly by analyzing the resource allocation state when a task Ti cannot be fully allocated (the condition in line 5 of GreedyRLM is not satisfied), and we have that
Algorithm 3: GreedyRLM
Input: n tasks with type_i = {vi, di, Di, ki}
Output: a feasible allocation of resources to tasks
1  initialize: yi(t) ← 0 for all Ti ∈ T and 1 ≤ t ≤ d, m ← 0, t^th_m ← 0;
2  sort tasks in non-increasing order of the marginal values: v′1 ≥ v′2 ≥ · · · ≥ v′n;
3  i ← 1;
4  while i ≤ n do
5    if Σ_{t≤di} min{W(t), ki} ≥ Di then
6      Allocate-A(i);   // in the (m + 1)-th phase
7    else
8      if T_{i−1} has ever been accepted then
9        m ← m + 1;   // in the m-th phase, the allocation to Am was completed; the first rejected task is T_{j_m+1} = Ti
10     while Σ_{t≤d_{i+1}} min{W(t), k_{i+1}} < D_{i+1} do
11       i ← i + 1;
       /* the last rejected task is T_{i_{m+1}−1} = Ti and Rm = {T_{j_m+1}, · · · , T_{i_{m+1}−1}} */
12     if cm ≥ c′m then
13       t^th_m ← cm;
14     else
15       set t^th_m to the time slot just before the first time slot t with W(t) > 0 after cm, or to c′m if there is no time slot t with W(t) > 0 in [cm, c′m];
16   i ← i + 1;
Algorithm 4: Allocate-A(i)
1 Fully-Utilize(i);
2 if di ≥ t^th_m + 2 then
3   AllocateRLM(i, 0, t^th_m + 2), where t2 = t1 as defined in Section 3.2.2;
Proposition 6. Upon completion of GreedyRLM, Feature 1 holds with r = (s − 1)/s.
Proof. See the Appendix for the detailed proof.
In GreedyRLM, when a task Ti is accepted (lines 5 and 6), Allocate-A(·) is called to make it fully allocated. In Allocate-A(·), Fully-Utilize(·) and AllocateRLM(·) are called sequentially; both of them consider time slots t from the deadline towards earlier ones: (i) Fully-Utilize(·) makes Ti utilize the remaining (idle) machines at t, and it does not change the allocations of the previous tasks; (ii) at every t, if Ti does not utilize the maximum number of machines it can utilize (i.e., yi(t) < ki), AllocateRLM(·) (a) transfers the allocations of the previously allocated tasks to an earlier slot that is closest to t but not fully utilized (i.e., with idle machines), and (b) increases the allocation of Ti at t to the maximum (i.e., ki) and correspondingly reduces an equal amount of allocation at the earliest slots, ensuring that the total allocation to Ti does not exceed Di.
Finally, upon completion of the whole execution of Allocate-A(·), the number of allocated machines at each slot does not decrease, compared with that amount just before executing Allocate-A(·). For every accepted task Ti ∈ Am, upon completion of Allocate-A(·), the time slot t^th_m + 1 is not fully utilized by the definition of t^th_m, i.e., W(t^th_m + 1) > 0. Further, whenever Allocate-A(·) completes the allocation to a previous task Ti ∈ Am′ where m′ < m, t^th_m + 1 is also not fully utilized at that moment. Based on this, we draw the following conclusion.
Lemma 11. Due to the definition of t^th_m, we have, for all m ≤ j ≤ K, that
(1) [t^th_j + 1, d] is optimally utilized by ∪_{l=1}^m Al upon completion of the allocation to it using Allocate-A(i);
(2) the total amount of the allocations to Ti in the interval [t^th_j + 1, d] just upon completion of Allocate-A(i) does not change upon completion of GreedyRLM.
Proof. We first prove the first point. Given an m′ ∈ [m]+, for every Ti ∈ Am′, upon completion of Allocate-A(i), W(t^th_j + 1) > 0 for all j ∈ [m, K]; based on this, we conclude that, in the case where di ≥ t^th_j + 1, either Σ_{t=t^th_j+1}^{di} yi(t) = Di if di − t^th_j > leni, or yi(t) = ki for all t ∈ [t^th_j + 1, di] otherwise. The reason for this conclusion is similar to our analysis of the fourth event when proving Proposition 3; here, there always exists a slot t^th_j + 1 that is not fully utilized, i.e., W(t^th_j + 1) > 0, which implies that the slot t′ defined in line 2 of Routine(·) always exists with W(t′) > 0.
Now, we prove the second point of Lemma 11. For every l ∈ [m′, K], we observe the subsequent executions of Allocate-A(·) whose input is a task in Al and conclude that
1) upon its completion, the allocations to Ti in [1, t^th_l] are still the ones before executing Allocate-A(·);
2) Allocate-A(·) can only change the allocations of Ti in the time range [t^th_{l′} + 1, t^th_{l′+1}], where l′ ∈ [l, K], and the total amount of allocations in [t^th_{l′} + 1, t^th_{l′+1}] upon its completion is still the amount before its execution.
As a result, upon completion of Allocate-A(i), every subsequent execution of Allocate-A(·) never changes the total amount of allocations of Ti in [t^th_{l′′} + 1, t^th_{l′′+1}] for all l′′ ∈ [m, K].
It now suffices to prove the above two points. In the execution of Allocate-A(·), Fully-Utilize(·) is first called, and it does not change the allocation to the previous tasks; then, AllocateRLM(·, 0, t^th_l + 2) is called, in which only Routine(·) (i.e., its lines 12 and 13) in step 3 can change the allocation to the previous tasks, including Ti. In lines 12 and 13, a previous task Ti′ is found and its allocations at t and t′ are changed; here, t′ is defined in lines 2 and 7 of Routine(·) and t^th_l < t′ < t. As a result, Allocate-A(·) cannot change the allocations of the previous tasks in [1, t^th_l]; for all t ∈ [t^th_{l′} + 1, t^th_{l′+1}], where l′ ∈ [l, K], during the execution of the iteration of AllocateRLM(·) at t, we have t′ > t^th_{l′}. Hence, the change to the allocations of the previous tasks can only happen in the interval [t^th_{l′} + 1, t^th_{l′+1}].
From the first and second points of Lemma 11, we conclude that
Proposition 7. Given an m ∈ [1, K], [t^th_l + 1, d] is optimally utilized by every task Ti ∈ ∪_{j=1}^m Aj for all l ∈ [m, K].
5 APPLICATIONS: PART II
In this section, we illustrate the applications of the results in Section 3 to (i) the dynamic programming technique for social welfare maximization, (ii) the machine minimization objective, and (iii) the objective of minimizing the maximum weighted completion time.
5.1 Dynamic Programming
For any solution, there must exist a feasible schedule for the tasks selected to be fully allocated by this solution. So, the set of tasks in an optimal solution satisfies the boundary condition by Lemma 1. Then, to find the optimal solution, we only need to address the following problem: given C machines, how can we choose a subset S of tasks in T such that (i) this subset satisfies the boundary condition, and (ii) no other subset of selected tasks achieves a better social welfare? This problem can be solved via dynamic programming (DP). To propose a DP algorithm, we need to identify a dominance condition for the model of this paper [18]. Let F ⊆ T and recall the notation λ^C_m(F) in Section 3.1. Now, we define an L-dimensional vector
H(F) = (λ^C_1(F) − λ^C_0(F), · · · , λ^C_L(F) − λ^C_{L−1}(F)),
where λ^C_m(F) − λ^C_{m−1}(F), m ∈ [L]+, denotes the optimal resource that F can utilize on C machines in the segmented timescale [τ_{L−m} + 1, τ_{L−m+1}] after F has utilized λ^C_{m−1}(F) resource in [τ_{L−m+1} + 1, τL]. Let v(F) denote the total value of the tasks in F, and we introduce the notion of one pair (F, v(F)) dominating another (F′, v(F′)) if H(F) = H(F′) and v(F) ≥ v(F′); that is, the solution to our problem indicated by (F, v(F)) uses the same amount of resources as (F′, v(F′)) but obtains at least as much value.
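The signature H(F) and the dominance test can be written compactly in Python; the sketch below recomputes λ^C_m(F) with the same illustrative recursion used in the Section 3 sketch and represents a candidate pair by (λ-profile, value), which is our own encoding rather than the paper's.

def lambda_C(lam, tau, C):
    # lambda^C_m(F) for m = 1..L, as in the Section 3 sketch
    out, prev = [], 0
    L = len(lam)
    for m in range(1, L + 1):
        prev += min(C * (tau[L - m + 1] - tau[L - m]), lam[m - 1] - prev)
        out.append(prev)
    return out

def H(lam_F, tau, C):
    """The L-dimensional signature H(F) = (lambda^C_1 - lambda^C_0, ..., lambda^C_L - lambda^C_{L-1})."""
    lc = lambda_C(lam_F, tau, C)
    return tuple(b - a for a, b in zip([0] + lc[:-1], lc))

def dominates(pair_a, pair_b, tau, C):
    """(F_a, v_a) dominates (F_b, v_b) iff they have the same signature and v_a >= v_b."""
    (lam_a, v_a), (lam_b, v_b) = pair_a, pair_b
    return H(lam_a, tau, C) == H(lam_b, tau, C) and v_a >= v_b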
Algorithm 5: DP(T)
1  F ← {T1};
2  A(1) ← {(∅, 0), (F, v(F))};
3  for i ← 2 to n do
4    A(i) ← A(i − 1);
5    for each (F, v(F)) ∈ A(i − 1) do
6      if {Ti} ∪ F satisfies the boundary condition then
7        if there exists a pair (F′, v(F′)) ∈ A(i) such that (1) H(F′) = H(F ∪ {Ti}) and (2) v(F′) ≤ v(F ∪ {Ti}) then
8          Add ({Ti} ∪ F, v({Ti} ∪ F)) to A(i);
9          Remove the dominated pair (F′, v(F′)) from A(i);
10       else
11         Add ({Ti} ∪ F, v({Ti} ∪ F)) to A(i);
12 return arg max_{(F,v(F))∈A(n)} {v(F)};
We now give the general DP procedure DP(T), also presented as Algorithm 5 [18]. Here, we iteratively construct the lists A(i) for all i ∈ [n]+. Each A(i) is a list of pairs (F, v(F)), in which F is a subset of {T1, T2, · · · , Ti} satisfying the boundary condition and v(F) is the total value of the tasks in F. Each list only maintains the dominant pairs. Specifically, we start with A(1) = {(∅, 0), ({T1}, v1)}. For each i = 2, · · · , n, we first set A(i) ← A(i − 1), and, for each (F, v(F)) ∈ A(i − 1), we add (F ∪ {Ti}, v(F ∪ {Ti})) to the list A(i) if F ∪ {Ti} satisfies the boundary condition. We finally remove from A(i) all the dominated pairs. DP(T) then selects, among all pairs (F, v(F)) ∈ A(n), a subset S of T such that v(F) is maximum.
Proposition 8. DP(T) outputs a subset S of T = {T1, · · · , Tn} such that v(S) is the maximum value subject to the condition that S satisfies the boundary condition; the time complexity of DP(T) is O(n·d^L·C^L).
Proof. The proof is similar to the one for the knapsack problem [18]. By induction, we need to prove that A(i) contains all the non-dominated pairs corresponding to feasible sets F ⊆ {T1, · · · , Ti}. When i = 1, the proposition holds obviously. Now suppose it holds for A(i − 1). Let F′ ⊆ {T1, · · · , Ti} be a set that satisfies the boundary condition. We claim that there is some pair (F, v(F)) ∈ A(i) such that H(F) = H(F′) and v(F) ≥ v(F′). First, suppose that Ti ∉ F′. Then the claim follows by the induction hypothesis and by the fact that we initially set A(i) to A(i − 1) and removed dominated pairs. Now suppose that Ti ∈ F′ and let F′1 = F′ − {Ti}. By the induction hypothesis, there is some (F1, v(F1)) ∈ A(i − 1) that dominates (F′1, v(F′1)). Then, the algorithm will add the pair (F1 ∪ {Ti}, v(F1 ∪ {Ti})) to A(i). Thus, there will be some pair (F, v(F)) ∈ A(i) that dominates (F′, v(F′)). Since the size of the space of H(F) is no more than (C · d)^L, the time complexity of DP(T) is O(n·d^L·C^L).
Proposition 9. Given the subset S output by DP(T), LDF(S) gives an optimal solution to the welfare maximization problem with a time complexity of O(max{n·d^L·C^L, n²}).
Proof. It follows from Propositions 8 and 1.
Remark. As in the knapsack problem [18], to construct the algorithm DP(T), the pairs of the possible states of resource utilization and the corresponding best social welfare have to be maintained, and an L-dimensional vector has to be defined to indicate the resource utilization state. This seems to imply that we cannot make the time complexity of a DP algorithm polynomial in L.
5.2 Machine Minimization
Given a set of tasks T, the minimum number of machines needed to produce a feasible schedule of T is exactly the minimum C* such that the boundary condition is satisfied, by Theorem 1, where the feasible schedule can be produced with a time complexity of O(n²). An upper bound on the minimum C* is k · n, and this minimum C* can be obtained through a binary search procedure with log(k · n) = O(log n) iterations; the corresponding algorithm is presented as Algorithm 6.
Lemma 12. In each iteration of the binary search procedure, the time complexity of determining whether the boundary condition is satisfied (line 4 of Algorithm 6) is O(L · n), where L ≤ n.
Proof. See the Appendix for the proof.
With Lemma 12, the loop of Algorithm 6 has a complexity of O(L · n · log n). Based on the above discussion, we conclude that
Proposition 10. Algorithm 6 produces an exact algorithm for the machine minimization problem with a time complexity of O(max{n², L · n · log n}).
Algorithm 6: Machine Minimization
1  L ← 1, U ← k · n;   // L and U are respectively the lower and upper bounds of the minimum number of needed machines
2  mid ← (L + U)/2;
3  while U − L > 1 do
4    if the boundary condition is satisfied with C = mid then
5      U ← mid;
6    else
7      L ← mid;
8    mid ← (L + U)/2;
9  C* ← U;   // the optimal number of machines
10 call the algorithm LDF(T) to produce a schedule of T on C* machines;

5.3 Minimizing Maximum Weighted Completion Time
Under the task model of this paper and for the objective of minimizing the maximum weighted completion time of tasks, a direct application of LDF(S) improves the algorithm in [8] by a factor of 2. In [8], with a polynomial time complexity, Nagarajan et al. find a completion time di for each task Ti that is 1 + ε times the optimal in terms of the objective here; then they propose a scheduling algorithm where each task is completed by at most 2 times di. As a result, a (2 + 2ε)-approximation algorithm is obtained. Instead, by using the optimal scheduling algorithm LDF(S), we have that
Proposition 11. There is a (1 + ε)-approximation algorithm for scheduling independent malleable tasks under the objective of minimizing the maximum weighted completion time of tasks.
6 CONCLUSION
In this paper, we study the problem of scheduling n deadline-sensitive malleable batch tasks on C identical machines. Our core result is a new theory that yields the first optimal scheduling algorithm under which C machines can be optimally utilized by a set of batch tasks. We further derive four algorithmic results in obvious or non-obvious ways: (i) the best possible greedy algorithm for social welfare maximization, with a polynomial time complexity of O(n²), that achieves an approximation ratio of (s − 1)/s; (ii) the first dynamic programming algorithm for social welfare maximization, with a polynomial time complexity of O(max{n·d^L·C^L, n²}); (iii) the first exact algorithm for machine minimization, with a polynomial time complexity of O(max{n², L · n · log n}); and (iv) an improved polynomial-time approximation algorithm for the objective of minimizing the maximum weighted completion time of tasks, reducing the previous approximation ratio by a factor of 2. Here, L and d are the number of deadlines and the maximum deadline of tasks, respectively.
REFERENCES
[1] Han Hu, Yonggang Wen, Tat-Seng Chua, and Xuelong Li. ”Toward
scalable systems for big data analytics: A technology tutorial.” IEEE
Access (2014): 652-687.
[2] Jain, Navendu, Ishai Menache, Joseph Naor, and Jonathan Yaniv. ”A
Truthful Mechanism for Value-Based Scheduling in Cloud Computing.” In the 4th International Symposium on Algorithmic Game Theory,
pp. 178-189, Springer, 2011.
[3] Navendu Jain, Ishai Menache, Joseph Naor, and Jonathan Yaniv.
”Near-optimal scheduling mechanisms for deadline-sensitive jobs
in large computing clusters.” In Proceedings of the twenty-fourth
annual ACM symposium on Parallelism in algorithms and architectures,
pp. 255-266. ACM, 2012.
[4] Brendan Lucier, Ishai Menache, Joseph Seffi Naor, and Jonathan
Yaniv. ”Efficient online scheduling for deadline-sensitive jobs.” In
Proceedings of the twenty-fifth annual ACM symposium on Parallelism
in algorithms and architectures, pp. 305-314. ACM, 2013.
[5] Yossi Azar, Inna Kalp-Shaltiel, Brendan Lucier, Ishai Menache,
Joseph Seffi Naor, and Jonathan Yaniv. ”Truthful online scheduling
with commitments.” In Proceedings of the Sixteenth ACM Conference
on Economics and Computation, pp. 715-732. ACM, 2015.
[6] Ishai Menache, Ohad Shamir, and Navendu Jain. ”On-demand,
Spot, or Both: Dynamic Resource Allocation for Executing Batch
Jobs in the Cloud.” In Proceedings of USENIX International Conference
on Autonomic Computing, 2014.
[7] Peter Bodík, Ishai Menache, Joseph Seffi Naor, and Jonathan Yaniv.
”Brief announcement: deadline-aware scheduling of big-data processing jobs.” In Proceedings of the 26th ACM symposium on Parallelism in algorithms and architectures, pp. 211-213. ACM, 2014.
[8] Viswanath Nagarajan, Joel Wolf, Andrey Balmin, and Kirsten Hildrum. ”Flowflex: Malleable scheduling for flows of mapreduce
jobs.” In Proceedings of the 12th ACM/IFIP/USENIX International
Conference on Distributed Systems Platforms and Open Distributed
Processing (MiddleWare), pp. 103-122. Springer, 2013.
[9] J. Wolf, Z. Nabi, V. Nagarajan, R. Saccone, R. Wagle, et al. ”The
X-flex Cross-Platform Scheduler: Who’s the Fairest of Them All?.”
In Proceedings of the ACM/IFIP/USENIX 13th MiddleWare conference,
Industry Track. Springer, 2014.
[10] Eugene L. Lawler. ”A dynamic programming algorithm for preemptive scheduling of a single machine to minimize the number of
late jobs.” Annals of Operations Research 26, no. 1 (1990): 125-133.
[11] D. Karger, C. Stein, and J. Wein. Scheduling Algorithms. In CRC
Handbook of Computer Science. 1997.
[12] James R. Jackson. ”Scheduling a Production Line to Minimize
Maximum Tardiness.” Management Science Research Project Research
Report 43, University of California, Los Angeles, 1955.
[13] W. A. Horn. ”Some Simple Scheduling Algorithms.” Naval Research Logistics Quarterly, 21:177-185, 1974.
[14] Eugene L. Lawler, and J. Michael Moore. ”A Functional Equation
and Its Application to Resource Allocation and Sequencing Problems.” Management Science 16, no. 1 (1969): 77-84.
[15] J. A. Stankovic, M. Spuri, K. Ramamritham, and G. Buttazzo,
Deadline Scheduling for Real-Time Systems: EDF and Related
Algorithms. Kluwer Academic, 1998.
[16] T. White. ”Hadoop: The definitive guide.” O’Reilly Media, Inc.,
2012.
[17] Julia Chuzhoy, Sudipto Guha, Sanjeev Khanna, and Joseph Seffi
Naor. ”Machine minimization for scheduling jobs with interval constraints.” In Foundations of Computer Science, 2004. Proceedings.
45th Annual IEEE Symposium on, pp. 81-90. IEEE, 2004.
[18] D. P. Williamson and D. B. Shmoys. The Design of Approximation
Algorithm. Cambridge University Press, 2011.
[19] Xiaohu Wu, and Patrick Loiseau. ”Algorithms for scheduling
deadline-sensitive malleable tasks.” In Proceedings of the 53rd Annual Allerton Conference on Communication, Control, and Computing
(Allerton), pp. 530-537. IEEE, 2015.
[20] Xiaohu Wu, and Patrick Loiseau. ”Algorithms for Scheduling Malleable Cloud Tasks (Technical Report).” arXiv preprint
arXiv:1501.04343v4 (2015).
[21] Longkun Guo, and Hong Shen. ”Efficient Approximation Algorithms for the Bounded Flexible Scheduling Problem in Clouds.”
IEEE Transactions on Parallel and Distributed Systems 28, no. 12
(2017): 3511-3520.
[22] G. Brassard, and P. Bratley. Fundamentals of Algorithmics.
Prentice-Hall, Inc., 1996.
[23] G. Even, Recursive greedy methods, in Handbook of Approximation Algorithms and Metaheuristics, T. F. Gonzalez, ed., CRC, Boca
Raton, FL, 2007, ch. 5.
APPENDIX
Proof of Lemma 2. Before executing Fully-Utilize(i), the resource allocation to the previously allocated tasks S′ satisfies Property 1, and its execution does not change the previous allocation to S′. Let S′′ = S′ ∪ {Ti}. Since di = τm, the workload of Ti can only be processed in [1, τm]; the maximum workload of S′′ that could be processed in [τm + 1, τL] still equals its counterpart when S′ is considered, i.e., λ^C_{L−m}(S′′) = λ^C_{L−m}(S′). Upon completion of Fully-Utilize(i), if the total allocation to S′′ in [1, τm] is C · τm, we conclude that Ti is the last task of S being considered and all tasks in S have been fully allocated; otherwise, S′′ ⊊ S, which contradicts the fact that S and its subsets satisfy the boundary condition, which implies that, after the maximum workload of S′′ has been processed in [τm + 1, τL], the remaining workload satisfies µ^C_m(S′′) ≤ C · τm. Hence, the lemma holds.
Proof of Lemma 3. During the execution of Fully-Utilize(i), upon completion of the allocation to Ti at t ∈ [1, t1], if Ti has not been fully allocated yet, it is allocated ki machines at this slot. The allocations to Ti at slots t1, · · · , 1 are non-increasing, i.e., yi(1) ≤ yi(2) ≤ · · · ≤ yi(t1). The reason is as follows: Fully-Utilize(i) allocates machines to Ti from di towards earlier slots and, after the allocation at every slot t ∈ [1, t1], yi(t) = min{ki, Di − Σ_{t̄=t+1}^{di} yi(t̄)}, whose value is non-increasing with t. By Property 2, before executing Fully-Utilize(i), the numbers of idle machines have a stepped shape, i.e., W(1) ≥ · · · ≥ W(t0). The execution of Fully-Utilize(i) does not change the previous allocation to S′, and upon its completion the number of available machines W(t) at every slot t ∈ [1, τm] will be no larger than its counterpart before executing Fully-Utilize(i); we thus have t0 ≥ t1. Upon completion of Fully-Utilize(i), deducting the machines allocated to Ti, the numbers of idle machines still have a stepped shape in [1, t1]. Hence, the lemma holds.
Proof of Lemma 5. Recall that W(t) is the number of idle machines at t, i.e., C minus the total allocation Σ_{Tj∈S} yj(t) of all tasks at t. Initially, we have the inequality Σ_{Tj∈S} yj(t) − yi(t) > Σ_{Tj∈S} yj(t′) − yi(t′) due to conditions (i)-(iii) of Lemma 5, and there exists a Ti′ such that yi′(t′) < yi′(t); otherwise, that inequality would not hold. In the subsequent iterations of Routine(·), W(t) becomes > 0 since partial allocation of Ti′ is transferred from t to t′; however, it still holds that W(t) < ∆ ≤ ki − yi(t). So, we have
Σ_{Tj∈S} yj(t) − yi(t) = C − W(t) − yi(t) > C − W(t′) − ki = Σ_{Tj∈S} yj(t′) − yi(t′),
and such a Ti′ can still be found as in the initial case.
Proof of Lemma 6. If Ti has been allocated Di resource just upon completion of Fully-Utilize(i), Fully-Allocate(i) does nothing, so t2 = t1 and the lemma holds. Otherwise, within [1, τm], by Lemma 3, only the time slots t in [1, t1] have available machines, i.e., W(t) > 0, and at these time slots yi(t) = ki; for all t ∈ [t1 + 1, di], W(t) = 0. So, only for each t in [t1 + 1, di] and from di towards earlier time slots, Fully-Allocate(i) reduces the allocations of the previous tasks of S′ at t and transfers them to the latest time slot t′ in [1, t1] with W(t′) > 0 (see step 2 of Fully-Allocate(i)); then, all the available machines at t are re-allocated to Ti so that W(t) is zero again (see step 3 of Fully-Allocate(i)), and the number of available machines at t′ is decreased to zero one slot at a time from t1 towards earlier time slots. Due to Lemma 3, the lemma holds.
Proof of Lemma 7. The time complexity of Allocate-B(i) depends on Fully-Allocate(i) and AllocateRLM(·). In the worst case, Fully-Allocate(i) and AllocateRLM(·) have the same time complexity, arising from the execution of Routine(·) at every time slot t ∈ [1, di]. In AllocateRLM(·) for every task Ti ∈ T, each loop iteration at t ∈ [1, di] needs to seek the time slot t′ and the task Ti′ at most Di times. The time complexities of seeking t′ and Ti′ are O(d) and O(n), respectively; the maximum of these two complexities is max{d, n}. Since di ≤ d and Di ≤ D, the time complexity of Allocate-B(i) is O(d · D · max{d, n}). Since we assume that d and k are finitely bounded, where D ≤ d · k, we conclude that O(d · D · max{d, n}) = O(n).
Proof of Lemma 8. Let us consider an optimal allocation to A1, R1, · · · , AK, RK for the MSW-I problem. If we replace the allocation to a task in Rm with the same allocation to the task in R′m and do not change the allocation to Am, this generates a feasible schedule for the MSW-II problem, which yields at least the same social welfare, since the marginal value of the task in R′m is no smaller than the ones of the tasks in Rm; hence, Lemma 8 holds.
Proof of Lemma 9. We will show that, in an optimal schedule of the MSW-II problem, (i) only the tasks of R′m, A1, · · · , AK will be executed in [t^th_{m−1} + 1, t^th_m], and (ii) the upper bound of the maximum workload of R′m that could be processed in [t^th_{m−1} + 1, t^th_m] is (1 − r) · (t^th_m − t^th_{m−1}) · C. As a result, the total value generated by executing all tasks of A1, · · · , AK and (1 − r) · (t^th_m − t^th_{m−1}) · C workload of each R′m (m ∈ [K]+) is an upper bound of the optimal social welfare for the MSW-II problem.
We prove the first point by contradiction. Given an m ∈ [K]+, if m ≥ 2, all tasks of R′1, · · · , R′_{m−1} cannot be processed in [t^th_{m−1} + 1, t^th_m] due to the deadline constraint. If m ≤ K − 1, the marginal value of the task in R′m is no smaller than the ones of R′_{m+1}, · · · , R′_K; instead of processing R′_{m+1}, · · · , R′_K in [t^th_{m−1} + 1, t^th_m], processing R′m could generate at least the same value or even a higher value. Hence, the first point holds.
We prove the second point also by contradiction. If there exists an m′ ∈ [1, K] such that more than (1 − r) · (t^th_{m′} − t^th_{m′−1}) · C workload of R′_{m′} is processed in [t^th_{m′−1} + 1, t^th_{m′}], let m denote the minimum such m′. In the case where m = 1, due to Features 2 and 1, after the maximum workload of the tasks of A1 has been processed in [t^th_1 + 1, t^th_K], the minimum remaining workload that could be processed in [1, t^th_1] is at least r · t^th_1 · C. If more than (1 − r) · t^th_1 · C workload of R′1 is processed in [1, t^th_1], this means that the total amount of workload of A1 processed in [1, t^th_1] is smaller than r · t^th_1 · C; in this case, we could always remove the allocation to R′1 and add more allocation to A1 to increase the total value. As a result, the second point holds when m = 1. In the other case where m ≥ 2, since we are seeking an upper bound, we can assume that, for all l ∈ [m − 1]+, (1 − r) · (t^th_l − t^th_{l−1}) · C workload of R′l is processed in [t^th_{l−1} + 1, t^th_l]. Again due to Features 2 and 1, similarly to the case where m = 1, the minimum available workload of the tasks in ∪_{l=1}^m Al that could be processed in [1, t^th_m] is at least r · t^th_m · C. In this case, we could still remove the allocation to R′m and add more allocation to the tasks in ∪_{l=1}^m Al to increase the total value, so the second point also holds when m ≥ 2.
Proof of Lemma 10. It suffices to prove that the total allocation to ∪_{l=1}^{m} Al in [1, t^th_m] could be divided into m parts such that, for all l ∈ [1, m], (i) the l-th part has a size r · (t^th_l − t^th_{l−1}) · C, and (ii) the allocation of the l-th part is associated with marginal values no smaller than v′l. Then, the total value generated by executing the l-th part is no smaller than (1 − r)/r times the total value generated by the allocation to R′l in [t^th_{l−1} + 1, t^th_l]. As a result, the value generated by the total allocation to ∪_{l=1}^{m} Al in [1, t^th_m] is no smaller than (1 − r)/r times the value generated by the allocation to T′1, · · · , T′m.
Due to Feature 1, the allocation to A1 achieves a utilization r in [1, t^th_1], and we could use a part of this allocation as the first part, whose size is r · t^th_1 · C. Next, the allocation to A1 ∪ A2 achieves a utilization r in [1, t^th_2]; we could deduct the allocation used for the first part and take a part of the remaining allocation to A1 ∪ A2 as the second part, whose size is r · (t^th_2 − t^th_1) · C. Similarly, we could get the 3rd, · · · , m-th parts that satisfy the first point mentioned at the beginning of this proof. Since the marginal value of the task of R′l is no larger than the ones of the tasks in ∪_{l′=1}^{l} Al′ for all 1 ≤ l ≤ m, the second point mentioned above also holds.
Proof of Proposition 6. We first show that the resource utilization of A1 ∪ · · · ∪ Am in [1, τm] is r upon completion of the m-th phase of GreedyRLM; then, we consider a task Ti ∈ ∪_{l=1}^{m} Rl such that di = cm. Since Ti is not accepted when being considered, it means that Σ_{t ≤ di} min{ki, W(t)} < Di at that time and there are at most len_i − 1 = ⌈di/si⌉ − 1 time slots t with W(t) ≥ ki in [1, cm]. Then, we assume that the number of the time slots t with W(t) ≥ ki is µ. Since Ti isn't fully allocated, the current resource utilization of A1 ∪ · · · ∪ A_{m′} in [1, cm] is at least
  [C·di − µ·C − (Di − µ·ki)] / (C·di) ≥ [C·di − Di − (len_i − 1)(C − ki)] / (C·di) ≥ [C(di − len_i) + (C − ki) + (len_i·ki − Di)] / (C·di) ≥ (s − 1)/s ≥ r.
We assume that Ti ∈ R_{m′} for some m′ ∈ [m]^+. Now, we show that, after Ti is considered and rejected, the subsequent resource allocation by Allocate-A(j) to each task Tj of ∪_{l=m′+1}^{L} Al doesn't change the utilization in [1, τm]. Fully-Utilize(j) does not change the allocation to the previously accepted tasks; the operations of changing the allocation to other tasks in AllocateRLM(j, 0, t^th_m + 2) happen in its call to Routine(∆, 0, 1, t), where we have c_{m′} ≤ t^th_{m′} ≤ t^th_l for all m′ + 1 ≤ l ≤ L. Due to the function of lines 6-8 of Routine(∆, 0, 1, t), in the l-th phase of GreedyRLM, the call to any Allocate-A(j) will never change the current allocation of A1 ∪ · · · ∪ A_{m′} in [1, cm]. Hence, if t^th_m = cm, upon completion of GreedyRLM, the resource utilization of A1 ∪ · · · ∪ Am, where m′ ≤ m, is at least r; if t^th_m > cm, since each time slot in [cm + 1, t^th_m] is fully utilized by the definition of t^th_m, the resource utilization in [cm + 1, t^th_m] is 1 and the final resource utilization will also be at least r.
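The numerator rewriting used in the utilization bound above can be checked symbolically. The following is a minimal, purely illustrative Python/sympy sketch (the symbol names are ours, not the paper's); it only verifies the algebraic identity, not the full argument.

```python
# Check that C*d_i - D_i - (len_i - 1)*(C - k_i) equals
# C*(d_i - len_i) + (C - k_i) + (len_i*k_i - D_i), i.e. the numerator
# rewriting in the utilization bound.
import sympy as sp

C, d, D, k, length = sp.symbols('C d_i D_i k_i len_i')

lhs = C * d - D - (length - 1) * (C - k)
rhs = C * (d - length) + (C - k) + (length * k - D)

assert sp.simplify(lhs - rhs) == 0  # the two forms of the numerator agree
print("identity holds")
```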
Proof of Lemma 12. Recall the process of defining µ^C_m(S) where S = T. In Definition 1, which defines λ_m(T), n tasks are considered sequentially for each m ∈ [L]^+, leading to a complexity L · n. In Definition 2, which derives λ^C_m(T) from λ_m(T), the values λ^C_1(T), λ^C_2(T), · · · , λ^C_L(T) are considered sequentially, leading to a complexity O(L). Finally, µ^C_m(T) = Σ_{Ti ∈ T} Di − λ^C_m(T). Hence, the time complexity of determining the satisfiability of the boundary condition depends on Definition 1 and is O(L · n).
Distance-based Camera Network Topology Inference for Person Re-identification
Yeong-Jun Cho and Kuk-Jin Yoon
Computer Vision Laboratory, GIST, South Korea
arXiv:1712.00158v1 [] 1 Dec 2017
{yjcho, kjyoon}@gist.ac.kr
Abstract
In this paper, we propose a novel distance-based camera network topology inference method for efficient person re-identification. To this end, we first calibrate each camera and estimate relative scales between cameras. Using the calibration results of multiple cameras, we calculate the speed of each person and infer the distance between cameras to generate a distance-based camera network topology. The proposed distance-based topology can be applied adaptively to each person according to its speed and can handle the diverse transition times of people between non-overlapping cameras. To validate the proposed method, we tested it using an open person re-identification dataset and compared it with state-of-the-art methods. The experimental results show that the proposed method is effective for person re-identification in a large-scale camera network with various people transition times.
Figure 1. Challenges in person re-identification based on
the time-based camera network topology due to the diverse
speeds of people. Each blob was marked at every 0.3 seconds interval and each color indicates a person identity.
1. Introduction
Numerous surveillance cameras installed in public places (e.g., offices, stations, and streets) allow monitoring and tracking of people in large-scale environments. However, it is difficult for a person to individually observe each camera. To reduce the human effort, person re-identification techniques, which automatically identify people between multiple non-overlapping cameras, can be used. Previous works have mainly focused on modeling people appearance information such as feature descriptor extraction [11, 21] and learning similarity metrics [10, 16] to perform person re-identification. Recently, many appearance modeling methods based on deep neural networks [1, 42] have been proposed.
However, in a large-scale camera network where the number of cameras and the number of people are large, the person re-identification problem becomes very challenging. In particular, it is difficult to effectively perform re-identification only by using the appearance-based methods in the large-scale camera network. This is because the conventional appearance-based methods do not take the structure of the camera network (e.g., spatial and temporal connectivities between cameras) into account; therefore, they have to examine every possible person candidate in the camera network to re-identify a person. Instead of examining every person, we can restrict and reduce the search space by inferring the spatio-temporal relation between cameras, referred to as a camera network topology.
In recent years, several camera network topology inference methods [5, 26, 32] have been proposed. In general,
camera network topology represents spatio-temporal relations and connections between cameras. The topology is
represented as a graph Gt = (V, Et ), where vertices V
denote cameras, and edges Et denote the transition distribution of people across cameras according to ‘time’. We
name this topology Gt as a time-based camera network
topology. Although there has been much progress in person re-identification using camera network topology, the
time-based topology has difficulty in dealing with the diverse walking speed of the people. As shown in Fig. 1,
the walking speed of people is very diverse. For example,
if a person walks much faster or slower than the average
speed, it is hard to predict its transition time between two
non-overlapping cameras based on the time-based topology.
To overcome the limitations of the previous works, we propose a novel distance-based camera network topology inference method to perform efficient and accurate person re-identification.
To estimate the speed of a moving person, we need to
calibrate cameras in a network automatically. The progress
in self-calibration methods [22, 25] enables us to estimate camera parameters without any off-line calibration
steps [43]. However, the estimated camera extrinsic parameters (i.e., camera position t) are not accurate due to scale
ambiguity in the self-calibration method, i.e. we can obtain the camera extrinsic parameters determined up to scale.
In this work, we first estimate the relative scale between
cameras based on human height information and correct
the inaccurate camera calibration results. We then calculate speeds of all people in the camera network. Subsequently, we infer the distance between cameras by multiplying the speeds and transition times of people and build
a distance-based camera network topology. The inferred
distance-based topology can be applied adaptively to each
person – when we divide the distance-based camera network topology according to the speed of a person, it gives a
person-specific time-based camera network topology.
The main idea of this work is simple but effective. To the
best of our knowledge, this is the first attempt to infer the
distance-based camera network for person re-identification.
To validate the proposed method, we tested the proposed
method using the SLP [6] dataset and compared with stateof-the-art methods. The results show that the proposed
method is promising for person re-identification in the
large-scale camera network with diverse speeds of people.
The rest of the paper is organized as follows: In Sec. 2,
we review previous works of person re-identification and
camera network topology inference. In Sec. 3, we describe
our proposed distance-based camera network topology inference and person re-identification methods. The dataset
and evaluation methodologies are described in Sec 4. The
experimental results are reported in Sec. 5 and we conclude
the paper in Sec. 6.
2. Previous Works
2.1. Person Re-identification
In general, most of the person re-identification methods rely on appearances of people to identify people across non-overlapping views. To describe and classify the appearances of people, many works have tried to propose appearance modeling methods such as feature learning and metric learning methods. For feature learning, [11, 21, 44] designed feature descriptors to describe the appearance of people well. For metric learning, many methods [10, 16] have been proposed and used for person re-identification [9, 31, 38]. Several works [16, 31] extensively evaluated the feature and metric learning methods to show the effectiveness of those methods in the person re-identification task. In addition, several works exploited additional information such as human pose priors [7, 33, 39] and group appearance models [20, 47] to improve the re-identification performance.
In recent years, many re-identification methods based on learning deep convolutional neural networks (CNN) [34, 40, 45] and Siamese convolutional networks [1, 8, 36, 42] have been proposed for simultaneously learning both features and metrics. In addition, several works utilized recurrent neural networks (RNN) [28, 48, 49] and long short-term memory (LSTM) [41] networks to perform multi-shot person re-identification. Although a lot of person re-identification methods have been proposed so far, the challenges of person re-identification in a large-scale camera network, e.g., spatio-temporal uncertainty between non-overlapping cameras and high computational complexity, still remain.
2.2. Camera Network Topology Inference
Recently, many works have tried to infer a camera network topology and employ camera geometry to resolve the spatio-temporal ambiguities. Several works [4, 14, 30] assumed the camera network topology is given and showed the effectiveness of the spatio-temporal information between cameras. However, in practice, the camera network topology is not given. Thus, many works have tried to infer the camera network topology in an unsupervised manner. Makris et al. [26] proposed a topology inference method based on a simple event correlation model between cameras. This topology inference method was extended in many works [5, 29, 32]. Similarly, Loy et al. [23, 24] inferred a camera network topology by measuring mutual information between activity patterns of cameras.
The previous topology inference methods [5, 23, 24, 26, 32] are practical since they do not require appearance matching steps such as re-identification or inter-camera tracking for topology inference. However, the inferred topologies are prone to be inaccurate since the topology is easily contaminated by false event correlations, which frequently occur when people pass through blind regions irregularly. On the other hand, several works [3, 6, 27] inferred the camera network topology using person re-identification results. These methods can be more robust to noise than the event-based approaches since they infer the topology by utilizing true correspondences between cameras. In particular, Cho et al. [6] iteratively solved re-identification and camera network topology inference, and achieved accurate results in both tasks thanks to this iterating strategy.
The previous camera network topology inference methods have mainly focused on inferring the transition time of people between cameras. However, these 'time-based topology' inference methods cannot efficiently handle the diverse speeds of people, as shown in Fig. 1.
Figure 2. Overview of the proposed framework: distance-based camera network topology inference and person re-identification.
3. Proposed Methods
topology as described in Sec 3.3. Finally, we perform person re-identification based on the inferred distance-based
camera network topology in Sec. 3.4. The proposed framework is illustrated in Fig. 2. For reproducibility, the code of
this work is available to the public at: https://.
As mentioned above, many camera network topology inference methods have been proposed recently to perform efficient person re-identification in a large-scale camera network. They inferred a time-based camera network topology
based on the people transition time between cameras. However, the speed of people can be very diverse, and it leads
diverse people transition time between cameras as shown in
Fig. 1. Therefore, the conventional time-based topology becomes ambiguous and inaccurate under the diverse speed of
people.
Actually, in surveillance videos, it is possible to extract
additional cues such as camera parameters (position and
viewpoint) and trajectories of people. Using the additional
cues, we can also estimate walking speed of each person.
Then, the challenge of re-identification due to diverse walking speed becomes more tractable. In this work, we fully
exploit those additional cues and propose a new distancebased camera network topology inference, which does not
depend on the speed of a person.
3.2. Relative Scale Estimation between Cameras
In a general pinhole camera model, the relation between
a 2D image (pixel coordinates [u, v]) and a 3D point (world
coordinates [X, Y, Z]) can be represented by 3× 4 projection matrix P as
  [u, v, 1]^T = P [X, Y, Z, 1]^T,   P = K [R  t],   (1)
where K, [R t] represent camera intrinsic and extrinsic
parameters. Unfortunately, most surveillance cameras remain uncalibrated.
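A minimal sketch of the pinhole projection in Eq. (1) is given below; a 3D world point is mapped to pixel coordinates through P = K[R | t]. The numbers are made-up placeholders, not calibration values from the paper.

```python
# Project a 3D point into the image plane with P = K [R | t].
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])          # intrinsic parameters (placeholder)
R = np.eye(3)                             # rotation (extrinsic, placeholder)
t = np.array([[0.0], [0.0], [5.0]])       # translation (extrinsic, placeholder)

P = K @ np.hstack([R, t])                 # 3x4 projection matrix

X_world = np.array([1.0, 0.5, 2.0, 1.0])  # homogeneous 3D point [X, Y, Z, 1]
x = P @ X_world                            # homogeneous image point
u, v = x[0] / x[2], x[1] / x[2]            # pixel coordinates [u, v]
print(u, v)
```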
In order to estimate the camera parameters, camera selfcalibration techniques [22, 25] can be employed. These
methods do not require any pre-defined checkerboards and
off-line calibration tasks [43]. Instead of using a checkerboard, they utilize a human height H as the checkerboard.
A camera extrinsic parameter t (camera position) is determined by the value of H. However, H is unknown in general; thus Liu et al. [22] set H to a pre-defined specific
value (e.g., the average height of humans: H=1.72m [35]).
Therefore, although we can estimate the intrinsic and extrinsic parameters of each camera using the self-calibration
technique, the camera extrinsic parameter t is not accurate
since we use the inaccurate H value. In this work, we consider every camera in the camera network; thus each camera
extrinsic parameter t should be adjusted to share the same
world coordinate system.
To this end, we estimate relative scale of human heights
between cameras based on person re-identification results
and adjust each camera’s extrinsic parameter t. We denote
a person i in camera k as oki . Then, a matrix of people
3.1. Overall Proposed Framework
We first obtain initial person re-identification results (i.e.,
person correspondence pairs) between cameras. To this end,
we can apply any existing re-identification methods except
for methods using prior knowledge of the camera network1 .
The initial re-identification results are utilized in following proposed steps. In Sec. 3.2, we perform the relative
scale estimation for each camera. Each camera in the camera network is calibrated using camera self-calibration techniques [22,25] and its camera parameters are adjusted based
on the proposed method. After estimating relative scales
between cameras, we calculate walking speeds of people in
each camera. The speeds of people and re-identification results are used to infer the distance-based camera network
1 For example, person re-identification based on metric learning [10,16]
requires true person correspondences between cameras to learn the distance metric. We aim to run our framework without any prior knowledge
of the camera network. Thus we do not use the methods requiring prior
knowledge of the camera network.
Figure 3. Examples of detected human heights in a 2D image and corresponding 3D human heights in a world coordinate system.
correspondences is defined as
  M^kl = { (m_ij) | 1 ≤ i ≤ N^k, 1 ≤ j ≤ N^l },  where m_ij = 1 if o_i^k corresponds to o_j^l, and m_ij = 0 otherwise,   (2)
where N^k and N^l are the numbers of identities in camera k and l. To find the relative scale between two cameras, we find a scale ratio S that minimizes the following equation
  S^kl = arg min_S Σ_{i=1}^{N^k} Σ_{j=1}^{N^l} m_ij ( S − H(o_i^k) / H(o_j^l) )^2,   (3)
where H(o_i^k) is an average height of a person o_i^k along its moving path in the world coordinate system. Since the proposed Eq. (3) utilizes multiple correspondence pairs, we can prevent overfitting the value S. We assume that every person is on the planar ground plane. Inspired by [25], we can detect human foot and head points in a 2D image and compute the corresponding person's height according to the projection matrix P, as shown in Fig. 3.
After estimating the scale ratio S between two cameras, the projection matrices of camera k and l are updated as
  P^k = K^k [R^k  t^k],   P^l = K^l [R^l  S^kl t^l].   (4)
Through the proposed process, we can adjust all extrinsic camera parameters t in the camera network. Note that the proposed framework does not aim to find the absolute scale of the world coordinate system and the real walking speeds of people, but to match the scales across cameras.
3.3. Distance-based Camera Network Topology Inference
To infer the distance-based camera network topology, we exploit the speeds of people in each camera. We assume that every person is on the planar ground plane (Z = 0, world XY plane). Based on this assumption and the camera calibration result in Sec. 3.2, the speed of a person o_i^k is calculated as follows,
  v̄_i^k = (1/T) Σ_{t=1}^{T} sqrt( (X_{i,t}^k − X_{i,t−1}^k)^2 + (Y_{i,t}^k − Y_{i,t−1}^k)^2 ),   (5)
where t is the time (second), and X_i, Y_i are the world coordinates (meter) of person i's foot position in camera k, respectively. In order to get a more reliable speed of a person, we average multiple speeds of the person along its trajectory in each camera.
After calculating the speeds of people, we build a distance distribution based on person re-identification results between two cameras. For example, suppose we have a correspondence {o_i^k, o_j^l} between two cameras. We then estimate the distance between the two cameras by multiplying the speed and the time difference as follows,
  d^kl = (1/2) (v̄_i^k + v̄_j^l) · ∆t,   (6)
where ∆t is the transition time of the person who appears in the two cameras at different times. Note that it is impossible to directly observe the speed of the person in the blind area (i.e., the area between cameras). For that reason, to infer the speed in the blind area, we average the two speeds (v̄_i^k and v̄_j^l) from the two cameras. Figure 4 shows an example of the distance estimation between cameras. Although the speeds of the two identities are different, the estimated distances are the same. Using multiple correspondences between cameras, we make a histogram of the distance and normalize the histogram by dividing by the number of correspondences. We denote the distribution over the distance between two cameras k and l as p^kl(d).
Figure 4. Example of estimating a distance between two cameras. The speed in the blind area is inferred by averaging two speeds from two cameras. The distance between cameras is estimated as 46m from both identities.
By performing distance distribution estimation between all camera pairs in the camera network, we obtain the distance-based camera network topology, and it is defined by a graph as follows,
  G_d = (V, E_d),   V = { k | 1 ≤ k ≤ N_cam },   E_d = { p^kl(d) | 1 ≤ k ≤ N_cam, 1 ≤ l ≤ N_cam },   (7)
where N_cam is the number of cameras in the camera network and V is a set of cameras and E_d is a set of distance
Figure 6. Inferring adaptive transition time distribution for each identity based on the speed of a person: (a) person speed 2.1 m/s, (b) person speed 0.8 m/s.
the fixed time range [20, 60] (sec) for all test queries as
shown in Fig. 5 (a). Although it has the much wider time
range (40 seconds) than our method, it may fail to find a
correct person who moves very slowly or fast. For example, its search range [20, 56.3) (sec) is redundant for the
person who moves slowly (0.8 m/s), and it fails to search
the person if the person reappears during (60, 73.8] (sec) in
the other camera.
distribution between cameras. As shown in Fig. 5, the variance of the distance distribution inferred by the proposed
method (Fig. 5 (b)) is smaller than that of the conventional
time-based transition distribution (Fig. 5 (a)). Note that it is
difficult to reduce the search range when the variance of the
distribution is large.
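A minimal sketch of the pipeline in Eqs. (5)-(7) is given below, assuming calibrated foot trajectories on the ground plane; the function names, bin width, and toy trajectories are illustrative choices, not the authors' released code.

```python
# Person speed (Eq. 5), cross-camera distance from one correspondence (Eq. 6),
# and the normalized distance histogram p^kl(d) that forms a topology edge (Eq. 7).
import numpy as np

def mean_speed(xy, dt=1.0):
    """Average speed from a (T+1) x 2 array of world foot positions [X, Y]."""
    steps = np.diff(xy, axis=0)                   # displacements per time step
    return np.linalg.norm(steps, axis=1).mean() / dt

def pair_distance(speed_k, speed_l, transit_time):
    """Distance estimate for one correspondence: average speed times transition time."""
    return 0.5 * (speed_k + speed_l) * transit_time

def distance_distribution(samples, bin_width=5.0):
    """Normalized histogram over distance samples from many correspondences."""
    bins = np.arange(0.0, max(samples) + bin_width, bin_width)
    hist, edges = np.histogram(samples, bins=bins)
    return hist / len(samples), edges

# toy trajectories of the same person observed in cameras k and l
traj_k = np.cumsum(np.random.randn(20, 2) * 0.3, axis=0)
traj_l = np.cumsum(np.random.randn(20, 2) * 0.3, axis=0)
d = pair_distance(mean_speed(traj_k), mean_speed(traj_l), transit_time=46.0)
p_kl, edges = distance_distribution([d, d * 0.9, d * 1.1])
```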
3.4. Person Re-identification via Distance-based
Camera Network Topology
Person re-identification. In this work, we utilize the LOMO
feature extraction method [19], which shows promising reidentification performance, to describe the appearances of
people. It divides a person image patch into six horizontal
stripes and extracts a HSV color histogram from each stripe.
It builds a descriptor based on Scale Invariant Local Ternary
Pattern (SILTP). The descriptor from the 128×48 (pixel)
image has 26,960 dimensions.
In general, each person gives multiple appearances along
with its trajectory. The multiple appearances provide rich
information for re-identification. However, a lot of computations are needed to take into account all the multiple appearances. Inspired by [46], we employ an average feature
pooling method. In average pooling, the feature vectors of
multiple appearances are pooled into one by the averaged
summation. Therefore, we can simply compare two identities and the similarity score between two identities is defined by
In this section, we restrict the search range based on the
inferred distance-based camera network topology and perform person re-identification.
Search range restriction.
Dividing the distance distribution pkl (d) by the speed of a person i in camera k gives a
transition time distribution of the person i who moves from
camera k to l as
  p_i^kl(∆t) = p^kl(d) / v̄_i^k.   (8)
Figure 5. Comparison of two distributions between cameras. The transition time distribution has a larger variance than that of the distance distribution.
Thus, we can adaptively give a camera network topology
for each person depending on its speed as shown in Fig. 6.
Based on the obtained transition time distribution pkl
i (∆t),
we restrict the search range for re-identification as follows:
• Find the mean value of the transition time distribution: m = mean( p_i^kl(∆t) ).
• Set a search range T_r around the mean value to cover 95% of the distribution: [m − T_r/2, m + T_r/2].
The similarity score between two identities (introduced above) is defined by
  S(o_i^k, o_j^l) = exp( −|| Φ(o_i^k) − Φ(o_j^l) ||_2 ),   (9)
where Φ(·) is a pooled feature vector of a person. The similarity score lies in [0, 1]. Note that it is possible to utilize any kind of feature extraction and pooling methods in our framework.
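A small sketch of the average feature pooling and the similarity score in Eq. (9) follows; the per-image descriptors are random placeholders standing in for LOMO features, and the shapes are illustrative.

```python
# Average-pool per-detection descriptors into one vector per identity and
# score a pair with S = exp(-||phi_i - phi_j||_2), which lies in (0, 1].
import numpy as np

def pooled_feature(descriptors):
    """Average-pool the descriptors of one identity into a single vector."""
    return np.mean(descriptors, axis=0)

def similarity(phi_i, phi_j):
    """Similarity score of Eq. (9)."""
    return np.exp(-np.linalg.norm(phi_i - phi_j))

person_i = np.random.rand(12, 26960)   # 12 detections, 26,960-dim descriptors
person_j = np.random.rand(8, 26960)
score = similarity(pooled_feature(person_i), pooled_feature(person_j))
print(score)
```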
In consequence, a person who moves fast (2.1 m/s) in the
camera k will be searched within the time range [21.4, 28.1]
(sec) in the camera l (Fig. 6 (a)). On the other hand, a
person who moves slowly (0.8 m/s) in the camera k will
be searched within the time range [56.3, 73.8] (sec) in the
camera l (Fig. 6 (b)). The search range of each person is
determined adaptively in this manner. Thus, the proposed
adaptive search strategy according to the walking speed of
a person becomes more effective under the large variation
of people’s speed.
On the other hand, a search strategy based on the conventional time-based camera network topology searches within
4. Dataset and Methodology
4.1. Dataset
Over the past few years, numerous datasets of person re-identification have been published such as VIPeR
[12], PRID 2011 [13], CUHK [17, 18], iLIDS-VID [37],
MARS [45] and Airport [15]. However, most of the
datasets do not provide camera synchronization information
the number of true matching results and Tgt is the total number of ground truth pairs in the camera network. In the SLP
dataset, Tgt = 2,664 as summarized in Table. 1. In practice,
rank-1 accuracy is the most important one among all ranks
since other ranks (2, 3, ..., n) failed to find the correct correspondences at least one time.
To evaluate the accuracy of the camera network topology inference, we draw a curve of the retrieval rate. The
retrieval rate represents the retrieval accuracy of matching
candidates derived from the camera network topology. For
example, if the matching candidates include the true correspondence of a test query, it counts as a success, otherwise
a fail. Naturally, when the topology gives a wide search
range Tr , the retrieval rate becomes high since the matching candidates within the wide search range are likely to include a true correspondence. For unbiased evaluations, we
draw a curve of the retrieval rate according to the average
search range as shown in Fig. 9. If there are multiple links
in the camera network, we measure the retrieval rates for all
links and average them to get the final rate. Since we have
no ground-truth of the camera network topology, measuring
the retrieval rate is reasonable.
Figure 7. Layout of the SLP dataset. Each blue ellipse in
a camera is an entry–exit zone. Red lines are valid links
between cameras (best viewed in color).
Table 1. Numbers of transition identities (IDs) between two cameras.
Link   CAM pairs     # of IDs
1      CAM1-CAM2     227
2      CAM2-CAM3     571
3      CAM3-CAM5     568
4      CAM3-CAM7     168
5      CAM4-CAM5     155
6      CAM5-CAM6     61
7      CAM7-CAM8     281
8      CAM8-CAM9     633
5. Experimental Results
or time stamps of all frames; thus these datasets cannot be
used for testing our framework.
To validate our methods and compare them with other state-of-the-art methods, we used the SLP re-identification
dataset [6]. It is a large-scale person re-identification dataset
containing nine synchronized outdoor cameras. The total
number of identities in the dataset is 2,632. The layout of
the camera network is shown in Fig. 7. It has eight valid
links between cameras as summarized in Table. 1. It provides the ground truth detection and tracking information
of every person including people positions (x,y locations)
and sizes (height, width). We used the given detection and
tracking results for our experiments, since we mainly focus
on person re-identification and camera network topology inference problems. Most person re-identification research follows this assumption and setting. We resized every
person image to 128×48 pixels and extracted feature descriptors from the resized images.
In the experiments, we assume that the camera parameters are partially given – camera intrinsic parameter K and
camera rotation R are given, but the camera translation is
given by S · t, where S is unknown. Thus, the cameras in
the camera network do not share the same world coordinate
system, initially. To identify whether a pair of cameras is connected
or not, we applied the connectivity check schemes in [6, 26]
and we obtained initial person re-identification results based
on [6] for our framework.
Using the initial re-identification results, we perform
two steps in our framework: 1) relative scale estimation
between cameras, and 2) distance-based camera network
topology inference. However, the initial results of the reidentification between cameras may include several false
correspondences, leading to an inaccurate topology inference result. Therefore, we utilized only reliable people correspondences that have high similarity scores, S(o_i^k, o_j^l) > 0.7, and excluded the false correspondences in both steps 1)
and 2). We empirically set the threshold as 0.7, but our
framework does not highly depend on this threshold. As a
result, we found seven links in the camera network. Unfortunately, we failed to find a link CAM5-CAM6 since the
number of people is small due to the long distance from the
camera (Fig. 7). Also, CAM6 is isolated from other cameras
as shown in Table. 1.
Figure 8 shows comparisons of two types of distribution
between cameras: (left) transition time distributions in conventional time-based topology, (right) distance distributions
in proposed distance-based topology. As we can see, the
4.2. Evaluation methodology
To evaluate a person re-identification performance, many
previous works plot a Cumulative Match Curve (CMC) [12]
representing true match being found within the first n ranks.
In general, to plot the CMC, the number of a gallery (i.e.,
matching candidates) should be fixed for all test queries.
However, in our framework, the number of a gallery varies
according to the camera network topology and test queries.
Therefore, we cannot plot a complete CMC. In this work,
we followed Cho’s [6] re-identification evaluation metric:
measuring the rank-1 accuracy by 100 · TP/Tgt, where TP is
Figure 9. Topology accuracy: retrieval rate according to the
average search range.
Figure 10. Re-identification accuracy: rank-1 accuracy according to the average search range.
To validate the effectiveness of the proposed method, we
first evaluated the topology inference accuracy: retrieval
rate according to the average search range. In this experiment, we compared a distance-based method with a timebased method. For a fair comparison, we applied the same
baseline (e.g., feature extraction and pooling methods) to
each method. As shown in Fig. 9, the proposed distancebased method (ours) shows superior performance than the
conventional time-based method. To verify the proposed
scale estimation method in Sec. 3.2, we also compared several distance-based methods with various experimental settings: • gt: a method using ground-truth camera parameters for all cameras, • error N%: a method using camera parameters with N percent of scale error. As we can
see, distance-based methods with erroneous camera parameters (error 25%, 50%) show lower performance than the
distance-based method with ground-truth camera parameters (gt). On the other hand, the proposed method (ours)
shows a similar performance with the distance-based (gt).
Interestingly, our method shows higher performance than
distance-based (gt) around 5–10 second search times. In addition, a distance-based method (error 25%) shows better a
retrieval rate than that of the time-based method. It implies
that our method is robust to camera calibration error.
We also evaluated the accuracy of re-identification based
on each camera network topology. As shown in Fig. 10,
the proposed distance-based method (ours) shows superior
Figure 8. Comparisons of two types of inferred distributions: (a) transition time distributions and (b) distance distributions.
proposed distance distributions show more clear peaks and
small variances compared to those of transition time distributions. The result implies that the proposed distancebased topology is more effective than the time-based topology for person re-identification, since it is ambiguous to restrict search range with the unclear camera network topology. A CAM3-CAM5 pair has the negative values of both
transition time and distance since they are overlapped.
Ideally, a distance between two cameras should be one
value if there is a single path between cameras. However,
the proposed distance distributions did not converge to one
value, but have some ranges e.g., CAM1-CAM2: [40, 55]
and CAM4-CAM5: [35, 55]. This is because the speeds and
paths of moving people in the blind region are totally unknown and can differ person to person. In addition, other
values, which are used for inferring the topology, such as
camera parameters and observed people positions in each
view are not perfect due to noise. To overcome these limitations, we estimated the distance of the blind area by interpolating the information on both sides of cameras as in
Eq. (6) and could obtain quite clear distance distributions.
Table 2. Performance comparison with state-of-the-art methods.
Methods           Makris's [26]  Nui's [29]  Chen's [5]  DNPR [27]  Cai's [3]  Cho's [6]  ours–time  ours–dist
rank-1 accuracy   54.0           54.6        55.2        44.7       51.4       68.3       67.8       74.7
Figure 11. Comparison of inferred transition time distributions of previous methods — (a) Makris's [26], (b) Nui's [29], (c) Chen's [5], (d) Cho's [6] — and (e) our distance distribution (ours–dist). First row – camera pair: CAM3-CAM7. Second row – camera pair: CAM7-CAM8. Third row – camera pair: CAM8-CAM9.
re-identification performance than that of the time-based
method. All tested re-identification performances increase
up to a certain search range and then decrease. This is because a wide search range retrieves a lot of matching candidates that are likely to include a true correspondence, but
they also include lots of irrelevant identities. Thus, it is important to set a proper search range. In our methods, we search in a range of 20 seconds on average based on the proposed search range restriction in Sec. 3.4. This is reasonable for both topology and re-identification performances as
shown in Fig. 9 and 10.
We compared the proposed method with several previous
methods [3, 5, 6, 26, 27, 29] that infer a time-based camera
network topology. Figure 11 shows comparison of inferred
transition time distributions and an inferred our distance
distribution. As we can see, the methods [5, 26, 29] show
very unclear and noisy transition time distributions. This is
because, to infer the topology, they used a simple correlation of people exiting–entering patterns instead of utilizing
re-identification results. On the other hand, the method [6],
which used re-identification results to infer the topology,
shows reasonable results than other methods [5, 26, 29].
Compared to [6], our distance distributions between camera pairs show more clear peaks and have small variances.
In Table. 2, we summarize the person re-identification
results based on each inferred camera network topology. In
this experiment, we have two results: ours–time and ours–
dist. They share the same baseline except for the utilized
camera network topology (time-, and distance-based) for
re-identification. For a fair comparison, we set the same
average search range (20 seconds) for both of our methods.
Among the methods, which utilized the time-based camera
network topology, the method [6] showed the highest reidentification performance. It employed the random forest
algorithm [2] to perform accurate person re-identification.
On the other hand, our methods (ours–time, ours–dist) used
a simple feature pooling method for re-identification as described in Sec. 3.4. Although our method employed the
simpler re-identification method, re-identification using the proposed distance-based topology outperforms the other state-of-the-art methods.
6. Conclusions
In this paper, we proposed a novel distance-based camera network topology inference. We first estimate relative
scale ratio between cameras based on the human heights
information and infer the distance-based camera network
topology. The proposed distance-based topology can be
applied adaptively to each person according to its speed;
therefore, it can effectively handle the various people transition times between cameras. In order to validate the proposed method, we used a public synchronized large-scale re-identification dataset and compared our method with state-of-the-art methods. The results show that the proposed method is promising for person re-identification in a large-scale camera network with various people transition times between cameras.
References
[21] C. Liu, S. Gong, C. C. Loy, and X. Lin. Person re-identification: What features are important? In ECCV, 2012. 1,
2
[22] J. Liu, R. T. Collins, and Y. Liu. Surveillance camera autocalibration based on pedestrian height distributions. In BMVC,
2011. 2, 3
[23] C. C. Loy, T. Xiang, and S. Gong. Time-delayed correlation analysis for multi-camera activity understanding. IJCV,
2010. 2
[24] C. C. Loy, T. Xiang, and S. Gong. Incremental activity modeling in multiple disjoint cameras. TPAMI, 2012. 2
[25] F. Lv, T. Zhao, and R. Nevatia. Camera calibration from
video of a walking human. TPAMI, 2006. 2, 3, 4
[26] D. Makris, T. Ellis, and J. Black. Bridging the gaps between
cameras. In CVPR, 2004. 1, 2, 6, 8
[27] N. Martinel, G. L. Foresti, and C. Micheloni. Person reidentification in a distributed camera network framework. IEEE
transactions on cybernetics, 2016. 2, 8
[28] N. McLaughlin, J. Martinez del Rincon, and P. Miller. Recurrent convolutional network for video-based person reidentification. In CVPR, 2016. 2
[29] C. Niu and E. Grimson. Recovering non-overlapping network topology using far-field vehicle tracking data. In ICPR,
2006. 2, 8
[30] A. Rahimi, B. Dunagan, and T. Darrell. Simultaneous calibration and tracking with a network of non-overlapping sensors. In CVPR, 2004. 2
[31] P. M. Roth, M. Hirzer, M. Köstinger, C. Beleznai, and
H. Bischof. Mahalanobis distance learning for person reidentification. In Person Re-Identification. 2014. 2
[32] C. Stauffer. Learning to track objects through unobserved
regions. In WACV/MOTIONS, 2005. 1, 2
[33] C. Su, J. Li, S. Zhang, J. Xing, W. Gao, and Q. Tian. Posedriven deep convolutional model for person re-identification.
In ICCV, 2017. 2
[34] C. Su, S. Zhang, J. Xing, W. Gao, and Q. Tian. Deep
attributes driven multi-camera person re-identification. In
ECCV, pages 475–491. Springer, 2016. 2
[35] P. M. Visscher. Sizing up human height variation. Nature
genetics, 40(5):489–490, 2008. 3
[36] F. Wang, W. Zuo, L. Lin, D. Zhang, and L. Zhang. Joint
learning of single-image and cross-image representations for
person re-identification. In CVPR, 2016. 2
[37] T. Wang, S. Gong, X. Zhu, and S. Wang. Person reidentification by video ranking. In ECCV, 2014. 5
[38] K. Q. Weinberger, J. Blitzer, and L. K. Saul. Distance metric
learning for large margin nearest neighbor classification. In
NIPS, 2005. 2
[39] Z. Wu, Y. Li, and R. J. Radke. Viewpoint invariant human
re-identification in camera networks using pose priors and
subject-discriminative features. TPAMI, 2015. 2
[40] T. Xiao, H. Li, W. Ouyang, and X. Wang. Learning deep feature representations with domain guided dropout for person
re-identification. In CVPR, 2016. 2
[41] Y. Yan, B. Ni, Z. Song, C. Ma, Y. Yan, and X. Yang. Person
re-identification via recurrent feature aggregation. In ECCV,
2016. 2
[1] E. Ahmed, M. Jones, and T. K. Marks. An improved deep
learning architecture for person re-identification. In CVPR,
2015. 1, 2
[2] L. Breiman. Random forests. Machine learning, 2001. 8
[3] Y. Cai, K. Huang, T. Tan, and M. Pietikainen. Recovering the
topology of multiple cameras by finding continuous paths in
a trellis. In ICPR, 2010. 2, 8
[4] Y. Cai and G. Medioni. Exploring context information for
inter-camera multiple target tracking. In WACV, 2014. 2
[5] X. Chen, K. Huang, and T. Tan. Object tracking across nonoverlapping views by learning inter-camera transfer models.
Pattern Recognition, 2014. 1, 2, 8
[6] Y.-J. Cho, J.-H. Park, S.-A. Kim, K. Lee, and K.-J. Yoon.
Unified framework for automated person re-identification
and camera network topology inference in camera networks.
In International Workshop on Cross-domain Human Identification (in conjunction with ICCV), 2017. 2, 6, 8
[7] Y.-J. Cho and K.-J. Yoon. Improving person re-identification
via pose-aware multi-shot matching. In CVPR, 2016. 2
[8] D. Chung, K. Tahboub, and E. J. Delp. A two stream siamese
convolutional neural network for person re-identification. In
ICCV, 2017. 2
[9] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon.
Information-theoretic metric learning. In ICML, 2007. 2
[10] M. Dikmen, E. Akbas, T. S. Huang, and N. Ahuja. Pedestrian
recognition with a learned metric. In ACCV. 2011. 1, 2, 3
[11] M. Farenzena, L. Bazzani, A. Perina, V. Murino, and
M. Cristani. Person re-identification by symmetry-driven accumulation of local features. In CVPR, 2010. 1, 2
[12] D. Gray, S. Brennan, and H. Tao. Evaluating appearance
models for recognition, reacquisition, and tracking. In PETS,
2007. 5, 6
[13] M. Hirzer, C. Beleznai, P. M. Roth, and H. Bischof. Person
re-identification by descriptive and discriminative classification. In SCIA, 2011. 5
[14] O. Javed, Z. Rasheed, K. Shafique, and M. Shah. Tracking
across multiple cameras with disjoint views. ICCV, 2003. 2
[15] S. Karanam, M. Gou, Z. Wu, A. Rates-Borras, O. Camps,
and R. J. Radke. A comprehensive evaluation and benchmark
for person re-identification: Features, metrics, and datasets.
arXiv:1605.09653, 2016. 5
[16] M. Koestinger, M. Hirzer, P. Wohlhart, P. M. Roth, and
H. Bischof. Large scale metric learning from equivalence
constraints. In CVPR, 2012. 1, 2, 3
[17] W. Li and X. Wang. Locally aligned feature transforms
across views. In CVPR, 2013. 5
[18] W. Li, R. Zhao, T. Xiao, and X. Wang. Deepreid: Deep filter
pairing neural network for person re-identification. In CVPR,
2014. 5
[19] S. Liao, Y. Hu, X. Zhu, and S. Z. Li. Person re-identification
by local maximal occurrence representation and metric
learning. In CVPR, 2015. 5
[20] G. Lisanti, N. Martinel, A. Del Bimbo, and G. Luca Foresti.
Group re-identification via unsupervised transfer of sparse
features encoding. In ICCV, 2017. 2
[42] D. Yi, Z. Lei, S. Liao, and S. Z. Li. Deep metric learning for
person re-identification. In ICPR, 2014. 1, 2
[43] Z. Zhang. Flexible camera calibration by viewing a plane
from unknown orientations. In ICCV, 1999. 2, 3
[44] R. Zhao, W. Ouyang, and X. Wang. Learning mid-level filters for person re-identification. In CVPR, 2014. 2
[45] L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and
Q. Tian. Mars: A video benchmark for large-scale person
re-identification. In ECCV, 2016. 2, 5
[46] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian.
Scalable person re-identification: A benchmark. In ICCV,
2015. 5
[47] W.-S. Zheng, S. Gong, and T. Xiang. Associating groups of
people. In BMVC, 2009. 2
[48] S. Zhou, J. Wang, J. Wang, Y. Gong, and N. Zheng. Point
to set similarity based deep feature learning for person reidentification. In CVPR, 2017. 2
[49] Z. Zhou, Y. Huang, W. Wang, L. Wang, and T. Tan. See
the forest for the trees: Joint spatial and temporal recurrent
neural networks for video-based person re-identification. In
CVPR, 2017. 2
Leveraging Time Series Data in Similarity
Based Healthcare Predictive Models:
The Case of Early ICU Mortality Prediction
Full Paper
Mohammad Amin Morid
David Eccles School of Business
University of Utah
amin.morid@business.utah.edu
Olivia R. Liu Sheng
David Eccles School of Business
University of Utah
olivia.sheng@business.utah.edu
Samir Abdelrahman
Department of Biomedical Informatics
University of Utah
samir.abdelrahman@utah.edu
Abstract
Patient time series classification faces challenges in high degrees of dimensionality and missingness. In
light of patient similarity theory, this study explores effective temporal feature engineering and reduction,
missing value imputation, and change point detection methods that can afford similarity-based
classification models with desirable accuracy enhancement. We select a piecewise aggregation
approximation method to extract fine-grain temporal features and propose a minimalist method to
impute missing values in temporal features. For dimensionality reduction, we adopt a gradient descent
search method for feature weight assignment. We propose new patient status and directional change
definitions based on medical knowledge or clinical guidelines about the value ranges for different patient
status levels, and develop a method to detect change points indicating positive or negative patient status
changes. We evaluate the effectiveness of the proposed methods in the context of early Intensive Care
Unit mortality prediction. The evaluation results show that the k-Nearest Neighbor algorithm that
incorporates the methods we select and propose significantly outperforms the relevant benchmarks for early
ICU mortality prediction. This study makes contributions to time series classification and early ICU
mortality prediction via identifying and enhancing temporal feature engineering and reduction methods
for similarity-based time series classification.
Keywords
time-series classification, similarity-based classification, mortality prediction, directional change point.
Introduction
Patient time series data are collected over time at varying time intervals to update patient status and to
support medical decisions, leading to a wide variety of patient time series data – e.g., vital signs, lab
results, diagnoses, prescriptions and billings in Electronic Health Records (EHRs) and other healthcare
information systems. Past studies have extracted and leveraged temporal patterns (e.g., temporal
statistics, trends, transitions and similarity) from time series data for patient event, (e.g., readmission or
mortality), risk (Johnson et al. 2012), cost prediction (Bertsimas et al. 2008), or performance prediction
(Cho et al. 2008). Some of the past studies have reduced such problems to one of classifying one or
multiple time series of the same entity into different outcome/decision classes, which is termed the time
series classification (TSC) problem (Lee et al. 2012). Time series classification can be tackled by
engineering temporal features that provide meaningful representations of time series and predictive
power at reduced dimensionality for use with general-purposed classification methods (Hippisley-Cox et
al. 2009). The synergy between a certain temporal abstraction approach and classification algorithm
varies amongst the vast choice space, and hence must be properly considered. To diagnose new patients,
physicians are often influenced by previous similar cases with relevant clinical evidences, resulting in
patient similarity theory in the clinical decision support literature (Rouzbahman and Chignell 2014). In
light of this theory, this study focuses on enhancing similarity-based classification methods for patient
time series classification.
The challenges of time series data include high dimensionality and high degree of missingness. Temporal
abstraction of time series, missing value imputation and feature reduction approaches provide options to
address these challenges. In addition, changes or transitions in various contexts including time series
classification have provided essential predictive power (Lin et al. 2014). To explore effective temporal
abstraction, missing value imputation, feature weight assignment and change point detection methods
that can afford the k-Nearest-Neighbor (kNN) algorithm with desirable synergy, this study asks these
research questions:
“What are the effective temporal abstraction, missing value imputation and feature reduction methods
for similarity-based time series classification?"
“What is an effective approach to define and detect patient status change points for similarity-based
time series classification?”
Proposed Framework for Similarity-based Patient Time Series
Classification
In this section, we introduce the methods this study has selected or proposed for patient time series
classification and some background and justifications for these methods.
Similarity based classification
Similarity-based classifiers estimate the class label of a test or new sample based on the similarities
between the test sample and a set of labeled training samples, and the pairwise similarities between the
training samples (Chen et al. 2009). The most popular family of algorithms has grown out of the k-Nearest-Neighbor (k-NN) algorithm, where k is the number of training samples with maximal similarities to a test
sample. Many advancements of the original k-NN including ENN (two-way similarity) (Tang and He
2015), CNN (condensed nearest neighbors) (Hart 1968) and kernel-based approaches (e.g., SVM-KNN)
have been proposed (Chen et al. 2009) for similarity-based classification as well. To examine the
usefulness of temporal features and reduction, and their synergy with similarity-based classification
approaches, we made a conscientious decision to use the original k-NN in this study. The potentials of
advancing the methods for patient time-series classification based on other similarity-based classification
remain as future research directions.
Past research on similarity measure has led to a wide variety of time series distance functions such as
Dynamic Time Warping (DTW) (Berndt and Clifford 1994), Edit Distance with Real Penalty (ERP) (Chen
and Ng 2004) and Longest Common Subsequence (LCSS) (Vlachos et al. 2002) to measure dissimilarities
based on different considerations. Our empirical exploration of different distance functions shows that the
simplicity of the Manhattan distance function offers desirable flexibility to leverage the joint benefits of
distance function, PAA grain size and missing value imputation methods. We hence select the Manhattan
distance function in k-NN for patient time series classification.
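A minimal sketch of the similarity-based classifier discussed above — k-NN with the (optionally weighted) Manhattan distance over temporal feature vectors — is shown below. It is a generic illustration with made-up data shapes, not the study's implementation.

```python
# k-NN prediction with a (weighted) Manhattan distance.
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=5, weights=None):
    w = np.ones(train_X.shape[1]) if weights is None else weights
    dists = np.sum(w * np.abs(train_X - x), axis=1)   # weighted L1 distance
    nearest = np.argsort(dists)[:k]                   # k most similar patients
    return Counter(train_y[nearest]).most_common(1)[0][0]

train_X = np.random.rand(100, 24)        # e.g., 24 temporal features per patient
train_y = np.random.randint(0, 2, 100)   # 0 = survived, 1 = died (toy labels)
print(knn_predict(train_X, train_y, np.random.rand(24)))
```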
Temporal abstraction
Temporal abstraction decides on how to transform time series data into the input features of a
classification model. The challenges of time series data include high dimensionality and high amount of
missing data amongst others. Extant temporal abstraction methods vary in their considerations to
address these challenges and the predictive power of the resulting temporal features (Fu 2011). One
commonly used simple temporal abstraction approach, called piecewise aggregation approximation
(PAA), segments a time series into a sequence of fixed-sized non-overlapping consecutive windows (or
intervals) (Lin et al. 2003). Each window is represented by the average of all data values time-stamped
within the window. We regard the size of a PAA window as the grain size and divide the temporal features
produced by PAA into fine-grain features versus coarse grain features. Table 1 compares the
dimensionality, degrees of missingness and information loss of fine-grain versus coarse-grain features.
                   Fine-Grain   Coarse-Grain
Window size        Small        Large
Information loss   Low          High
Dimensionality     High         Low
Missingness        High         Low
Table 1. Comparison of fine-grain versus coarse-grain PAA temporal abstraction
The low information loss of fine-grain PAA could afford k-NN improved accuracy over coarse-grain PAA
features. Exploring effective grain-size, dimensionality reduction and missing value imputation methods
are necessary to realize additional predictive power of fine-grain PAA temporal features.
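A short sketch of PAA as described above: the series is split into fixed-size windows and each window is represented by the mean of its observed values; windows with no readings stay missing. The vital-sign values below are toy numbers.

```python
# Piecewise aggregation approximation (PAA) with missing readings kept as NaN.
import numpy as np

def paa(series, window_size):
    features = []
    for start in range(0, len(series), window_size):
        window = series[start:start + window_size]
        window = window[~np.isnan(window)]            # ignore missing readings
        features.append(window.mean() if window.size else np.nan)
    return np.array(features)

hr = np.array([80, 82, np.nan, 90, 95, np.nan, np.nan, 110, 112])
print(paa(hr, window_size=3))   # fine grain: 3 readings per window
print(paa(hr, window_size=9))   # coarse grain: one window for the whole series
```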
PAA grain decision
The optimal PAA grain decision is analytically intractable due to the apparent complexity of considering
the interrelated factors including missing value imputation, feature reduction and distance function.
Empirical comparison should be employed to decide on the grain size. To reduce the number of empirical
experiments, we only compare the classification accuracy resulting from different grain sizes while
holding other methods fixed at the selected or proposed settings for time series classification.
Missing value imputation (MVI)
Missing value handling methods such as propensity score imputation, predictive model–based imputation
and hot-deck imputation can be found in past literature (Penny and Chesney 2006). Some of the time
series distance functions, such as DTW, also incorporate missing value imputation (Berndt and Clifford
1994). In particular, DTW considers prior and posterior values of a missing value at a time point when
deciding on the similarity between two time series. Motivated by DTW, we propose an adjacency-based
imputation method which replaces a missing value by its posterior value if its prior value is not available
or by its prior value if its posterior value is not available. If both prior and posterior values are available,
their average becomes the imputed value. The proposed imputation can be performed independent of the
distance function of choice.
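A minimal sketch of the proposed adjacency-based imputation over a PAA window sequence could look as follows. Here the nearest observed earlier and later window values stand in for the "prior" and "posterior" values, which is our reading of details the paper does not spell out.

```python
import numpy as np

def adjacency_impute(series):
    """Adjacency-based imputation over a PAA window sequence (NaN = missing):
    use the nearest earlier and later observed values; average them when both
    exist, otherwise use whichever one is available."""
    x = np.asarray(series, dtype=float).copy()
    observed = np.where(~np.isnan(x))[0]
    if observed.size == 0:
        return x  # nothing to impute from
    for i in np.where(np.isnan(x))[0]:
        prev = observed[observed < i]
        nxt = observed[observed > i]
        if prev.size and nxt.size:
            x[i] = (x[prev[-1]] + x[nxt[0]]) / 2.0
        elif prev.size:
            x[i] = x[prev[-1]]
        else:
            x[i] = x[nxt[0]]
    return x
```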
Feature weighting (FW)
Feature reduction for classification can utilize a variety of approaches such as information gain, Gini
index and Chi-square metrics to calculate feature rankings or weights for feature selection or reduction
(Singh et al. 2010). Because of its well-tested ability to improve accuracy, we adopt the Gradient Descent
(GD) method (Modha and Spangler 2003; Wettschereck and Aha 1995) to assign weights to time series
features.
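The exact gradient-descent weighting procedure follows Modha and Spangler (2003) and Wettschereck and Aha (1995) and is not reproduced here; the sketch below only illustrates the general idea of learning non-negative feature weights for a weighted Manhattan distance with simple gradient updates on randomly sampled pairs, and should not be read as the paper's actual algorithm.

```python
import numpy as np

def learn_feature_weights(X, y, epochs=5000, lr=0.01):
    """Illustrative gradient-descent feature weighting: shrink weighted distances
    between same-class pairs and grow them between different-class pairs."""
    n, d = X.shape
    w = np.ones(d)
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        i, j = rng.integers(0, n, size=2)
        diff = np.abs(X[i] - X[j])            # per-feature contribution to the L1 distance
        sign = 1.0 if y[i] == y[j] else -1.0  # +1: same class (reduce), -1: different class (increase)
        w -= lr * sign * diff                 # gradient step on the weighted distance w . diff
        w = np.clip(w, 0.0, None)             # keep weights non-negative
    return w / (w.sum() + 1e-12)

def weighted_manhattan(a, b, w):
    return np.sum(w * np.abs(a - b))
```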
Change point detection (CPD)
Many change point detection methods focus on detecting changes in the mean, variance or trend in a time
series that follows an assumed distribution or model – e.g., Gaussian or regression-based (Hawkins and Zamba 2012). Such
methods are not appropriate for detecting change points in patient time series due to the underlying data
distribution assumptions. In addition, a change in the mean or variance of numeric patient time series, for
example, of blood pressure may not be a change in patient status if both the values before and after a
change point represent the same patient status – e.g. normal. Therefore, this paper uses a change point
detection method based on clinical domain knowledge.
Directional change point detection method
Past research has emphasized the importance of change point in clinical guidelines and decision supports
(Assareh et al. 2011; La Rosa et al. 2008; Sawaya et al. 2011). The number of change points in a patient’s
time series can help differentiate patients’ conditions and outcomes. In this paper we propose to define
and detect change points based on changes in the health status according to categories of values in the
time series rather than measures like mean or variance. For instance, systolic blood pressure less than 120
is considered normal, while a value between 120 and 139 is considered prehypertension (Le et al. 2013). A
directional change point is defined as a change in a patient’s status category.
We denote a directional change point of patient i for time series (TS) j as DCP(i, j). The input is the
status(i, j, k) that indicates the health status level of patient i during time window k based on TS j. The
status output is an integer, where the lowest value of status for a feature represents the worst category of
health status, while the highest value of status for this feature represents the best category of health
status. Status change of patient i at time window k in terms of TS j is computed as follows:
StatusChange(i, j, k) = Positive, if status(i, j, k) > status(i, j, m), m < k
                      = Negative, if status(i, j, k) < status(i, j, m), m < k
                      = Stable,   if status(i, j, k) = status(i, j, m), m < k
where m is the time window of the most recent available status of the patient before window k. We
propose three change point features for patient i based on TS j: the number of directional change points,
or Num_of_DCP(i, j), and the first and last status changes, or FirstStatusChange(i, j) and
LastStatusChange(i, j). The following determines the value of the directional change point of patient i's status
at time window k in terms of TS j:
DCP(i, j, k) = 1, if status(i, j, k) is opposite to status(i, j, r), r < k
             = 0, if status(i, j, k) is not opposite to status(i, j, r), r < k
where r is the most recent available status change of the patient before window k that is either positive or
negative (i.e., Stable is not counted). DCP(i,j,1) for the first window is set to zero. Assume W is the number
of time windows (e.g., 24), the number of directional change points is derived as:
Num_of_DCP(i, j) = Σ_{k=1}^{W} DCP(i, j, k)
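Putting the above definitions together, a small sketch of the change point features for one patient and one time series might look as follows. Treating FirstStatusChange and LastStatusChange as the direction (+1/-1) of the first and last non-stable status changes is our reading of the definitions above, not a detail stated explicitly in the paper.

```python
def dcp_features(status):
    """Directional change point features for one patient and one time series.
    `status` is a list of per-window health-status levels (None = missing).
    Returns (num_of_dcp, first_status_change, last_status_change)."""
    changes = []   # (window index, +1 / -1 / 0) for windows with an observed status
    prev = None    # most recent available status
    for k, s in enumerate(status):
        if s is None:
            continue
        if prev is not None:
            if s > prev:
                changes.append((k, +1))   # Positive
            elif s < prev:
                changes.append((k, -1))   # Negative
            else:
                changes.append((k, 0))    # Stable
        prev = s
    num_dcp = 0
    last_dir = None  # direction of the most recent non-stable change
    for _, c in changes:
        if c == 0:
            continue
        if last_dir is not None and c != last_dir:
            num_dcp += 1  # direction reversed: a directional change point
        last_dir = c
    non_stable = [c for _, c in changes if c != 0]
    first_change = non_stable[0] if non_stable else 0
    last_change = non_stable[-1] if non_stable else 0
    return num_dcp, first_change, last_change
```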
The most recent and most related study to this method is the change point detection method proposed by
Lin et al. (2014) for time-to-event prediction of chronic conditions using EHR data. To
detect changes in patients’ status, the numerical time series values are replaced by three nominal states
(i.e., high, medium, low) based on the numerical values and three nominal trends (i.e., decrease, increase,
stable) based on the numerical value changes of each predictor. Their change point detection will be used as a
benchmark for evaluating our proposed domain-based directional change point detection method.
kNN-TSC-FIWC
We refer to the proposed patient time series classification method that combines the kNN algorithm with
fine-grain temporal features (F), missing-value imputation (I), feature weight assignment (W) and change
point detection (C) methods we select or propose to enhance similarity-based time series classification as
kNN-TSC-FIWC. Figure 1 summarizes the flow of the training and testing phases of kNN-TSC-FIWC.
Another benchmark of kNN-TSC-FIWC is Lee et al. (2012), which proposes a similarity
based time series classification algorithm - KNN-TSC to predict customer churn after 30 days. KNN-TSC
divides each feature’s time series data into 15 equal size intervals and adopts the Discrete Fourier
transform (DFT) technique for time series similarity calculation. It does not assign feature weights. KNN-TSC utilizes stratified average voting to estimate the churn decision of a test sample. In empirical
evaluation, we will compare the accuracy of kNN-TSC-FIWC to that of KNN-TSC.
Figure 1: An overview of kNN-TSC-FIWC
Empirical Evaluation
To evaluate the effectiveness of kNN-TSC-FIWC and its enabling methods, we compare them to
benchmarks representing combinations of different temporal features, classification algorithms, similarity
functions and dimension reduction methods in the context of early Intensive Care Unit (ICU) mortality
prediction. Accurate ICU mortality prediction impacts medical therapy, triaging, end-of-life care, and
many other aspects of ICU care (Gartman et al. 2009). To enhance the performance of ICU mortality
prediction, more sophisticated machine learning methods have been utilized recently. The
PhysioNet/Computing in Cardiology 2012 Challenge aimed to provide a benchmark environment for early
ICU mortality prediction (Silva et al. 2012). To the best of our knowledge, its winner (CCW hereafter) is
the best early ICU mortality prediction benchmark using patients’ first 48 hours of ICU time series data.
CCW utilizes a new Bayesian ensemble scheme comprising 500 weak decision tree learners, each of which
randomly assigns an intercept and gradient to a randomly selected single feature (Johnson et al. 2012).
Data and evaluation procedure
To compare our results with CCW, we use the same experimental setup in the competition where patients
were filtered to 22,561 patients who are at least 16 years old and remained in the ICU for at least 48
hours. The data input consists of time series data of 36 variables (e.g., Glasgow Coma Score GCS)
extracted from patients’ ICU stay, plus four static features (i.e., age, gender, height, and initial weight).
The target variable is a binary feature showing whether or not the patient eventually dies in the hospital
before discharge. While almost half of the ICU patients eventually died, most of these deaths happened
outside the hospital. The problem we analyze in this study is the prediction of in-hospital mortality, which has
an imbalanced distribution of 18% positive against 82% negative as shown in Table 2. It is interesting to
observe that the difference in average ICU stays is very minor while the difference in outcomes is life
versus death. Early mortality prediction may be able to help decision makers find ways to improve an ICU
patient’s survival rate. For finding and tuning the parameters, including the best k, 50% of the
data was used, while the rest remained unseen for validation. In all experiments, 20-fold cross validation
was used to evaluate the performance of each method based on the validation dataset. Classification
performance was measured according to the average precision, recall, and F-measure across 20 folds.
                            Number of Patients   Average Hospital Stay   Average ICU Stay
Alive                       11977 (53%)          9.68                    5.21
Died                        10584 (47%)          13.87                   5.53
Died out of Hospital (no)   6516 (29%)           13.39                   4.86
Died in Hospital (yes)      4068 (18%)           13.5                    6.6
Died in ICU (yes)           3240 (14%)           10.53                   7.49
Table 2: Data distribution over the target variable (mortality)
Results
Table 3 shows that using two-hour time windows for the proposed method outperforms the same
method with the other window sizes. The performance of the smallest grain size suffers
from high missingness and the resulting noise in the PAA temporal values that two patients have in common, while the high
loss of information detail hurts the performance of large grain sizes. The best prediction performance is
reached when the effects of missing values and information loss are balanced at a window size of 2 hours. Hence, the
grain size chosen for the rest of the evaluations is 2 hours.
                 Without change point detection (kNN-TSC-FIW)      With change point detection (kNN-TSC-FIWC)
Window size      Accuracy   F-Measure (yes)   F-Measure (no)       Accuracy   F-Measure (yes)   F-Measure (no)
1                0.72       0.56              0.81                 0.80       0.69              0.91
2                0.78       0.66              0.89                 0.82       0.77              0.93
4                0.75       0.63              0.76                 0.80       0.69              0.91
8                0.74       0.61              0.83                 0.75       0.63              0.86
12               0.66       0.48              0.75                 0.64       0.45              0.74
24               0.55       0.30              0.70                 0.55       0.30              0.70
48               0.44       0.19              0.62                 0.43       0.16              0.60
Table 3. Window size (abstraction size) effect on performance
Table 4 compares the performance of kNN-TSC-FIW where a few benchmarking distance functions
replace the Manhattan distance function. Although the performance results are close, they do validate the
performance benefit the simple Manhattan distance function offers.
             Without Change Point (kNN-TSC-FIW)                    With Change Point (kNN-TSC-FIWC)
             Accuracy   F-Measure (yes)   F-Measure (no)           Accuracy   F-Measure (yes)   F-Measure (no)
Manhattan    0.78       0.66              0.89                     0.82       0.77              0.93
Euclidean    0.75       0.62              0.84                     0.81       0.73              0.92
DTW          0.71       0.59              0.81                     0.80       0.70              0.91
EDR          0.69       0.53              0.77                     0.79       0.68              0.90
ERP          0.68       0.52              0.77                     0.75       0.63              0.86
DFT          0.64       0.46              0.74                     0.70       0.58              0.81
TSC          0.64       0.46              0.74                     0.70       0.58              0.81
LCSS         0.57       0.33              0.71                     0.66       0.48              0.75
Table 4. Comparison of different time-series distance functions
Table 5 compares the performance of kNN-TSC-FIW against some of the well-established data mining
methods, including support vector machine (SVM), the original kNN without feature weight assignment,
neural network (NN) and logistic regression (LR). The input features for these algorithms are derived
based on the fine-grain PAA of 2-hour window size. The comparison validates the performance advantage of
similarity-based classification over its non-similarity counterparts for early ICU mortality prediction.
Table 6 compares the performance of kNN combined with the selected or proposed methods starting with
fine-grain time series abstraction (kNN-TSC-F), missing value imputation (kNN-TSC-FI), feature
weighting (kNN-TSC-FIW) and change point detection (kNN-TSC-FIWC). The features based on the
proposed fine-grain temporal abstraction, missing value imputation and feature weighting help the kNN-TSC-FIW model outperform the CCW benchmark by increasing the F-measure of the “yes” class by 11%.
The proposed change point features further double this performance improvement. The significance
and benefits of these improvements in early ICU mortality prediction by the proposed classification
features should not be underestimated.
Table 7 shows the significant effect of the proposed feature weighting technique on the proposed method
(without considering change point features) against well-established feature weighting techniques,
including Gini index, Chi-square and information gain, as well as the method proposed by Lee et al. (2012).
                              Without Change Point                        With Change Point
                              Accuracy   F-Measure (yes)   F-Measure (no) Accuracy   F-Measure (yes)   F-Measure (no)
kNN-TSC-FIW (left) /
kNN-TSC-FIWC (right)          0.78       0.66              0.89           0.82       0.77              0.93
CCW                           0.70       0.55              0.81           0.75       0.62              0.84
CCW on fine-grain features    0.66       0.48              0.75           0.71       0.59              0.81
SVM                           0.70       0.58              0.81           0.65       0.47              0.85
LR                            0.61       0.42              0.74           0.72       0.56              0.81
kNN                           0.68       0.50              0.77           0.68       0.52              0.77
NN                            0.64       0.46              0.74           0.68       0.52              0.77
Table 5. Similarity-based method against non-similarity-based methods using fine-grain abstraction
Method          Grain    MVI   FW    CPD   Accuracy   F-Measure (yes)   F-Measure (no)
CCW             Coarse   No    No    No    0.70       0.55              0.81
kNN-TSC-F       Fine     No    No    No    0.55       0.29              0.71
kNN-TSC-FI      Fine     Yes   No    No    0.64       0.40              0.83
kNN-TSC-FIW     Fine     Yes   Yes   No    0.78       0.66              0.89
kNN-TSC-FIWC    Fine     Yes   Yes   Yes   0.82       0.77              0.93
Table 6. Full performance results by class
                              Without Imputation                          With Imputation
                              Accuracy   F-Measure (yes)   F-Measure (no) Accuracy   F-Measure (yes)   F-Measure (no)
kNN-TSC-FW (left) /
kNN-TSC-FIW (right)           0.60       0.40              0.72           0.78       0.66              0.89
Lee et al.                    0.52       0.27              0.67           0.72       0.56              0.81
Manual weights                0.57       0.33              0.71           0.71       0.59              0.81
Chi Square                    0.50       0.24              0.64           0.68       0.52              0.77
Information Gain              0.53       0.28              0.68           0.71       0.55              0.81
Gini Index                    0.55       0.30              0.70           0.70       0.58              0.81
No Feature Weights            0.44       0.19              0.62           0.49       0.27              0.64
Table 7. Comparison of different feature weighting techniques on the proposed method
This table shows the advantage of the selected Gradient Descent FW over methods that assign weights
based on pre-calculated values as well as domain-knowledge-based weight assignment (i.e., Manual
weights). The performance of a model without the proposed adjacency-based imputation on the left-hand
side of Table 7 is significantly lower than that of the same model with imputation on the right-hand side,
providing evidence for the effectiveness of the proposed minimalist imputation method.
Table 8 compares the performance of kNN-TSC-FIWC using different change point detection (CPD)
methods including parametric (M-G and V-G) and non-parametric (L-NP, S-NP, and LS-NP) CPD
methods as well as the change point detection method proposed by Lin et al. (2014). Although nonparametric CPD approaches perform better than parametric CPD approaches, none of them nor the CPD
method proposed by Lin et al. (2014) could outperform kNN-TSC-FIWC. The comparison provides
evidence that changes in patient time series cannot be detected without considering domain-based
patient status categories. In addition, our post-process analysis shows that patients with a higher number of
DCPs are more likely to die due to their unstable situation. Patients with negative last change points,
which indicate declining health status, dominate the early death class. These patterns show the importance
of the proposed change point detection features.
Change Point Detection Method   Accuracy   F-Measure (yes)   F-Measure (no)
kNN-TSC-FIWC                    0.82       0.77              0.93
Lin et al.                      0.75       0.63              0.86
L-NP                            0.79       0.68              0.90
S-NP                            0.75       0.63              0.76
LS-NP                           0.80       0.70              0.91
M-G                             0.75       0.62              0.84
V-G                             0.75       0.62              0.84
Table 8. Comparison of kNN-TSC-FIWC using different Change Point Detection Methods
                                    Without Change Point                      With Change Point
Window   Num of      Died           Accuracy  F-Measure (yes)  F-Measure (no) Accuracy  F-Measure (yes)  F-Measure (no)
Size     Patients
24       33625       5102 (15%)     0.72      0.60             0.81           0.78      0.67             0.90
48       22561       4068 (18%)     0.78      0.66             0.89           0.82      0.77             0.93
72       15913       3389 (21%)     0.80      0.70             0.91           0.84      0.82             0.94
96       11976       2903 (24%)     0.75      0.63             0.86           0.82      0.75             0.92
120      9485        2647 (27%)     0.74      0.61             0.83           0.81      0.72             0.92
Table 9. Comparison of kNN-TSC-FIWC using different prediction windows
Contributions and Limitations
This study makes several contributions to the patient time series classification and the early ICU mortality
prediction research fields:
Based on the patient similarity theory, the study evaluates the effectiveness of the similarity-based patient
time series classification approach.
The study further evaluates and identifies effective fine-grain PAA temporal abstraction and similarity
functions, and proposes necessary enhancements via adjacency-based missing value imputation. The
study also evaluates the effectiveness of the gradient descent feature weight assignment approach for
reducing temporal dimensions and improving accuracy.
To the best of our knowledge, the study is the first to propose directional patient status change point
detection to extract effective features for patient time series classification.
The study contributes to solutions to an important healthcare predictive problem – early ICU mortality
prediction by significantly improving prediction accuracy with a new framework that embeds effective
extant methods and new enhancements. Both intensive care providers and patients’ families can benefit from
this framework when facing the crucial decision between aggressive and supportive treatment. Also, unexpected deaths,
which are still common despite evidence that patients often show signs of clinical deterioration hours in
advance, can be anticipated.
The main limitations of this study include the use of a single data set for evaluation, the difficulty of
explaining a kNN model, and the need to examine additional methods appropriate for time series
classification. Future research should pursue directions that address these limitations.
REFERENCES
Assareh, H., Smith, I., and Mengersen, K. 2011. "Bayesian Change Point Detection in Monitoring Cardiac Surgery
Outcomes," Quality Management in Healthcare (20:3), pp. 207-222.
Berndt, D.J., and Clifford, J. 1994. "Using Dynamic Time Warping to Find Patterns in Time Series," KDD
workshop: Seattle, WA, pp. 359-370.
Bertsimas, D., Bjarnadóttir, M.V., Kane, M.A., Kryder, J.C., Pandey, R., Vempala, S., and Wang, G. 2008.
"Algorithmic Prediction of Health-Care Costs," Operations Research (56:6), pp. 1382-1392.
Chen, L., and Ng, R. 2004. "On the Marriage of Lp-Norms and Edit Distance," Proceedings of the Thirtieth
international conference on Very large data bases-Volume 30: VLDB Endowment, pp. 792-803.
Chen, Y., Garcia, E.K., Gupta, M.R., Rahimi, A., and Cazzanti, L. 2009. "Similarity-Based Classification: Concepts
and Algorithms," The Journal of Machine Learning Research (10), pp. 747-776.
Cho, B.H., Yu, H., Kim, K.-W., Kim, T.H., Kim, I.Y., and Kim, S.I. 2008. "Application of Irregular and Unbalanced
Data to Predict Diabetic Nephropathy Using Visualization and Feature Selection Methods," Artificial
intelligence in medicine (42:1), pp. 37-53.
Fu, T.-c. 2011. "A Review on Time Series Data Mining," Engineering Applications of Artificial Intelligence (24:1),
pp. 164-181.
Gartman, E.J., Casserly, B.P., Martin, D., and Ward, N.S. 2009. "Using Serial Severity Scores to Predict Death in
Icu Patients: A Validation Study and Review of the Literature," Current opinion in critical care (15:6), pp.
578-582.
Hart, P. 1968. "The Condensed Nearest Neighbor Rule (Corresp.)," IEEE Transactions on Information Theory
(14:3), pp. 515-516.
Hawkins, D.M., and Zamba, K. 2012. "Statistical Process Control for Shifts in Mean or Variance Using a
Changepoint Formulation," Technometrics.
Hippisley-Cox, J., Coupland, C., Robson, J., Sheikh, A., and Brindle, P. 2009. "Predicting Risk of Type 2 Diabetes
in England and Wales: Prospective Derivation and Validation of Qdscore," Bmj (338), p. b880.
Johnson, A.E., Dunkley, N., Mayaud, L., Tsanas, A., Kramer, A., and Clifford, G.D. 2012. "Patient Specific
Predictions in the Intensive Care Unit Using a Bayesian Ensemble," Computing in Cardiology (CinC),
2012: IEEE, pp. 249-252.
La Rosa, P.S., Nehorai, A., Eswaran, H., Lowery, C.L., and Preissl, H. 2008. "Detection of Uterine Mmg
Contractions Using a Multiple Change Point Estimator and the K-Means Cluster Algorithm," IEEE
Transactions on Biomedical Engineering (55:2), pp. 453-467.
Le, H.T., Harris, N.S., Estilong, A.J., Olson, A., and Rice, M.J. 2013. "Blood Glucose Measurement in the Intensive
Care Unit: What Is the Best Method?," Journal of diabetes science and technology (7:2), pp. 489-499.
Lee, Y.-H., Wei, C.-P., Cheng, T.-H., and Yang, C.-T. 2012. "Nearest-Neighbor-Based Approach to Time-Series
Classification," Decision Support Systems (53:1), pp. 207-217.
Lin, J., Keogh, E., Lonardi, S., and Chiu, B. 2003. "A Symbolic Representation of Time Series, with Implications
for Streaming Algorithms," Proceedings of the 8th ACM SIGMOD workshop on Research issues in data
mining and knowledge discovery: ACM, pp. 2-11.
Lin, Y.-K., Chen, H., Brown, R.A., Li, S.-H., and Yang, H.-J. 2014. "Time-to-Event Predictive Modeling for
Chronic Conditions Using Electronic Health Records," IEEE Intelligent Systems (29:3), pp. 14-20.
Modha, D.S., and Spangler, W.S. 2003. "Feature Weighting in K-Means Clustering," Machine learning (52:3), pp.
217-237.
Penny, K.I., and Chesney, T. 2006. "Imputation Methods to Deal with Missing Values When Data Mining Trauma
Injury Data," 28th International Conference on Information Technology Interfaces, 2006: IEEE, pp. 213-218.
Rouzbahman, M., and Chignell, M. 2014. "Predicting Icu Death with Summarized Data: The Emerging Health Data
Search Engine."
Sawaya, H., Sebag, I.A., Plana, J.C., Januzzi, J.L., Ky, B., Cohen, V., Gosavi, S., Carver, J.R., Wiegers, S.E., and
Martin, R.P. 2011. "Early Detection and Prediction of Cardiotoxicity in Chemotherapy-Treated Patients,"
The American journal of cardiology (107:9), pp. 1375-1380.
Silva, I., Moody, G., Scott, D.J., Celi, L.A., and Mark, R.G. 2012. "Predicting in-Hospital Mortality of Icu Patients:
The Physionet/Computing in Cardiology Challenge 2012," Computing in Cardiology (CinC), 2012: IEEE,
pp. 245-248.
Singh, S.R., Murthy, H.A., and Gonsalves, T.A. 2010. "Feature Selection for Text Classification Based on Gini
Coefficient of Inequality," FSDM (10), pp. 76-85.
Tang, B., and He, H. 2015. "Enn: Extended Nearest Neighbor Method for Pattern Recognition [Research Frontier],"
IEEE Computational Intelligence Magazine (10:3), pp. 52-60.
Vlachos, M., Kollios, G., and Gunopulos, D. 2002. "Discovering Similar Multidimensional Trajectories," Data
Engineering, 2002. Proceedings. 18th International Conference on: IEEE, pp. 673-684.
Wettschereck, D., and Aha, D.W. 1995. "Weighting Features," in Case-Based Reasoning Research and
Development. Springer, pp. 347-358.
The Spatial Outage Capacity
of Wireless Networks
arXiv:1708.05870v2 [] 23 Jan 2018
Sanket S. Kalamkar, Member, IEEE, and Martin Haenggi, Fellow, IEEE
Abstract
We address a fundamental question in wireless networks that, surprisingly, has not been studied
before: what is the maximum density of concurrently active links that satisfy a certain outage constraint?
We call this quantity the spatial outage capacity (SOC), give a rigorous definition, and analyze it
for Poisson bipolar networks with ALOHA. Specifically, we provide exact analytical and approximate
expressions for the density of links satisfying an outage constraint and give simple upper and lower
bounds on the SOC. In the high-reliability regime where the target outage probability is close to zero, we
obtain an exact closed-form expression of the SOC, which reveals the interesting and perhaps counterintuitive result that all transmitters need to be always active to achieve the SOC, i.e., the transmit
probability needs to be set to 1 to achieve the SOC.
Index Terms
Interference, outage probability, Poisson point process, spatial outage capacity, stochastic geometry,
wireless networks.
I. I NTRODUCTION
A. Motivation
In a wireless network, the outage probability of a link is a key performance metric that
indicates the quality-of-service. To ensure a certain reliability, it is desirable to impose a limit
on the outage probability, which depends on path loss, fading, and interferer locations. For
example, in an interference-limited network, the outage probability of a link is the probability
S. S. Kalamkar and M. Haenggi are with the Department of Electrical Engineering, University of Notre Dame, Notre Dame,
IN, 46556 USA. (e-mail: {skalamka, mhaenggi}@nd.edu).
This work is supported by the US National Science Foundation (grant CCF 1525904).
Part of this work was presented at the 2017 IEEE International Conference on Communications (ICC’17) [1].
that the signal-to-interference ratio (SIR) at the receiver of that link is below a certain threshold.
The interference originates from concurrently active transmitters as governed by a medium access
control (MAC) scheme. Clearly, if more transmitters are active, then the interference at a receiver
is higher, which increases the outage probability. Hence, given the outage constraint, a natural and
a fundamental question, which has surprisingly remained unanswered, is “What is the maximum
density of concurrently active links that meet the outage constraint?” To rigorously formulate
this question, we introduce a quantity termed the spatial outage capacity (SOC). The SOC has
applications in a wide range of wireless networks, including cellular, ad hoc, device-to-device
(D2D), machine-to-machine (M2M), and vehicular networks. In this paper we focus on the
Poisson bipolar model, which is applicable to infrastructureless networks such as ad hoc, D2D,
and M2M networks.
B. Definition and Connection to SIR Meta Distribution
Modeling the random node locations as a point process, formally, the SOC is defined as
follows.
Definition 1 (Spatial outage capacity). For a stationary and ergodic point process model where
λ is the density of potential transmitters, p is the fraction of links that are active at a time, and
η(θ, ǫ) is the fraction of links in each realization of the point process that have an SIR greater
than θ with probability at least 1 − ǫ, the SOC is
S(θ, ǫ) ≜ sup_{λ, p} λp η(θ, ǫ),    (1)
where θ ∈ R+, ǫ ∈ (0, 1), and the supremum is taken over λ > 0 and p ∈ (0, 1].
The SOC formulation applies to all MAC schemes where the fraction of active links in each
time slot is p and each link is active for a fraction p of the time. This includes MAC schemes
where the events that nodes are transmitting are dependent on each other, such as carrier-sense
multiple access (CSMA). In Def. 1 ǫ represents an outage constraint. Thus the SOC yields
the maximum density of links that satisfy an outage constraint. Alternatively, the SOC is the
maximum density of concurrently active links that have a success probability (reliability) greater
than 1 − ǫ. Hence the SOC represents the maximum density of reliable links, where ǫ denotes
a reliability threshold. We call the pair of λ and p that achieves the SOC as the SOC point.
We denote the density of concurrently active links that have an outage probability less than ǫ
(alternatively, a reliability of 1 − ǫ or higher) as
λǫ ≜ λp η(θ, ǫ),    (2)
which results in S(θ, ǫ) = sup_{λ, p} λǫ. Due to the ergodicity of the point process Φ, we can express
λǫ as the limit
λǫ = lim_{r→∞} (1/(πr²)) Σ_{y∈Φ: ‖y‖<r} 1(P(SIR_ỹ > θ | Φ) > 1 − ǫ),
where ỹ is the receiver paired with transmitter y and 1(·) is the indicator function. From this
formulation, it is apparent that the outage constraint results in a static dependent thinning of Φ
to a point process of density λǫ .
The probability η(θ, ǫ) in (1), termed meta distribution of the SIR in [2], is the complementary
cumulative distribution function (ccdf) of the conditional link success probability which is given
as
Ps (θ) ≜ P(SIR > θ | Φ),
where the conditional probability is calculated by averaging over the fading and the medium
access scheme (if random) of the interferers, and the SIR is calculated at the receiver of the link
under consideration. Accordingly, the meta distribution is given as
η(θ, ǫ) ≜ P!t(Ps (θ) > 1 − ǫ),    (3)
where P!t (·) denotes the reduced Palm probability, given that an active transmitter is present at
the prescribed location, and the SIR is calculated at its associated receiver. Under the expectation
over the point process, it is the typical receiver. The meta distribution is the distribution of the
conditional link success probability, which is obtained by taking an expectation over the point
process. In other words, the meta distribution is the probability that the success probability of
the transmission over the typical link is at least 1 − ǫ. As a result, as is standard in stochastic
geometry, the calculation of the SOC is done at the typical user and involves averaging over the
point process. Due to the ergodicity of the point process, η(θ, ǫ) corresponds to the fraction of
reliable links in each realization of the point process. Hence we can calculate the SOC using
the meta distribution framework.1
1
Note that the meta distribution provides the tool to analyze the SOC, but the meta distribution is not needed to define the
SOC.
Fig. 1. The histogram of the empirical probability density function of the link success probability in a Poisson bipolar network
with ALOHA channel access scheme for transmit probabilities p = 1/10 and p = 1. Both cases have the same mean success
probability of ps (θ) = 0.5944, but we see a different distribution of link success probabilities for different values of the pair
density λ and transmit probability p. For p = 1/10, the link success probabilities mostly lie between 0.4 and 0.8 (concentrated
around their mean), while for p = 1, they are spread much more widely. The SIR threshold θ = −10 dB, distance between a
transmitter and its receiver R = 1, path loss exponent α = 4, and λp = 1/3.
The conditional link success probability Ps (θ) (and thus the meta distribution η(θ, ǫ)) allows us
to directly calculate the standard (mean) success probability, which is a key quantity of interest
in wireless networks. In particular, we can express the mean success probability as
ps (θ) ≜ P(SIR > θ) = E!t(Ps (θ)) = ∫₀¹ η(θ, x) dx,
where the SIR is calculated at the typical receiver and E!t(·) denotes the expectation with respect
to the reduced Palm distribution. The standard success probability can be easily calculated by
taking the average of the link success probabilities. Hence, in a realization of the network, ps (θ)
can be interpreted as a spatial average which provides limited information about the outage
performance of an individual link. As Fig. 1 shows, for a Poisson bipolar network with ALOHA
where each transmitter has an associated receiver at a distance R, depending on the network
parameters, the distribution of Ps (θ) varies greatly for the same ps (θ). Hence the link success
probability distribution is a much more comprehensive metric than the mean success probability
that is usually considered. Since the SOC can be evaluated using the distribution of link success
probabilities, it provides fine-grained information about the network.
C. Contributions
This paper makes the following contributions:
•
We introduce a new notion of capacity—the spatial outage capacity.
•
For the Poisson bipolar network with Rayleigh fading and ALOHA, we give exact and
approximate expressions of the density of reliable links. We also derive simple upper and
lower bounds on the SOC.
•
We show the trade-off between the density of active links and the fraction of reliable links.
•
In the high-reliability regime where the target outage probability is close to 0, we give
a closed-form expression of the SOC and prove that the SOC is achieved at p = 1. For
Rayleigh distributed link distances, we show that the density of reliable links is asymptotically independent of the density of (potential) transmitters λ as ǫ → 0.
D. Related Work
For Poisson bipolar networks, the mean success probability ps (θ) is calculated in [3] and [4].
For ad hoc networks modeled by the Poisson point process (PPP), the link success probability
Ps (θ) is studied in [5], where the focus is on the mean local delay, i.e., the −1st moment of
Ps (θ) in our notation. The notion of the transmission capacity (TC) is introduced in [6], which
is defined as the maximum density of successful transmissions provided the outage probability
of the typical user stays below a predefined threshold ǫ. While the results obtained in [6] are
certainly important, the TC does not represent the maximum density of successful transmissions
for the target outage probability, as claimed in [6], since the metric implicitly assumes that each
link in a realization of the network is typical.
A version of the TC based on the link success probability distribution is introduced in [7],
but it does not consider a MAC scheme, i.e., all nodes always transmit (p = 1). The choice of
p is important as it greatly affects the link success probability distribution as shown in Fig. 1.
In this paper, we consider the general case with the transmit probability p ∈ (0, 1].
The meta distribution η(θ, ǫ) for Poisson bipolar networks with ALOHA and cellular networks
is calculated in [2], where a closed-form expression for the moments of Ps (θ) is obtained, and
an exact integral expression and simple bounds on η(θ, ǫ) are provided. A key result in [2] is
that, for constant transmitter density λp, as the Poisson bipolar network becomes very dense
(λ → ∞) with a very small transmit probability (p → 0), the disparity among link success
probabilities vanishes and all links have the same success probability, which is the mean success
probability ps (θ). For the Poisson cellular network, the meta distribution of the SIR is calculated
for the downlink and uplink scenarios with fractional power control in [8], with base station
cooperation in [9], and for D2D networks underlaying the cellular network (downlink) in [10].
Furthermore, the meta distribution of the SIR is calculated for millimeter-wave D2D networks
in [11] and for D2D networks with interference cancellation in [12].
E. Comparison of the SOC with the TC
The TC defined in [6] can be written as
c(θ, ǫ) ≜ (1 − ǫ) sup{λp : E!t(Ps (θ)) > 1 − ǫ},
while the SOC can be expressed as
S(θ, ǫ) ≜ sup_{λ, p} {λp P(Ps (θ) > 1 − ǫ)}.
The mean success probability ps (θ) ≜ E!t(Ps (θ)) depends only on the product λp and is
monotonic. Hence the TC can be written as c(θ, ǫ) = (1 − ǫ) ps⁻¹(1 − ǫ). The TC yields the
maximum density of links such that the typical link satisfies the outage constraint. In other
words, in the TC framework, the outage constraint is applied at the typical link, i.e., after
averaging over the point process. This means that the outage constraint is not applied at the
actual links, but at a fictive link whose SIR statistics correspond to the average over all links.
The supremum is taken over only one parameter, namely λp. On the other hand, in the SOC
framework, the outage constraint is applied at each individual link.2 It accurately yields the
maximum density of links that satisfy an outage constraint. This means that λ and p need to be
considered separately. We further illustrate the difference between the SOC and the TC through
the following example.
Example 1 (Difference between the SOC and the TC). For Poisson bipolar networks with
ALOHA and SIR threshold θ = 1/10, link distance R = 1, path loss exponent α = 4, and
target outage probability ǫ = 1/10, c(1/10, 1/10) = 0.0608 (see [13, (4.15)]), which is achieved
at λp = 0.0675. At this value of the TC, ps (θ) = 0.9. But at p = 1, actually only 82% of
the active links satisfy the 10% outage. Hence the density of links that achieve 10% outage is
only 0.055. On the other hand, S(1/10, 1/10) = 0.09227 which is the actual maximum density
2
Hence the TC can be interpreted as a mean-field approximation of the SOC.
of concurrently active links that have an outage probability smaller than 10%. The SOC point
corresponds to λ = 0.23 and p = 1, resulting in ps (θ) = 0.6984. Thus the maximum density of
links given the 10% outage constraint is more than 50% larger than the TC.
The version of the TC proposed in [7] applies an outage constraint at each link, similar to the
SOC, but assumes that each link is always active (i.e., there is no MAC scheme) and calculates
the maximum density of concurrently active links subject to the constraint that a certain fraction
of active links satisfy the outage constraint. Such a constraint is not required by our definition
of the SOC, and the SOC corresponds to the actual density of active links that satisfy the outage
constraint.
F. Organization of the Paper
The rest of the paper is organized as follows. In Sec. II, we provide the network model,
formulate the SOC, give upper and lower bounds on the SOC, and obtain an exact closed-form
expression of the SOC in the high-reliability regime. In Sec. III, we consider the random link
distance case where the link distances are Rayleigh distributed. We draw conclusions in Sec. IV.
II. P OISSON B IPOLAR N ETWORKS
WITH
D ETERMINISTIC L INK D ISTANCE
As seen from Def. 1, the notion of the SOC is applicable to a wide variety of wireless
networks. To gain crisp insights into the design of wireless networks, in this paper, we study
the SOC for Poisson bipolar networks where we consider deterministic as well as random link
distances and obtain analytical results for both cases. Table I provides the key notation used in
the paper.
A. Network Model
We consider the Poisson bipolar network model in which the locations of transmitters form a
homogeneous Poisson point process (PPP) Φ ⊂ R2 with density λ [14, Def. 5.8]. Each transmitter
has a dedicated receiver at a distance R in a uniformly random direction. In a time slot, each node
in Φ independently transmits at unit power with probability p and stays silent with probability
1 − p. Thus the active transmitters form a homogeneous PPP with density λp. We consider a
standard power law path loss model with path loss exponent α. We assume that a channel is
subject to independent Rayleigh fading with channel power gains as i.i.d. exponential random
variables with mean 1.
TABLE I
SUMMARY OF NOTATION

Notation      Definition/Meaning
Φ, Φt         Point process of transmitters
θ             SIR threshold
ǫ             Target outage probability
S(θ, ǫ)       Spatial outage capacity (SOC)
η(θ, ǫ)       Fraction of reliable transmissions
λ             Density of potential transmitters
µ             Density of receivers for the random link distances case
p             Fraction of links that are active at a time
λǫ            Density of reliable transmissions
Ps (θ)        Conditional link success probability
ps (θ)        Mean success probability
α             Path loss exponent
δ             2/α
Mb (θ)        bth moment of the conditional link success probability
R             Link distance in a bipolar network
We focus on the interference-limited case, where the received SIR is a key quantity of interest.
To the PPP, we add a (desired) transmitter at location (R, 0) and a receiver at the origin o. Under
the expectation over the PPP, this link is the typical link. The success probability ps (θ) of the
typical link is the ccdf of the SIR calculated at the origin. For Rayleigh fading, from [4], [14],
it is known that
ps (θ) = exp(−λpCθ^δ),    (4)
where C ≜ πR²Γ(1 + δ)Γ(1 − δ) with δ ≜ 2/α. The model is scale-invariant in the following
sense: The SIR of all links in any realization of the bipolar network with transmitter locations
ϕ remains unchanged if the plane is scaled by an arbitrary factor a > 0. Such scaling results
in transmitter locations aϕ and link distances aR. The density of the scaled network is λ/a².
By setting a = 1/R to obtain unit distance links, the resulting density is λR². Hence without
loss of generality, we can set R = 1. Applied to the meta distribution and the SOC, this means
that the model with parameters (R, λ) behaves exactly the same as the model with parameters
(1, λR²).
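To make the model concrete, the following Monte Carlo sketch draws realizations of the interferer PPP in a finite disk and evaluates the conditional link success probability of the typical link by the standard product form over interferers (averaging over Rayleigh fading and ALOHA). The window radius and number of realizations are illustrative assumptions, and edge effects are ignored.

```python
import numpy as np

rng = np.random.default_rng(1)

def cond_success_prob(lam, p, theta, R=1.0, alpha=4.0, radius=50.0):
    """Conditional success probability Ps(theta) of the typical link (receiver at
    the origin, desired transmitter at distance R) for one realization of the PPP
    of potential interferers, averaged over Rayleigh fading and ALOHA."""
    n = rng.poisson(lam * np.pi * radius ** 2)
    r = radius * np.sqrt(rng.random(n))   # distances of the interferers to the receiver
    # product over interferers: (1 - p) + p / (1 + theta * R^alpha * r^(-alpha))
    return np.prod(1.0 - p + p / (1.0 + theta * R ** alpha * r ** (-alpha)))

def reliable_link_density(lam, p, theta, eps, n_real=2000):
    ps = np.array([cond_success_prob(lam, p, theta) for _ in range(n_real)])
    eta = np.mean(ps > 1.0 - eps)         # estimated fraction of reliable links
    return lam * p * eta, ps

# roughly the setting of Example 1: theta = -10 dB, eps = 0.1, alpha = 4
lam_eps, ps = reliable_link_density(lam=0.23, p=1.0, theta=0.1, eps=0.1)
print(lam_eps, ps.mean())  # ps.mean() should be close to 0.70 and lam_eps close to 0.09
```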
B. Exact Formulation
Observe from Def. 1 that the SOC depends on η(θ, ǫ) = P(Ps (θ) > 1 − ǫ) whose direct
calculation seems infeasible. But the moments of Ps (θ) are available in closed-form [2], from
which we can derive exact and approximate expressions of λǫ and obtain simple upper and lower
bounds on the SOC. Let Mb (θ) denote the bth moment of Ps (θ), i.e.,
Mb (θ) ≜ E[Ps (θ)^b].    (5)
The mean success probability is ps (θ) ≡ M1 (θ).
From [2, Thm. 1], we can express Mb (θ) as
Mb (θ) = exp(−λCθ^δ Db (p, δ)),    b ∈ C,    (6)
where
Db (p, δ) ≜ Σ_{k=1}^∞ (b choose k)(δ−1 choose k−1) p^k,    p, δ ∈ (0, 1].    (7)
For b ∈ N, the sum is finite and Db (p, δ) becomes a polynomial which is termed diversity
polynomial in [15]. The series in (7) converges for p < 1, and at p = 1 it is defined if b ∉ Z⁻
or b + δ ∉ Z⁻ and converges if ℜ(b + δ) > 0. Here ℜ(z) is the real part of the complex number
z. For b = 1 (the first moment), D1 (p, δ) = p, and we get the expression of ps (θ) as in (4). We
can also express Db (p, δ) using the Gaussian hypergeometric function 2F1 as
Db (p, δ) = pb · 2F1(1 − b, 1 − δ; 2; p).    (8)
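For real b, the moments in (6)-(8) can be evaluated directly; a small sketch using SciPy's Gauss hypergeometric function is given below, with the parameters of Fig. 1 (λp = 1/3, p = 1/10) as an example.

```python
import numpy as np
from scipy.special import gamma, hyp2f1

def D(b, p, delta):
    """D_b(p, delta) via the hypergeometric form (8), for real b."""
    return p * b * hyp2f1(1 - b, 1 - delta, 2, p)

def moment(b, lam, p, theta, alpha=4.0, R=1.0):
    """b-th moment M_b(theta) of the conditional link success probability, cf. (6)."""
    delta = 2.0 / alpha
    C = np.pi * R ** 2 * gamma(1 + delta) * gamma(1 - delta)
    return np.exp(-lam * C * theta ** delta * D(b, p, delta))

M1 = moment(1, lam=10 / 3, p=0.1, theta=0.1)  # should be close to the 0.5944 quoted for Fig. 1
M2 = moment(2, lam=10 / 3, p=0.1, theta=0.1)
print(M1, M2)
```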
Using the Gil-Pelaez theorem [16], the exact expression of λǫ = λpη(θ, ǫ) can be obtained in
an integral form from that of η(θ, ǫ) given in [2, Cor. 3] as
λǫ = λp/2 − (λp/π) ∫₀^∞ sin(u ln(1 − ǫ) + λCθ^δ ℑ(Dju)) / (u e^{λCθ^δ ℜ(Dju)}) du,    (9)
where j ≜ √−1, Dju = Dju (p, δ), and ℑ(z) is the imaginary part of the complex number z.
The SOC is then obtained by taking the supremum of λǫ over λ > 0 and p ∈ (0, 1].
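A sketch of the numerical evaluation of (9) is given below. It relies on mpmath because D_{ju} requires the hypergeometric function with a complex parameter; the quadrature over the semi-infinite range may need tuning (e.g., splitting the interval) for other parameter values.

```python
import mpmath as mp

def lambda_eps_exact(lam, p, theta, eps, alpha=4.0, R=1.0):
    """Density of reliable links via the Gil-Pelaez expression (9)."""
    delta = mp.mpf(2) / alpha
    C = mp.pi * R ** 2 * mp.gamma(1 + delta) * mp.gamma(1 - delta)
    a = lam * C * theta ** delta

    def D(b):  # D_b(p, delta) via (8); mpmath's hyp2f1 accepts the complex parameter b = j*u
        return p * b * mp.hyp2f1(1 - b, 1 - delta, 2, p)

    def integrand(u):
        Dju = D(1j * u)
        return mp.sin(u * mp.log(1 - eps) + a * mp.im(Dju)) / (u * mp.exp(a * mp.re(Dju)))

    eta = mp.mpf(1) / 2 - mp.quad(integrand, [0, mp.inf]) / mp.pi
    return lam * p * eta

print(lambda_eps_exact(lam=0.23, p=1.0, theta=0.1, eps=0.1))  # roughly 0.09 (cf. Example 1)
```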
C. Approximation with Beta Distribution
We can accurately approximate λǫ in a semi-closed form using the beta distribution, as
shown in [2]. The rationale behind such approximation is that the support of the link success
probability Ps (θ) is [0, 1], making the beta distribution a natural choice. With the beta distribution
approximation, λǫ can be approximated as
λǫ ≈ λp [1 − Iǫ(µβ/(1 − µ), β)],    (10)
where Iǫ(y, z) ≜ ∫₀^{1−ǫ} t^{y−1}(1 − t)^{z−1} dt / B(y, z) is the regularized incomplete beta function with
B(·, ·) denoting the beta function, µ = M1, and β = (M1 − M2)(1 − M1)/(M2 − M1²).
The advantage of the beta approximation is the faster computation of λǫ compared to the exact
expression without losing much accuracy [2, Tab. I, Fig. 4] (also see Fig. 7 of this paper). In
general, it is difficult to obtain the SOC analytically due to the forms of λǫ given in (9) and (10).
But we can obtain the SOC numerically with ease. We can also gain useful insights considering
some specific scenarios, on which we focus in the following subsection.
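A corresponding sketch of the beta approximation (10) is given below; it only needs the first two moments and the regularized incomplete beta function, and the printed example reuses the Example 1 parameters.

```python
import numpy as np
from scipy.special import gamma, hyp2f1, betainc

def lambda_eps_beta(lam, p, theta, eps, alpha=4.0, R=1.0):
    """Beta approximation (10) of the density of reliable links, matching the first
    two moments M1, M2 of the conditional link success probability."""
    delta = 2.0 / alpha
    C = np.pi * R ** 2 * gamma(1 + delta) * gamma(1 - delta)
    D = lambda b: p * b * hyp2f1(1 - b, 1 - delta, 2, p)
    M1 = np.exp(-lam * C * theta ** delta * D(1))
    M2 = np.exp(-lam * C * theta ** delta * D(2))
    beta_ = (M1 - M2) * (1 - M1) / (M2 - M1 ** 2)
    a_ = M1 * beta_ / (1 - M1)
    eta = 1.0 - betainc(a_, beta_, 1.0 - eps)  # P(Beta(a_, beta_) > 1 - eps)
    return lam * p * eta

print(lambda_eps_beta(0.23, 1.0, 0.1, 0.1))  # close to the exact value (about 0.09)
```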
D. Constrained SOC
1) Constant λp in dense networks: For constant λp (or, equivalently, a fixed ps (θ)), we now
study how the density of reliable links λǫ behaves in an ultra-dense network. Given θ, α, and
ǫ, this case is equivalent to asking how λǫ varies as λ → ∞ while letting p → 0 for constant
transmitter density λp (constant ps (θ)). We denote the constrained SOC by S̃(θ, ǫ).
Lemma 1 (p → 0 for constant λp). Let ν = λp. Then, for constant ν while letting p → 0, the
SOC constrained on the density of concurrent transmissions is
S̃(θ, ǫ) = λp,  if 1 − ǫ < ps (θ),
           0,   if 1 − ǫ > ps (θ).    (11)
Proof: Applying Chebyshev’s inequality to (3), for 1 − ǫ < ps (θ) = M1 , we have
η(θ, ǫ) > 1 − var(Ps (θ)) / ((1 − ǫ) − M1)²,    (12)
where var(Ps (θ)) = M2 − M1² is the variance of Ps (θ). From [2, Cor. 1], for constant ν, we
know that lim_{p→0, λp=ν} var(Ps (θ)) = 0. Hence the lower bound in (12) approaches 1, which leads to
η(θ, ǫ) → 1. This results in the SOC constrained on the density of concurrent transmissions
equal to λp.
On the other hand, for 1 − ǫ > M1 ,
η(θ, ǫ) ≤ var(Ps (θ)) / ((1 − ǫ) − M1)².    (13)
As we let p → 0 for constant ν, the upper bound in (13) approaches 0, which leads to η(θ, ǫ) → 0.
This results in the SOC constrained on the density of concurrent transmissions equal to 0.
In fact, as var(Ps (θ)) → 0, the ccdf of Ps (θ) approaches a step function that drops from 1 to
0 at the mean of Ps (θ), i.e., at 1 − ǫ = ps (θ). This behavior is in agreement with (11).
Remark: Lemma 1 shows that, if p → 0 while λp is fixed to the value ν at which ps (θ)
equals the target reliability 1 − ǫ, the maximum value of the constrained SOC is the value of
the TC times 1/(1 − ǫ), and that value of the TC is ν(1 − ǫ).
This observation can be explained as follows: As p → 0 while keeping λp = ν, all links in
a realization of the network have the same success probability, and that value of the success
probability equals ps (θ) (i.e., the success probability of transmissions over the typical link) [2].
This implies that, from the outage perspective, each link in the network can now be treated as
if it were the typical link, as in the TC framework. If ν is initially set to a value that results
in ps (θ) > 1 − ǫ, we can always increase it till ps (θ) = 1 − ǫ while all active links satisfying
the outage constraint (or, equivalently, the typical link satisfying the outage constraint with
probability one). Accordingly, the value of the TC equals 1 − ǫ times the value of ν at which
ps (θ) = 1 − ǫ.
Fig. 2 shows that at small values of the target outage probability ǫ, the density of reliable
transmissions monotonically increases with p. On the other hand, at larger values of ǫ, it first
decreases with p.
2) λp → 0: For λp → 0, λǫ depends linearly on λp, which we prove in the next lemma.
Lemma 2 (λǫ as λp → 0). As λp → 0,
λǫ ∼ λp.
Proof: As λp → 0, M1 approaches 1 and thus var(Ps (θ)) = M1²(M1^{p(δ−1)} − 1) approaches 0.
Since ǫ ∈ (0, 1), we have 1−ǫ < M1 as λp → 0. Using Chebyshev’s inequality for 1−ǫ < M1 as
in (12) and letting var(Ps (θ)) → 0, the lower bound in (12) approaches 1, leading to η(θ, ǫ) → 1.
Lemma 2 can be understood as follows. As λp → 0, the density of active transmitters is very
small. Thus each transmission succeeds with high probability and η(θ, ǫ) → 1. In this regime,
the density of reliable links λǫ is directly given by λp.
The case λp → 0 can be interpreted in two ways: 1) λ → 0 for constant p and 2) p → 0 for
constant λ. Lemma 2 is valid for both cases, or any combination thereof. The case of constant
Fig. 2. The density of reliable links λǫ against the transmit probability p for λp = 1/10, θ = −10 dB, α = 4, and R = 1.
The values at the curves are ǫ = 0.05, 0.1, 0.15, 0.2, 0.25, 0.3 (bottom to top). The mean success probability is ps (θ) = 0.855.
p is relevant since it can be interpreted as a delay constraint: As p gets smaller, the probability
that a node makes a transmission attempt in a slot is reduced, increasing the delay. Since the
mean delay until successful reception is larger than the mean channel access delay 1/p, it gets
large for small values of p. Thus, a delay constraint prohibits p from getting too small.
Fig. 3 illustrates Lemma 2. Also, observe that, as p → 0 (p = 10−5 in Fig. 3), λǫ increases
linearly with λp until λp reaches the value 0.0675 which corresponds to ps (θ) = 1 − ǫ = 0.9
and then drops to 0. This behavior is in accordance with Lemma 1. In general, as λp increases,
λǫ increases first and then decreases after a tipping point. This is due to the two opposite effects
of λp on λǫ : The density λp of active transmitters increases, but at the same time, more active
transmissions cause higher interference, which in turn, reduces the fraction η(θ, ǫ) of links that
have a reliability at least 1 − ǫ.
The contour plot in Fig. 4(a) visualizes the trade-off between λp and η(θ, ǫ). The contour
curves for small values of λp run nearly parallel to those for λǫ , indicating that η(θ, ǫ) is close
to 1. Specifically, the contour curves for λp = 0.01 and λp = 0.02 match those for λǫ = 0.01
and λǫ = 0.02 almost exactly. This behavior is in accordance with Lemma 2. In contrast, for
large values of λp, the decrease in η(θ, ǫ) dominates λǫ . Also, for larger values of λ (λ > 0.4 for
Fig. 4(a)), λǫ first increases and then decreases with the increase in p. This behavior is due to the
Fig. 3. The density of reliable links λǫ given in (2) for different values of the transmit probability p for θ = −10 dB, α = 4,
and ǫ = 1/10. Observe that the slope of λǫ is one for small λp. The dashed arrow points to the value of λp = 0.0675, which
corresponds to 1 − ǫ = ps (θ) = 0.9.
following trade-off in p. For a small p, there are few active transmitters in the network per unit
area, but a higher fraction of links are reliable. On the other hand, a large p means more active
transmitters per unit area, but also a higher interference which reduces the fraction of reliable
links. For λ < 0.4, the increase in the density of active transmitters dominates the decrease in
η(θ, ǫ), and λǫ increases monotonically with p. The three-dimensional plot corresponding to the
contour plot in Fig. 4(a) is shown in Fig. 4(b).
E. Bounds on the SOC
In this subsection, we obtain simple upper and lower bounds on the SOC.
Theorem 1 (Upper bound on the SOC). For any b > 0, the SOC is upper bounded as
S(θ, ǫ) ≤ 1/(eπθ^δ Γ(1−δ)Γ(1+δ)) · 1/(b(1−ǫ)^b),    0 < b ≤ 1,
S(θ, ǫ) ≤ 1/(eπθ^δ Γ(1−δ)) · Γ(b)/(Γ(b+δ)(1−ǫ)^b),    b > 1.    (14)
Proof: Using Markov’s inequality, η(θ, ǫ) can be upper bounded as
η(θ, ǫ) ≤ Mb (θ)/(1 − ǫ)^b,    b > 0,    (15)
Fig. 4. (a) Contour plots of λǫ and the product λp for θ = −10 dB, α = 4, and ǫ = 1/10. The solid lines represent the contour
curves for λǫ and the dashed lines represent the contour curves for λp. The numbers in “black” and “red” indicate the contour
levels for λǫ and λp, respectively. The SOC is S(θ, ǫ) = 0.09227. The values of λ and p at the SOC point are 0.23 and 1,
respectively, and the corresponding mean success probability is ps (θ) = 0.6984. The arrow corresponding to the “SOC point”
points to the pair of λ and p for which the SOC is achieved. (b) Three-dimensional plot of λǫ corresponding to the contour
plot.
where Mb (θ) = e^{−λCθ^δ Db (p,δ)}. Hence we can upper bound the SOC as
S(θ, ǫ) ≤ Su ,
where
Su = sup_{λ, p} λp e^{−λCθ^δ Db (p,δ)} / (1 − ǫ)^b,    (16)
with C = πΓ(1 − δ)Γ(1 + δ) and Db (p, δ) = pb · 2F1(1 − b, 1 − δ; 2; p). Let us denote fu (λ, p) =
λp e^{−λCθ^δ Db (p,δ)}. We can then write
∂fu (λ, p)/∂λ = p e^{−λCθ^δ Db (p,δ)} (1 − λCθ^δ Db (p, δ)),
where the prefactor is positive. Setting ∂fu (λ, p)/∂λ = 0, we obtain the critical point as λ0 (p) = 1/(Cθ^δ Db (p, δ)). For any given p,
the objective function is quasiconcave. Thus λ0 (p) is the global optimum for each p. As a result,
the optimization problem in (16) reduces to
Su = (1/(1 − ǫ)^b) sup_p fu (λ0 (p), p)
   = (1/(eCθ^δ b(1 − ǫ)^b)) sup_p 1 / 2F1(1 − b, 1 − δ; 2; p).    (17)
For 0 < b < 1, 2 F1 (1 − b, 1 − δ; 2; p) monotonically increases with p. In this case, p → 0 solves
(17). On the other hand, for b > 1, 2 F1 (1 − b, 1 − δ; 2; p) monotonically decreases with p. Thus
p = 1 solves (17). Overall the value of p that solves (17) is
p0 → 0,  0 < b < 1,
p0 = 1,  b > 1.    (18)
Note that the objective function in (17) is monotonic in p. Hence p0 in (18) is again the global
optimum.
Finally, for 0 < b < 1, the upper bound on the SOC is obtained by substituting p = 0 in the
objective of (17). Since 2 F1 (1 − b, 1 − δ; 2; 0) = 1,
Su = 1/(eπθ^δ Γ(1 − δ)Γ(1 + δ)) · 1/(b(1 − ǫ)^b),    0 < b < 1.    (19)
Similarly, since b · 2F1(1 − b, 1 − δ; 2; 1) = Γ(b + δ)/(Γ(b)Γ(1 + δ)),
Su = 1/(eπθ^δ Γ(1 − δ)) · Γ(b)/(Γ(b + δ)(1 − ǫ)^b),    b > 1.    (20)
For b = 1, the hypergeometric function returns 1 irrespective of the other parameters, and thus
(19) and (20) are identical.
The tightest Markov upper bound can be obtained by minimizing Su in (19) and (20) over b.
Now, the value of b that minimizes Su in (19) is
bm = −1/ln(1 − ǫ).    (21)
Since Su takes two different values depending on whether 0 < b ≤ 1 or b > 1, to obtain
the tightest Markov upper bound, we need to consider following two cases based on whether
bm ∈ (0, 1] or bm > 1.
1) bm ∈ (0, 1]: If bm ∈ (0, 1], it is the optimum value of b that minimizes Su since Su in (19)
is smaller than Su in (20) for 0 < b < 1, greater for b > 1, and equal for b = 1. From (21),
it is apparent that the case bm ∈ (0, 1] is equivalent to ǫ ∈ [0.6321, 1). Hence, if ǫ ∈ [0.6321, 1),
the optimum b that gives the tightest Markov upper bound is given by (21). Substituting b = bm
in (19), we get the exact closed-form expression of the tightest Markov upper bound as
Su^t = −ln(1 − ǫ) / (πθ^δ Γ(1 − δ)Γ(1 + δ)),    if ǫ ∈ [0.6321, 1),    (22)
where ‘t’ in the superscript of Su^t indicates the tightest bound.
2) bm > 1: If bm > 1, i.e., ǫ ∈ (0, 0.6321), the optimum b is the value of b that minimizes
Su in (20). However, due to the form of Su in (20), the optimum b cannot be expressed in a
closed-form. Hence the tightest Markov upper bound also cannot be expressed in a closed-form,
but it can be easily evaluated numerically. Furthermore, for b > 1, we can get a closed-form
expression of the approximate tightest Markov bound by using the approximation
Γ(b + δ)/Γ(b) ≈ b^δ    (23)
in (20). Then, for b > 1, we can express (20) as
Su ≈ 1/(eπθ^δ Γ(1 − δ) b^δ (1 − ǫ)^b).    (24)
The value of b that minimizes (24) is given as
b̄m = −δ/ln(1 − ǫ).
The corresponding closed-form expression of the tightest approximate Markov upper bound is
obtained by substituting b̄m in (24) which is given as
Su^t ≈ (−ln(1 − ǫ)/θ)^δ · e^{−(1−δ)} / (πδ^δ Γ(1 − δ)),    if ǫ ∈ (0, 0.6321).    (25)
Fig. 5 illustrates upper bounds on the SOC.
Letting ǫ → 0, from (25), we observe that the lower tail of the SOC decreases exponentially,
i.e.,
S(θ, ǫ) ≲ (ǫ/θ)^δ e^{−(1−δ)} / (πδ^δ Γ(1 − δ)),    ǫ → 0,    (26)
where ‘≲’ denotes an upper bound which gets tighter asymptotically (here as ǫ → 0). In the
next subsection, we shall show that the bound in (26) is in fact asymptotically tight, i.e., (26)
matches the exact expression of the SOC as ǫ → 0.
We now obtain lower bounds on the SOC.
Theorem 2 (Lower bound on the SOC). The SOC is lower bounded as
S(θ, ǫ) > [1 − W((1 − ǫ)e)] / (πθ^δ Γ(1 + δ)Γ(1 − δ)) · [e^{−(1−W((1−ǫ)e))} − (1 − ǫ)] / ǫ,    (27)
where W (·) denotes the Lambert W function.
Proof: By the reverse Markov’s inequality, we have
1 − E!t((1 − Ps (θ))^b)/ǫ^b < η(θ, ǫ),    b > 0.
For b ∈ N we can lower bound the SOC as
S(θ, ǫ) > Sl ,
Fig. 5. Analytical and numerical results for the SOC. The tightest Markov upper bound on the SOC obtained numerically
uses (19) and (20), which are optimized over b. The tightest Markov upper bound obtained analytically uses (22) and (25). The
SOC upper bound obtained analytically is quite close to that obtained numerically for the almost complete range of reliability
threshold 1 − ǫ, except near 1 − ǫ = 0.3679 (which is due to the approximation in (23)). The classical Markov bounds are
plotted using (19) and (20) for b = 1, b = 2, and b = 4. The lower bound for b = 1 is plotted using (27), while the lower
bounds for b = 2 and b = 4 are plotted numerically using (28). θ = −10 dB and α = 4.
where
Sl = sup_{λ, p} λp (1 − Σ_{k=0}^b (b choose k)(−1)^k Mk (θ) / ǫ^b),    (28)
with Mk (θ) = e^{−λCθ^δ Dk (p,δ)}. For b = 1, (28) reduces to
Sl = sup_{λ, p} λp (1 − (1 − e^{−λpCθ^δ}) / ǫ).    (29)
Since λ and p appear together as their product λp, Sl can be obtained by taking the supremum
over t = λp, i.e.,
Sl = sup_t f(t),  where  f(t) ≜ t (1 − (1 − e^{−tCθ^δ}) / ǫ).    (30)
Substituting the value of t that results in ∂f(t)/∂t = 0 in f(t), we get the desired expression in (27).
Fig. 6. The solid lines represent the exact Db (p, δ) as in (7), while the dashed lines represent the asymptotic form of Db (p, δ)
as in (31).
For the values of b ∈ R+ \ {1}, an analytical expression for Sl is difficult to obtain due to the
form of (28), but we can easily obtain corresponding lower bounds numerically. Fig. 5 shows
Markov lower bounds on the SOC for b = 1, b = 2, and b = 4.
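The closed-form bounds above are straightforward to evaluate; the sketch below computes the tightest Markov upper bound, using (22) for ǫ ≥ 1 − 1/e and the approximation (25) otherwise, together with the b = 1 lower bound (27) via the Lambert W function. The printed parameters are illustrative.

```python
import numpy as np
from scipy.special import gamma, lambertw

def soc_upper_bound(theta, eps, alpha=4.0):
    """Tightest Markov upper bound on the SOC: the exact form (22) when
    eps >= 1 - 1/e, and the closed-form approximation (25) otherwise."""
    delta = 2.0 / alpha
    if eps >= 1.0 - np.exp(-1.0):
        return -np.log(1 - eps) / (np.pi * theta ** delta * gamma(1 - delta) * gamma(1 + delta))
    return ((-np.log(1 - eps) / theta) ** delta * np.exp(-(1 - delta))
            / (np.pi * delta ** delta * gamma(1 - delta)))

def soc_lower_bound(theta, eps, alpha=4.0):
    """Lower bound (27) on the SOC (reverse Markov inequality with b = 1)."""
    delta = 2.0 / alpha
    w = np.real(lambertw((1 - eps) * np.e))
    C = np.pi * gamma(1 + delta) * gamma(1 - delta)
    return (1 - w) / (theta ** delta * C) * (np.exp(-(1 - w)) - (1 - eps)) / eps

print(soc_lower_bound(0.1, 0.1), soc_upper_bound(0.1, 0.1))
```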
F. High-reliability Regime
In this section, we investigate the behavior of λǫ and the SOC in the high-reliability regime,
i.e., as ǫ → 0. To this end, we first provide an asymptote of Db (p, δ) as b → ∞, which will
be used to obtain a closed-form expression of the SOC in the high-reliability regime. Then we
state a simplified version of de Bruijn’s Tauberian theorem (see [17, Thm. 4.12.9]) which allows
a convenient formulation of η(θ, ǫ) = P(Ps (θ) > 1 − ǫ) in terms of the Laplace transform as
ǫ → 0. '≲' denotes an upper bound with asymptotic equality (here as b → ∞).
Lemma 3 (Asymptote of Db(p, δ) as b → ∞). For b ∈ R, we have
Db(p, δ) ≲ p^δ b^δ/Γ(1 + δ),   b → ∞.    (31)
Proof: See Appendix A.
Fig. 6 illustrates how quickly Db(p, δ) approaches the asymptote.
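Lemma 3 is easy to illustrate numerically. The sketch below is ours; it uses the hypergeometric form Db(p, δ) = pb 2F1(1 − b, 1 − δ; 2; p) quoted later in the proof of Theorem 5, and the parameter values are arbitrary.

# Sketch: compare Db(p, delta) with its large-b asymptote p^delta*b^delta/Gamma(1+delta).
import numpy as np
from scipy.special import gamma, hyp2f1

p, delta = 0.5, 0.5                        # example parameters (alpha = 4)

def D(b):
    return p * b * hyp2f1(1 - b, 1 - delta, 2, p)

for b in [1, 2, 5, 10, 20, 50]:
    exact = D(b)
    asym = p**delta * b**delta / gamma(1 + delta)
    print(b, exact, asym, exact / asym)    # the ratio approaches 1 as b grows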
Theorem 3 (de Bruijn's Tauberian theorem [18, Thm. 1]). For a non-negative random variable
Y, the Laplace transform E[exp(−sY)] ∼ exp(r s^u) for s → ∞ is equivalent to P(Y ≤ ǫ) ∼
exp(q/ǫ^v) for ǫ → 0, when 1/u = 1/v + 1 (for u ∈ (0, 1) and v > 0), and the constants r and
q are related as |ur|^{1/u} = |vq|^{1/v}.
Theorem 4 (λǫ in the high-reliability regime). For ǫ → 0, the density of reliable links λǫ
satisfies
λǫ ∼ λp exp(−(θp/ǫ)^κ (δλC′)^{κ/δ}/κ),   ǫ → 0,    (32)
where κ = δ/(1 − δ) = 2/(α − 2) and C′ = πΓ(1 − δ).
Proof: Let Y = −ln(Ps(θ)). The Laplace transform of Y is E(exp(−sY)) = E(Ps(θ)^s) =
Ms(θ). Using (6) and Lemma 3, we have
Ms(θ) ∼ exp(−λC(θp)^δ s^δ/Γ(1 + δ)),   |s| → ∞.
Comparing this expression with that in Thm. 3, we have r = −λC(θp)^δ/Γ(1 + δ), u = δ, v = δ/(1 − δ) = κ,
and thus q = −(1/κ)(δλC′)^{κ/δ}(θp)^κ, where C′ = πΓ(1 − δ). Using Thm. 3, we can now write
P(Y ≤ ǫ) = P(Ps(θ) ≥ exp(−ǫ))
(a)∼ P(Ps(θ) ≥ 1 − ǫ),   ǫ → 0
= exp(−(θp)^κ (δλC′)^{κ/δ}/(κǫ^κ)),    (33)
where (a) follows from exp(−ǫ) ∼ 1 − ǫ as ǫ → 0. Since we have
λǫ = λpP(Ps(θ) > 1 − ǫ),    (34)
the desired result in (32) follows from substituting (33) in (34).
For the special case of p = 1 (all transmitters are active), P(Ps(θ) ≥ 1 − ǫ) in (33) simplifies
to
P(Ps(θ) ≥ 1 − ǫ) ∼ exp(−(δλC′θ^δ)^{κ/δ}/(κǫ^κ)),   ǫ → 0,
in agreement with [7, Thm. 2] where the result for this special case was derived in a less direct
way than Thm. 4. Fig. 7 shows the behavior of (32) in the non-asymptotic regime and also the
accuracy of the beta approximation given by (10).
We now investigate the scaling of S(θ, ǫ) in the high-reliability regime.
Fig. 7. The solid line with marker ‘o’ represents the exact expression of λǫ as in (9), the dotted line represents the asymptotic
expression of λǫ given by (32) as ǫ → 0, and the dashed line represents the approximation by the beta distribution given by
(10). Observe that the beta approximation is quite accurate. θ = 0 dB, α = 4, λ = 1/2, and p = 1/3.
Corollary 1 (SOC in high-reliability regime). For ǫ → 0,
S(θ, ǫ) ∼ (ǫ/θ)^δ e^{−(1−δ)}/(πδ^δ Γ(1 − δ)),    (35)
and the SOC is achieved at p = 1.
Proof: For notational simplicity, let us define the rate-reliability ratio as ρ ≜ ǫ/θ and
denote ξρ ≜ ρ^{−κ}(δC′)^{κ/δ}/κ and fρ(λ, p) ≜ λp exp(−λ^{κ/δ} p^κ ξρ). From (32), we can then write
λǫ ∼ fρ(λ, p),   ǫ → 0, and the SOC is
S(θ, ǫ) ∼ sup_{λ,p} fρ(λ, p),   ǫ → 0.    (36)
We can then write
∂fρ(λ, p)/∂λ = p exp(−λ^{κ/δ} p^κ ξρ) (1 − (κξρ/δ) λ^{κ/δ} p^κ),
where the factor in front of the parentheses is positive. Setting ∂fρ(λ, p)/∂λ = 0, we obtain the critical point as
λ0(p) = (δ/(ξρ κ p^κ))^{δ/κ}.    (37)
For any given p, the objective function is quasiconcave. Hence the optimization problem in (36)
reduces to
S(θ, ǫ) ∼ sup_p fρ(λ0(p), p) = (δ/(eκξρ))^{δ/κ} sup_p p^{1−δ},   ǫ → 0.
Observe that fρ(λ0(p), p) monotonically increases with p and thus attains the maximum at p = 1.
Hence the SOC is achieved at p = 1 and λ = (δ/(ξρκ))^{δ/κ} and is given by (35) after simplification.
The equation (35) confirms the asymptotic bound on the SOC given in (26).
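As a numerical cross-check of Corollary 1 (our sketch, with illustrative parameter values), one can maximize fρ(λ, p) from the proof directly and compare the maximum with the closed form (35):

# Sketch: numerically maximize f_rho(lam, p) and compare with (35).
import numpy as np
from scipy.special import gamma
from scipy.optimize import minimize

alpha, theta, eps = 4.0, 0.1, 0.01
delta = 2.0 / alpha
kappa = delta / (1.0 - delta)
Cp = np.pi * gamma(1.0 - delta)            # C' = pi*Gamma(1 - delta)
xi = (eps / theta)**(-kappa) * (delta * Cp)**(kappa / delta) / kappa

def neg_f(x):
    lam, p = x
    return -lam * p * np.exp(-lam**(kappa / delta) * p**kappa * xi)

res = minimize(neg_f, x0=[0.01, 0.5], bounds=[(1e-6, 10.0), (1e-6, 1.0)])
numerical_soc = -res.fun
closed_form = ((eps / theta)**delta * np.exp(-(1.0 - delta))
               / (np.pi * delta**delta * gamma(1.0 - delta)))

print(res.x, numerical_soc, closed_form)   # the optimizer should end up near p = 1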
Corollary 2 (The meta distribution at the SOC point). As ǫ → 0, the value of the meta
distribution at the SOC point can be simply expressed as
η(θ, ǫ) ∼ e−(1−δ) .
(38)
Proof: From Cor. 1, as ǫ → 0, the SOC can be expressed as
S(θ, ǫ) ∼ λopt popt η(θ, ǫ),
(39)
where λopt = λ0 (given by (37)) and popt = 1 correspond to the SOC point as ǫ → 0. Then,
comparing (39) with (35), we get the desired expression of η(θ, ǫ) as in (38).
Corollary 3 (The mean success probability at the SOC point). As ǫ → 0, the mean success
probability at the SOC point can be expressed as
ps,opt ∼ 1 − (ǫ/δ)^δ Γ(1 + δ).    (40)
Proof: Substituting λ = λ0 (given by (37)) and p = 1 in (4) and using e−x ∼ 1 − x as
x → 0 yield the desired expression.
We now provide few remarks pertaining to the high-reliability regime.
Remarks:
• Letting Cδ = (1/δ)^δ e^{−(1−δ)}/Γ(1 − δ), the density of transmitters λ∗ ≜ S(θ, ǫ) that maximizes the
density of active links that achieve a reliability at least 1 − ǫ behaves as
λ∗π ∼ Cδ (ǫ/θ)^δ,   ǫ → 0.    (41)
The coefficient Cδ depends only on δ. In the practically relevant regime 1/2 ≤ δ < 1, i.e.,
2 < α ≤ 4, Cδ ≈ 1 − δ. In (41), the left side λ∗ π is the mean number of reliable receivers in
a disk of unit radius in the network. Equation (41) reveals an interesting trade-off between
the spectral efficiency (captured by θ) and the reliability (captured by ǫ), where only their
ratio matters. For example, at low rates, ln(1 + θ) ∼ θ; thus, a 10× higher reliability can
be achieved by lowering the rate by a factor of 10.
• The (potential) transmitter density λopt that achieves the SOC is
λopt ∼ (ǫ/θ)^δ/(πδ^δ Γ(1 − δ)),   ǫ → 0.    (42)
Here λopt π is the mean number of (potential) transmitters in a disk of unit radius in the
network that achieves the SOC.
•
From (38), it is apparent that, at the SOC point, the fraction of links that satisfy the outage
constraint depends only on the path loss exponent α, as δ ≜ 2/α.
• The mean success probability ps,opt at the SOC point (given by (40)) allows us to relate the
SOC and the TC. Substituting q∗ = 1 − ps,opt in [13, (4.29)], we can express the TC as
c(θ, ǫ) ∼ (ǫ/θ)^δ/(πδ^δ Γ(1 − δ)),   ǫ → 0,
which is the same as the expression of the optimum λ that achieves the SOC (given by
(42)). Hence S(θ, ǫ) = c(θ, ǫ)e^{−(1−δ)} if the TC framework used ps(θ) = ps,opt instead of
ps(θ) = 1 − ǫ (given that p = 1 is optimum).
•
From Cor. 1, observe that the exponents of θ and ǫ are the same. The SOC scales in ǫ
similar to the TC defined in [7], i.e., as Θ(ǫδ ), while the original TC defined in [6] scales
linearly in ǫ.
• For α = 4, the expression of SOC in (35) simplifies to
S(θ, ǫ) ∼ 0.154 (ǫ/θ)^{1/2},   ǫ → 0,
and the meta distribution gives η ≈ 0.6. In other words, approximately 60% of active links
satisfy the outage constraint if α = 4. Also, for α = 4, the mean success probability at the
SOC point is simply given by ps,opt ∼ 1 − 1.2533 √ǫ as ǫ → 0. Fig. 8 plots λǫ versus λ and
p for ǫ = 0.007 and α = 4. In this case, the SOC is achieved at p = 1.
III. POISSON BIPOLAR NETWORKS WITH RANDOM LINK DISTANCES
We now consider the case of random link distance, where the link distances are i.i.d. random
variables (which are constant over time).
Fig. 8. Three-dimensional plot of λǫ versus λ and p for ǫ = 0.007, θ = −10 dB, and α = 4. Observe that p = 1 achieves the
SOC (marked point: X: 1, Y: 0.0701, Z: 0.04068). The mean success probability ps(θ) at the SOC point is 0.8964.
A. Network Model
Let Ri denote the random link distance between a transmitter i and its associated receiver in a
Poisson bipolar network. We assume that Ri is Rayleigh distributed with mean 1/(2√µ) as it is
the distribution of the nearest-neighbor distance in a PPP of density µ [19].3 This scenario can
be interpreted as the one where an active transmitter tries to communicate to its nearest receiver
in a network where the potential transmitters form a PPP Φt of density λ and the receivers form
an another PPP (independent of Φt ) of density µ. Similar to the deterministic link distance case,
we add a receiver at the origin o to the receiver PPP and an always active transmitter at location
(Ro , 0), where Ro is the Rayleigh distributed link distance. Under the expectation over the point
process, this link is the typical link.
B. Exact Formulation of the SOC
Lemma 4 (bth moment of the link success probability). For Rayleigh distributed link distances
with mean 1/(2√µ), the bth moment of Ps(θ) is
Mb(θ) = µ/(µ + λθ^δ Γ(1 + δ)Γ(1 − δ)Db(p, δ)).    (43)
3 Generalizations to other link distance distributions are beyond the scope of this paper. This is because some new techniques
may need to be developed, and it is unclear what other distribution to assume.
Proof: See Appendix B.
For b = 1, using D1 (p, δ) = p, we get the expression of the mean success probability ps (θ).
Moreover, for b ∈ N, (43) represents the joint success probability of b transmissions with random
link distance, as obtained in [15, (23)].
As in the deterministic link distance case, using the Gil-Pelaez theorem, we can calculate the
density of reliable links from (9), and the SOC is obtained by taking the supremum of λǫ over
λ and p. Like the deterministic link distance case, the beta approximation is quite accurate.
In the rest of the paper, we assume µ = 1 without loss of generality.
C. Bounds on the SOC
Theorem 5 (Upper bound on the SOC). For any b > 0, the SOC for Rayleigh distributed link
distances is upper bounded as
S(θ, ǫ) ≤ 1/(θ^δ Γ(1 − δ)Γ(1 + δ) b(1 − ǫ)^b),   0 < b ≤ 1,
S(θ, ǫ) ≤ Γ(b)/(θ^δ Γ(1 − δ)Γ(b + δ)(1 − ǫ)^b),   b > 1.    (44)
Proof: Again using Markov's inequality, η(θ, ǫ) can be upper bounded as
η(θ, ǫ) ≤ Mb(θ)/(1 − ǫ)^b,   b > 0,
where Mb(θ) = 1/(1 + λθ^δ Γ(1 + δ)Γ(1 − δ)Db(p, δ)). Hence for any b > 0, we have
S ≤ Su,
where
Su = (1/(1 − ǫ)^b) sup_{λ,p} Aλ,p,   with Aλ,p ≜ λp/(1 + λθ^δ Γ(1 + δ)Γ(1 − δ)Db(p, δ)).    (45)
Aλ,p is maximized at λ = ∞, and it follows that
Su = (1/(θ^δ Γ(1 + δ)Γ(1 − δ) b(1 − ǫ)^b)) sup_p 1/2F1(1 − b, 1 − δ; 2; p),    (46)
where we have used Db(p, δ) = pb 2F1(1 − b, 1 − δ; 2; p) as in (8).
Notice that the optimization problem in (46) is similar to that in (17). Thus, following the
steps after (17) in the proof of Thm. 1, we get (44).
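The two cases of (44) can be checked against a direct numerical evaluation of (46); in this sketch (ours, with illustrative parameter values) the supremum over p is taken on a fine grid:

# Sketch: check the two cases of (44) against a direct numerical evaluation of (46).
import numpy as np
from scipy.special import gamma, hyp2f1

theta, eps, delta = 0.1, 0.2, 0.5          # illustrative values

def su_numerical(b):
    p = np.linspace(1e-6, 1.0, 100001)
    vals = 1.0 / (theta**delta * gamma(1 + delta) * gamma(1 - delta)
                  * b * (1 - eps)**b * hyp2f1(1 - b, 1 - delta, 2, p))
    return np.max(vals)

def su_closed(b):
    if b <= 1:
        return 1.0 / (theta**delta * gamma(1 - delta) * gamma(1 + delta) * b * (1 - eps)**b)
    return gamma(b) / (theta**delta * gamma(1 - delta) * gamma(b + delta) * (1 - eps)**b)

for b in [0.5, 1.0, 2.0, 4.0]:
    print(b, su_numerical(b), su_closed(b))   # each pair should match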
Similar to the deterministic link distance case (as discussed in Sec. II-E after Thm. 1), from
(44), for ǫ ∈ [0.6321, 1), we can obtain the exact closed-form expression of the tightest Markov
bound as
Sut = −e ln(1 − ǫ)/(θ^δ Γ(1 − δ)Γ(1 + δ)).    (47)
Fig. 9. Analytical and numerical results for the SOC. The tightest Markov upper bound on the SOC obtained numerically uses
(44), which is optimized over b. The tightest Markov upper bound obtained analytically uses (47) when ǫ ∈ [0.6321, 1) and (48)
when ǫ ∈ (0, 0.6321). Observe that the analytical approximation of the SOC upper bound provides a tight upper bound for the
complete range of reliability threshold 1 − ǫ. The curve corresponding to the tightest Markov upper bound obtained analytically
deviates from that obtained numerically at 1 − ǫ = 0.3679 due to the approximation as in (23). The classical Markov bounds
are plotted using (44) for b = 1, b = 2, and b = 4. θ = −10 dB and α = 4.
For ǫ ∈ (0, 0.6321), we can obtain the exact tightest Markov bound numerically. Alternatively,
using the approximation in (23), we get a closed-form expression of the approximate tightest
Markov bound as
Sut ≈ (−ln(1 − ǫ)/θ)^δ e^δ/(δ^δ Γ(1 − δ)).    (48)
As Fig. 9 shows, the tightest Markov upper bound on the SOC obtained analytically using (48)
deviates slightly from that obtained numerically at ǫ = 0.6321 due to the approximation in (23).
As ǫ becomes smaller, i.e., 1 − ǫ becomes closer to 1, the approximation (48) becomes better.
For ǫ < 0.2, the gap between the approximation of the upper bound and the beta approximation
is less than 0.15 dB.
Theorem 6 (Lower bound on the SOC). The SOC is lower bounded as
S(θ, ǫ) > (1 − √(1 − ǫ))²/(ǫθ^δ Γ(1 + δ)Γ(1 − δ)).    (49)
Proof: The proof follows the proof of Thm. 2 with M1(θ) = 1/(1 + λpθ^δ Γ(1 + δ)Γ(1 − δ)).
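Theorem 6 can be verified numerically in the same way as Theorem 2, by maximizing t(1 − (1 − M1(θ))/ǫ) over t = λp with the random-link-distance moment M1. The following sketch is ours, with illustrative parameters:

# Sketch: numerical check of Theorem 6.
import numpy as np
from scipy.special import gamma
from scipy.optimize import minimize_scalar

theta, eps, delta = 0.1, 0.2, 0.5
G = gamma(1 + delta) * gamma(1 - delta)

def f(t):                                   # t plays the role of lambda*p
    m1 = 1.0 / (1.0 + t * theta**delta * G)
    return t * (1.0 - (1.0 - m1) / eps)

res = minimize_scalar(lambda t: -f(t), bounds=(1e-9, 100.0), method='bounded')
numerical = -res.fun
closed_form = (1.0 - np.sqrt(1.0 - eps))**2 / (eps * theta**delta * G)

print(numerical, closed_form)               # should agree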
Fig. 9 shows lower bounds on the SOC. Similar to the deterministic link distance case, the
Markov lower bounds for b ∈ R+ \ {1} are analytically intractable.
D. High-reliability Regime
Theorem 7 (SOC in the high-reliability regime). For Rayleigh distributed link distances,
S(θ, ǫ) ∼ (ǫ/θ)^δ/(Γ(1 + δ)Γ(1 − δ)),   ǫ → 0,
and the SOC is achieved at p = 1.
Proof: As in the proof of Thm. 4, let Y = −ln(Ps(θ)) with its Laplace transform as
LY(s) = Ms(θ). Asymptotically,
LY(s) (a)∼ A/s^δ,   |s| → ∞,    (50)
where (a) follows from using Db(p, δ) ∼ p^δ b^δ/Γ(1 + δ) as |b| → ∞ in (43) and thus A = 1/(λθ^δ p^δ Γ(1 − δ)).
We claim that the expression in (50) is equivalent to
FY(ǫ) ∼ Aǫ^δ/Γ(1 + δ),   ǫ → 0.    (51)
The proof that (50) and (51) are equivalent is given in Appendix C.
As ǫ → 0, since
FY(ǫ) = P(−ln(Ps(θ)) < ǫ) ∼ P(Ps(θ) > 1 − ǫ) = η(θ, ǫ),
the density of reliable links in the high-reliability regime can be expressed as
λǫ ∼ ǫ^δ p^{1−δ}/(θ^δ Γ(1 + δ)Γ(1 − δ)),   ǫ → 0.    (52)
Here, λǫ is independent of the density λ of (potential) transmitters. As a result, the SOC is
S(θ, ǫ) ∼ (ǫ^δ/(θ^δ Γ(1 + δ)Γ(1 − δ))) sup_p p^{1−δ},   ǫ → 0.
Setting p = 1 achieves the SOC, which is given as
S(θ, ǫ) ∼ (ǫ/θ)^δ/(Γ(1 + δ)Γ(1 − δ)),   ǫ → 0.
Similar to the deterministic link distance case, only the ratio of the spectral efficiency and the
reliability matters. As we observe from (52), λǫ does not depend on λ. This is due to the fact
that, in the high-reliability regime, an increase in λ causes a proportional decrease in the fraction of reliable
links η(θ, ǫ). For example, a 2× increase in λ decreases η(θ, ǫ) by a factor of 2. Also, the SOC is
a function of just two parameters, the reliability-to-target-SIR ratio and a constant that depends
only on δ, i.e., on the path loss exponent α.
IV. CONCLUSIONS
This paper introduces a new notion of capacity, termed spatial outage capacity (SOC), which
is the maximum density of concurrently active links that meet a certain constraint on the
success probability. Hence the SOC provides a mathematical foundation for questions of network
densification under strict reliability constraints. Since the definition of the SOC is very general,
i.e., it is not restricted to a specific point process model, link distance distribution, MAC scheme,
transmitter-receiver association schemes, fading distribution, power control scheme, etc., it is
applicable to a wide range of wireless networks.
For Poisson bipolar networks with ALOHA and Rayleigh fading, we provide an exact analytical expression and a simple approximation for the density of reliable links λǫ . The SOC can
be easily calculated numerically as the supremum of λǫ obtained by optimizing over the density
of (potential) transmitters λ and the transmit probability p.
In the high-reliability regime where the target outage probability ǫ of a link goes to 0, we
give a closed-form expression of the SOC which reveals
•
the trade-off between the spectral efficiency and the reliability where only their ratio matters
while calculating the SOC.
•
insights on the scaling behavior of the SOC where, for both deterministic and Rayleigh
distributed link distance cases, we show that the SOC scales in ǫ as Θ(ǫδ ).
Interestingly, p = 1 achieves the SOC in the high-reliability regime. This means that with
ALOHA, all transmitters should be active in order to maximize the number of reliable transmissions in a unit area that succeed with a probability close to one. Hence, in the high-reliability
regime, backing off is not SOC-achieving. This happens because the reduction in the density of
active links with p cannot be overcome by the increase in the fraction of reliable links.
For Rayleigh distributed link distances, in the high-reliability regime, we have shown that
the density of reliable links does not depend on λ as the increase in λ is exactly offset by the
fraction of reliable links. To be precise, a t-fold increase in λ decreases the density of reliable
links by a factor of t.
As a future work, it is important to generalize the results obtained for Rayleigh fading to
other fading distributions. However, since the current bounds on the SOC and the high-reliability
regime results exploit a structure induced by Rayleigh fading assumption, one might need to
develop new techniques depending on the fading distribution considered.
APPENDIX A
PROOF OF LEMMA 3
From (7), we have
Db(p, δ) = Σ_{k=1}^∞ (δ−1 choose k−1)(b choose k) p^k = p Σ_{k=1}^∞ Ak(p),   where Ak(p) := (δ−1 choose k−1)(b choose k) p^{k−1}.    (53)
By Taylor's theorem,
Ak(p) = Σ_{j=0}^∞ (Ak^{(j)}(1)/j!) (p − 1)^j,    (54)
where Ak^{(j)}(1) is the jth derivative of Ak(p) at p = 1. Let (k)_j ≜ k(k − 1)(k − 2) · · · (k − j + 1)
denote the falling factorial. Then Σ_k Ak^{(j)}(1) can be written as
Σ_{k=1}^∞ Ak^{(j)}(1) = Σ_{k=1}^∞ (δ−1 choose k−1)(b choose k)(k − 1)_j = (Γ(b + δ − j)/(Γ(b − j)Γ(1 + δ))) (δ − 1)_j
(a)≲ (b^δ/Γ(1 + δ)) (δ − 1)_j,    (55)
where (a) follows from Γ(b + δ − j)/Γ(b − j) ≲ b^δ as b → ∞. From (54) and (55),
Σ_{k=1}^∞ Ak(p) ≲ (b^δ/Γ(1 + δ)) Σ_{j=0}^∞ ((δ − 1)_j/j!) (p − 1)^j = (b^δ/Γ(1 + δ)) p^{δ−1}.    (56)
From (53) and (56), we get the desired result.
APPENDIX B
PROOF OF LEMMA 4
Let us denote the random link distance of the typical link by R. Then the probability density
function of R is fR(a) = 2πµa exp(−πµa²). Let ‖z‖ denote the distance between a receiver
and a potential interferer z ∈ Φt. Given Φt, the conditional link success probability Ps(θ) is
Ps(θ) = P(hR^{−α}/I > θ | Φt) = E(1(h > θR^α I) | Φt),
where
I = Σ_{z∈Φt\{zo}} hz ‖z‖^{−α} 1(z ∈ Φt),
where zo ∈ Φt denotes the desired transmitter. Conditioning on R and then averaging over fading
and ALOHA results in
Ps(θ) | R = Π_{z∈Φt\{zo}} ( p/(1 + θ(R/‖z‖)^α) + 1 − p ).
Let f(r) = p/(1 + θr^α) + 1 − p. Then the bth moment of Ps(θ) is
Mb(θ) = E( Π_{z∈Φt\{zo}} f(R/‖z‖)^b )
(a)= ER( exp(−2πλ ∫_0^∞ t(1 − f(R/t)^b) dt) )
(b)= 2πµ ∫_0^∞ a exp(−2πλ ∫_0^∞ t(1 − f(a/t)^b) dt) e^{−µπa²} da
(c)= 2πµ ∫_0^∞ a exp(−2πλa² ∫_0^∞ y(1 − f(1/y)^b) dy) e^{−µπa²} da
= µ/(µ + 2λ ∫_0^∞ y(1 − f(1/y)^b) dy)
(d)= µ/(µ + 2λ ∫_0^∞ (1 − (1 − pθr^α/(1 + θr^α))^b) r^{−3} dr) ≕ µ/(µ + 2λFb),    (57)
where (a) follows from the probability generating functional of the PPP [14, Chapter 4], (b)
follows from the de-conditioning on R, (c) follows from the substitution y = t/a, and (d)
follows from the substitution y = 1/r and plugging f(r) back in. With 1 as the upper limit of the
integral and µ = λ, (57) reduces to the expression of the bth moment of the success probability
in a Poisson cellular network as in [2, (27)].
With r^α = x, the integral in (57) can be expressed as
Fb = (1/α) ∫_0^∞ (1 − (1 − pθx/(1 + θx))^b) x^{−δ−1} dx
(e)= Σ_{k=1}^∞ (b choose k)(−1)^{k+1} ((pθ)^k/α) ∫_0^∞ x^{k−δ−1}/(1 + θx)^k dx
(f)= θ^δ (πδ/(2 sin(πδ))) Db(p, δ),    (58)
where (e) follows from the binomial expansion of (1 − pθx/(1 + θx))^b and Fubini's theorem, and (f)
follows from
∫_0^∞ x^{k−δ−1}/(1 + θx)^k dx = θ^{δ−k} (π/sin(πδ)) (Γ(k − δ)/(Γ(k)Γ(1 − δ)))
and (−1)^{k+1} (k−δ−1 choose k−1) = (δ−1 choose k−1). Finally, substituting (58) in (57) and using
πδ/sin(πδ) ≡ Γ(1 + δ)Γ(1 − δ), (43) follows.
APPENDIX C
PROOF THAT (50) AND (51) ARE EQUIVALENT
The proof uses the Weierstrass approximation theorem that any continuous function f :
[t1, t2] → R can be approximated by a sequence of polynomials from above and below.
In our case, t1 = 0 and t2 = 1. Thus, for any given t > 0, if f(y) is a continuous real-valued
function on [0, 1], for n ≥ 1, there exists a sequence of polynomials Pn(y) and Qn(y) such that
Pn(y) ≤ f(y) ≤ Qn(y)   ∀y ∈ [0, 1],    (59)
∫_0^1 (Qn(y) − f(y)) dy ≤ t,    (60)
and
∫_0^1 (f(y) − Pn(y)) dy ≤ t.    (61)
Even if f(y) has a discontinuity of the first kind, we can still construct polynomials Pn(y) and
Qn(y) that satisfy (59)-(61).4
4 See [20, Sec. 7.53] for the details of the construction of such polynomials.
To prove the desired result, we first show that
lim_{s→∞} s^δ ∫_0^∞ e^{−sy} f(e^{−sy}) dFY(y) = (A/Γ(δ)) ∫_0^∞ y^{δ−1} f(e^{−y}) e^{−y} dy.    (62)
Let Qn(y) = Σ_{k=0}^n ak y^k with ak ∈ R for k = 0, 1, . . . , n. We then have
lim sup_{s→∞} s^δ ∫_0^∞ e^{−sy} f(e^{−sy}) dFY(y) ≤ lim_{s→∞} s^δ ∫_0^∞ e^{−sy} Qn(e^{−sy}) dFY(y)
= lim_{s→∞} Σ_{k=0}^n ak s^δ ∫_0^∞ e^{−(k+1)sy} dFY(y)
(a)= A Σ_{k=0}^n ak/(k + 1)^δ
(b)= (A/Γ(δ)) ∫_0^∞ y^{δ−1} e^{−y} Qn(e^{−y}) dy
(c)= (A/Γ(δ)) ∫_0^∞ y^{δ−1} e^{−y} f(e^{−y}) dy,
where (a) follows from lim_{s→∞} s^δ ∫_0^∞ e^{−sy} dFY(y) = A, (b) follows from the definition of the
gamma function as Γ(δ) ≜ ∫_0^∞ y^{δ−1} e^{−y} dy, and (c) follows from the dominated convergence
theorem as n → ∞.
By a similar argument for Pn(y), we have
lim inf_{s→∞} s^δ ∫_0^∞ e^{−sy} f(e^{−sy}) dFY(y) ≥ (A/Γ(δ)) ∫_0^∞ y^{δ−1} e^{−y} f(e^{−y}) dy,
and (62) follows.
Now let
f(y) = 1 for 1/e ≤ y ≤ 1, and f(y) = 0 for 0 ≤ y < 1/e.    (63)
Letting s = 1/ǫ in (62) and using (63), we have
lim_{ǫ→0} ǫ^{−δ} FY(ǫ) = lim_{s→∞} s^δ ∫_0^∞ e^{−sy} f(e^{−sy}) dFY(y)
(d)= (A/Γ(δ)) ∫_0^1 y^{δ−1} dy
= A/Γ(1 + δ),
where (d) follows from (62) and (63).
ACKNOWLEDGMENT
The authors would like to thank Ketan Rajawat and Amrit Singh Bedi for their insights on
the optimization problems in the paper.
REFERENCES
[1] S. S. Kalamkar and M. Haenggi, “Spatial outage capacity of Poisson bipolar networks,” in Proc. IEEE International
Conference on Communications (ICC’17), (Paris, France), May 2017.
[2] M. Haenggi, “The meta distribution of the SIR in Poisson bipolar and cellular networks,” IEEE Transactions on Wireless
Communications, vol. 15, pp. 2577–2589, April 2016.
[3] M. Zorzi and S. Pupolin, “Optimum transmission ranges in multihop packet radio networks in the presence of fading,”
IEEE Transactions on Communications, vol. 43, pp. 2201–2205, July 1995.
[4] F. Baccelli, B. Błaszczyszyn, and P. Mühlethaler, “An ALOHA protocol for multihop mobile wireless networks,” IEEE
Transactions on Information Theory, vol. 52, pp. 421–436, February 2006.
[5] F. Baccelli and B. Błaszczyszyn, “A new phase transitions for local delays in MANETs,” in Proc. IEEE International
Conference on Computer Communications (INFOCOM’10), (San Diego, CA, USA), pp. 1–9, March 2010.
[6] S. P. Weber, X. Yang, J. G. Andrews, and G. de Veciana, “Transmission capacity of wireless ad hoc networks with outage
constraints,” IEEE Transactions on Information Theory, vol. 51, pp. 4091–4102, December 2005.
[7] R. K. Ganti and J. G. Andrews, “Correlation of link outages in low-mobility spatial wireless networks,” in Proc. Asilomar
Conference on Signals, Systems, and Computers (Asilomar’10), (Pacific Grove, CA, USA), pp. 312–316, November 2010.
[8] Y. Wang, M. Haenggi, and Z. Tan, “The meta distribution of the SIR for cellular networks with power control,” IEEE
Transactions on Communications. Accepted.
[9] Q. Cui, X. Yu, Y. Wang, and M. Haenggi, “The SIR meta distribution in Poisson cellular networks with base station
cooperation,” IEEE Transactions on Communications. Accepted.
[10] M. Salehi, A. Mohammadi, and M. Haenggi, “Analysis of D2D underlaid cellular networks: SIR meta distribution and
mean local delay,” IEEE Transactions on Communications, vol. 65, pp. 2904–2916, July 2017.
[11] N. Deng and M. Haenggi, “A fine-grained analysis of millimeter-wave device-to-device networks,” IEEE Transactions on
Communications, vol. 65, pp. 4940–4954, November 2017.
[12] Y. Wang, Q. Cui, M. Haenggi, and Z. Tan, “On the SIR meta distribution for Poisson networks with interference
cancellation,” IEEE Wireless Communications Letters. Accepted.
[13] S. P. Weber and J. G. Andrews, “Transmission capacity of wireless networks,” Foundations and Trends in Networking,
vol. 5, no. 2-3, pp. 109–281, 2012.
[14] M. Haenggi, Stochastic Geometry for Wireless Networks. Cambridge, U.K.: Cambridge Univ. Press, 2012.
[15] M. Haenggi and R. Smarandache, “Diversity polynomials for the analysis of temporal correlations in wireless networks,”
IEEE Transactions on Wireless Communications, vol. 12, pp. 5940–5951, November 2013.
[16] J. Gil-Pelaez, “Note on the inversion theorem,” Biometrika, vol. 38, pp. 481–482, December 1951.
[17] N. H. Bingham, C. M. Goldie, and J. L. Teugels, Regular Variation. Cambridge, U.K.: Cambridge Univ. Press, 1987.
[18] J. Voss, “Upper and lower bounds in exponential Tauberian theorems,” Tbilisi Mathematical Journal, vol. 2, pp. 41–50,
2009.
[19] M. Haenggi, “On distances in uniformly random networks,” IEEE Transactions on Information Theory, vol. 51, pp. 3584–
3586, October 2005.
[20] E. C. Titchmarsh, The Theory of Functions. London, U.K.: Oxford Univ. Press, 2nd ed., 1939.
| 7 |
Optimal power dispatch in networks of high-dimensional models of
synchronous machines
arXiv:1603.06688v1 [math.OC] 22 Mar 2016
Tjerk Stegink and Claudio De Persis and Arjan van der Schaft
Abstract— This paper investigates the problem of optimal
frequency regulation of multi-machine power networks where
each synchronous machine is described by a sixth order model.
By analyzing the physical energy stored in the network and
the generators, a port-Hamiltonian representation of the multimachine system is obtained. Moreover, it is shown that the openloop system is passive with respect to its steady states which
implies that passive controllers can be used to control the multimachine network. As a special case, a distributed consensus
based controller is designed that regulates the frequency and
minimizes a global quadratic generation cost in the presence
of a constant unknown demand. In addition, the proposed
controller allows freedom in choosing any desired connected
undirected weighted communication graph.
I. INTRODUCTION
The control of power networks has become increasingly
challenging over the last decades. As renewable energy
sources penetrate the grid, the conventional power plants
have more difficulty in keeping the frequency around the
nominal value, e.g. 50 Hz, leading to an increased chance of
a network failure or even a blackout.
The current developments require that more advanced
models for the power network must be established as the
grid is operating more often near its capacity constraints.
Considering high-order models of, for example, synchronous
machines, that better approximate the reality allows us to establish results on the control and stability of power networks
that are more reliable and accurate.
At the same time, incorporating economic considerations
in the power grid has become more difficult. As the scale of
the grid expands, computing the optimal power production
allocation in a centralized manner as conventionally is done
is computationally expensive, making distributed control far
more desirable compared to centralized control. In addition,
often exact knowledge of the power demand is required for
computing the optimal power dispatch, which is unrealistic
in practical applications. As a result, there is an increased
desire for distributed real-time controllers which are able to
compensate for the uncertainty of the demand.
In this paper, we propose an energy-based approach for the
modeling, analysis and control of the power grid, both for
This work is supported by the NWO (Netherlands Organisation for
Scientific Research) programme Uncertainty Reduction in Smart Energy
Systems (URSES) under the auspices of the project ENBARK.
T.W. Stegink and C. De Persis are with the Engineering and Technology
institute Groningen (ENTEG), University of Groningen, the Netherlands.
{t.w.stegink, c.de.persis}@rug.nl
A.J. van der Schaft is with the Johann Bernoulli Institute for Mathematics
and Computer Science, University of Groningen, Nijenborgh 9, 9747 AG
Groningen, the Netherlands. a.j.van.der.schaft@rug.nl
the physical network as well as for the distributed controller
design. Since energy is the main quantity of interest, the portHamiltonian framework is a natural approach to deal with the
problem. Moreover, the port-Hamiltonian framework lends
itself to deal with complex large-scale nonlinear systems like
power networks [5], [12], [13].
The emphasis in the present paper lies on the modeling and
control of (networked) synchronous machines as they play an
important role in the power network since they are the most
flexible and have to compensate for the increased fluctuation
of power supply and demand. However, the full-order model
of the synchronous machine as derived in many power
engineering books like [2], [6], [8] is difficult to analyze,
see e.g. [5] for a port-Hamiltonian approach, especially when
considering multi-machine networks [4], [9]. Moreover, it is
not necessary to consider the full-order model when studying
electromechanical dynamics [8].
On the other hand of the spectrum, many of the recent optimal controllers in power grids that deal with optimal power
dispatch problems rely on the second-order (non)linear swing
equations as the model for the power network [7], [11], [16],
[17], or the third-order model as e.g. in [14]. However, the
swing equations are inaccurate and only valid on a specific
time scale up to the order of a few seconds so that asymptotic
stability results are often invalid for the actual system [2],
[6], [8].
Hence, it is appropriate to make simplifying assumptions
for the full-order model and to focus on multi-machine
models with intermediate complexity which provide a more
accurate description of the network compared to the secondand third-order models [2], [6], [8]. However, for the resulting intermediate-order multi-machine models the stability
analysis is often carried out for the linearized system, see
[1], [6], [8]. Consequently, the stability results are only valid
around a specific operating point.
Our approach is different as the nonlinear nature of the
power network is preserved. More specifically, in this paper
we consider a nonlinear sixth-order reduced model of the
synchronous machine that enables a quite accurate description of the power network while allowing us to perform a
rigorous analysis.
In particular, we show that the port-Hamiltonian framework is very convenient when representing the dynamics
of the multi-machine network and for the stability analysis.
Based on the physical energy stored in the generators and
the transmission lines, a port-Hamiltonian representation of
the multi-machine power network can be derived. More
specifically, while the system dynamics is complex, the in-
terconnection and damping structure of the port-Hamiltonian
system is sparse and, importantly, state-independent.
The latter property implies shifted passivity of the system
[15] which respect to its steady states which allows the
usage of passive controllers that steer the system to a desired
steady state. As a specific case, we design a distributed realtime controller that regulates the frequency and minimizes
the global generation cost without requiring any information about the unknown demand. In addition, the proposed
controller design allows us to choose any desired undirected
weighted communication graph as long as the underlying
topology is connected.
The main contribution of this paper is to combine distributed optimal frequency controllers with a high-order
nonlinear model of the power network, which is much
more accurate compared to the existing literature, to prove
asymptotic stability to the set of optimal points by using
Lyapunov function based techniques.
The rest of the paper is organized as follows. In Section
II the preliminaries are stated and a sixth order model of
a single synchronous machine is given. Next, the multimachine model is derived in Section III. Then the energy
functions of the system are derived in Section IV, which
are used to represent the multi-machine system in portHamiltonian form, see Section V. In Section VI the design of
the distributed controller is given and asymptotic stability to
the set of optimal points is proven. Finally, the conclusions
and possibilities for future research are discussed in Section
VII.
II. PRELIMINARIES
Consider a power grid consisting of n buses. The network
is represented by a connected and undirected graph G =
(V, E), where the set of nodes, V = {1, ..., n}, is the set of
buses and the set of edges, E = {1, ..., m} ⊂ V × V, is the
set of transmission lines connecting the buses. The ends of
edge l ∈ E are arbitrary labeled with a ‘+’ and a ‘-’, so that
the incidence matrix D of the network is given by
+1 if i is the positive end of l
Dil = −1 if i is the negative end of l
(1)
0
otherwise.
Each bus represents a synchronous machine and is assumed to have controllable mechanical power injection and
a constant unknown power load. The dynamics of each
synchronous machine i ∈ V is assumed to be given by [8]
Mi ω̇i = Pmi − Pdi − Vdi Idi − Vqi Iqi
δ̇i
0
0
Tdi Ėqi
0
0
Tqi
Ėdi
00 00
Tdi
Ėqi
00 00
Tqi Ėdi
see also Table I.
0
0
= −Edi
− (Xqi − Xqi
)Iqi
0
00
0
00
= Eqi
− Eqi
+ (Xdi
− Xdi
)Idi
0
00
0
00
= Edi
− Edi
− (Xqi
− Xqi
)Iqi ,
rotor angle w.r.t. synchronous reference frame
frequency deviation
mechanical power injection
power demand
moment of inertia
synchronous reactances
transient reactances
subtransient reactances
exciter emf/voltage
internal bus transient emfs/voltages
internal bus subtransient emfs/voltages
external bus voltages
generator currents
open-loop transient time-scales
open-loop subtransient time-scales
TABLE I
M ODEL PARAMETERS AND VARIABLES .
Assumption 1: When using model (2), we make the following simplifying assumptions [8]:
•
•
•
•
The frequency of each machine is operating around the
synchronous frequency.
The stator winding resistances are zero.
The excitation voltage Ef i is constant for all i ∈ V.
00
The subtransient saliency is negligible, i.e. Xdi
=
00
Xqi , ∀i ∈ V.
The latter assumption is valid for synchronous machines with
damper windings in both the d and q axes, which is the case
for most synchronous machines [8].
It is standard in the power system literature to represent
the equivalent synchronous machine circuits along the dqaxes as in Figure 1, [6], [8]. Here we use the conventional
00
00
00
00
00
00
+jEdi
where E qi :=
phasor notation E i = E qi +E di = Eqi
00
00
00
Eqi
, E di := jEdi
, and the phasors I i , V i are defined likewise
[8], [10]. Remark that internal voltages Eq0 , Ed0 , Eq00 , Ed00 as
depicted in Figure 1 are not necessarily at steady state but are
governed by (2), where it should be noted that, by definition,
the reactances of a round rotor synchronous machine satisfy
0
00
0
00
Xdi > Xdi
> Xdi
> 0, Xqi > Xqi
> Xqi
> 0 for all i ∈ V
[6], [8].
By Assumption 1 the stator winding resistances are negligible so that synchronous machine i can be represented
by a subtransient emf behind a subtransient reactance, see
Figure 2 [6], [8]. As illustrated in this figure, the internal
and external voltages are related to each other by [8]
00
00
E i = V i + jXdi
I i,
i ∈ V.
(3)
III. MULTI-MACHINE MODEL
= ωi
0
0
= Ef i − Eqi
+ (Xdi − Xdi
)Idi
δi
ωi
Pmi
Pdi
Mi
Xqi , Xdi
0 , X0
Xqi
di
00 , X 00
Xdi
qi
Ef i
0 , E0
Eqi
di
00 , E 00
Eqi
di
Vqi , Edi
Iqi , Idi
0 ,T0
Tqi
di
00 , T 00
Tqi
di
(2)
Consider n synchronous machines which are interconnected by RL-transmission lines and assume that the network
is operating at steady state. As the currents and voltages of
each synchronous machine is expressed w.r.t. its local dqreference frame, the network equations are written as [10]
00
I = diag(e−jδi )Y diag(ejδi )E .
(4)
j(Xd − Xd0 )
Td0
Ef
j(Xd0 − Xd00 )
0
Eq
jXd00
00
Td00
Id
Vq
Eq
To simplify the analysis further, we assume that the
network resistances are negligible so that G = 0. By equating
the real and imaginary part of (4) we obtain the following
expressions for the dq-currents entering generator i ∈ V:
X
00
00
00
Bik (Edk
Idi = Bii Eqi
−
sin δik + Eqk
cos δik ) ,
k∈Ni
j(Xq − Xq0 )
Tq0
j(Xq0 − Xq00 )
0
Ed
jXq00
00
Tq00
Ed
Iqi =
Iq
Vd
Fig. 1: Generator equivalent circuits for both dq-axes [8].
For aesthetic reasons the subscript i is dropped.
00
jXdi
00
Ei
Vi
that D is the incidence matrix of the network defined by (1).
00
Ei
jXT
Vi
Il
X
00
00
Bik (Eqk
sin δik − Edk
cos δik ) ,
k∈Ni
IV. ENERGY FUNCTIONS
−1 T
Here the admittance matrix1 Y := D(R + jX)P
D satisfies
YP
ik = −Gik − jBik and Yii = Gii + jBii =
k∈Ni Gik +
j k∈Ni Bik where G denotes the conductance and B ∈
Rn×n
denotes the susceptance of the network [10]. In
≤0
addition, Ni denotes the set of neighbors of node i.
Remark 1: As the electrical circuit depicted in Figure 2 is
00
in steady state (3), the reactance Xdi
can also be considered
as part of the network (an additional inductive line) and
is therefore implicitly included into the network admittance
matrix Y, see also Figure 3.
00
jXdi
−
(5)
where δik := δi − δk . By substituting (5) and (3) into (2)
we obtain after some rewriting a sixth-order multi-machine
model given by equation (6), illustrated at the top of the next
page.
Remark 2: Since the transmission lines are purely inductive by assumption, there are no energy losses in the
transmission lines implying
that the following energy conserP
∗
vation law holds: i∈V Pei = 0 where Pei = Re(E i I i ) =
00
00
Edi Idi + Eqi Iqi is the electrical power produced by synchronous machine i.
Ii
Fig. 2: Subtransient emf behind a subtransient reactance.
1 Recall
00
−Bii Edi
00
jXdk
00
V k Ek
Fig. 3: Interconnection of two synchronous machines by a
purely inductive transmission line with reactance XT .
When analyzing the stability of the multi-machine system
one often searches for a suitable Lyapunov function. A
natural starting point is to consider the physical energy as a
candidate Lyapunov function. Moreover, when we have an
expression for the energy, a port-Hamiltonian representation
of the associated multi-machine model (6) can be derived,
see Section V.
Remark 3: It is convenient in the definition of the Hamiltonian to multiply the energy stored in the synchronous
machine and the transmission lines by the synchronous
frequency ωs since a factor ωs−1 appears in each of the energy
functions. As a result, the Hamiltonian has the dimension of
power instead of energy. Nevertheless, we still refer to the
Hamiltonian as the energy function in the sequel.
In the remainder of this section we will first identify the
electrical and mechanical energy stored in each synchronous
machine. Next, we identify the energy stored in the transmission lines.
A. Synchronous Machine
1) Electrical Energy: Note that, at steady state, the energy
(see Remark 3) stored in the first two reactances2 of generator
i as illustrated in Figure 1 is given by
!
0
0
00 2
− Ef i )2
(Eqi
− Eqi
)
1 (Eqi
Hedi =
+
0
0 − X 00
2
Xdi − Xdi
Xdi
di
!
(7)
0 2
0
00 2
1
(Edi )
(Edi − Edi )
Heqi =
.
0 + X 0 − X 00
2 Xqi − Xqi
qi
qi
Remark 4: The energy stored in the third (subtransient)
reactance will be considered as part of the energy stored in
the transmission lines, see also Remark 1 and Section IV-B.
2 In
both the d- and the q-axes.
Mi ∆ω̇i = Pmi − Pdi +
X
k∈Ni
h
i
00 00
00 00
00 00
00 00
Bik (Edi
Edk + Eqi
Eqk ) sin δik + (Edi
Eqk − Eqi
Edk ) cos δik
δ̇i = ∆ωi
0
0
0
0
00
Tdi
Ėqi
= Ef i − Eqi
+ (Xdi − Xdi
)(Bii Eqi
−
0
0
Tqi
Ėdi
=
0
−Edi
+ (Xqi −
0
00
Xqi
)(Bii Edi
=
0
Edi
−
00
Edi
+
0
(Xqi
−
00
00
Xqi
)(Bii Edi
B. Inductive Transmission Lines
Consider an interconnection between two SG’s with a
purely inductive transmission line (with reactance XT ) at
steady state, see Figure 3. When expressed in the local dqreference frame of generator i, we observe from Figure 3
that at steady state one obtains3
00
jXl I l = E i − e−jδik E k ,
(8)
where the total reactance between the internal buses of
00
00
generator i and k is given by Xl := Xdi
+ XT + Xdk
.
Note that at steady state the modified energy of the inductive
transmission line l between nodes i and k is given by
∗
Hl = 21 Xl I l I l , which by (8) can be rewritten as
1
00 00
00
00
Hl = − Bik 2 Edi
Eqk − Edk
Eqi
sin δik
2
00 00
00 00
(9)
−2 Edi Edk + Eqi Eqk cos δik
002
002
002
002
+Edi
+ Edk
+ Eqi
+ Eqk
,
where the line susceptance satisfies Bik = − X1l < 0 [10].
C. Total Energy
The total physical energy of the multi-machine system is
equal to the sum of the individual energy functions:
X
X
Hp =
(Hdei + Hqei + Hmi ) +
Hl .
(10)
i∈V
(6)
k∈Ni
2) Mechanical Energy: The kinetic energy of synchronous machine i is given by
1
1
Hmi = Mi ωi2 = Mi−1 p2i ,
2
2
where pi = Mi ωi is the angular momentum of synchronous
machine i with respect to the synchronous rotating reference
frame.
00
k∈N
X i
00
00
Bik (Edk
−
cos δik − Eqk
sin δik ) )
00 00
0
00
0
00
00
Tdi
Ėqi = Eqi
− Eqi
+ (Xdi
− Xdi
)(Bii Eqi
−
00 00
Tqi
Ėdi
X
00
00
Bik (Edk
sin δik + Eqk
cos δik ) )
l∈E
V. PORT-HAMILTONIAN REPRESENTATION
Using the energy functions from the previous section, the
multi-machine model (6) can be put into a port-Hamiltonian
form. To this end, we derive expressions for the gradient of
each energy function.
3 The mapping from dq-reference frame k to dq-reference frame i in the
phasor domain is done by multiplication of e−jδik [10].
−
X
00
00
Bik (Edk
sin δik + Eqk
cos δik ) )
k∈Ni
X
00
00
Bik (Edk
cos δik − Eqk
sin δik ) )
k∈Ni
A. Transmission Line Energy
Recall that the energy stored in transmission line l between
internal buses i and k is given by (9). It can
P be verified that
the gradient of the total energy HL := l∈E Hl stored in
the transmission lines takes the form
∂HL
00
E 00 Idi + Eqi
Iqi
Pei
∂δi
∂H00L di
= Idi ,
Idi
∂Eqi =
∂HL
Iqi
Iqi
00
∂Edi
where Idi , Iqi are given by P
(5). Here it is used that the selfsusceptances satisfy Bii = k∈Ni Bik for all i ∈ V.
1) State transformation: In the sequel, it is more convenient to consider a different set of variable describing
the voltage angle differences. Define for each edge l ∈ E
ηl := δik where i, k are respectively the positive and negative
ends of l. In vector form we obtain η = DT δ ∈ Rm , and
observe that this implies
D
∂HL
∂Hp
=D
= Pe .
∂η
∂η
B. Electrical Energy SG
Further, notice that the electrical energy stored in the
equivalent circuits along the d- and q-axis of generator i
is given by (7) and satisfies
" ∂Hedi # 0
0
0
0
Eqi − Ef i
Xdi − Xdi
Xdi − Xdi
∂Eqi
=
0
00
00
0
∂Hedi
0
Xdi
− Xdi
Eqi
− Eqi
00
∂Eqi
"
#
∂Heqi
0
0
0
0
Xqi − Xqi
Xqi − Xqi
Edi
∂Edi
.
0
00
00
0
∂Heqi =
0
Xqi
− Xqi
Edi
− Edi
∂E 00
di
By the previous observations, and by aggregating the states,
the dynamics of the multi-machine system can now be
written in the form (11) where the Hamiltonian is given
0
0
0
00
by (10) and X̂di := Xdi − Xdi
, X̂di
:= Xdi
− Xdi
, X̂d =
0
0
diagi∈V {X̂di } and X̂d , X̂q , X̂q are defined likewise. In addi0
tion, Td0 = diagi∈V {Tdi
} and Td0 , Tq , Tq0 are defined similarly.
Observe that the multi-machine system (11) is of the form
ẋ = (J − R)∇H(x) + gu
y = g T ∇H(x)
(12)
−D
0
0
0
0
0
0
−(Td0 )−1 X̂d
0
0
0
−(Tq0 )−1 X̂q
0
0
0
0
0
0
0
T
g= I 0 0 0 0 0 .
0
ṗ
η̇ DT
0
Ėq 0
ẋp =
Ė 0 = 0
d
Ė 00 0
q
Ėd00
y = g T ∇Hp ,
where J = −J T , R = RT are respectively the antisymmetric and symmetric part of the matrix depicted in (11).
Notice that the dissipation matrix of the electrical part is
positive definite (which implies R ≥ 0) if
0
X −X 0
Xdi −Xdi
2 diT 0 di
0
Tdi
di
∀i ∈ V,
0
0
00 > 0,
Xdi
−Xdi
Xdi −Xdi
2
T0
T 00
di
di
which, by invoking the Schur complement, holds if and only
if
0
00
0
0
00
4(Xdi
− Xdi
)Tdi
− (Xdi − Xdi
)Tdi
> 0,
∀i ∈ V. (13)
Note that a similar condition holds for the q-axis.
Proposition 1: Suppose that for all i ∈ V the following
holds:
0
00
0
4(Xdi
− Xdi
)Tdi
− (Xdi
0
00
0
4(Xqi − Xqi )Tqi
− (Xqi
0
00
− Xdi
)Tdi
0
00
− Xqi )Tqi
>0
> 0.
(14)
Then (11) is a port-Hamiltonian representation of the multimachine network (6).
It should be stressed that (14) is not a restrictive assumption
00
0
00
, Tqi
Tdi
as it holds for a typical generator since Tdi
0
Tqi , see also Table 4.2 of [6] and Table 4.3 of [8].
Because the interconnection and damping structure J − R
of (11) is state-independent, the shifted Hamiltonian
H̄(x) = H(x) − (x − x̄)T ∇H(x̄) − H(x̄)
(15)
acts as a local storage function for proving passivity in a
neighborhood of a steady state x̄ of (12), provided that the
Hessian of H evaluated at x̄ (denoted as ∇2 H(x̄)) is positive
definite4 .
Proposition 2: Let ū be a constant input and suppose
there exists a corresponding steady state x̄ to (12) such
that ∇2 H(x̄) > 0. Then the system (12) is passive in a
neighborhood of x̄ with respect to the shifted external portvariables ũ := u − ū, ỹ := y − ȳ where ȳ := g T ∇H(x̄).
Proof: Define the shifted Hamiltonian by (15), then we
obtain
ẋ = (J − R)∇H(x) + gu
= (J − R)(∇H̄(x) + ∇H(x̄)) + gu
= (J − R)∇H̄(x) + g(u − ū)
= (J − R)∇H̄(x) + gũ
ỹ = y − ȳ = g T (∇H(x) − ∇H(x̄)) = g T ∇H̄(x).
4 Observe
that ∇2 H(x) = ∇2 H̄(x) for all x.
(16)
0
0
−(Td0 )−1 X̂d
0
00 −1 0
−(Td ) X̂d
0
0
0
0
∇Hp + g(Pm − Pd ),
−(Tq0 )−1 X̂q
0
00 −1 0
−(Tq ) X̂q
(11)
As ∇2 H(x̄) > 0 we have that H̄(x̄) = 0 and H̄(x) > 0
for all x 6= x̄ in a sufficiently small neighborhood around x̄.
Hence, by (16) the passivity property automatically follows
where H̄ acts as a local storage function.
VI. MINIMIZING GENERATION COSTS
The objective is to minimize the total quadratic generation
cost while achieving zero frequency deviation. By analyzing
the steady states of (6), it follows that a necessary condition
for zero frequency deviation is 1T Pm = 1T Pd , i.e., the total
supply must match the total demand. Therefore, consider the
following convex minimization problem:
min_{Pm} (1/2) Pm^T Q Pm   s.t. 1^T Pm = 1^T Pd,    (17)
where Q = QT > 0 and Pd is a constant unknown power
load.
Remark 5: Note that that minimization problem (17) is
easily extended to quadratic cost functions of the form
1 T
T
n
2 Pm QPm + b Pm for some b ∈ R . Due to space limitations, this extension is omitted.
As the minimization problem (17) is convex, it follows
that Pm is an optimal solution if and only if the KarushKuhn-Tucker conditions are satisfied [3]. Hence, the optimal
points of (17) are characterized by
Pm∗ = Q^{−1} 1 λ∗,   λ∗ = 1^T Pd / (1^T Q^{−1} 1).    (18)
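To make (18) concrete, the following small numerical sketch (ours, with randomly chosen Q and Pd for illustration) evaluates the closed-form optimum and checks it against a direct solve of the KKT system of (17):

# Sketch (illustrative, not from the paper): the optimal dispatch (18) in closed form,
# compared against a direct numerical solution of the equality-constrained QP (17).
import numpy as np

rng = np.random.default_rng(0)
n = 4
Q = np.diag(rng.uniform(1.0, 3.0, n))        # quadratic cost weights, Q = Q^T > 0
Pd = rng.uniform(0.5, 1.5, n)                # (unknown in practice) constant demand
ones = np.ones(n)

# Closed form from the KKT conditions (18)
lam_star = ones @ Pd / (ones @ np.linalg.solve(Q, ones))
Pm_star = np.linalg.solve(Q, ones) * lam_star

# Direct solve of the KKT system [[Q, -1], [1^T, 0]] [Pm; lam] = [0; 1^T Pd]
KKT = np.block([[Q, -ones[:, None]], [ones[None, :], np.zeros((1, 1))]])
sol = np.linalg.solve(KKT, np.append(np.zeros(n), ones @ Pd))

print(Pm_star, sol[:n])      # should coincide
print(lam_star, sol[n])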
Next, based on the design of [14], consider a distributed
controller of the form
T θ̇ = −Lc θ − Q−1 ω
Pm = Q−1 θ − Kω
(19)
where T = diagi∈V {Ti } > 0, K = diagi∈V {ki } > 0 are
controller parameters. In addition, Lc is the Laplacian matrix of some connected undirected weighted communication
graph.
The controller (19) consists of three parts. Firstly, the term
−Kω corresponds to a primary controller and adds damping
into the system. The term −Q−1 ω corresponds to secondary
control for guaranteeing zero frequency deviation on longer
time-scales. Finally, the term −Lc θ corresponds to tertiary
control for achieving optimal production allocation over the
network.
Note that (19) admits the port-Hamiltonian representation
ϑ̇ = −Lc ∇Hc − Q−1 ω
Pm = Q−1 ∇Hc − Kω,
Hc =
1 T −1
ϑ T ϑ,
2
(20)
where ϑ := T θ. By interconnecting the controller (20) with
(11), the closed-loop system amounts to
ẋp
g
J − R − RK GT
=
∇H −
P
0 d
−G
−Lc
ϑ̇
(21)
−1
0 0 0 0 0
G= Q
where J − R is given as in (11), H := Hp + Hc , and RK =
blockdiag(0, K, 0, 0, 0, 0). Define the set of steady states of
(21) by Ω and observe that any x := (xp , ϑ) ∈ Ω satisfies
the optimality conditions (18) and ω = 0.
Assumption 2: Ω 6= ∅ and there exists x̄ ∈ Ω such that
∇2 H(x̄) > 0.
Remark 6: While the Hessian condition of Assumption
2 is required for proving local asymptotic stability of (21),
guaranteeing that this condition holds can be bothersome.
However, while we omit the details, it can be shown that
∇2 H(x̄) > 0 if
the generator reactances are small compared to the
transmission line reactances.
• the subtransient voltage differences are small.
• the rotor angle differences are small.
Theorem 1: Suppose Pd is constant and there exists x̄ ∈ Ω
such that Assumption 2 is satisfied. Then the trajectories of
the closed-loop system (21) initialized in a sufficiently small
neighborhood around x̄ converge to the set of optimal points
Ω.
Proof: Observe by (16) that the shifted Hamiltonian
defined by (15) satisfies
•
H̄˙ = −(∇H̄)T blockdiag(R + RK , Lc )∇H̄ ≤ 0
where equality holds if and only if ω = 0, T −1 ϑ = θ = 1θ∗
for some θ∗ ∈ R, and ∇E H̄(x) = ∇E H(x) = 0. Here
∇E H(x) is the gradient of H with respect to the internal
voltages Eq0 , Ed0 , Eq00 , Ed00 . By Assumption 2 there exists a
compact neighborhood Υ around x̄ which is forward invariant. By invoking LaSalle’s invariance principle, trajectories
initialized in Υ converge to the largest invariant set where
H̄˙ = 0. On this set ω, η, θ, Eq0 , Ed0 , Eq00 , Ed00 are constant and,
T
more specifically, ω = 0, θ = 1λ∗ = 1 1T1QP−1d 1 corresponds
to an optimal point of (17) as Pm = Q−1 1λ∗ where λ∗
is defined in (18). We conclude that the trajectories of the
closed-loop system (21) initialized in a sufficiently small
neighborhood around x̄ converge to the set of optimal points
Ω.
Remark 7: While by Theorem 1 the trajectories of the
closed-loop system (21) converge to the set of optimal points,
it may not necessarily converge to a unique steady state as the
closed-loop system (21) may have multiple (isolated) steady
states.
VII. CONCLUSIONS
We have shown that a much more advanced multi-machine
model than conventionally used can be analyzed using the
port-Hamiltonian framework. Based on the energy functions
of the system, a port-Hamiltonian representation of the
model is obtained. Moreover, the system is proven to be
incrementally passive which allows the use of a passive
controller that regulates the frequency in an optimal manner,
even in the presence of an unknown constant demand.
The results established in this paper can be extended in
many possible ways. Current research has shown that the
third, fourth and fifth order model as given in [8] admit a
similar port-Hamiltonian structure as (11). It is expected that
the same controller as designed in this paper can also be used
in these lower order models.
While the focus in this paper is about (optimal) frequency
regulation, further effort is required to investigate the possibilities of (optimal) voltage control using passive controllers.
Another extension is to include transmission line resistances
of the network. Finally, one could look at the possibility to
extend the results to the case where inverters and frequency
dependent loads are included into the network as well.
R EFERENCES
[1] F. Alvarado, J. Meng, C. DeMarco, and W. Mota. Stability analysis of
interconnected power systems coupled with market dynamics. IEEE
Transactions on Power Systems, 16(4):695–701, November 2001.
[2] P. Anderson and A. Fouad. Power System Control and Stability. The
Iowa State Univsersity Press, first edition, 1977.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge
University Press, first edition, 2004.
[4] S. Caliskan and P. Tabuada. Compositional transient stability analysis
of multimachine power networks. IEEE Transactions on Control of
Network systems, 1(1):4–14, 2014.
[5] S. Fiaz, D. Zonetti, R. Ortega, J. Scherpen, and A. van der Schaft. A
port-Hamiltonian approach to power network modeling and analysis.
European Journal of Control, 19(6):477–485, December 2013.
[6] P. Kundur. Power System Stability and Control. Mc-Graw-Hill
Engineering, 1993.
[7] N. Li, L. Chen, C. Zhao, and S. H. Low. Connecting automatic
generation control and economic dispatch from an optimization view.
In American Control Conference, pages 735–740. IEEE, 2014.
[8] J. Machowski, J. Bialek, and J. Bumby. Power System Dynamics:
Stability and Control. John Wiley & Sons, Ltd, second edition, 2008.
[9] R. Ortega, M. Galaz, A. Astolfi, Y. Sun, and T. Shen. Transient
stabilization of multimachine power systems with nontrivial transfer
conductances. Automatic Control, IEEE Transactions on, 50(1):60–75,
2005.
[10] J. Schiffer, D. Zonetti, R. Ortega, A. Stankovic, T. Sezi, and J. Raisch.
Modeling of microgrids-from fundamental physics to phasors and
voltage sources. arXiv preprint arXiv:1505.00136, 2015.
[11] Y. Seungil and C. Lijun. Reverse and forward engineering of frequency
control in power networks. In Proc. of IEEE Conference on Decision
and Control, Los Angeles, CA, USA, 2014.
[12] T. Stegink, C. De Persis, and A. van der Schaft. A port-Hamiltonian
approach to optimal frequency regulation in power grids. arXiv
preprint arXiv:1509.07318, 2015.
[13] T. Stegink, C. De Persis, and A. van der Schaft. Port-Hamiltonian
formulation of the gradient method applied to smart grids. In 5th
IFAC Workshop on Lagrangian and Hamiltonian Methods for Non
Linear Control, Lyon, France, July 2015.
[14] S. Trip, M. Bürger, and C. De Persis. An internal model approach
to (optimal) frequency regulation in power grids with time-varying
voltages. Automatica, 64:240–253, 2016.
[15] A. van der Schaft and D. Jeltsema. Port-Hamiltonian systems theory:
An introductory overview. Foundations and Trends in Systems and
Control, 1(2-3):173–378, 2014.
[16] X. Zhang and A. Papachristodoulou. A real-time control framework for
smart power networks: Design methodology and stability. Automatica,
58:43–50, 2015.
[17] C. Zhao, E. Mallada, and S. Low. Distributed generator and loadside secondary frequency control in power networks. In 49th Annual
Conference on Information Sciences and Systems (CISS), pages 1–6.
IEEE, 2015.
| 3 |
Nominal Unification Revisited
Christian Urban
TU Munich, Germany
urbanc@in.tum.de
Nominal unification calculates substitutions that make terms involving binders equal modulo alphaequivalence. Although nominal unification can be seen as equivalent to Miller’s higher-order pattern
unification, it has properties, such as the use of first-order terms with names (as opposed to alphaequivalence classes) and that no new names need to be generated during unification, which set it
clearly apart from higher-order pattern unification. The purpose of this paper is to simplify a clunky
proof from the original paper on nominal unification and to give an overview over some results about
nominal unification.
1
Introduction
The well-known first-order unification algorithm by Robinson [18] calculates substitutions for variables
that make terms syntactically equal. For example the terms
f hX, Xi =? f hZ, g hYii
can be made syntactically equal with the substitution [X :=g hYi, Z := g hYi]. In first-order unification
we can regard variables as “holes” for which the unification algorithm calculates terms with which the
holes need be “filled” by substitution. The filling operation is a simple replacement of terms for variables.
However, when binders come into play, this simple picture becomes more complicated: We are no longer
interested in syntactic equality since terms like
a.ha, ci ≈? b.hX, ci
(1)
should unify, despite the fact that the binders a and b disagree. (Following [19] we write a.t for the term
where the name a is bound in t, and ht1 , t2 i for a pair of terms.) If we replace X with term b in (1) we
obtain the instance
a.ha, ci ≈ b.hb, ci
(2)
which are indeed two alpha-equivalent terms. Therefore in a setting with binders, unification has to be
modulo alpha-equivalence.
What is interesting about nominal unification is the fact that it maintains the view from first-order
unification of a variable being a “hole” into which a term can be filled. As can be seen, by going from (1)
to (2) we are replacing X with the term b without bothering that this b will become bound by the binder.
This means the operation of substitution in nominal unification is possibly capturing. A result is that
many complications stemming from the fact that binders need to be renamed when a capture-avoiding
substitution is pushed under a binder do not apply to nominal unification. Its definition of substitution
states that in case of binders
σ (a.t) = a.σ (t)
Maribel Fernandez (Ed.): 24th International Workshop on
Unification (UNIF2010).
EPTCS 42, 2010, pp. 1–11, doi:10.4204/EPTCS.42.1
2
Nominal Unification Revisited
holds without any side-condition about a and σ . In order to obtain a unification algorithm that, roughly
speaking, preserves alpha-equivalence, nominal unification uses the notion of freshness of a name for a
term. This will be written as the judgement a # t. For example in (1) it is ensured that the bound name a
on the left-hand side is fresh for the term on the right-hand side, that means it cannot occur free on the
right-hand side. In general two abstraction terms will not unify, if the binder form one side is free on the
other. This condition is sufficient to ensure that unification preserves alpha-equivalence and allows us to
regard variables as holes with a simple substitution operation to fill them.
Whenever two abstractions with different binders need to be unified, nominal unification uses the
operation of swapping two names to rename the bound names. For example when solving the problem
shown in (1), which has two binders whose names disagree, then it will attempt to unify the bodies ha,
ci and hX, ci, but first applies the swapping (a b) to hX, ci. While it is easy to see how this swapping
should affect the name c (namely not at all), the interesting question is how this swapping should affect
the variable X? Since variables are holes for which nothing is known until they are substituted for, the
answer taken in nominal unification is to suspend such swapping in front of variables. Several such
swapping can potentially accumulate in front of variables. In the example above, this means applying
the swapping (a b) to hX, ci gives the term h(a b)·X, ci, where (a b) is suspended in front of X. The
substitution [X := b] is then determined by unifying the first components of the two pairs, namely a ≈?
(a b)·X. We can extract the substitution by applying the swapping to the term a, giving [X := b]. This
method of suspending swappings in front of variables is related to unification in explicit substitution
calculi which use de Bruijn indices and which record explicitly when indices must be raised [7].
Nominal unification gives a similar answer to the problem of deciding when a name is fresh for a
term containing variables, say a # ⟨X, c⟩. In this case it will record explicitly that a must be fresh for X.
(Since we assume a ≠ c, it will be that a is fresh for c.) This amounts to the constraint that nothing can be
substituted for X that contains a free occurrence of a. Consequently the judgements for freshness #, and
also equality ≈, depend on an explicit freshness context recording which atoms need to be fresh for which variables.
We will give the inductive definitions for # and ≈ in Section 2. This method of recording extra freshness
constraints also allows us to regard the following two terms containing a hole (the variable X)
a.X ≈ b.X
as alpha-equal—namely under the condition that both a and b must be fresh for the variable X. This is
defined in terms of judgements of the form
{a # X, b # X} ` a.X ≈ b.X
The reader can easily determine that any substitution for X that satisfies these freshness conditions will
produce two alpha-equivalent terms.
Unification problems solved by nominal unification occur frequently in practice. For example typing
rules are typically specified as:
  (var)   Γ ` x : τ                    provided (x, τ) ∈ Γ
  (app)   Γ ` t1 t2 : τ2               provided Γ ` t1 : τ1 → τ2 and Γ ` t2 : τ1
  (lam)   Γ ` λ x.t : τ1 → τ2          provided (x, τ1 )::Γ ` t : τ2 and x ∉ dom Γ
Assuming we have the typing judgement ∅ ` λ y.s : σ , we are interested in how the lam-rule, the only one
that unifies, needs to be instantiated in order to derive the premises under which λ y.s is typable. This
leads to the nominal unification problem
∅ ` λ y.s : σ  ≈?  Γ ` λ x.t : τ1 → τ2
which can be solved by the substitution [Γ := ∅, t := (y x) · s, σ := τ 1 → τ 2 ] with the requirement that
x needs to be fresh for s (in order to stay close to informal practice, we deviate here from the convention
of using upper-case letters for variables and lower-case letters for names).
Most closely related to nominal unification is higher-order pattern unification by Miller [14]. Indeed Cheney has shown that higher-order pattern unification problems can be solved by an encoding to
nominal unification problems [4]. Levy and Villaret have presented an encoding for the other direction
[12]. However, there are crucial differences between both methods of unifying terms with binders. One
difference is that nominal unification regards variables as holes for which terms can be substituted in a
possibly capturing manner. In contrast, higher-order pattern unification is based on the notion of capture-avoiding substitutions. Hence, variables are not just holes, but always need to come with the parameters,
or names, the variable may depend on. For example in order to imitate the behaviour of (1), we have to
write X a b, explicitly indicating that the variable X may depend on a and b. If we replace X with an
appropriate lambda-abstraction, then the dependency can be “realised” via a beta-reduction. This results
in unification problems involving lambda-terms to be unified modulo alpha, beta and eta equivalence.
In order to make this kind of unification problems to be decidable, Miller introduced restrictions on the
form of the lambda-terms to be unified. With this restriction he obtains unification problems that are not
only decidable, but also possess (if solvable) most general solutions.
Another difference between nominal unification and higher-order pattern unification is that the former uses first-order terms, while the latter uses alpha-equivalence classes. This makes the implementation of higher-order pattern unification in a programming language like ML substantially harder than an
implementation of nominal unification. One possibility is to implement elements of alpha-equivalence
classes as trees and then be very careful in the treatment of names, generating new ones on the fly. Another possibility is to implement them with de-Bruijn indices. Both possibilities, unfortunately, give rise
to rather complicated unification algorithms. This complexity is one reason that higher-order unification
has up to now not been formalised in a theorem prover, whereas nominal unification has been formalised
twice [19, 10]. One concrete example for the higher-order pattern unification algorithm being more
complicated than the nominal unification algorithm is the following: higher-order pattern unification has
been part of the infrastructure of the Isabelle theorem prover for many years [17]. The problem, unfortunately, with this implementation is that it unifies a slightly enriched term-language (which allows general
beta-redexes) and it is not completely understood how eta and beta equality interact in this algorithm.
A formalisation of Isabelle’s version of higher-order pattern unification and its claims is therefore very
much desired, since any bug can potentially compromise the correctness of Isabelle.
In a formalisation it is important to have the simplest possible argument for establishing a property,
since this nearly always yields a simple formalisation. In [19] we gave a rather clunky proof for the
property that the equivalence relation ≈ is transitive. This proof has been slightly simplified in [8]. The
main purpose of this paper is to further simplify this proof. The idea behind the simplification is taken
from the work of Kumar and Norrish who formalised nominal unification in the HOL4 theorem prover
[10], but did not report about their simplification in print. After describing the simpler proof in detail, we
sketch the nominal unification algorithm and outline some results obtained about nominal unification.
2  Equality and Freshness
Two central notions in nominal unification are names, which are called atoms, and permutations of atoms.
We assume in this paper that there is a countably infinite set of atoms and represent permutations as finite
lists of pairs of atoms. The elements of these lists are called swappings. We therefore write permutations
as (a1 b1 ) (a2 b2 ) . . . (an bn ); the empty list [] stands for the identity permutation. A permutation π
acting on an atom a is defined as
  [] · a             ≝  a
  ((a1 a2 )::π) · a  ≝  a2       if π · a = a1
                         a1       if π · a = a2
                         π · a    otherwise
where (a1 a2 )::π is the composition of a permutation followed by the swapping (a1 a2 ). The composition
of π followed by another permutation π 0 is given by list-concatenation, written as π 0 @ π, and the inverse
of a permutation is given by list reversal, written as π −1 .
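This list representation translates directly into code. The following OCaml sketch is our own illustration (the type and function names are hypothetical, not taken from the formalisations in [19, 10]); it represents atoms as strings and permutations as lists of swappings, and implements the action on atoms, composition and inversion exactly as just described.

type atom = string

(* a permutation is a finite list of swappings, applied from right to left *)
type perm = (atom * atom) list

(* action of a permutation on an atom, following the recursive definition above *)
let rec perm_atom (pi : perm) (a : atom) : atom =
  match pi with
  | [] -> a
  | (a1, a2) :: rest ->
      let b = perm_atom rest a in
      if b = a1 then a2 else if b = a2 then a1 else b

(* composition of pi followed by pi' is list concatenation pi' @ pi *)
let compose (pi' : perm) (pi : perm) : perm = pi' @ pi

(* the inverse of a permutation is list reversal *)
let inverse (pi : perm) : perm = List.rev pi

For example, perm_atom [("a", "b")] "a" evaluates to "b", while perm_atom [("a", "b")] "c" leaves "c" unchanged.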
The advantage of our representation of permutations-as-lists-of-swappings is that we can easily calculate the composition and the inverse of permutations, which are basic operations in the nominal unification algorithm. However, the list representation does not give unique representatives for permutations (for
example (a a) ≠ []). This is different from the usual representation of permutations given for example
in [9]. There permutations are (unique) bijective functions from atoms to atoms. For permutations-as-lists we can define the disagreement set between two permutations as the set of atoms given by
  ds π π′  ≝  {a | π · a ≠ π′ · a}
and then regard two permutations as equal provided their disagreement set is empty. However, we do not
explicitly equate permutations.
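Because permutations are finite lists, their disagreement set can be computed by inspecting only the atoms mentioned in either list; every other atom is fixed by both permutations. Continuing the sketch above (again our own illustration, not code from the paper):

(* atoms occurring in the swappings of a permutation *)
let atoms_of (pi : perm) : atom list =
  List.concat_map (fun (a, b) -> [a; b]) pi

(* disagreement set: the atoms on which the two permutations differ *)
let ds (pi : perm) (pi' : perm) : atom list =
  List.sort_uniq compare
    (List.filter (fun a -> perm_atom pi a <> perm_atom pi' a)
       (atoms_of pi @ atoms_of pi'))

(* two permutations are regarded as equal when their disagreement set is empty *)
let perm_equal (pi : perm) (pi' : perm) : bool = ds pi pi' = []

With this, ds [("a", "a")] [] is the empty list, reflecting that (a a) and [] act identically even though they are different lists.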
The purpose of unification is to make terms equal by substituting terms for variables. The paper [19]
defines nominal terms with the following grammar:
  trm ::= ⟨⟩              Units
        | ⟨t1 , t2 ⟩      Pairs
        | f t             Function Symbols
        | a               Atoms
        | a.t             Abstractions
        | π·X             Suspensions
In order to slightly simplify the formal reasoning in the Isabelle/HOL theorem prover, the function symbols only take a single argument (instead of the usual list of arguments). Function symbols with multiple
arguments need to be encoded with pairs. An important point to note is that atoms, written a, b, c, . . . ,
are distinct from variables, written X, Y, Z, . . . , and only variables can potentially be substituted during
nominal unification (a definition of substitution will be given shortly). As mentioned in the Introduction,
variables in general need to be considered together with permutations—therefore suspensions are pairs
consisting of a permutation and a variable. The reason for this definition is that variables stand for unknown terms, and a permutation applied to a term must be “suspended” in front of all unknowns in order
to keep it for the case when any of the unknowns is substituted with a term.
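Because nominal terms are just first-order terms, the grammar above corresponds to a plain datatype. A hypothetical OCaml version, continuing our sketch (the constructor names are ours), could be:

type var = string

(* nominal terms: abstraction is an ordinary constructor, there is no
   built-in quotienting modulo renaming of bound names *)
type trm =
  | Unit                   (* unit  <>                                 *)
  | Pair of trm * trm      (* pair  <t1, t2>                           *)
  | Fn of string * trm     (* function symbol applied to one argument  *)
  | Atom of atom           (* atom  a                                  *)
  | Abst of atom * trm     (* abstraction  a.t                         *)
  | Susp of perm * var     (* suspension  pi . X                       *)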
Another important point to note is that, although there are abstraction terms, nominal terms are
first-order terms: there is no implicit quotienting modulo renaming of bound names. For example the
abstractions a.t and b.s are not equal unless a = b and t = s. This has the advantage that nominal
terms can be implemented as a simple datatype in programming languages such as ML and also in the
theorem prover Isabelle/HOL. In [19] a notion of equality and freshness for nominal terms is defined
by two inductive predicates whose rules are shown in Figure 1. This inductive definition uses freshness
environments, written ∇, which are sets of atom-and-variable pairs. We often write such environments
as {a1 # X 1 , . . . , an # X n }. Rule (≈-abstraction2 ) includes the operation of applying a permutation to a
nominal term, which can be recursively defined as
Freshness rules (∇ ` a # t):

  (#-unit)              ∇ ` a # ⟨⟩
  (#-atom)              ∇ ` a # b                  provided a ≠ b
  (#-suspension)        ∇ ` a # π·X                provided (π −1 · a, X) ∈ ∇
  (#-pair)              ∇ ` a # ⟨t1 , t2 ⟩          provided ∇ ` a # t1 and ∇ ` a # t2
  (#-function symbol)   ∇ ` a # f t                provided ∇ ` a # t
  (#-abstraction1 )     ∇ ` a # a.t
  (#-abstraction2 )     ∇ ` a # b.t                provided a ≠ b and ∇ ` a # t

Equality rules (∇ ` t ≈ t′):

  (≈-unit)              ∇ ` ⟨⟩ ≈ ⟨⟩
  (≈-atom)              ∇ ` a ≈ a
  (≈-suspension)        ∇ ` π·X ≈ π′·X             provided (c, X) ∈ ∇ for all c ∈ ds π π′
  (≈-pair)              ∇ ` ⟨t1 , s1 ⟩ ≈ ⟨t2 , s2 ⟩  provided ∇ ` t1 ≈ t2 and ∇ ` s1 ≈ s2
  (≈-function symbol)   ∇ ` f t1 ≈ f t2            provided ∇ ` t1 ≈ t2
  (≈-abstraction1 )     ∇ ` a.t1 ≈ a.t2            provided ∇ ` t1 ≈ t2
  (≈-abstraction2 )     ∇ ` a.t1 ≈ b.t2            provided a ≠ b, ∇ ` t1 ≈ (a b) · t2 and ∇ ` a # t2

Figure 1: Inductive definitions for freshness and equality of nominal terms.
  π · ⟨⟩            ≝  ⟨⟩
  π · ⟨t1 , t2 ⟩    ≝  ⟨π · t1 , π · t2 ⟩
  π · (f t)         ≝  f (π · t)
  π · (π′·X)        ≝  (π @ π′)·X
  π · (a.t)         ≝  (π · a).(π · t)
where the clause for atoms is given in (2). Because we suspend permutations in front of variables (see
penultimate clause), it will in general be the case that
π · t ≠ π′ · t    (3)
even if the disagreement set of π and π′ is empty. Note that permutations acting on abstractions will
permute both the “binder” a and the “body” t.
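Both the permutation action on terms and the freshness judgement of Figure 1 are structurally recursive, so they translate into short functions. In the sketch below (ours, for illustration) a freshness environment ∇ is a list of atom–variable pairs, and fresh nabla a t decides the judgement ∇ ` a # t.

(* permutation action on terms: permutations are pushed under binders and
   suspended in front of variables by composing them *)
let rec perm_term (pi : perm) (t : trm) : trm =
  match t with
  | Unit -> Unit
  | Pair (t1, t2) -> Pair (perm_term pi t1, perm_term pi t2)
  | Fn (f, t') -> Fn (f, perm_term pi t')
  | Atom a -> Atom (perm_atom pi a)
  | Abst (a, t') -> Abst (perm_atom pi a, perm_term pi t')
  | Susp (pi', x) -> Susp (compose pi pi', x)

(* freshness environments: finite sets of pairs (a, X) standing for a # X *)
type env = (atom * var) list

(* decides nabla |- a # t according to the rules in Figure 1 *)
let rec fresh (nabla : env) (a : atom) (t : trm) : bool =
  match t with
  | Unit -> true
  | Pair (t1, t2) -> fresh nabla a t1 && fresh nabla a t2
  | Fn (_, t') -> fresh nabla a t'
  | Atom b -> a <> b
  | Abst (b, t') -> a = b || fresh nabla a t'
  | Susp (pi, x) -> List.mem (perm_atom (inverse pi) a, x) nabla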
In order to show the correctness of the nominal unification algorithm in [19], one first needs to
establish that ≈ is an equivalence relation in the sense of
  (i)    ∇ ` t ≈ t                                        (reflexivity)
  (ii)   ∇ ` t1 ≈ t2 implies ∇ ` t2 ≈ t1                  (symmetry)
  (iii)  ∇ ` t1 ≈ t2 and ∇ ` t2 ≈ t3 imply ∇ ` t1 ≈ t3    (transitivity)
The first property can be proved by a routine induction over the structure of t. Given the “unsymmetric”
formulation of the (≈-abstraction2 ) rule, the fact that ≈ is symmetric is at first glance surprising. Furthermore, a direct proof by induction over the rules seems tricky, since in the (≈-abstraction2 ) case one
needs to infer ∇ ` t2 ≈ (b a) · t1 from ∇ ` (a b) · t2 ≈ t1 . This needs several supporting lemmas about
freshness and equality, but ultimately requires that the transitivity property is proved first. Unfortunately,
a direct proof by rule-induction for transitivity seems even more difficult and we did not manage to find
one in [19]. Instead we resorted to a clunky induction over the size of terms (since size is preserved
under permutations). To make matters worse, this induction over the size of terms needed to be loaded
with two more properties in order to get the induction through. The authors of [8] managed to split up
this bulky induction, but still relied on an induction over the size of terms in their transitivity proof.
The authors of [10] managed to do considerably better. They use a clever trick in their formalisation
of nominal unification in HOL4 (their proof of equivalence is not shown in the paper). This trick yields
a simpler and more direct proof for transitivity, than the ones given in [19, 8]. We shall below adapt the
proof by Kumar and Norrish to our setting of (first-order) nominal terms1 . First we can establish the
following property.
Lemma 1. If ∇ ` a # t then also ∇ ` (π · a) # (π · t), and vice versa.
The proof is by a routine induction on the structure of t and we omit the details. Following [19] we can
next attempt to prove that freshness is preserved under equality (Lemma 3 below). However here the
trick from [10] already helps to simplify the reasoning. In [10] the notion of weak equivalence, written
as ∼, is defined as follows
  ⟨⟩ ∼ ⟨⟩
  a ∼ a
  f t ∼ f t′                 provided t ∼ t′
  ⟨t1 , t2 ⟩ ∼ ⟨s1 , s2 ⟩     provided t1 ∼ s1 and t2 ∼ s2
  a.t ∼ a.t′                 provided t ∼ t′
  π·X ∼ π′·X                 provided ds π π′ = ∅
This equivalence is said to be weak because two terms can only differ in the permutations that are
suspended in front of variables. Moreover, these permutations can only be equal (in the sense that their
disagreement set must be empty). One advantage of this definition is that we can show
π · t ∼ π′ · t  provided  ds π π′ = ∅    (4)
by an easy induction on t. As noted in (3), this property does not hold when formulated with =. It is also
straightforward to show that
Lemma 2.
(i) If ∇ ` a # t1 and t1 ∼ t2 then ∇ ` a # t2 .
(ii) If ∇ ` t1 ≈ t2 and t2 ∼ t3 then ∇ ` t1 ≈ t3 .
by induction over the relations ∼ and ≈, respectively. The reason that these inductions go through with
ease is that the relation ∼ excludes the tricky cases where abstractions differ in their “bound” atoms.
Using these two properties together with (4), it is straightforward to establish:
Lemma 3. If ∇ ` t1 ≈ t2 and ∇ ` a # t1 then ∇ ` a # t2 .
Proof. By induction on the first judgement. The only interesting case is the rule (≈-abstraction2 ) where
we need to establish ∇ ` a # d.t2 from the assumption (∗) ∇ ` a # c.t1 with the side-conditions c 6=
d and a 6= d. Using these side-conditions, we can reduce our goal to establishing ∇ ` a # t2 . We can
also discharge the case where a = c, since we know that ∇ ` c # t2 holds by the side-condition of (≈-abstraction2 ). In case a 6= c, we can infer ∇ ` a # t1 from (∗), and use the induction hypothesis to
conclude with ∇ ` a # (c d) · t2 . Using Lemma 1 we can infer that ∇ ` (c d) · a # (c d)(c d) · t2 holds,
whose left-hand side simplifies to just a (we have that a 6= d and a 6= c). For the right-hand side we
can prove (c d)(c d) · t2 ∼ t2 , since ds ((c d)(c d)) [] = ∅. From this we can conclude this case using
Lemma 2(i).
1 Their formalisation in HOL4 introduces an indirection by using a quotient construction over nominal terms. This quotient
construction does not translate into a simple datatype definition for nominal terms.
The point in this proof is that without the weak equivalence and without Lemma 2, we would need to
perform many more “reshuffles” of swappings than the single reference to ∼ in the proof above [19].
The next property on the way to establish transitivity proves the equivariance for ≈.
Lemma 4. If ∇ ` t1 ≈ t2 then ∇ ` π · t1 ≈ π · t2 .
Also with this lemma the induction on ≈ does not go through without the help of weak equivalence,
because in the (≈-abstraction2 )-case we need to show that ∇ ` π · t1 ≈ π · (a b) · t2 implies ∇ ` π · t1
≈ (π·a π·b) · π · t2 . While it is easy to show that the right-hand sides are equal, one cannot make use of
this fact without a notion of transitivity.
Proof. By induction on ≈. The non-trivial case is the rule (≈-abstraction2 ) where we know ∇ ` π · t1
≈ π · (a b) · t2 by induction hypothesis. We can show that π @ (a b) · t2 ∼ (π·a π·b) @ π · t2 holds
(the corresponding disagreement set is empty). Using Lemma 2(ii), we can join both judgements and
conclude with ∇ ` π · t1 ≈ (π·a π·b) · π · t2 .
The next lemma relates the freshness and equivalence relations.
Lemma 5. If ∀ a∈ds π π 0. ∇ ` a # t then ∇ ` π · t ≈ π 0 · t, and vice versa.
Proof. By induction on t generalising over the permutation π 0. The generalisation is needed in order to
get the abstraction case through.
The crucial lemma in [10], which will allow us to prove the transitivity property by a straightforward
rule induction, is the next one. Its proof still needs to analyse several cases, but the reasoning is much
simpler than in the proof by induction over the size of terms in [19].
Lemma 6. If ∇ ` t1 ≈ t2 and ∇ ` t2 ≈ π · t2 then ∇ ` t1 ≈ π · t2 .
Proof. By induction on the first ≈-judgement with a generalisation over π. The interesting case is (≈-abstraction2 ). We know ∇ ` b.t2 ≈ (π · b).(π · t2 ) and have to prove ∇ ` a.t1 ≈ (π · b).(π · t2 ) with a
6= b. We have to analyse several cases about a being equal with π · b, and b being equal with π · b. Let
us give the details for the case a 6= π · b and b 6= π · b. From the assumption we can infer (∗) ∇ ` b # π
· t2 and (∗∗) ∇ ` t2 ≈ (b π·b) · π · t2 . The side-condition on the first judgement gives us ∇ ` a # t2 . We
have to show ∇ ` a # π · t2 and ∇ ` t1 ≈ (a π·b) · π · t2 . To infer the first fact, we use ∇ ` a # t2 together
with (∗∗) and Lemmas 3 and 1. For the second, the induction hypothesis states that for any π we have ∇
` t1 ≈ π · (a b) · t2 provided ∇ ` (a b) · t2 ≈ π · (a b) · t2 holds. We use the induction hypothesis with
the permutation π ≝ (a π·b) @ π @ (a b). This means that, after simplification, the precondition of the IH
we need to establish is (∗∗∗) ∇ ` (a b) · t2 ≈ (a π·b) · π · t2 . By Lemma 5 we can transform (∗∗) to ∀ c
∈ ds [] ((b π·b) @ π). ∇ ` c # t2 . Similarly with (∗∗∗). Furthermore we can show that
ds (a b) ((a π·b) @ π) ⊆ ds [] ((b π·b) @ π) ∪ {a, π·b}
holds. This means it remains to show that ∇ ` a # t2 (which we already inferred above) and ∇ ` π·b # t2
hold. For the latter, we consider the cases b = π · π · b and b 6= π · π · b. In the first case we infer ∇ `
π·b # t2 from (∗) using Lemma 1. In the second case we have that π · b ∈ ds [] ((b π·b) @ π). So finally
we can use the induction hypothesis, which simplified gives us ∇ ` t1 ≈ (a π·b) · π · t2 as needed.
With this lemma under our belt, we are finally in the position to prove the transitivity property.
Lemma 7. If ∇ ` t1 ≈ t2 and ∇ ` t2 ≈ t3 then ∇ ` t1 ≈ t3 .
Proof. By induction on the first judgement generalising over t3 . We then analyse the possible instances
for the second judgement. The non-trivial case is where both judgements are instances of the rule (≈-abstraction2 ). We have ∇ ` t1 ≈ (a b) · t2 and (∗) ∇ ` t2 ≈ (b c) · t3 with a, b and c being distinct.
We also have (∗∗) ∇ ` a # t2 and (∗∗∗) ∇ ` b # t3 . We have to show ∇ ` a # t3 and ∇ ` t1 ≈ (a c) ·
t3 . The first fact is a simple consequence of (∗) and the Lemmas 1 and 3. For the other case we can use
the induction hypothesis to infer our proof obligation, provided we can establish that ∇ ` (a b) · t2 ≈
(a c) · t3 holds. From (∗) we have ∇ ` (a b) · t2 ≈ (a b)(b c) · t3 using Lemma 4. We also establish
that ∇ ` (a b)(b c) · t3 ≈ (b c)(a b)(b c) · t3 holds. By Lemma 5 we have to show that all atoms in the
disagreement set are fresh w.r.t. t3 . The disagreement set is equal to {a, b}. For b the property follows
from (∗∗∗). For a we use (∗) and (∗∗). So we can use Lemma 6 to infer (∗∗∗∗) ∇ ` (a b) · t2 ≈ (b c)(a
b)(b c) · t3 . It remains to show that ∇ ` (a b) · t2 ≈ (a c) · t3 holds. We can do so by using (∗∗∗∗) and
Lemma 2, and showing that (b c)(a b)(b c) · t3 ∼ (a c) · t3 holds. This in turn follows from the fact that
the disagreement set ds ((b c)(a b)(b c)) (a c) is empty. This concludes the case.
Once transitivity is proved, reasoning about ≈ is rather straightforward. For example symmetry is a
simple consequence.
Lemma 8. If ∇ ` t1 ≈ t2 then ∇ ` t2 ≈ t1 .
Proof. By induction on ≈. In the (≈-abstraction2 ) case we have ∇ ` (a b) · t2 ≈ t1 and need to show ∇ `
t2 ≈ (b a) · t1 . We can do so by inferring ∇ ` (b a)(a b) · t2 ≈ (b a) · t1 using Lemma 4. We can also
show ∇ ` (b a)(a b) · t2 ≈ t2 using Lemma 5. We can join both facts by transitivity to yield the proof
obligation.
To sum up, the neat trick with using ∼ from [10] has allowed us to give a direct, structural, proof for
equivalence of ≈. The formalisation of this direct proof in Isabelle/HOL is approximately half the size
of the formalised proof given in [19].
3  An Algorithm for Nominal Unification
In this section we sketch the algorithm for nominal unification presented in [19]. We refer the reader to
that paper for full details.
The purpose of the nominal unification algorithm is to calculate substitutions that make terms ≈-equal.
The substitution operation for nominal terms is defined as follows:
  σ (a)           ≝  a
  σ (π·X)         ≝  π · σ (X)    if X ∈ dom σ
                      π·X          otherwise
  σ (a.t)         ≝  a.σ (t)
  σ (⟨t1 , t2 ⟩)  ≝  ⟨σ (t1 ), σ (t2 )⟩
  σ (f t)         ≝  f σ (t)
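As a sketch (ours), the substitution operation can be implemented directly over the term datatype; substitutions are association lists from variables to terms, and the only interesting clause is the one for suspensions, where the suspended permutation is applied to the substituted term.

type subst = (var * trm) list

(* possibly-capturing substitution of terms for variables *)
let rec apply_subst (sigma : subst) (t : trm) : trm =
  match t with
  | Unit -> Unit
  | Atom a -> Atom a
  | Pair (t1, t2) -> Pair (apply_subst sigma t1, apply_subst sigma t2)
  | Fn (f, t') -> Fn (f, apply_subst sigma t')
  | Abst (a, t') -> Abst (a, apply_subst sigma t')  (* no renaming, no side-condition *)
  | Susp (pi, x) ->
      (match List.assoc_opt x sigma with
       | Some s -> perm_term pi s                   (* the suspended permutation fires *)
       | None -> Susp (pi, x))

For instance, apply_subst [("X", Atom "b")] (Susp ([("a", "b")], "X")) evaluates to Atom "a": the swapping that was suspended in front of X acts on whatever is substituted for X.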
There are two kinds of problems the nominal unification algorithm solves:
t1 ≈? t2
a #? t
The first are called equational problems, the second freshness problems. Their respective interpretation
is “can the terms t1 and t2 be made equal according to ≈?” and “can the atom a be made fresh for
t according to #?”. A solution for each kind of problem is a pair (∇, σ ) consisting of a freshness
environment and a substitution such that
∇ ` σ (t1 ) ≈ σ (t2 )
∇ ` a # σ (t)
hold. Note the difference with first-order unification and higher-order pattern unification where a solution
consists of a substitution only. An example where nominal unification calculates a non-trivial freshness
environment is the equational problem
a.X ≈? b.X
which is solved by the solution ({a # X, b # X}, []). Solutions in nominal unification can be ordered
so that the unification algorithm always produces most general solutions. This ordering is defined very
similarly to the standard ordering in first-order unification.
The nominal unification algorithm in [19] is defined in the usual style of rewriting rules that transform
sets of unification problems to simpler ones calculating a substitution and freshness environment on the
way. The transformation rule for pairs is
{⟨t1 , t2 ⟩ ≈? ⟨s1 , s2 ⟩, . . . } =⇒ {t1 ≈? s1 , t2 ≈? s2 , . . . }
There are two rules for abstractions depending on whether or not the binders agree.
{a.t ≈? a.s, . . . } =⇒ {t ≈? s, . . . }
{a.t ≈? b.s, . . . } =⇒ {t ≈? (a b) · s, a #? s, . . . }
One rule that is also interesting is for unifying two suspensions with the same variable
{π·X ≈? π′·X, . . . } =⇒ {a #? X | a ∈ ds π π′} ∪ {. . . }
What is interesting about nominal unification is that it never needs to create fresh names. As can be
seen from the abstraction rules, no new name needs to be introduced in order to unify abstractions. It
is the case that all atoms in a solution occur already in the original problem. This has the attractive
consequence that nominal unification can dispense with any new-name-generation facility. This makes
it easy to implement and reason about the nominal unification algorithm. Clearly, however, the running
time of the algorithm using the rules sketched above is exponential in the worst case, just like the simple-minded first-order unification algorithm without sharing.
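To make the shape of the algorithm concrete, the following OCaml sketch (ours) performs a single simplification step on a list of equational problems. It covers only the decomposition rules shown above; the rules that eliminate a variable by building a substitution, and the rules for freshness problems, are omitted.

type equ = trm * trm          (* an equational problem  t =? s *)
type fresh_p = atom * trm     (* a freshness problem    a #? t *)

exception Fail

(* one decomposition step on the first equational problem; returns the
   remaining equational problems together with any freshness problems
   generated by the abstraction rule *)
let step (probs : equ list) : equ list * fresh_p list =
  match probs with
  | [] -> ([], [])
  | (t, s) :: rest ->
      (match t, s with
       | Unit, Unit -> (rest, [])
       | Atom a, Atom b when a = b -> (rest, [])
       | Pair (t1, t2), Pair (s1, s2) -> ((t1, s1) :: (t2, s2) :: rest, [])
       | Fn (f, t'), Fn (g, s') when f = g -> ((t', s') :: rest, [])
       | Abst (a, t'), Abst (b, s') when a = b -> ((t', s') :: rest, [])
       | Abst (a, t'), Abst (b, s') ->
           (* disagreeing binders: unify t' with (a b) . s' and require a #? s' *)
           ((t', perm_term [(a, b)] s') :: rest, [(a, s')])
       | Susp (pi, x), Susp (pi', y) when x = y ->
           (* same variable on both sides: only freshness constraints remain *)
           (rest, List.map (fun c -> (c, Susp ([], x))) (ds pi pi'))
       | _ -> raise Fail)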
4  Applications and Complexity of Nominal Unification
Having designed a new algorithm for unification, it is an obvious step to include it into a logic programming language. This has been studied in the work about αProlog [5] and αKanren [1]. The latter is
a system implemented on top of Scheme and is more sophisticated than the former. The point of these
variants of Prolog is that they allow one to implement inference rule systems in a very concise and declarative manner. For example the typing rules for simply-typed lambda-terms shown in the Introduction can
be implemented in αProlog as follows:
type (Gamma, var(X), T) :- member (X,T) Gamma.
type (Gamma, app(M,N), T2) :-
    type (Gamma, M, arrow(T1, T2)), type (Gamma, N, T1).
type (Gamma, lam(x.M), arrow(T1, T2)) / x # Gamma :-
    type ((x,T1)::Gamma, M, T2).

member X X::Tail.
member X Y::Tail :- member X Tail.
These clauses show two novel features of αProlog. Abstractions can be written as x.(−); but note
that the binder x can also occur as a “non-binder” in the body of clauses—just as in the clauses on
“paper.” The side-condition x # Gamma ensures that x is not free in any term substituted for Gamma.
The novel features of αProlog and αKanren can be appreciated when considering that similarly simple
implementations in “vanilla” Prolog (which, surprisingly, one can find in textbooks [15]) are incorrect,
as they give types to untypable lambda-terms. A simple implementation of a first-order theorem prover
in αKanren has been given in [16].
When implementing a logic programming language based on nominal unification it becomes important to answer the question about its complexity. Surprisingly, this turned out to be a difficult question.
Surprising because nominal unification, like first-order unification, uses simple rewrite rules defined over
first-order terms and uses a substitution operation that is a simple replacement of terms for variables. One
would hope the techniques from efficient first-order unification algorithms carry over to nominal unification. This is unfortunately only partially the case. Quadratic algorithms for nominal unification were
obtained by Calves and Fernandez [3, 2] and independently by Levy and Villaret [13]. These are the best
bounds we have for nominal unification so far.
5  Conclusion
Nominal unification was introduced in [19]. It unifies terms involving binders modulo a notion of alpha-equivalence. In this way it is more powerful than first-order unification, but is conceptually much simpler
than higher-order pattern unification. Unification algorithms are often critical infrastructure in theorem
provers. Therefore it is important to formalise these algorithms in order to ensure correctness. Nominal unification has been formalised twice, once in [19] in Isabelle/HOL and once in [10] in HOL4.
The latter formalises a more efficient version of nominal unification based on triangular substitutions.
The main purpose of this paper is to simplify the transitivity proof for ≈. This in turn simplified the
formalisation in Isabelle/HOL.
There have been several fruitful avenues of research that use nominal unification as a basic building
block, for example the work on αLeanTAP [16]. There have also been several works that go beyond the
limitation of nominal unification where bound names are restricted to be constant symbols that are not
substitutable [11, 6].
References
[1] W. Byrd and D. Friedman. αKanren: A Fresh Name in Nominal Logic Programming Languages. In Proc. of
the 8th Workshop on Scheme and Functional Programming, pages 79–90, 2007.
[2] C. Calvès. Complexity and Implementation of Nominal Algorithms. PhD thesis, King’s College London,
2010.
[3] C. Calvès and M. Fernández. A Polynomial Nominal Unification Algorithm. Theoretical Computer Science,
403(2-3):285–306, 2008.
[4] J. Cheney. Relating Nominal and Higher-Order Pattern Unification. In Proc. of the 19th International
Workshop on Unification (UNIF), pages 104–119, 2005.
[5] J. Cheney and C. Urban. Alpha-Prolog: A Logic Programming Language with Names, Binding, and α-Equivalence. In Proc. of the 20th International Conference on Logic Programming (ICLP), volume 3132 of
LNCS, pages 269–283, 2004.
[6] R. Clouston and A. Pitts. Nominal Equational Logic. In Computation, Meaning and Logic, Articles dedicated
to Gordon Plotkin, volume 172 of ENTCS, pages 223–257. 2007.
[7] G. Dowek, T. Hardin, C. Kirchner, and F. Pfenning. Higher-Order Unification via Explicit Substitutions:
the Case of Higher-Order Patterns. In Proc. of the Joint International Conference and Symposium on Logic
Programming (JICSLP), pages 259–273, 1996.
[8] M. Fernández and J. Gabbay. Nominal Rewriting. Information and Computation, 205:917–965, 2007.
[9] B. Huffman and C. Urban. A New Foundation for Nominal Isabelle. In Proc. of the 1st Interactive Theorem
Prover Conference (ITP), volume 6172 of LNCS, pages 35–50, 2010.
[10] R. Kumar and M. Norrish. (Nominal) Unification by Recursive Descent with Triangular Substitutions. In
Proc. of the 1st Interactive Theorem Prover Conference (ITP), volume 6172 of LNCS, pages 51–66, 2010.
[11] M. Lakin and A. Pitts. Resolving Inductive Definitions with Binders in Higher-Order Typed Functional
Programming. In Proc. of the 18th European Symposium on Programming (ESOP), volume 5502 of LNCS,
pages 47–61, 2009.
[12] J. Levy and M. Villaret. Nominal Unification from a Higher-Order Perspective. In Proc. of the 19th International Conference on Rewriting Techniques and Applications (RTA), volume 5117 of LNCS, pages 246–260,
2008.
[13] J. Levy and M. Villaret. An Efficient Nominal Unification Algorithm. In Proc. of the 21st International
Conference on Rewriting Techniques and Applications (RTA), volume 6 of LIPIcs, pages 246–260, 2010.
[14] D. Miller. A Logic Programming Language with Lambda-Abstraction, Function Variables, and Simple Unification. Journal of Logic and Computation, 1(4):497–536, 1991.
[15] J. C. Mitchell. Concepts in Programming Languages. CUP Press, 2003.
[16] J. Near, W. Byrd, and D. Friedman. αLeanTAP: A Declarative Theorem Prover for First-Order Classical
Logic. In Proc. of the 24th International Conference on Logic Programming (ICLP), volume 5366 of LNCS,
pages 238–252, 2008.
[17] T. Nipkow. Functional Unification of Higher-Order Patterns. In Proc. of the 8th IEEE Symposium of Logic
in Computer Science (LICS), pages 64–74, 1993.
[18] J. A. Robinson. A Machine Oriented Logic Based on the Resolution Principle. JACM, 12(1):23–41, 1965.
[19] C. Urban, A.M. Pitts, and M.J. Gabbay. Nominal Unification. Theoretical Computer Science, 323(1-3):473–
497, 2004.
EXTENSIONS OF THE BENSON-SOLOMON FUSION SYSTEMS
ELLEN HENKE AND JUSTIN LYND
To Dave Benson on the occasion of his second 60th birthday
Abstract. The Benson-Solomon systems comprise the only known family of simple saturated
fusion systems at the prime two that do not arise as the fusion system of any finite group. We
determine the automorphism groups and the possible almost simple extensions of these systems
and of their centric linking systems.
1. Introduction
Given a group Ĝ with normal subgroup G such that CĜ (G) ≤ G, the structure of Ĝ is controlled
by G and Aut(G). On the other hand, if F̂ is a saturated fusion system with normal subsystem
F such that CF̂ (F) ≤ F, it is in general more difficult to describe the possibilities for F̂ given F.
For example, if F = FS (G) is the fusion system of the finite group G, then it is often the case
that some extensions of F do not arise as extensions of G, but rather as extensions of a different
finite group with the same fusion system. It is also possible that there are extensions of FS (G)
that are exotic in the sense that they are induced by no finite group, but to our knowledge, no
example of this is currently known.
It has become clear that working with the fusion system alone is not on-its-face enough for
describing extensions [Oli10, AOV12, Oli16, BMO16]; one should instead work with an associated
linking system. In particular, when F = FS (G) and L = LcS (G) for some finite group G, there is a
natural map κG : Out(G) → Out(L), and whether the extensions of F all come from finite groups
has been shown to be related to asking whether κG′ is split surjective for some other finite group
G′ with the same Sylow subgroup and with F = FS (G′ ) [AOV12, Oli16]. While this machinery is
not directly applicable to the problem of determining extensions of exotic fusion systems, there
are other tools for use, such as [Oli10].
There is one known family of simple exotic fusion systems at the prime 2, the Benson-Solomon
systems. They were first predicted by Dave Benson [Ben98] to exist as finite versions of a 2-local compact group associated to the 2-compact group DI(4) of Dwyer and Wilkerson [DW93].
They were later constructed by Levi and Oliver [LO02] and Aschbacher and Chermak [AC10].
The purpose of this paper is to determine the automorphism groups of the Benson-Solomon fusion
and centric linking systems, and use that information to determine the fusion systems having
one of these as its generalized Fitting subsystem. This information is needed within certain
portions of Aschbacher’s program to classify simple fusion systems of component type at the
prime 2. In particular, it is presumably needed within an involution centralizer problem for
Date: April 4, 2018.
2000 Mathematics Subject Classification. Primary 20D20, Secondary 20D05.
Key words and phrases. fusion system, linking system, Benson-Solomon fusion system, group extension.
Justin Lynd was partially supported by NSA Young Investigator Grant H98230-14-1-0312. This project has
received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie
Sklodowska-Curie grant agreement No. 707758.
these systems. Some of the work on automorphisms of these systems appears in the standard
references [LO02, LO05, AC10], and part of our aim is to complete the picture.
The plan for this paper is as follows. All maps are written on the left. In Section 2, we
recall the various automorphism groups of fusion and linking systems and the maps between
them, following [AOV12]. In Section 3, we look at automorphisms of the fusion and linking
systems of Spin7 (q) and of the Benson-Solomon systems. We show in Theorem 3.10 that the
outer automorphism group of the latter is a cyclic group of field automorphisms of 2-power order.
Finally, we show in Theorem 4.3 that the systems having a Benson-Solomon generalized Fitting
subsystem are uniquely determined by the outer automorphisms they induce on the fusion system,
and that all such extensions are split.
We would like to thank Jesper Grodal, Ran Levi, and Bob Oliver for helpful conversations.
2. Automorphisms of fusion and linking systems
We refer to [AKO11] for the definition of a saturated fusion system, and also for the definition
of a centric subgroup of a fusion system. Let F be a saturated fusion system over the finite
p-group S, and write F c for the collection of F-centric subgroups. Whenever g is an element of a
finite group, we write cg for the conjugation homomorphism x 7→ gx = gxg −1 and its restrictions.
2.1. Background on linking systems. Whenever ∆ is an overgroup-closed, F-invariant collection of subgroups of S, we have the transporter category T∆ (S) with those objects. This is
the full subcategory of the transporter category TS (S) where the objects are subgroups of S, and
morphisms are the transporter sets: NS (P, Q) = {s ∈ S | sP s−1 6 Q} with composition given by
multiplication in S.
A linking system associated to F is a nonempty category L with object set ∆, together with
functors
(2.1)    T∆ (S) −−δ−→ L −−π−→ F.
The functor δ is the identity on objects and injective on morphisms, while π is the inclusion on
objects and surjective on morphisms. Write δP,Q for the corresponding injection NS (P, Q) →
MorL (P, Q) on morphisms, write δP for δP,P , and use similar notation for π.
The category and its structural functors are subject to several axioms which may be found
in [AKO11, Definition II.4.1]. In particular, Axiom (B) states that for all objects P and Q in L
and each g ∈ NS (P, Q), we have πP,Q(δP,Q (g)) = cg ∈ HomF (P, Q). A centric linking system is
a linking system with ∆ = F c . Given a finite group G with Sylow p-subgroup S, the canonical
centric linking system for G is the category LcS (G) with objects the p-centric subgroups P 6 S
(namely those P whose centralizer satisfies CG (P ) = Z(P ) × Op′ (CG (P ))), and with morphisms
the orbits of the transporter set NG (P, Q) = {g ∈ G | gP g−1 6 Q} under the right action of
Op′ (CG (P )).
2.1.1. Distinguished subgroups and inclusion morphisms. The subgroups δP (P ) 6 AutL (P ) are
called distinguished subgroups. When P 6 Q, the morphism ιP,Q := δP,Q (1) ∈ MorL (P, Q) is the
inclusion of P into Q.
2.1.2. Axiom (C) for a linking system. We will make use of Axiom (C) for a linking system, which
says that for each morphism ϕ ∈ MorL (P, Q) and element g ∈ NS (P ), the following identity holds
between morphisms in MorL (P, Q):
ϕ ◦ δP (g) = δQ (π(ϕ)(g)) ◦ ϕ.
2.1.3. Restrictions in linking systems. For each morphism ψ ∈ MorL (P, Q), and each P0 , Q0 ∈
Ob(L) such that P0 6 P , Q0 6 Q, and π(ψ)(P0 ) 6 Q0 , there is a unique morphism ψ|P0 ,Q0 ∈
MorL (P0 , Q0 ) (the restriction of ψ) such that ψ ◦ ιP0 ,P = ιQ0 ,Q ◦ ψ|P0 ,Q0 ; see [Oli10, Proposition 4(b)] or [AKO11, Proposition 4.3].
Note that in case ψ = δP,Q (s) for some s ∈ NS (P, Q), it can be seen from Axioms (B) and (C)
that ψ|P0 ,Q0 = δP0 ,Q0 (s).
2.2. Background on automorphisms.
2.2.1. Automorphisms of fusion systems. An automorphism of F is, by definition, determined by
its effect on S: define Aut(F) to be the subgroup of Aut(S) consisting of those automorphisms α
which preserve fusion in F in the sense that the homomorphism αϕα−1 : α(P ) → α(Q) is in
F for each morphism ϕ : P → Q in F. The automorphisms AutF (S) of S in F thus form a normal
subgroup of Aut(F), and the quotient Aut(F)/ AutF (S) is denoted by Out(F).
2.2.2. Automorphisms of linking systems. A self-equivalence of L is said to be isotypical if it
sends distinguished subgroups to distinguished subgroups, i.e. α(δP (P )) = δα(P ) (α(P )) for each
object P . It sends inclusions to inclusions provided α(ιP,Q ) = ια(P ),α(Q) whenever P 6 Q. The
monoid Aut(L) of isotypical self-equivalences that send inclusions to inclusions is in fact a group
of automorphisms of the category L, and this has been shown to be the most appropriate group of
automorphisms to consider. Note that Aut(L) has been denoted by AutItyp (L) in [AKO11,AOV12]
and elsewhere. When α ∈ Aut(L) and P is an object with α(P ) = P , we denote by αP the
automorphism of AutL (P ) induced by α.
The group AutL (S) acts by conjugation on L in the following way: given γ ∈ AutL (S), consider
the functor cγ ∈ Aut(L) which is cγ (P ) = π(γ)(P ) on objects, and which sends a morphism
ϕ
P −
→ Q in L to the morphism γϕγ −1 from cγ (P ) to cγ (Q) after replacing γ and γ −1 by the
appropriate restrictions (introduced in §2.1.3). Note that when γ = δS (s) for some s ∈ S, then
cγ (P ) is conjugation by s on objects, and cγ (ϕ) = δQ,sQ (s) ◦ ϕ ◦ δsP,P (s−1 ) for each morphism ϕ ∈
MorL (P, Q) by the remark on distinguished morphisms in §2.1.3. In particular, when L = LcS (G)
for some finite group G, cγ is truly just conjugation by s on morphisms.
The image of AutL (S) under the map γ 7→ cγ is seen to be a normal subgroup of Aut(L); the
outer automorphism group of L is
Out(L) := Aut(L)/{cγ | γ ∈ AutL (S)};
we refer to Lemma 1.14(a) and the surrounding discussion in [AOV12] for more details. This
group is denoted by Outtyp (L) in [AKO11, AOV12] and elsewhere.
2.2.3. From linking system automorphisms to fusion system automorphisms. There is a group
homomorphism
(2.2)    µ̃ : Aut(L) −→ Aut(F),
given by restriction to S ≅ δS (S) ≤ AutL (S); see [Oli10, Proposition 6]. The map µ̃ induces a
homomorphism on quotient groups
µ : Out(L) −→ Out(F).
We write µL (or µG when L = LcS (G)) whenever we wish to make clear which linking system we
are working with; similar remarks hold for µ̃. As shown in [AKO11, Proposition II.5.12], ker(µ)
has an interesting cohomological interpretation as the first cohomology group of the center functor
ZF on the orbit category of F-centric subgroups, and ker(µ̃) is correspondingly a certain group
of normalized 1-cocycles for this functor.
2.2.4. From group automorphisms to fusion system and linking system automorphisms. We also
need to compare automorphisms of groups with the automorphisms of their fusion and linking
systems. If G is a finite group with Sylow p-subgroup S, then each outer automorphism of G is
represented by an automorphism that fixes S. This is a consequence of the transitive action of G
on its Sylow subgroups. More precisely, there is an exact sequence:
1 → Z(G) −−incl−−→ NG (S) −−g 7→ cg−−→ Aut(G, S) → Out(G) → 1,
where Aut(G, S) is the subgroup of Aut(G) consisting of those automorphisms that leave S
invariant.
For each pair of p-centric subgroups P, Q 6 S and each α ∈ Aut(G, S), α induces an isomorphism Op′ (CG (P )) → Op′ (CG (α(P ))) and a bijection NG (P, Q) → NG (α(P ), α(Q)). Thus, there
is a group homomorphism
κ̃G : Aut(G, S) → Aut(LcS (G))
which sends α ∈ Aut(G, S) to the functor which is α on objects, and also α on morphisms in
the way just mentioned. This map sends the image of NG (S) to {cγ | γ ∈ AutLcS (G) (S)}, and so
induces a homomorphism
κG : Out(G) → Out(LcS (G))
on outer automorphism groups.
It is straightforward to check that the restriction to S of any member of Aut(G, S) is an
automorphism of the fusion system FS (G). Indeed, for every α ∈ Aut(G, S), the automorphism
α|S of FS (G) is just the image of α under µ̃G ◦ κ̃G .
2.2.5. Summary. What we will need in our proofs is summarized in the following commutative
diagram, which is an augmented and updated version of the one found in [AKO11, p.186].
(2.3)  [Commutative diagram with entries Z(F), Z(S), Ẑ1 (O(F c ), ZF ), lim1 (ZF ), AutL (S), Aut(L), Out(L), AutF (S), Aut(F) and Out(F), connected by the maps incl, δS , πS , λ̃, λ, µ̃ and µ; the diagram itself is not reproduced here.]
All sequences in this diagram are exact. Most of this either is shown in the proof of [AKO11,
Proposition II.5.12], or follows from the above definitions. The first and second rows are exact by
this reference, except that the diagram was not augmented by the maps out of Z(F) (the center
of F); exactness at Z(S) and AutL (S) is shown by following the proof there. Given [AKO11,
Proposition II.5.12], exactness of the last column is equivalent to the uniqueness of centric linking
systems, a result of Chermak. In all the cases needed in this article, it follows from [LO02,
Lemma 3.2]. The second-to-last column is then exact by a diagram chase akin to that in a
5-lemma for groups.
3. Automorphisms
The isomorphism type of the fusion systems of the Benson-Solomon systems FSol (q), as q ranges
over odd prime powers, is dependent only on the 2-share of q 2 − 1 by [COS08, Theorem B]. Since
the centralizer of the center of the Sylow group is the fusion system of Spin7 (q), the same holds
also for the fusion systems of these groups. For this reason, and because of Proposition 3.2 below,
it will be convenient to take q = 5^{2^k} for the sequel. Thus, we let F be the algebraic closure of the
field with five elements.
3.1. Automorphisms of the fusion system of Spin7 (q). Let H̄ = Spin7 (F). Fix a maximal
torus T̄ of H̄. Thus, H̄ is generated by the T̄ -root groups X̄α = {xα (λ) : λ ∈ F} ≅ (F, +), as
α ranges over the root system of type B3 , and is subject to the Chevalley relations of [GLS98,
Theorem 1.12.1]. For any power q1 of 5, we let ψq1 denote the standard Frobenius endomorphism
of H̄, namely the endomorphism of H̄ which acts on the root groups via ψq1 (xα (λ)) = xα (λq1 ).
Set H := CH̄ (ψq ). Thus, H = Spin7 (q) since H̄ is of universal type (see [GLS98, Theorem 2.2.6(f)]). Also, T := CT̄ (ψq ) is a maximal torus of H. For each power q1 of 5, the Frobenius
endomorphism ψq1 of H̄ acts on H in the way just mentioned, and it also acts on T by raising
each element to the power q1 . For ease of notation, we denote by ψq1 also the automorphism of
H induced by ψq1 .
The normalizer NH (T ) contains a Sylow 2-subgroup of H [GL83, 10-1(2)], and NH (T )/T is isomorphic to C2 × S4 , the Weyl group of B3 . Applying a Frattini argument, we see that there is a
Sylow 2-subgroup S of NH (T ) invariant under ψ5 , and we fix this choice of S for the remainder.
The automorphism groups of the Chevalley groups were determined by Steinberg [Ste60], and
in particular,
(3.1)    Out(H) = Outdiag(H) × Φ ≅ C_2 × C_{2^k} ,
where Φ is the group of field automorphisms, and where Outdiag(H) is the group of outer automorphisms of H induced by NT̄ (H) [GLS98, Theorem 2.5.1(b)]. We mention that S is normalized
by every element of NT̄ (H). So we find canonical representatives of the elements of Φ and of
Outdiag(H) in Aut(H, S).
We need to be able to compare automorphisms of the group with automorphisms of the fusion
and linking systems, and this has been carried out in full generality by Broto, Møller, and Oliver
[BMO16] for groups of Lie type.
Let FSpin (q) and LcSpin (q) be the associated fusion and centric linking systems over S of the
group H, and recall the maps µH and κH from §§2.2.3, 2.2.4
Proposition 3.2. The maps µH and κH are isomorphisms, and hence
Out(LSpin (5^{2^k})) ≅ Out(FSpin (5^{2^k})) ≅ C_2 × C_{2^k} .
Proof. That µH is an isomorphism follows from (2.3) and [LO02, Lemma 3.2]. Also, κH is an
isomorphism by [BMO16, Propositions 5.14, 5.15].
3.2. Automorphisms of the Benson-Solomon systems. We keep the notation from the
previous subsection. We denote by F := FSol (q) a Benson-Solomon fusion system over the 2-group S ∈ Syl2 (H) fixed above, and by L := LcSol (q) an associated centric linking system with
structural functors δ and π. Set
T2 := T ∩ S,
the 2-torsion in the maximal torus T of H. Then T2 is homocyclic of rank three and of exponent
2^{k+2} [AC10, §4]. Also, Z(S) ≤ T2 is of order 2, and NF (Z(S)) = CF (Z(S)) = FSpin (q) is a fusion
system over S.
Since Z(S) is contained in every F-centric subgroup, by Definition 6.1 and Lemma 6.2 of
[BLO03], we may take NL (Z(S)) = CL (Z(S)) for the centric linking system of Spin7 (q). By the
items just referenced, CL (Z(S)) is a subcategory of L with the same objects, and with morphisms
those morphisms ϕ in L such that π(ϕ)(z) = z. Further, CL (Z(S)) has the same inclusion
functor δ, and the projection functor for CL (Z(S)) is the restriction of π. (This was also shown
in [LO02, Lemma 3.3(a,b)].)
Write Fz for FSpin (q) and Lz for CL (Z(S)) for short. Each member of Aut(F) fixes Z(S) and
so Aut(F) ⊆ Aut(Fz ). So the inclusion map from Aut(F) to Aut(Fz ) can be thought of as a
“restriction map”
(3.3)
ρ : Aut(F) −→ Aut(Fz )
given by remembering only that an automorphism preserves fusion in Fz . We want to make
explicit in Lemma 3.5 that the map ρ of (3.3) comes from a restriction map on the level of centric
linking systems. First we need to recall some information about the normalizer of T2 in L and
Lz .
Lemma 3.4. The following hold after identifying T2 with its image δT2 (T2 ) 6 AutL (T2 ).
(a) AutLz (T2 ) is an extension of T2 by C2 × S4 , and AutL (T2 ) is an extension of T2 by
C2 × GL3 (2) in which the GL3 (2) factor acts naturally on T2 /Φ(T2 ). In each case, a C2
factor acts as inversion on T2 . Also, T2 is equal to its centralizer in each of the above
normalizers, Z(AutLz (T2 )) = Z(S), and Z(AutL (T2 )) = 1.
(b) AutF (S) = Inn(S) = AutFz (S) and AutL (S) = δS (S) = AutLz (S).
Proof. For part (a), see Lemma 4.3 and Proposition 5.4 of [AC10].
Since T2 is the unique abelian subgroup of its order in S by [AC10, Lemma 4.9(c)], it is
characteristic. By the uniqueness of restrictions 2.1.3, we may therefore view AutL (S) as a
subgroup of AutL (T2 ). Since AutL (T2 ) has self-normalizing Sylow 2-subgroups by (a), the same
holds for AutL (S). Now (b) follows for L, and for F after applying π. This also implies the
statement for Lz and Fz , as subcategories.
There is a 3-dimensional commutative diagram related to (2.3) that is the point of the next
lemma.
Lemma 3.5. There is a restriction map ρ̂ : Aut(L) → Aut(Lz ), with kernel the automorphisms
induced by conjugation by δS (Z(S)) ≤ AutL (S), which makes the diagram
  Aut(L) −−ρ̂−→ Aut(Lz )
    ↓ µ̃L            ↓ µ̃Lz
  Aut(F) −−ρ−→ Aut(Fz ) ,
commutative, which commutes with the conjugation maps out of
  AutL (S) −−id−→ AutLz (S)
    ↓ πS                ↓ πS
  AutF (S) −−id−→ AutFz (S),
and which therefore induces a commutative diagram
  Out(L) −−[ρ̂]−→ Out(Lz )
    ↓ µL               ↓ µLz
  Out(F) −−[ρ]−→ Out(Fz ).
Proof. Recall that we have arranged Lz ⊆ L. Thus, the horizontal maps in the second diagram
are the identity maps by Lemma 3.4, and so the lemma amounts to checking that an element of
Aut(L) sends morphisms in Lz to morphisms in Lz . For then, we can define its image under ρ̂ to
have the same effect on objects, and to be the restriction to Lz on morphisms.
Now fix an arbitrary α ∈ Aut(L), objects P, Q ∈ F c = Fzc , and a morphism ϕ ∈ MorL (P, Q).
Let Z(S) = ⟨z⟩. By two applications of Axiom (C) for a linking system (§§2.1.2),
(3.6)
ιP,S ◦ δP (z) = δS (z) ◦ ιP,S
and
ια(P ),S ◦ δα(P ) (z) = δS (z) ◦ ια(P ),S ,
because π(ιP,S )(z) = π(ια(P ),S )(z) = z. Since αS is an automorphism of δS (S) ≅ S, it sends δS (z)
to itself. Thus, α sends the right side of the first equation of (3.6) to the right side of the second,
since it sends inclusions to inclusions. Thus
ια(P ),S ◦ α(δP (z)) = ια(P ),S ◦ δα(P ) (z).
However, each morphism in L is a monomorphism [Oli10, Proposition 4], so we obtain
(3.7)
α(δP (z)) = δα(P ) (z),
and the same holds for Q in place of P .
Since ϕ ∈ Mor(Lz ), we have π(ϕ)(z) = z, so by two more applications of Axiom (C),
(3.8)
ϕ ◦ δP (z) = δQ (z) ◦ ϕ
and
α(ϕ) ◦ δα(P ) (z) = δα(Q) (π(α(ϕ))(z)) ◦ α(ϕ).
After applying α to the left side of the first equation of (3.8), we obtain the left side of the second
by (3.7). Thus, comparing right sides, we obtain
δα(Q) (z) ◦ α(ϕ) = δα(Q) (π(α(ϕ))(z)) ◦ α(ϕ).
Since each morphism in L is an epimorphism [Oli10, Proposition 4], it follows that
δα(Q) (z) = δα(Q) (π(α(ϕ))(z)).
Hence, π(α(ϕ))(z) = z because δα(Q) is injective (Axiom (A2)). That is, α(ϕ) ∈ Mor(Lz ) as
required.
The kernel of ρ̂ is described via a diagram chase in (2.3). Suppose ρ̂(α) is the identity. Then
α is sent to the identity automorphism of S by µ̃L , since ρ is injective. Thus, α comes from a
normalized 1-cocycle by (2.3), and these are in turn induced by elements of Z(S) since lim1 (ZF )
is trivial [LO02, Lemma 3.2].
Lemma 3.9. Let G be a finite group and let V be an abelian normal 2-subgroup of G such
that CG (V ) 6 V . Let α be an automorphism of G such that [V, α] = 1 and α2 ∈ Inn(G). Then
[G, α] 6 V , and if G acts fixed point freely on V /Φ(V ), then the order of α is at most the exponent
of V .
Proof. As [V, α] = 1, we have [V, G, α] 6 [V, α] = 1 and [α, V, G] = [1, G] = 1. Hence, by the
Three subgroups lemma, it follows [G, α, V ] = 1. As CG (V ) 6 V , this means
[G, α] 6 V.
Assume from now on that G acts fixed point freely on V /Φ(V ). Write G∗ := G ⋊ hαi for the
semidirect product of G by hαi. As [V, α] = 1 and [G, α] 6 V , the subgroup W := V hαi is an
abelian normal subgroup of G∗ with [W, G∗ ] 6 V .
As [V, α] = 1, it follows [V, α2 ] = 1. So α2 ∈ Inn(G) is realized by conjugation with an element
−1 2
of CG (V ) = V . Pick u ∈ V with α2 = cu |G . This means that, for any g ∈ G, we have u α g = g
in G∗ . So Z := hu−1 α2 i centralizes G in G∗ . Since W is abelian and contains Z, it follows that
Z lies in the centre of G∗ = W G. Set
G∗ = G∗ /Z.
Because CG (V ) 6 V , the order of u equals the order of cu |G = α2 . Hence, Z ∩ G = 1 = Z ∩ hαi.
So |ᾱ| = |α| and G ≅ Ḡ. In particular, we have V̄ ≅ V and Ḡ acts fixed point freely on
V̄ /Φ(V̄ ). Note also that ᾱ2 = ū. Hence |W̄ /V̄ | = 2 and Φ(W̄ ) ≤ V̄ . Moreover, letting n ∈ N
such that 2^n is the exponent of V , we have |α| = |ᾱ| ≤ 2 · 2^n = 2^{n+1} . Assume |ᾱ| = 2^{n+1} .
Then ū = ᾱ2 has order 2^n and is thus not a square in V̄ . Note that Φ(V̄ ) = {v 2 : v ∈ V̄ }
and Φ(W̄ ) = {w2 : w ∈ W̄ } = ⟨ᾱ2 ⟩Φ(V̄ ) ≤ V̄ . Hence, Φ(W̄ )/Φ(V̄ ) has order 2. As Ḡ normalizes
Φ(W̄ )/Φ(V̄ ), it thus centralizes Φ(W̄ )/Φ(V̄ ), contradicting the assumption that Ḡ acts fixed point
freely on V̄ /Φ(V̄ ). Thus |α| = |ᾱ| ≤ 2^n , which shows the assertion.
We are now in a position to determine the automorphisms of F = FSol(q) and L = LcSol (q). It
is known that the field automorphisms induce automorphisms of these systems as we will make
precise next. Recall that the field automorphism ψ5 of H of order 2^k normalizes S and so ψ5 |S
is an automorphism of Fz = FS (H). By [AC10, Lemma 5.7], the automorphism ψ5 |S is actually
also an automorphism of F. We thus denote it by ψF and refer to it as the field automorphism
of F induced by ψ5 . By Proposition 3.2, this automorphism has order 2^k .
By [LO02, Proposition 3.3(d)], there is a unique lift ψ of ψF under µ̃L that is the identity on
π −1 (FSol (5)) and restricts to κ̃H (ψ5 ) on Lz . We refer to ψ as the field automorphism of L induced
by ψ5 .
Theorem 3.10. The map µL : Out(LcSol (q)) → Out(FSol (q)) is an isomorphism, and
Out(LcSol (q)) ≅ Out(FSol (q)) ≅ C_{2^k}
is induced by field automorphisms. Also, the automorphism group Aut(LcSol (q)) is a split extension
of S by Out(LcSol (q)); in particular, it is a 2-group.
More precisely, if ψ is the field automorphism of LcSol (q) induced by ψ5 , then ψ has order 2^k
and Aut(LcSol (q)) is the semidirect product of AutL (S) ≅ S with the cyclic group generated by ψ.
Proof. We continue to write L = LcSol (q), F = FSol (q), Lz = LcSpin (q), and Fz = FSpin (q), and we
continue to assume that L has been chosen so as to contain Lz as a linking subsystem. Recall
that T2 ≤ S is homocyclic of rank 3 and exponent 2^{k+2} .
We first check whether the outer automorphism of Lz induced by a diagonal automorphism
of H extends to L, and we claim that it doesn’t. A non-inner diagonal automorphism of H
is induced by conjugation by an element t of T̄ by [GLS98, Theorem 2.5.1(b)]. Its class as
an outer automorphism has order 2, so if necessary we replace t by an odd power and assume
that t2 ∈ T2 . Now T2 consists of the elements of T̄ of order at most 2^{k+2} , so t has order 2^{k+3}
and induces an automorphism of H of order at least 2^{k+2} , depending on whether it powers into
Z(H) = Z(S) or not (in fact it does, but this is not needed). For ease of notation, we identify T2
with δT2 (T2 ) 6 AutL (T2 ), and we identify s ∈ S with δS (s) ∈ AutL (S).
Let τ = κ̃H ([ct ]) ∈ Aut(Lz ), and assume that τ lifts to an element τ̂ ∈ Aut(L) under the
map ρ̂ of Lemma 3.5. As ρ̂(τ̂ ) = τ = κ̃H (ct ), we have ρ̂(τ̂ 2 ) = τ 2 = κ̃H (ct2 ), i.e. ρ̂(τ̂ 2 ) acts on
every object and every morphism of Lz = LcS (H) as conjugation by t2 . Similarly, if we take the
conjugation automorphism ct2 of L by t2 (or more precisely the conjugation automorphism cδS (t2 )
of L by δS (t2 )), then ρ̂(ct2 ) is just the conjugation automorphism of Lz by t2 . So according to
the remark at the end of §§2.2.2, the automorphism ρ̂(ct2 ) acts also on Lz via conjugation by t2 ,
showing ρ̂(τ̂ 2 ) = ρ̂(ct2 ). By the description of the kernel in Lemma 3.5, we have thus τ̂ 2 = ct2 or
τ̂ 2 = ct2 z .
Now set α := τ̂T2 ∈ Aut(AutL (T2 )). From what we have shown, it follows that α equals the
conjugation automorphism ct2 or ct2 z of AutL (T2 ). Note that |ct2 z | = |t2 z| = |t2 | = |ct2 |, since
Z(AutL (T2 )) = 1 by Lemma 3.4(a). Hence,
(3.11)    |α| = 2|α2 | = 2|t2 | = 2^{k+3} .
On the other hand, α centralizes T2 , and we have seen that α2 is an inner automorphism of
AutL (T2 ). Moreover, by Lemma 3.4(a), CAutL (T2 ) (T2 ) = T2 , and AutL (T2 ) acts fixed point freely
on T2 /Φ(T2 ). The hypotheses of Lemma 3.9 thus hold for G = AutL (T2 ) and α ∈ Aut(G). So
α has order at most 2^{k+2} by that lemma, contradicting (3.11). We conclude that a diagonal
automorphism of Lz does not extend to an automorphism of L.
The existence of the field automorphism ψF of F, and the fact that ψF has order 2^k , now yields
together with Proposition 3.2 that Out(F) ≅ C_{2^k} is generated by the image of ψF in Out(F).
Moreover, by [LO02, Lemma 3.2] and the exactness of the third column of (2.3), the maps µL
and µLz are isomorphisms. Thus,
Out(L) ≅ Out(F) ≅ C_{2^k} .
Let ψ be the field automorphism of L induced by ψ5 as above. Then ψ is the identity on
π −1 (FSol (5)) by definition. It remains to show that ψ has order 2^k , since this will imply that
Aut(L) is a split extension of AutL (S) ≅ S by ⟨ψ⟩ ≅ Out(L) ≅ Out(F).
The automorphism ψ^{2^k} maps to the trivial automorphism of F, and so is conjugation by an element
of Z(S) by (2.3). Now ψ^{2^k} is trivial on AutLcSol (5) (Ω2 (T2 )), whereas z ∉ Z(AutLcSol (5) (Ω2 (T2 )))
by Lemma 3.4(a) as Ω2 (T2 ) is the torus of LcSol (5). Thus, since a morphism ϕ is fixed by cz if and
only if π(ϕ)(z) = z (Axiom (C)), we conclude that ψ^{2^k} is the identity automorphism of L, and
this completes the proof.
4. Extensions
In this section, we recall a result of Linckelmann on the Schur multipliers of the Benson-Solomon systems, and we prove that each saturated fusion system F with F ∗ (F) ≅ FSol (q) is a
split extension of F ∗ (F) by a group of outer automorphisms.
Recall that the hyperfocal subgroup of a saturated p-fusion system F over S is defined to be
the subgroup of S given by
hyp(F) = ⟨[ϕ, s] := ϕ(s)s−1 | s ∈ P ≤ S and ϕ ∈ O p (AutF (P ))⟩.
A subsystem F0 over S0 6 S is said to be of p-power index in F if hyp(F) 6 S0 and O p (AutF (P )) 6
AutF0 (P ) for each P 6 S0 . There is always a unique normal saturated subsystem on hyp(F) of
p-power index in F, which is denoted by Op (F) [AKO11, §I.7]. We will need the next lemma in
§§4.2.
Lemma 4.1. Let F be a saturated fusion system over S, and let F0 be a weakly normal subsystem
of F over S0 6 S. Assume that Op (AutF (S0 )) 6 AutF0 (S0 ). Then Op (AutF (P )) 6 AutF0 (P )
for every P 6 S0 . Thus, if in addition hyp(F) 6 S0 , then F0 has p-power index in F.
Proof. Note that AutF0 (P ) is normal in AutF (P ) for every P 6 S0 , since F0 is weakly normal
in F. We need to show that AutF (P )/ AutF0 (P ) is a p-group for every P 6 S0 . Suppose this is
false and let P be a counterexample of maximal order. Our assumption gives P < S0 . Hence,
P < Q := NS0 (P ), and the maximality of P implies that
AutF (Q)/ AutF0 (Q)
is a p-group. Notice that
N_{Aut_F(Q)}(P)/N_{Aut_{F0}(Q)}(P) ≅ N_{Aut_F(Q)}(P) Aut_{F0}(Q)/Aut_{F0}(Q) ≤ Aut_F(Q)/Aut_{F0}(Q)
and thus NAutF (Q) (P )/NAutF0 (Q) (P ) is a p-group.
If α ∈ HomF (P, S) with α(P ) ∈ F f then conjugation by α induces a group isomorphism
from AutF (P ) to AutF (α(P )). As F0 is weakly normal, we have α(P ) 6 S0 and conjugation
by α takes AutF0 (P ) to AutF0 (α(P )). So upon replacing P by α(P ), we may assume without
loss of generality that P is fully F-normalized. Then P is also fully F0 -normalized by [Asc08,
Lemma 3.4(5)]. By the Sylow axiom, AutS0 (P ) is a Sylow p-subgroup of AutF0 (P ). So the
Frattini argument yields
AutF (P ) = AutF0 (P )NAutF (P ) (AutS0 (P ))
and thus
Aut_F(P)/Aut_{F0}(P) ≅ N_{Aut_F(P)}(Aut_{S0}(P))/N_{Aut_{F0}(P)}(Aut_{S0}(P)).
By the extension axiom for F and F0 , each element of NAutF (P ) (AutS0 (P )) extends to an automorphism of AutF (Q), and each element of NAutF0 (P ) (AutS0 (P )) extends to an automorphism of
AutF0 (Q). Therefore, the map
Φ : NAutF (Q) (P ) → NAutF (P ) (AutS0 (P )), ϕ 7→ ϕ|P
is an epimorphism which maps NAutF0 (Q) (P ) onto NAutF0 (P ) (AutS0 (P )). Hence,
Aut_F(P)/Aut_{F0}(P) ≅ N_{Aut_F(P)}(Aut_{S0}(P))/N_{Aut_{F0}(P)}(Aut_{S0}(P)) ≅ N_{Aut_F(Q)}(P)/N_{Aut_{F0}(Q)}(P) ker(Φ).
We have seen above that NAutF (Q) (P )/NAutF0 (Q) (P ) is a p-group, and therefore also
NAutF (Q) (P )/NAutF0 (Q) (P ) ker(Φ)
is a p-group. Hence, AutF (P )/ AutF0 (P ) is a p-group, and this contradicts our assumption that
P is a counterexample.
4.1. Extensions to the bottom. A central extension of a fusion system F0 is a fusion system
F such that F/Z ≅ F_0 for some subgroup Z ≤ Z(F). The central extension is said to be perfect
if F = O p (F). Linckelmann has shown that the Schur multiplier of a Benson-Solomon system is
trivial.
Theorem 4.2 (Linckelmann). Let F be a perfect central extension of a Benson-Solomon fusion
system F0 . Then F = F0 .
Proof. This follows from Corollary 4.4 of [Lin06a] together with the fact that Spin7 (q) has Schur
multiplier of odd order when q is odd [GLS98, Tables 6.1.2, 6.1.3].
4.2. Extensions to the top. The next theorem describes the possible extensions (S, F) of a
Benson-Solomon system (S0 , F0 ). The particular hypotheses are best stated in terms of the
generalized Fitting subsystem of Aschbacher [Asc11], but they are equivalent to requiring that
F0 E F and CS (F0 ) 6 S0 , where CS (F0 ) is the centralizer constructed in [Asc11, §6]. This latter
formulation is sometimes expressed by saying that F0 is centric normal in F.
Theorem 4.3. Let F_0 = F_Sol(5^{2^k}) be a Benson-Solomon system over S_0.
(a) If F is a saturated fusion system over S such that F*(F) = F_0, then F_0 = O^2(F), S splits over S_0, and the map S/S_0 → Out(F_0) induced by conjugation is injective.
(b) Conversely, given a subgroup A ≤ Out(F_0) ≅ C_{2^k}, there is a saturated fusion system F over some 2-group S such that F*(F) = F_0 and the map S/S_0 → Out(F_0) induced by conjugation on S_0 has image A. Moreover, the pair (S, F) with these properties is uniquely determined up to isomorphism.
If L_0 is a centric linking system associated to F_0, then Aut_{L_0}(S_0) = S_0, and the p-group S can be chosen to be the preimage of A in Aut(L_0) under the quotient map from Aut(L_0) to Out(L_0) ≅ Out(F_0).
Proof. Let F be a saturated fusion system over S such that F ∗ (F) = F0 . Set F1 = F0 S, the
internal extension of F0 by S, as in [Hen13] or [Asc11, §8]. According to [AOV12, Proposition 1.31],
there is a normal pair of linking systems L0 E L1 , associated to the normal pair F0 E F1 .
Furthermore, L0 E L1 can be chosen such that L0 is a centric linking system. There is a natural
map from AutL1 (S0 ) to Aut(L0 ) which sends a morphism ϕ ∈ AutL1 (S0 ) to conjugation by ϕ.
(So the restriction of this map to AutL0 (S0 ) is the conjugation map described in §§2.2.2.)
The centralizer CS (F0 ) depends a priori on the fusion system F, but it is shown in [Lyn15,
Lemma 1.13] that it does not actually matter whether we form CS (F0 ) inside of F or inside of F1 .
Moreover, since F ∗ (F) = F0 , it follows from [Asc11, Theorem 6] that CS (F0 ) = Z(F0 ) = 1. Thus,
by a result of Semeraro [Sem15, Theorem A], the conjugation map conj : Aut_{L_1}(S_0) → Aut(L_0) is
injective. By Lemma 3.4, we have S0 = AutL0 (S0 ) via the inclusion functor δ1 for L1 . By
Theorem 3.10, Aut(L0 ) is a 2-group which splits over S0 . Moreover, by the same theorem, we
have that C_{Aut(L_0)}(S_0) ≤ S_0 and Out(L_0) ≅ Out(F_0) is cyclic. Since (δ_1)_{S_0}(S) ≅ S is a Sylow 2-subgroup of Aut_L(S_0) by [Oli10, Proposition 4(d)], we can conclude that
S0 = AutL0 (S0 ) E AutL1 (S0 ) = S,
via the inclusion functor δ1 for L1 . Moreover, it follows that S splits over S0 , and CS (S0 ) 6 S0 .
The latter property means that the map
S/S0 → Out(F0 )
is injective. In particular, S/S0 is cyclic as Out(F0 ) is cyclic.
Next, we show that O^2(F) = F_0. Fix a subgroup P ≤ S, and let α ∈ Aut_F(P) be an automorphism of odd order. Then α induces an odd-order automorphism of the cyclic 2-group P/(P ∩ S_0) ≅ PS_0/S_0 ≤ S/S_0. This automorphism must be trivial, and so [P, α] ≤ S_0. Hence, [P, O^2(Aut_F(P))] ≤ S_0 for all P ≤ S. Since hyp(F_0) = S_0, we have hyp(F) = S_0. Note that Aut_F(S_0) is a 2-group as Aut_F(S_0) ≤ Aut(F_0) and Aut(F_0) is a 2-group by Theorem 3.10. Therefore O^2(F) = F_0 by Lemma 4.1. We conclude that F_1 = F by the uniqueness statement
in [Hen13, Theorem 1]. This completes the proof of (a). Moreover, we have seen that the following
property holds for any normal pair L0 E L attached to F0 E F:
(4.4)    S_0 = Aut_{L_0}(S_0) ⊴ Aut_{L_1}(S_0) = S, and the conjugation map conj : S → Aut(L_0) is injective.
Finally, we prove (b). Fix a centric linking system L0 associated to F0 with inclusion functor
δ0 . Let S 6 Aut(L0 ) be the preimage of A under the quotient map to Out(F0 ). We will identify
S0 with δ0 (S0 ) so that S0 = AutL0 (S0 ) by Lemma 3.4. Write ι : S0 → Aut(L0 ), s ↦ cs for the map
sending s ∈ S0 to the automorphism of L0 induced by conjugation with s in L0 . Then ι(S0 ) is
normal in S. Let χ : S → Aut(S0 ) be the map defined by α 7→ ι−1 ◦ cα |ι(S0 ) ◦ ι; i.e. χ corresponds
to conjugation in S if we identify S0 with ι(S0 ). We argue next that the following diagram
commutes:
(4.5)    [commutative diagram, not reproduced: ι : S_0 → Aut(L_0) along the top, the inclusion S_0 → S ≤ Aut(L_0), χ : S → Aut(S_0), and the restriction map α ↦ α_{S_0} : Aut(L_0) → Aut(S_0)]
The upper triangle clearly commutes. Observe that α◦ι(s)◦α−1 = α◦cs ◦α−1 = cαS0 (s) = ι(αS0 (s))
for every s ∈ S0 and α ∈ S. Hence, for every α ∈ S and s ∈ S0 , we have (ι−1 ◦ cα |ι(S0 ) ◦ ι)(s) =
ι−1 (α ◦ ι(s) ◦ α−1 ) = αS0 (s) and so the lower triangle commutes.
We will now identify S0 with its image in S under ι, so that ι becomes the inclusion map and
χ corresponds to the map S → Aut(S0 ) induced by conjugation in S. As the above diagram
commutes, it follows then that the diagram in [Oli10, Theorem 9] commutes when we take Γ = S
and τ : S → Aut(L0 ) to be the inclusion. Thus, by that theorem, there is a saturated fusion
system F over S in which F0 is weakly normal, and there is a corresponding normal pair of
linking systems L0 E L (in the sense of [AOV12, §1.5]) such that S = AutL (S0 ) has the given
action on L0 (i.e. the automorphism of L0 induced by conjugation with s ∈ S in L equals the
automorphism s of L0 ). By the same theorem, the pair (F, L) is unique up to isomorphism of
fusion systems and linking systems with these properties. Since F0 is simple [Lin06b], F0 is in
fact normal in F by a result of Craven [Cra11, Theorem A]. Thus, since CS (F0 ) 6 CS (S0 ) 6 S0 ,
it is a consequence of [Asc11, (9.1)(2), (9.6)] that F ∗ (F) = F0 .
So it remains only to prove that (S, F) is uniquely determined up to an isomorphism of fusion
systems. Let F ′ be a saturated fusion system over a p-group S ′ such that F ∗ (F ′ ) = F0 , and such
that the map S ′ /S0 → Out(F0 ) induced by conjugation has image A. Then by (a), F0 = Op (F ′ ).
So by [AOV12, Proposition 1.31], there is a normal pair of linking systems L′0 E L′ associated to
the normal pair F0 E F ′ . Moreover, we can choose L′0 to be a centric linking system. Since a
centric linking system attached to F0 is unique, there is an isomorphism θ : L′0 → L0 of linking
systems. We may assume that the set of morphisms which lie in L′ but not in L′0 is disjoint from
the set of morphisms in L0 . Then we can construct a new linking system from L′ by keeping
every morphism of L′ which is not in L′0 and replacing every morphism ψ in L′0 by θ(ψ), and then
carrying over the structure of L′ in the natural way. Thereby we may assume L′0 = L0 . So we are
given a normal pair L0 E L′ attached to F0 E F ′ . By (4.4) applied with L′ and F ′ in place of F
and L, we have S0 = AutL0 (S0 ) E AutL′ (S0 ) = S ′ via the inclusion functor δ′ of L′ . Let
τ : S ′ → Aut(L0 )
be the map taking s ∈ S ′ to the automorphism of L0 induced by conjugation with s in L′ . Again
using (4.4), we see that τ is injective. Note also that τ restricts to the identity on S0 if we
identify S0 with ι(S0 ) as above. Recall that the map S ′ /S0 → Out(F0 ) induced by conjugation
has image A. So Theorem 3.10 implies τ (S ′ ) = S, i.e. we can regard τ as an isomorphism
τ : S ′ → S. So replacing (S ′ , F ′ ) by (S, τF ′ ) and then choosing L0 E L′ as before, we may assume
S = S ′ . So F ′ is a fusion system over S with F0 E F ′ , and L0 E L′ is a normal pair of linking
systems associated to F0 E F ′ such that AutL′ (S0 ) = S via δ′ . Let s ∈ S. Recall that τ (s) is
the automorphism of L0 induced by conjugation with s in L′ . Observe that the automorphism of
S0 = AutL0 (S0 ) induced by τ (s) equals just the automorphism of S0 induced by conjugation with
s in S. Similarly, the automorphism s of L0 equals the automorphism of L0 given by conjugation
with s in L, and so induces on S0 = AutL0 (S0 ) just the automorphism given by conjugation with
s in S. Theorem 3.10 gives CAut(L0 ) (S0 ) 6 S0 and this implies that any two automorphisms of
L0 , which induce the same automorphism on S0 , are equal. Hence, τ (s) = s for any s ∈ S. In
other words, S = AutL′ (S0 ) induces by conjugation in L′ the canonical action of S on L0 . The
uniqueness of the pair (F, L) implies now F′ ≅ F and L′ ≅ L. This shows that (S, F) is uniquely determined up to isomorphism.
References
[AC10] Michael Aschbacher and Andrew Chermak, A group-theoretic approach to a family of 2-local finite groups constructed by Levi and Oliver, Ann. of Math. (2) 171 (2010), no. 2, 881–978.
[AKO11] Michael Aschbacher, Radha Kessar, and Bob Oliver, Fusion systems in algebra and topology, London Mathematical Society Lecture Note Series, vol. 391, Cambridge University Press, Cambridge, 2011. MR 2848834
[AOV12] Kasper K. S. Andersen, Bob Oliver, and Joana Ventura, Reduced, tame and exotic fusion systems, Proc. Lond. Math. Soc. (3) 105 (2012), no. 1, 87–152. MR 2948790
[Asc08] Michael Aschbacher, Normal subsystems of fusion systems, Proc. Lond. Math. Soc. (3) 97 (2008), no. 1, 239–271. MR 2434097 (2009e:20044)
[Asc11] Michael Aschbacher, The generalized Fitting subsystem of a fusion system, Mem. Amer. Math. Soc. 209 (2011), no. 986, vi+110. MR 2752788
[Ben98] David J. Benson, Cohomology of sporadic groups, finite loop spaces, and the Dickson invariants, Geometry and cohomology in group theory (Durham, 1994), London Math. Soc. Lecture Note Ser., vol. 252, Cambridge Univ. Press, Cambridge, 1998, pp. 10–23. MR 1709949 (2001i:55017)
[BLO03] Carles Broto, Ran Levi, and Bob Oliver, The homotopy theory of fusion systems, J. Amer. Math. Soc. 16 (2003), no. 4, 779–856 (electronic).
[BMO16] Carles Broto, Jesper M. Møller, and Bob Oliver, Automorphisms of fusion systems of finite simple groups of Lie type, preprint (2016), arXiv:1601.04566.
[COS08] Andrew Chermak, Bob Oliver, and Sergey Shpectorov, The linking systems of the Solomon 2-local finite groups are simply connected, Proc. Lond. Math. Soc. (3) 97 (2008), no. 1, 209–238. MR 2434096 (2009g:55018)
[Cra11] David A. Craven, Normal subsystems of fusion systems, J. Lond. Math. Soc. (2) 84 (2011), no. 1, 137–158. MR 2819694
[DW93] W. G. Dwyer and C. W. Wilkerson, A new finite loop space at the prime two, J. Amer. Math. Soc. 6 (1993), no. 1, 37–64. MR 1161306
[GL83] Daniel Gorenstein and Richard Lyons, The local structure of finite groups of characteristic 2 type, Mem. Amer. Math. Soc. 42 (1983), no. 276, vii+731. MR 690900
[GLS98] Daniel Gorenstein, Richard Lyons, and Ronald Solomon, The classification of the finite simple groups. Number 3. Part I. Chapter A, Mathematical Surveys and Monographs, vol. 40, American Mathematical Society, Providence, RI, 1998, Almost simple K-groups. MR 1490581 (98j:20011)
[Hen13] Ellen Henke, Products in fusion systems, J. Algebra 376 (2013), 300–319. MR 3003728
[Lin06a] Markus Linckelmann, A note on the Schur multiplier of a fusion system, J. Algebra 296 (2006), no. 2,
402–408.
[Lin06b]
, Simple fusion systems and the Solomon 2-local groups, J. Algebra 296 (2006), no. 2, 385–401.
[LO02]
Ran Levi and Bob Oliver, Construction of 2-local finite groups of a type studied by Solomon and Benson,
Geom. Topol. 6 (2002), 917–990 (electronic).
[LO05]
, Correction to: “Construction of 2-local finite groups of a type studied by Solomon and Benson”
[Geom. Topol. 6 (2002), 917–990 (electronic); mr1943386], Geom. Topol. 9 (2005), 2395–2415 (electronic).
[Lyn15] Justin Lynd, A characterization of the 2-fusion system of L4 (q), J. Algebra 428 (2015), 315–356.
MR 3314296
[Oli10]
Bob Oliver, Extensions of linking systems and fusion systems, Trans. Amer. Math. Soc. 362 (2010),
no. 10, 5483–5500. MR 2657688 (2011f:55032)
[Oli16]
, Reductions to simple fusion systems, Bulletin of the London Mathematical Society 48 (2016),
no. 6, 923–934.
[Sem15] Jason Semeraro, Centralizers of subsystems of fusion systems, J. Group Theory 18 (2015), no. 3, 393–405.
MR 3341522
[Ste60]
Robert Steinberg, Automorphisms of finite linear groups, Canad. J. Math. 12 (1960), 606–615.
MR 0121427
E-mail address: ellen.henke@abdn.ac.uk
E-mail address: justin.lynd@abdn.ac.uk
Institute of Mathematics, University of Aberdeen, Fraser Noble Building, Aberdeen AB15 5LY,
United Kingdom
| 4 |
A generative model for sparse, evolving digraphs
arXiv:1710.06298v1 [] 17 Oct 2017
Georgios Papoudakis and Philippe Preux and Martin Monperrus
Abstract Generating graphs that are similar to real ones is an open problem, while
the similarity notion is quite elusive and hard to formalize. In this paper, we focus
on sparse digraphs and propose SDG, an algorithm that aims at generating graphs
similar to real ones. Since real graphs are evolving and this evolution is important
to study in order to understand the underlying dynamical system, we tackle the
problem of generating series of graphs. We propose SEDGE, an algorithm meant to
generate series of graphs similar to a real series. SEDGE is an extension of SDG. We
consider graphs that are representations of software programs and show experimentally that our approach outperforms other existing approaches. Experiments show
the performance of both algorithms.
1 Introduction
We wish to generate artificial graphs that are similar to real ones: by “real”, we
mean a graph that is observed in the real world; as we know, there is ample evidence
that graphs coming from the real world are not Gilbert, or Erdös-Rényi graphs, but
exhibit more structure. The motivations range from pure intellectual curiosity to,
for instance, being able to test ideas on a set of graphs when only one is available
(the WWW, a social network), or understanding which are the key properties of a
graph. This paper is considering directed, un-looped, un-weighted, sparse graphs
of moderate sizes (number of nodes ranging from 100 to a couple of thousands of
nodes); by sparse, we mean that the number of edges is of the order of the number
of vertices, and typically scales like aN, with a very small with regards to N (say
Georgios Papoudakis e-mail: giwrpapoud@gmail.com
Philippe Preux e-mail: firstname.lastname@inria.fr
Université de Lille, CRIStAL & Inria, Villeneuve d’Ascq, France.
Martin Monperrus
KTH Royal Institute of Technology, Sweden e-mail: firstname.lastname@csc.kth.se
a ≤ 10 to give an idea of its value). We assume weak connectivity of the graph. As a
case study, we experiment with graphs extracted from software programs; beyond a
better understanding of software programs, such graphs may be used e.g. to improve
software development and track the sources of bugs [8, 7].
To generate a graph in this context, one may use an algorithm that builds a graph
given the degree distribution of the real graph (see [5] and followers), or its adjacency matrix [3], or some other structure (see [11] and references therein). As we
wish to understand and model the creation of the graph, and as real graphs are often
dynamic, we are more interested in a second type of algorithms that build a graph
incrementally. Another motivation is that we do not want to generate graphs that
have the exact same number of vertices, or the exact same degree distribution, or
anything identical to the real one. A reason for this is that if we consider the degree
distribution, two graphs having the same degree distribution may be very different
regarding their other properties; in the other way around, two graphs that have more
or less slightly different degree distribution, may have very similar properties. The
properties we are interested in are of various natures: connectivity, diameter, average
path length, transitivity, modularity, assortativity, spectral properties, degree distribution. Furthermore, when considering degree distribution or spectral properties, it
is not clear how to meaningfully measure the difference between two degree distributions: mean squared distance, Kolmogorov-Smirnov statistics, Kullback-Leibler
divergence, Jensen-Shannon distance. Finally, an important property of the generator is its stability. We identify two types of stability: the first is that for a given set
of parameters, the graphs that are generated should have approximately the same
characteristics; the other is that the graphs generated by a set of parameters should
not change too much when the value of the parameters change a bit (sort of continuity of the properties of the generated graphs in the space of parameters of the
generator).
Modeling and generating static graphs is important, but we are really interested
in modeling the evolution of a graph. Though some works exist [4], the issues mentioned above take yet another aspect when considering the evolution of a graph. We
see the evolution of a graph as time series of graphs, that is a set of couples {(t, g)}.
We wish to generate the whole series of graphs with a single algorithm. Succeeding in this endeavor, we would access to general properties of the graph and the
evolution process, as well as being able to predict the next graphs.
The content of this paper is as follows: in section 2, we propose the Sparse Digraph Generator (SDG) which is an algorithm that generates graphs that fit our
requirements; we then show that the degree distribution follows a power law distribution; we also show that the in-degree and the out-degree distributions are not
identical, something often observed in real digraphs. Then, we put SDG to the test:
we introduce the real graphs we work with and show how our generator performs.
As we are interested in the modeling of the evolution of a dynamic graph, we introduce Sparse Evolving Digraph GEnerator (SEDGE) which is an incremental version
of SDG in section 4 and put it to the test in section 5. Then, we conclude and draw
some final remarks.
For the sake of reproducible research, all the experiments may be reproduced
with the material freely available at https://github.com/papoudakis/sparse-digraph-generator.
2 The Sparse Digraph Generator: SDG
We present a novel algorithm that aims at generating sparse digraphs. It is outlined
in algorithm 1. SDG starts by creating a digraph made of N isolated nodes and then,
at each iteration, it adds a link between two nodes. To add a link, SDG selects two
nodes, one as output, and the other as an input node. The selection of either node is
performed either at random or following a preferential attachment rule.
Algorithm 1 Outline of SDG
1: Input: Number of nodes: N
2: Input: Number of edges: E (assumed to be ≪ N^2)
3: Input: Parameters e1 and e2, both in the range [0, 1]
4: Output: Generated graph G
5: G ← DiGraph (with N nodes and no edge)
6: for t ∈ {1, ..., E} do
7:     ⊲ Selection of the node that the edge will start from
8:     With probability e1: out ← select a node uniformly at random()
9:     Otherwise: out ← select a node by preferential attachment
10:    ⊲ Selection of the node that the edge will end to
11:    With probability e2: in ← select a node of in-degree 0()
12:    Otherwise: in ← select a node by preferential attachment
13:    G.add edge(out, in)
return G
We consider sparse digraphs in which the number of edges E is aN, where a ∈
(1, 10). Such digraphs are quite common in applications and they are quite specific
with regards to their properties: for instance, there is usually a very small number
of paths to navigate from one node to another. It is often the case that the in-degree
and the out-degree distributions do not have the same shape. SDG achieves this: if e1 ≠ e2, the parameters of the power laws of the in-degree and out-degree distributions
are different.
The selection of a node to connect to or from is either uniformly at random
(among all nodes at line 8, among nodes of in-degree 0 at line 11), or with a probability proportional to the degree of the node, that is we use a linear preferential
attachment rule.
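As an illustration only (not the authors' implementation), a minimal Python sketch of Algorithm 1 follows. The fallback to uniform sampling while all degrees are still zero, and the uniform fallback when no node of in-degree 0 remains, are choices the pseudocode leaves open.

import random

def sdg(N, E, e1, e2, seed=0):
    """Sketch of SDG (Algorithm 1): N isolated nodes, then E directed edges."""
    rng = random.Random(seed)
    out_deg, in_deg = [0] * N, [0] * N
    edges = []

    def preferential(deg):
        # linear preferential attachment; uniform while all degrees are still zero
        total = sum(deg)
        if total == 0:
            return rng.randrange(N)
        return rng.choices(range(N), weights=deg, k=1)[0]

    for _ in range(E):
        # tail of the new edge (lines 8-9)
        out = rng.randrange(N) if rng.random() < e1 else preferential(out_deg)
        # head of the new edge (lines 11-12)
        zero_in = [v for v in range(N) if in_deg[v] == 0]
        if rng.random() < e2 and zero_in:
            head = rng.choice(zero_in)
        else:
            head = preferential(in_deg)
        out_deg[out] += 1
        in_deg[head] += 1
        edges.append((out, head))
    return edges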
In the rest of this section, we derive the form of the in-degree and out-degree
distributions resulting from SDG. We show that both distributions follow a power
law, though of different parameters.
2.1 The in-degree distribution
After the completion of the t-th iteration of SDG, the graph is made of t edges. So, the probability for a node of degree k to be selected by linear preferential attachment is k/t. Additionally, we assume that e2 < N/E, so the expected number of nodes that have in-degree 0 is bigger than 0: E[N − e2 E] > 0.
Let Dk (t) be the number of nodes with in-degree k at timestep t. For k > 1, Dk (t)
decreases at timestep t only if a node with in-degree k is selected due to preferential
attachment (line 12). So the probability that Dk decreases at iteration t is:
(1)    (1 − e2) · (k/t) · Dk(t),
where the factor (1 − e2) is the probability of selecting a node by preferential attachment and k/t is the probability of choosing a given node of degree k.
Similarly, Dk (t) increases only if a node with in-degree k − 1 is selected due to
preferential attachment. So the probability that Dk increases at iteration t is:
(2)    (1 − e2) · ((k − 1)/t) · D_{k−1}(t)
Let dk (t) = E[Dk (t)]. It follows that the expected change in the number of nodes
of degree k at iteration t is:
(3)    d_k(t + 1) − d_k(t) = (1 − e2) [(k − 1) d_{k−1}(t) − k d_k(t)] / t
We set c2 = 1 − e2 and we assume that d_k(t) = p_k t, so we get:
(4)    p_k = c2 ((k − 1) p_{k−1} − k p_k)
(5)    p_k = (1 − ((1 + c2)/c2) / (1/c2 + k)) p_{k−1}
Assuming that k ≫ 1/c2
and using the binomial approximation we come up with:
(6)    p_k ≈ (1 − (1 + c2)/(c2 k)) p_{k−1} ≈ ((k − 1)/k)^{(1+c2)/c2} p_{k−1}
Finally, we calculate the values of p0 and p1 and we iterate the equation until k = 2.
(7)    p_k ≈ ((k − 1)/k)^{(1+c2)/c2} · ((k − 2)/(k − 1))^{(1+c2)/c2} · ... · (1/2)^{(1+c2)/c2} · p_1
(8)    p_k ≈ p_1 k^{−(1+c2)/c2}
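As a quick sanity check on (8) — our own illustration, not from the paper — one can generate a graph with the sdg sketch above and fit the tail of the in-degree distribution; the crude least-squares fit on raw counts is only indicative.

import numpy as np
from collections import Counter

edges = sdg(N=2000, E=10000, e1=0.45, e2=0.1)   # e2 < N/E = 0.2, as assumed in the derivation
in_deg = Counter(v for _, v in edges)
freq = Counter(in_deg.values())                  # k -> number of nodes of in-degree k
ks = np.array(sorted(k for k in freq if k >= 3))
pk = np.array([freq[k] for k in ks], dtype=float)
slope, _ = np.polyfit(np.log(ks), np.log(pk), 1)
c2 = 1 - 0.1
print(f"fitted tail exponent {-slope:.2f}, predicted (1 + c2)/c2 = {(1 + c2)/c2:.2f}")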
2.2 The out-degree distribution
In this section, Dk (t) is the number of nodes with out-degree k at iteration t. Starting
with the same assumptions as before, we can write that the number of nodes with
out-degree distribution k decreases if a node with out-degree k is selected due to
preferential attachment with probability 1 − e1 or if such a node is selected from a
uniform distribution with probability e1 . This second possibility is different from the
analysis we did for the in-degree distribution. So, the probability that Dk decreases
at iteration t is:
Dk (t)
(9)    e1 Dk(t)/n + (1 − e1) k Dk(t)/t
e1
Dk−1 (t)
Dk−1 (t)
+ (1 − e1)(k − 1)
n
t
(10)
After following the same steps as before we end up with:
dk (t + 1) − dk(t) = (1 − e1)
(k − 1)dk−1(t) − kdk (t)
dk−1 (t) − dk (t)
+ e1
t
N
(11)
Assuming that the solution is like dk (t) = pk t and by setting c1 = 1 − e1, we can
prove that at the final timestep t = E:
pk ≈ p1 (k +
1
(1 − c1) E − 1+c
) c1
c1 N
(12)
2.3 Discussion & Related Work
We have shown that the in-degree and the out-degree distributions of the graphs
generated by SDG exhibit a power law. This may come as a surprise to the reader,
well aware of earlier works, such as [1]. Indeed, our graph is not growing, keeping a
set of N nodes, connecting them along the iterations of the algorithm. However, the
departure from a power law is expected when the number of iterations is approximately N 2 , that is when the graph gets dense. However, as we emphasized it earlier,
we only consider sparse graphs, and the number of iterations, hence the number of
edges, remains O(N), hence much less than N 2 .
It is worth noting that the power law coefficients of graphs generated by SDG
are the same as those of graphs produced by Bollobas et al., though the algorithms
are slightly different. Actually Bollobas et al. results come as special cases of our
analysis.
SDG departs from the usual Barabasi-Albert type of algorithms because it generates directed graphs. Strictly speaking, our algorithm generates a variant of a Price
graph [9] and setting e1 to 0, e2 to 1, a kind of Price’s algorithm which adds one
edge at a time is recovered. SDG comes very close to the one studied by Bollobas
et al. [2] though only SDG is able to add two vertices at once, in a single iteration.
3 Experimental study of SDG
In this experimental section, we mainly study two questions:
• which algorithm performs the best to produce graphs that are similar to some real
graphs?
• the stability of SDG with regards to its parameters.
We compare our algorithm with GDGNC [6] where it is shown to be the best
graph generator available in the context of software graphs. We also compare our
model with Bollobas et al.’s since they are quite similar: it is interesting to check
how the small difference in these 2 algorithms convert into difference of performance. We have compared SDG with other algorithms (Kronecker graphs, ...) but
since they perform poorly and due to space limitations, we do not report them. The
experiments are performed with 10 major software programs taken from the maven
dataset [10]. Table 1 summarizes the basic features of our dataset.
Software (version)       Nodes   Edges   Edges/Nodes   Diameter
ant (1.5.1)                266    1427          5.36          6
findbugs (0.6.4)            56     183          3.27          5
freemarker (1.5.3)          76     358          4.71          7
hibernate (1.2)            365    1916          5.25          7
htmlunit (1.10)            219     934          4.26          7
jasperreports (3.1.2)     1139    7460          6.54          7
jparsec (0.2.2)             75     203          2.71          5
ojb (0.5.200)              179     766          4.28          6
pmd jdk14 (4.1.1)          521    3049          5.85          8
spring core (1.0.1)        112     337          3.01          7
Table 1 Statistics of the dataset used in the experiments reported in section 3.
In the literature, the measure of similarity between two graphs is not very well
defined. In this paper, we measure the similarity between the generated graph (gg)
and the original graph (go) using the following set of metrics:
• The Kolmogorov-Smirnov statistic (KS) of the in-degree and out-degree distributions. Let CDFg denote the cumulative degree distribution function of a graph
g, so that CDFg (k) = ∑i≤k Dk where Dk is the degree distribution of graph g.
Then, KS = maxk |CDFgg (k) − CDFgo (k)|. We denote KSin (resp. KSout ) the KS
statistics regarding in-degree (resp. out-degree) distribution.
• The mean squared distance (MSD) of the sorted in-degree and out-degree distributions. For each generated graph g we consider the in-degree and out-degree of each node, and sort these two lists to obtain din,g and dout,g. Then: MSD_in = (1/N) ∑_i (d_in,gg(i) − d_in,go(i))^2 and MSD_out = (1/N) ∑_i (d_out,gg(i) − d_out,go(i))^2.
The MSD can only be used for SDG and GDGNC because they generate the
same number of nodes as the original graph. On the contrary, Bollobas et al.'s
model does not necessarily produce graphs with the same number of nodes.
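For concreteness, a small Python/NumPy sketch of the two metrics as we read them (an illustration; it assumes cumulative distributions normalized to [0, 1] and, for the MSD, degree sequences of equal length):

import numpy as np

def ks_statistic(deg_gen, deg_orig):
    """KS distance between the empirical CDFs of two degree sequences."""
    deg_gen, deg_orig = np.asarray(deg_gen), np.asarray(deg_orig)
    kmax = int(max(deg_gen.max(), deg_orig.max()))
    diffs = [abs((deg_gen <= k).mean() - (deg_orig <= k).mean()) for k in range(kmax + 1)]
    return max(diffs)

def msd(deg_gen, deg_orig):
    """Mean squared distance between sorted degree sequences of equal length."""
    a, b = np.sort(deg_gen), np.sort(deg_orig)
    return float(np.mean((a - b) ** 2))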
We perform a grid search in order to determine the parameters of each model that
best fit for each graph. SDG and GDGNC are optimized to minimize the maximum
value between MSDin and the MSDout : minimize{max(MSDin , MSDout )}. As MSDin
and MSDout are irrelevant for it, Bollobas et al. model is optimized to minimize the
KS statistic. This may be seen as a caveat in our experiments, but we provide ample
observations below to convince the reader that if we were tuning the parameters of
the 3 models with the same metrics, the conclusions of the experiments would not
change much. The experiments presented in table 2 below are performed with the
optimal parameters for each software, averaged over 100 generated graphs. Table 2
provides the average value of KS and MSD for each model and each software.
                      KSin                     KSout                    MSDin             MSDout
Software        SDG   GDGNC  Bollobas    SDG   GDGNC  Bollobas    SDG    GDGNC      SDG    GDGNC
ant             0.26  0.24   0.39        0.16  0.17   0.34        17.4   30.45      1.89   2.58
findbugs        0.29  0.41   0.37        0.33  0.35   0.37        2.32   3.74       1.24   2.65
freemarker      0.23  0.23   0.4         0.48  0.49   0.38        3.11   6.14       4.89   6.76
hibernate       0.38  0.41   0.33        0.22  0.32   0.32        14.38  21.87      3.14   9.23
htmlunit        0.37  0.37   0.42        0.31  0.36   0.44        12.67  20.68      3.92   8.45
jasperreports   0.24  0.24   0.28        0.35  0.43   0.29        32.37  97.72      16.1   37.43
jparsec         0.22  0.22   0.41        0.36  0.47   0.42        0.69   2.72       4.6    9.98
ojb             0.26  0.25   0.44        0.21  0.27   0.4         3.6    6.36       0.77   3.41
pmd jdk14       0.27  0.28   0.28        0.5   0.56   0.41        14.92  114.9      32.08  54.54
spring core     0.36  0.4    0.4         0.23  0.34   0.3         2.54   4.67       1.2    3.72
Table 2 Comparison of SDG with GDGNC and Bollobas et al. in terms of MSD and KS for 10
Java software graphs. Bold faces indicate best results.
We can clearly see that SDG performs better than both GDGNC and Bollobas
et al. model. Additionally, SDG is much more stable than the other models. That
means that given the parameters of the generator the graphs that are produced are
similar. In table 3, we give the average of the standard deviation for the experiments
that appear in table 2.
Model      KSin            KSout           MSDin           MSDout
SDG        0.093 ± 0.012   0.084 ± 0.023   3.94 ± 2.55     1.33 ± 1.07
GDGNC      0.091 ± 0.01    0.081 ± 0.013   19.78 ± 26.6    4.01 ± 3.89
Bollobas   0.102 ± 0.029   0.099 ± 0.025   —               —
Table 3 Mean and standard deviation of standard deviation values of MSD and KS on 10 Java
software graphs.
From table 3 we can see that the standard deviation values of SDG are on the same level as or smaller than those of both GDGNC and Bollobas et al. But the most important property of SDG is that it can create graphs similar to the original one without the parameter optimization process that both other models require in order to perform decently. For each software, we generate 100 graphs and we compute the average KSin, KSout, MSDin, and MSDout. All the experiments are performed with the same values e1 = 0.45 and e2 = N/E − 0.05 for all software graphs; these values result from our experiments. Table 4 provides the results; in ()'s, we report the ratio between the SDG without and with tuning: e.g., 0.14 (0.9) in the first row of column KSout means that KSout is 0.14 without tuning, and 0.14/0.9 with tuning. The value of
KS without tuning may be smaller than with tuning because the parameter tuning is
performed to minimize MSD.
Software        KSin         KSout        MSDin          MSDout
ant             0.25 (1.0)   0.14 (0.9)   20.54 (1.2)    0.89 (0.5)
findbugs        0.3 (1.0)    0.34 (1.0)   2.66 (1.1)     1.34 (1.1)
freemarker      0.24 (1.0)   0.46 (1.0)   3.43 (1.1)     5.31 (1.1)
hibernate       0.29 (0.8)   0.3 (1.4)    27.16 (1.9)    13.27 (4.2)
htmlunit        0.33 (0.9)   0.29 (0.9)   12.84 (1.0)    5.24 (1.3)
jasperreports   0.21 (0.9)   0.43 (1.2)   119.42 (3.6)   49.16 (3)
jparsec         0.25 (1.1)   0.42 (1.2)   1.52 (2.2)     8.41 (1.8)
ojb             0.33 (1.3)   0.27 (1.3)   13.47 (3.7)    2.36 (3.1)
pmd jdk14       0.33 (1.2)   0.54 (1.1)   61.67 (4.1)    45.43 (1.4)
spring core     0.3 (0.8)    0.25 (1.1)   2.63 (1.0)     2.43 (2.0)
Table 4 MSD and KS without tuning parameters: numbers in ()’s gives the ratio between the
measurement without tuning and the measurement with tuning.
From table 4 we see that in most cases, SDG, without parameter tuning, performs
better than both GDGNC and Bollobas et al. model after parameter tuning. Another
very nice property is that the performance does not change very much as the value
of a parameter is changing: there is some sort of continuity of the performance of
SDG with regards to the value of parameters. This is a very nice property, as this
implies that to tune the parameters of SDG, a coarse grid search is enough and
computationally cheaper.
Figure 1 provides a graphical illustration of these measurements: we plot the
in-degree distribution, the out-degree distribution, and the spectra of the adjacency
matrix for the real graph and for the graphs generated by each algorithm we compare
to.
To conclude this part, let us stress that SDG uses two pieces of information: the
number of nodes N and the number of edges E. We have shown that SDG produces
graphs whose degree distributions follow power laws. When we want to generate
graphs similar to a real one, both N and E are available, and we have shown that e1
and e2 , the parameters of SDG, are not that important to obtain satisfying graphs.
[Figure 1 plots, image data not reproduced: "In-degree distribution of ant" and "Out-degree distribution of ant" (cumulative frequency vs. degree, log-log axes), and "Sorted spectrum of ant" (eigenvalue vs. eigenvalue index).]
Fig. 1 In-degree distribution, out-degree distribution, and spectrum for the real graph and the generated graphs.
Another point is that the occasional addition of 2 nodes instead of 1 seems beneficial
since this is the only difference between SDG and Bollobas et al. approach.
Finally, it is important to refer to the metrics we use to compare graphs and
the metrics we use to optimize the parameters of the algorithm. As said earlier,
it is not known how to assess the similarity of two graphs using a single metric;
instead, we use a series of metrics (and more may be used) to formalize the idea of
similarity between two graphs. The metrics we use are recognized as very important
to characterize a graph: the degree distribution, and the spectrum. We have found
that optimizing using the degree distributions leads to better results. We see that as a
primary observation, other spectral information might be used, and other properties
may be used too. Furthermore, a combination of metrics may be optimized or used
to judge the similarity: this is left as future work.
4 SEDGE: modeling the evolution of a graph
We consider a model of evolution of the real graph that is version-oriented. As
the real graphs we consider are software, considering a sequence of versions of a
software, the graphs along this sequence evolve by part: by that, we mean that the
set of nodes and the set of edges evolve by chunks: from one graph to the next one
(one version of a software to the next one), a set of nodes are added, some nodes are
removed, and it is the same for the edges. So, we consider an algorithm that takes a
graph as input, and then adds a set of nodes and a set of edges, possibly removing
some existing nodes and edges.
We propose the “Sparse Evolving Digraph GEnerator” SEDGE (see algorithm 2),
a model to capture the evolution of software graphs based on the generative model
that we proposed in section 2. SEDGE is an extension of SDG. It distinguishes
existing nodes from new nodes. At each timestep, SEDGE chooses two nodes to
connect, sampling them from either set of nodes, based on 2 parameters that act as
probabilities α and β .
Algorithm 2 SEDGE: a generative model for sparse digraph evolution. SAMPLE A NODE samples nodes in exactly the same way Algorithm 1 does. new nodes refers to the N new nodes that are added to the current graph. all nodes refers to all nodes of the new graph.
1: Input: Number of nodes to add N new
2: Input: Number of edges to add E new
3: Input: Parameters α, β, e1, e2, all in the range [0, 1]
4: Input: Current graph G cur
5: Output: Generated graph G new
6: function SAMPLE A NODE(so, si, e1, e2)
7:     With probability e1: out ← select a node uniformly at random(so)
8:     Otherwise: out ← select a node by preferential attachment(so)
9:     With probability e2: in ← select a node of in-degree 0(si)
10:    Otherwise: in ← select a node by preferential attachment(si)
11:    return (in, out)
12: end function
13: G new ← G cur.add nodes(N new)
14: for t ∈ {1, ..., E new} do
15:    With probability α: (in, out) ← SAMPLE A NODE(all nodes, new nodes, e1, e2)
16:    With probability β: (in, out) ← SAMPLE A NODE(new nodes, all nodes, e1, e2)
17:    Otherwise: (in, out) ← SAMPLE A NODE(all nodes, all nodes, e1, e2)
18:    G new.add edge(out, in)
return G new
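A minimal Python sketch of Algorithm 2, following the same conventions as the SDG sketch in section 2. The interpretation of the three branches as mutually exclusive events with probabilities α, β and 1 − α − β, and the uniform fallbacks inside the sampling routine, are our own assumptions.

import random

def sedge(g_cur_edges, n_cur, n_new, e_new, alpha, beta, e1, e2, seed=0):
    """Sketch of SEDGE: grow a current graph (edges over nodes 0..n_cur-1)
    by n_new nodes and e_new directed edges."""
    rng = random.Random(seed)
    n_tot = n_cur + n_new
    out_deg, in_deg = [0] * n_tot, [0] * n_tot
    for u, v in g_cur_edges:
        out_deg[u] += 1
        in_deg[v] += 1
    new_nodes = list(range(n_cur, n_tot))
    all_nodes = list(range(n_tot))
    edges = list(g_cur_edges)

    def preferential(pool, deg):
        weights = [deg[v] for v in pool]
        if sum(weights) == 0:          # fallback while the pool has no edges yet
            return rng.choice(pool)
        return rng.choices(pool, weights=weights, k=1)[0]

    def sample_a_node(so, si):
        # lines 7-11 of Algorithm 2: out from so, in from si
        out = rng.choice(so) if rng.random() < e1 else preferential(so, out_deg)
        zero_in = [v for v in si if in_deg[v] == 0]
        if rng.random() < e2 and zero_in:
            node_in = rng.choice(zero_in)
        else:
            node_in = preferential(si, in_deg)
        return node_in, out

    for _ in range(e_new):
        r = rng.random()
        if r < alpha:
            node_in, out = sample_a_node(all_nodes, new_nodes)
        elif r < alpha + beta:
            node_in, out = sample_a_node(new_nodes, all_nodes)
        else:
            node_in, out = sample_a_node(all_nodes, all_nodes)
        out_deg[out] += 1
        in_deg[node_in] += 1
        edges.append((out, node_in))
    return edges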
5 Experimental study of SEDGE
In this section, we evaluate the ability of SEDGE to capture the software evolution.
For the experiments, we use 10 pairs of consecutive versions of software graphs1
from the maven dataset. With the term “first graph”, we refer to the first version of
the software and with the term “second graph” to the second version. In each pair
of these graphs, the second graph has at least 20% more nodes than the first graph.
The degree distributions and the spectrum of the graphs of two successive versions are close. For this reason, in order to perform a better evaluation of SEDGE
we compute KS and MSD only for the new nodes: doing so, we amplify the difference between the two versions. In table 5, we report on the values of KS and MSD
averaged over 100 experiments, for each real graph, given the optimal parameters
of the model.
1
(ant.1.4.1→ant.1.5), (commons collections.20030418.083655→commons collections.20031027.
000000), (hibernate.2.0.3→hibernate.2.1.1), (jasperreports.0.6.7→jasperreports.1.0.0), (jasperreports.1.0.3→jasperreports.1.1.0), (ojb.0.8.375→ojb.0.9), (ojb.0.9.5→ojb.0.9.6), (spring.1.0
→spring.1.1), (wicket.1.0.3→wicket.1.1), (wicket.1.1.1 → wicket.1.2)
First Software        Nnew   Enew   KSin         KSout        MSDin          MSDout
ant.1.4.1              116    665   0.29 (0.8)   0.4 (1.1)    5.57 (2.9)     2.37 (0.6)
commons.20030418       118    385   0.41 (1.1)   0.39 (0.9)   0.99 (1.5)     1.05 (0.9)
hibernate.2.0.3         92    853   0.39 (0.6)   0.29 (1.0)   3.52 (12.4)    3.22 (1.0)
jasperreports.0.6.7    170   1100   0.23 (1.1)   0.2 (1.2)    15.39 (1.2)    6.08 (1.8)
jasperreports.1.0.3    117   1214   0.19 (0.9)   0.25 (1.0)   22.1 (2.7)     9.2 (0.9)
ojb.0.8.375            100    555   0.31 (0.9)   0.39 (1.0)   5.72 (1.6)     1.27 (1.0)
ojb.0.9.5              120    586   0.47 (1.0)   0.36 (1.0)   1.51 (0.6)     2.4 (3.5)
spring.1.0             199    830   0.36 (1.0)   0.4 (0.8)    2.66 (3.7)     1.17 (3.2)
wicket.1.0.3            96    569   0.36 (0.9)   0.32 (1.0)   1.21 (26.4)    1.93 (1.4)
wicket.1.1.1           235   1800   0.25 (1.0)   0.2 (1.1)    4.92 (1.0)     2.75 (2.8)
Table 5 MSD and KS for 10 evolutions of software graphs of SEDGE, averaged over 100 runs
for each software. We also run the same experiments without tuning parameters: numbers in ()’s
gives the ratio between the measurement without tuning and the measurement with tuning: a value
below 1 means that it is better without tuning, above 1 that it is worse.
SEDGE has the same fundamental property SDG has: it can capture the structure of the evolved network without tuning its parameters. As in table 4, the values in ()'s in table 5 give the ratio between tuning and no tuning. We use α = 0.5, β = 0.4, e1 = 0.45 and e2 = N/E − 0.05 in the experiment without parameter tuning.
6 Conclusion and future work
In this paper, we consider the problem of generating graphs that are similar to real,
sparse digraphs. We propose SDG which generates such graphs, exhibiting power
law in their degree distributions. We show that SDG performs very well experimentally; furthermore, SDG is stable in terms of parameter tuning: we show that it
behaves very well even if we do not perform parameter tuning. Then, we propose
an extension named SEDGE which aims at generating series of sparse digraphs that
is similar to a series of real graphs. The similarity between two graphs is not well
defined; we have used different ways to measure it and we have discussed the influence on the final result of the generator. Other metrics can also be used and will
be investigated in the future. We have used SDG and SEDGE with a type of graphs
in mind; we have not defined these algorithms using any knowledge on the graphs
being modeled: we have designed the algorithms, tested them on some real graphs,
and observed the results. We think they may be used for many types of real graphs.
More importantly, considering series of graphs is a very important aspect of our
work. As real graphs are evolving, we think that we have to use dynamic models
to deal with them to really capture something about the evolution of the real graph,
and the understanding of the process underneath.
Acknowledgements
This work was partially supported by CPER Nord-Pas de Calais/FEDER DATA
Advanced data science and technologies 2015-2020, and the French Ministry of
Higher Education and Research. We also wish to acknowledge the continual support
of Inria, and the stimulating environment provided by the SequeL Inria project-team.
References
1. Barabasi, A., Albert, R.: Emergence of scaling in random networks. Science 286 (1999)
2. Bollobas, B., Borgs, C., Chayes, J., Riordan, O.: Directed scale-free graphs. In: Proc. SODA,
pp. 132–139 (2003)
3. Carstens, C.J., Berger, A., Strona, G.: Curveball: a new generation of sampling algorithms for
graphs with fixed degree sequence (2016). Arxiv.org, 1609.05137
4. Holme, P.: Modern temporal network theory: a colloquium. The European Physical Journal B
88(9) (2015)
5. Kleitman, D., Wang, D.: Algorithms for constructing graphs and digraphs with given valences
and factors. Discrete Math. 6(1), 79–88 (1973)
6. Musco, V., Monperrus, M., Preux, P.: A generative model of software dependency graphs to
better understand software evolution (2015). Arxiv, 1410.7921
7. Musco, V., Monperrus, M., Preux, P.: Mutation-based graph inference for fault localization.
In: Proc. SCAM, pp. 97–106 (2016)
8. Musco, V., Monperrus, M., Preux, P.: A large-scale study of call graph-based impact prediction
using mutation testing. Software Quality Journal 25(3), 921–950 (2017)
9. Newman, M.: The structure and function of complex networks. SIAM Review 45(2), 167–256
(2003)
10. Raemaekers, S., Deursen, A.v., Visser, J.: The maven repository dataset of metrics, changes,
and dependencies. In: Proc MSR, pp. 221–224. IEEE Press (2013)
11. Staudt, C.L., Hamann, M., Safro, I., Gutfraind, A., Meyerhenke, H.: Generating Scaled Replicas of Real-World Complex Networks, pp. 17–28. Springer International Publishing (2017)
| 8 |
Use of a speed equation
for numerical simulation of hydraulic fractures
Alexander M. Linkov
Institute for Problems of Mechanical Engineering, 61, Bol’shoy pr. V. O., Saint Petersburg, 199178, Russia
Presently: Rzeszow University of Technology, ul. Powstancow Warszawy 8, Rzeszow, 35-959, Poland,
e-mail: linkoval@prz.edu.pl
Abstract. This paper treats the propagation of a hydraulically driven crack. We explicitly write the
local speed equation, which facilitates using the theory of propagating interfaces. It is shown that when
neglecting the lag between the liquid front and the crack tip, the lubrication PDE yields that a solution
satisfies the speed equation identically. This implies that for zero or small lag, the boundary value
problem appears ill-posed when solved numerically. We suggest ε-regularization, which consists in
employing the speed equation together with a prescribed BC on the front to obtain a new BC
formulated at a small distance behind the front rather than on the front itself. It is shown that ε-regularization provides accurate and stable results with reasonable time expense. It is also shown that
the speed equation gives a key to proper choice of unknown functions when solving a hydraulic
fracture problem numerically.
Keywords: hydraulic fracturing, numerical simulation, speed equation, ill-posed problem,
regularization
1. Introduction
Hydraulic fracturing is a technique used extensively to increase the surface to or from which a fluid
flows in a rock mass. It is applied for various engineering purposes such as stimulation of oil and gas
reservoir recovery, increasing heat production of geothermal reservoirs, measurement of in-situ
stresses, control of caving in the roof of coal and ore excavations, enhancing efficiency of CO2
sequestration and isolation of toxic substances in rocks. In natural conditions, a similar process occurs
when a pressurized melted substance fractures impermeable rock leading to the formation of veins of
mineral deposits. Beginning with researchers such as Khristianovich & Zheltov (1955), Carter (1957),
Perkins & Kern (1961), Geertsma & de Klerk (1969), Howard & Fast (1970), Nordgren (1970),
Spence &Sharp (1985), Nolte (1988) numerous studies have been published on the theory and
numerical simulation of hydraulic fracturing (see, e. g., Desroches et al. 1994; Lenoach 1995;
Garagash & Detournay 2000; Adachi & Detournay 2002; Detournay et.al. 2002; Savitski & Detournay
2002; Jamamoto et al. 2004; Pierce & Siebrits 2005; Garagash 2006; Adachi et al. 2007; Mitchel et al.
2007; Kovalyshen & Detournay 2009; Kovalyshen 2010; Hu & Garagash 2010; Garagash et al. 2011
and detailed reviews in many of them). The review by Adachi et al. (2007) is specially organized to
give a comprehensive report on the computer simulation; it actually covers the present state of the art
as well. Thus there is no need to dwell on the historical background, detailed analysis of the processes
in a near-tip zone resulting in particular asymptotics and regimes of flow, general equations and
general approaches used to date. Being interested in computer aided simulation of hydraulic fracturing,
in this paper we would rather focus on a key area that needs to be addressed for further progress in
numerical simulation (Adachi et al. 2007, p. 754); this is the need “to dramatically speed up”
simulators of fracture propagation. Naturally, reasonable accuracy of results should be guaranteed. The
goal cannot be reached without clear understanding of underlying computational difficulties which
strongly influence the accuracy and stability of numerical results and robustness of procedures.
The paper addresses this issue. It presents in detail the results of brief communications by the
author (Linkov 2011a, b). Our prime objective is to delineate and to overcome the computational
difficulty caused, in essence, by strong non-linearity of the lubrication equation and by the moving
boundary. In contrast with the cited previous publications which employed the global form of the mass
balance to trace liquid front propagation in time, we explicitly write and use the local speed equation
(SE). The speed function entering the SE may serve to employ methods developed in the theory of
propagating surfaces (Sethian 1999). The SE gives also a key to proper choice of unknown functions,
which are analytical up to the front. We show that at points where the lag between the liquid front and
the crack tip is zero, the lubrication equation yields that its solution identically satisfies the SE. This
implies that for zero or small lag, the problem will appear ill-posed when solved numerically for a
fixed front at a time step, and consequently it requires appropriate regularization to have accurate and
stable numerical results. We suggest a method of regularization that employs the very source of the
difficulty to overcome it. The SE and a BC at the front are used together to derive a new BC
formulated at a small distance
behind the front rather than on the front itself. This leads to regularization which provides accurate and stable numerical results. The Nordgren model serves to
clearly display the computational features discussed. It is also used to obtain benchmarks with five
correct significant digits, at least.
2. Global mass balance, speed equation and ill-posed problem for hydraulically driven fracture
Reynolds transport theorem (e. g. Crowe et al. 2009), applied to the mass of an arbitrary volume of a
medium in a narrow channel between closely located boundaries, after averaging over the channel
width, reads:
(2.1)    dM/dt = ∫_{S_m} ∂(ρw)/∂t dS + ∫_L ρ w v_n dL,
where dM / dt is the external mass coming into or out of the considered volume per unit time, S m is
the middle surface of the volume, w is the width (opening) of the channel, L is its contour, ρ is the
mass density averaged over the width, vn is the normal to L component of the particle velocity also
averaged over the width; the normal n is assumed to be in the plane tangent to S at a considered point.
Applying (2.1) to the total volume of the medium with the middle surface S t and contour Lt at a
moment t we have:
(2.2)    dM_e/dt = ∫_{S_t} ∂(ρw)/∂t dS + ∫_{L_t} ρ_* w_* v_{n*} dL.
Here, and henceforth, the star denotes that a value refers to the medium front. When writing (2.2)
we assume that there is no significant sucking or vaporization through the front. Then the speed V* of
the front propagation coincides with the normal component vn of the particle velocity. Thus we have
the key equation:
(2.3)    V_* = v_{n*} = dx_{n*}/dt,
where xn* is the normal to the front component of a position vector of a point on the front; in global
coordinates, x x * (t ) is a parametric equation of the front contour Lt . As a rule, the tangent
component of the particle velocity is small as compared with the normal component at the front. In this
case, the front moves with the speed exactly equal to the velocity of fluid particles comprising it, and
we have V_* = v_{n*} = v_*, where v_* is the vector of the particle velocity averaged over the front width.
For incompressible homogeneous liquid, ρ = const, and (2.2) may be written as the equation of the
liquid volume balance:
(2.4)    dV_e/dt = ∫_{S_t} ∂w/∂t dS + ∫_{L_t} w_* v_{n*} dL,
where V_e = M_e/ρ.
Comment 1. For the 1-D case, the volume conservation equation (2.4) yields
∫_0^{x_*(t)} w(x, t) dx = V_e(t).
Then for a 'rigid-wall' channel (w = w(x), ∂w/∂t = 0), integration gives the front location as a function of time: x_*(t) = f(V_e(t)). It is easily seen from the last expression that if the width w(x) decreases fast enough with growing x, then for a prescribed influx V_e(t), the front coordinate turns to infinity at a finite time t_* (x_*(t_*) = ∞). A solution does not exist for t ≥ t_*. For instance, in the case of the constant influx rate q_0 (V_e(t) = q_0 t) and exponentially decreasing width (w = a exp(−αx), a, α > 0), the front turns to infinity as t → t_* = a/(αq_0). There is no solution for t ≥ t_*. This clearly indicates
that problems involving flow of incompressible liquid in a thin channel are quite tricky. A solution
may not exist or it might be difficult to find the solution numerically, especially when the rigidity of
channel walls is very high (according to Pierce & Siebrits (2005), high rigidity leads to a stiff system
of ordinary differential equations, when solving a boundary value problem by finite differences).
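As an illustration of Comment 1 (with arbitrary values a = 1, α = 2, q0 = 0.5, not taken from the text), the front position in the exponential-width channel can be tracked in closed form and blows up at t_* = a/(αq0):

import math

a, alpha, q0 = 1.0, 2.0, 0.5          # channel width w(x) = a*exp(-alpha*x), influx rate q0
t_star = a / (alpha * q0)             # blow-up time from Comment 1

def front_position(t):
    """Front location x_*(t) solving (a/alpha)*(1 - exp(-alpha*x_*)) = q0*t."""
    s = 1.0 - alpha * q0 * t / a
    return -math.log(s) / alpha if s > 0 else math.inf

for t in [0.25 * t_star, 0.5 * t_star, 0.9 * t_star, 0.99 * t_star, t_star]:
    print(f"t/t_* = {t / t_star:.2f}   x_*(t) = {front_position(t):.4f}")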
By definition of the flux through the channel width (e.g. Batchelor 1967), we have q = wv. Then in (2.4) w_* v_{n*} = q_{n*} is the normal to L component of the flux through the channel width at the liquid front, and equation (2.4) may be written as
(2.5)    V_* = v_n(x_*) = q_{n*}(x_*) / w_*(x_*).
This is the speed equation (SE). Its right hand side (r. h. s) defines the so-called speed function.
Emphasize that the SE is general. It is not influenced by viscous properties of a particular
incompressible liquid and by the presence of leak-off or influxes through the walls of a thin channel.
In some cases, for instance, when a liquid flows in a fracture without a lag between the liquid front
and the crack tip, both the flux and opening turn to zero at the liquid front. Then (2.5) takes the limit
form:
(2.6)    v_n(x_*) = lim_{x→x_*} q_n(x)/w(x).
It is highly apparent that even when w* ( x * ) = 0 and qn* (x* ) = 0, the limit on the r. h. s. of (2.6)
should be finite to exclude the front propagation with infinite velocity. This suggests using the particle
velocity as an unknown in numerical calculations, because it is non-singular in entire flow region
including the front. What is also beneficial, the velocity is non-zero except for flows with return
points.
In formulations of problems for hydraulic fracturing, the average particle velocity
v(x) = q(x)/w(x) does not enter the equations. Rather, the total flux q through a cross-section is used.
Specifically, the divergence theorem applied to (2.1) in the case of incompressible liquid yields the
continuity equation in terms of the flux q and opening:
(2.7)    ∂w/∂t + div q = q_e,
where qe is the prescribed intensity of distributed external sources of liquid, q is the flux vector,
defined in the tangent plane to St at a considered point. The Poiseuille law, used for a flow of
incompressible liquid in a narrow channel, connects the flux q with the pressure gradient
(2.8)    q = −D(w, p) grad p.
Herein, D is a prescribed function or operator, gradient is also confined to the coordinates in the
tangent plane. Substitution of (2.8) into (2.7) gives the lubrication partial differential equation (PDE) at
points of liquid:
(2.9)    ∂w/∂t − div(D(w, p) grad p) − q_e = 0.
Some initial (normally zero-opening) condition is assumed to account for the presence of the time
derivative in (2.9). Being of the second order and elliptic in spatial derivatives, equation (2.9) requires
a boundary condition (BC) on the liquid contour Ll. Normally it is the condition of the prescribed flux
q0 at a part Lq and of the prescribed pressure p0 at the remaining part Lp of the contour Ll:
(2.10)    q_n(x_*) = q_0(x_*), x_* ∈ L_q;    p(x_*) = p_0(x_*), x_* ∈ L_p.
We see that neither partial differential equations (PDE) (2.7)-(2.9), nor BC (2.10) involve the
particle velocity. The latter enters only the SE (2.5) or its limit form (2.6) for points on the front. At
these points, it is defined by formula (2.3). Since according to (2.8) q_n = −D(w, p) ∂p/∂n for any direction n in a tangent plane, the flux entering the numerator on the r. h. s. of (2.5) is
(2.11)    q_n = −D(w, p) ∂p/∂n |_{x = x_*}.
Then for the hydraulic fracturing, the SE (2.5) is specified as:
(2.12)    v_n = −(1/w_*(x_*)) D(w, p) ∂p/∂n |_{x = x_*},
where n is the outward normal to Lt in the tangent plane at a considered point of the liquid front. The
r. h. s. of (2.12) specifies the speed function, which is the basic concept of the theory of propagating
interfaces (Sethian 1999). It may serve to employ level set methods and fast marching methods.
In hydraulic fracture problems, the opening is unknown. To have a complete system, the
lubrication PDE (2.9) is complemented with an equation of solid mechanics connecting the opening w
and pressure p:
(2.13)    A(w, p) = 0.
As a rule, the operator A in (2.13) is prescribed by using the theory of linear elasticity. In addition,
to let the fracture propagate, we need a fracture criterion (Otherwise, the liquid front reaches the crack
contour and stops.). Commonly, at points of the crack contour, authors impose the condition of linear
fracture mechanics:
(2.14)    K_I = K_Ic,
where K_I is the stress intensity factor (SIF), K_Ic its critical value.
At the crack contour Lc , the opening is set zero:
(2.15)    w(x_c) = 0.
In general, for physical reasons, excluding negative pressure, which tends to minus infinity at
points of the front, where the latter coincides with the crack contour, the liquid surface St is solely a
part of the crack surface Sc so that the liquid contour Lt is within the crack contour Lc . Thus, in
general there is a lag between Lt and Lc . In this case, the second of the conditions (2.10) is prescribed
at the boundary of the propagating liquid front:
(2.16)
p(x* ) p0 (x* ) .
In many cases, the change of the pressure p0 (x* ) along the liquid front is small. Consequently,
the tangential derivative p / of the pressure is small as compared with the normal derivative
p / n , and (2.8) implies that the tangential component of the flux is small as compared with its
normal component. Since v q / w , it means that the tangential component of the fluid velocity at the
front may be neglected. Then, as mentioned above, the front velocity equals to the particle velocity
itself. This case will be under discussion further on. We see that the existence of a lag, whatever small
it is, has important physical implications for fracture propagation.
Normally the lag is small and accounting for it strongly complicates a problem, while neglecting it
may be justified (e. g., Garagash & Detournay 2000). For these reasons, many papers on hydraulic
fracturing (e. g., Spence & Sharp 1985; Lenoach 1995; Garagash & Detournay 2000; Adachi &
Detournay 2002; Detournay et. al. 2002; Savitski & Detournay 2002; Jamamoto et al. 2004; Pierce &
Siebrits 2005; Adachi et al. 2007; Mitchel et al. 2007; Hu & Garagash 2010; Garagash et al. 2011)
assume that the lag is zero. Then at all points of the propagating liquid front, coinciding in this case
5
with the crack contour, the flux is zero: qn* (x* ) 0 . The latter condition is met in view of (2.15),
because the operator D(w, p) in (2.11) is such that D(0, p) 0 . Still, to satisfy both (2.15) and (2.14),
the elasticity equations require specific asymptotic behavior of the opening with the coefficient of the
asymptotic proportional to the SIF, when the latter is not zero. The SIF depends on the pressure (see, e.
g. Spence & Sharp 1985). As a result, at points of the front we have the boundary condition (2.14)
which, similar to (2.16), involves the pressure.
The fact that the lag is small and it is commonly neglected has significant consequences for
numerical calculations. To show it, consider the problem in the local Cartesian coordinates x1 ' O1 ' x2 '
with the origin O1 ' at a point x* at the liquid front and the axis x1 ' opposite to the direction of
external normal to the front at this point. We employ aforementioned advantages of using the particle
velocity v q / w . In the local system, for points close to the front, the normal component of the
1
p
velocity is v
. Then the lubrication PDE (2.9) at points near the front takes the form:
D( w, p)
w
x1 '
v
ln w
ln w
(2.17)
(v v* )
qe 0 ,
x1 '
x1 '
t x ' const
1
where the partial time derivative is evaluated under constant x1 ' . Using ln w serves to account for an
0 ) at the front. In
arbitrary power asymptotic behavior of the opening w( x1 ' , t ) C (t )(x1 ' ) (
particular, in the case of Newtonian liquid, for zero lag, the exponent α = 2/3 when in (2.14) K Ic 0
and α = ½ when K Ic 0 (Spence & Sharp 1985). In the case of non-Newtonian liquid, formulae for α
are given by Adachi & Detournay (2002). For the Nordgren problem discussed below, α = 1/3. Note
that for a flow with Carter‟s leak-off, so-called intermediate asymptotics may appear (e.g., Lenoach
1995; Kovalyshen & Detournay 2010). These asymptotics manifest themselves at some distance from
a crack contour rather than at the contour itself. For this reason, we shall not use them below in
equations involving a point on the front.
When the opening has the power asymptotic near the front, it is reasonable, in addition to the
particle velocity, to use also the variable y w1 / , which is linear in x1 ' near the front. The PDE
(2.17) then becomes
v v* y
v
1 y
(2.18)
qe 0 .
x1 '
y
x1 '
y t x ' const
1
Note that the derivative v / x1 ' and, under the supposed asymptotic, the derivatives y / x1 ' , the
multiplier (v v * ) / y and the term (1 / y) y / t in (2.18) are finite at the liquid front. Therefore, the
D( w, p )
grad p and y w1 / present a proper choice of unknown functions for
w
problems of hydraulic fracturing.
Write (2.18) as
v
y
y
(2.19)
y
(v v* )
yqe 0 .
x1 '
x1 '
t x1' const
variables v
Equation (2.19) implies that v v* at any point, where the opening w, and consequently y w1 / ,
is zero. Such are points at the front for zero lag. This means that when neglecting the lag, the SE (2.12)
is satisfied identically by a solution of the PDE (2.18). Obviously, the same holds for a solution of the
starting PDE (2.9).
We see that for zero lag, when solving the boundary value (BV) problem for (2.9), one implicitly
has satisfied the SE (2.12) additional to the prescribed BC of zero opening (2.15). Note that for the
mentioned power asymptotic of the opening, the SE may be re-written in terms of the normal
6
derivative of y
w1/
as
y
x1 ' x
Bv * , where B is a function of time only. Then at a point of the
1 0
front, we have satisfied two equations: y ( x* )
0 and
y
n
Bv* . Recall that the operator
x x*
div D( w, p)grad p is elliptic and requires only one BC, whereas actually there are two BC.
Consequently, we have a Cauchy problem for the elliptic operator. As known (e. g. Lavrent‟ev &
Savel‟ev 1999), such a problem is ill-posed in the Hadamard sense (1902). To have accurate and stable
results when solving an ill-posed problem numerically, one needs its proper regularization (e. g.
Tychonoff 1963; Lavrent‟ev & Savel‟ev 1999).
In the case when the lag is not neglected, we come to similar conclusions if the lag is small
enough. In this case, at a point of the liquid front, we have the BC of (2.16) type. As the lag is small,
the opening w* (x* ) at the liquid front is small, as well. Then from (2.19) it follows that for a solution
of the lubrication PDF, the SE is met approximately. Therefore, at points of the liquid front, in addition
1
p
D( w, p )
vn
to the BC (2.16) for the pressure, we have the approximate equation
w* (x * )
nx x
*
for its normal derivative. Hence, the problem of solving the lubrication PDF will appear ill-posed in
numerical calculations having the accuracy less than that of the approximate equation. It is reasonable
to have a method of regularization, which removes computational difficulties caused by the discussed
feature.
Comment 2. If a problem is self-similar and the lag is neglected, then integration of the lubrication
equation in the automodel coordinate (e. g. Spence & Sharp 1985; Adachi & Detournay 2002) from the
liquid front removes the difficulty: it is sufficient to seek the solution by taking into account for
asymptotic representation of the opening and pressure near the liquid front. In papers by Spence &
Sharp (1985) and Adachi & Detournay (2002), such representations meet both conditions (2.15) and
(2.6), which uniquely define the coefficients of the asymptotics at one point (the crack tip). In fact, the
authors solve an initial value (Cauchy) well-posed problem. The boundary condition of the prescribed
flux at another point (the inlet) is not used: the corresponding influx is found after obtaining the
solution of the Cauchy problem. Similar approach is applicable to the Nordgren problem. It serves us
to obtain the benchmarks in Sec. 4. Unfortunately, this method cannot be applied in a general case
when a self-similar formulation is not available.
3. Nordgren problem. Evidence of ill-posed problem
Consider the Nordgren (1972) model to see unambiguously that the BV problem is ill posed, to find a
proper means for its regularization and to obtain accurate numerical results, which may serve as
benchmarks. The analysis also confirms that the front velocity does satisfy the asymptotic equation
(2.6) despite that both the opening w(x ) and the flux qn (x ) are zero at the front.
3.1. Problem formulation
Recall the assumptions of the Nordgren (1972) problem. Similar to the Perkins-Kern (1961) model, it
is assumed that a vertical fracture of a height h (Fig. 1) is in plane-strain conditions in vertical cross
sections perpendicular to the fracture plane. The cross section is elliptical and the maximal opening w
decreases along the fracture. Nordgren‟s improvement of the model includes finding the fracture
length x (t ) as a part of the solution. Nordgren also accounts for the fluid loss due to leak-off. The
corresponding term is actually a prescribed function of time; we neglect it to not overload the analysis.
In this case, the continuity equation (2.7) reads q / x w / t 0 , where w is the average opening in
a vertical cross section, q is the flux through a cross section divided by the prescribed height h. The
7
liquid is assumed Newtonian with the dynamic viscosity
kl
. Then in (2.8), D( w, p)
kl w3 , where
1 /( 2 ) in the case of an elliptic cross section, considered by Nordgren (1972); for an arbitrary
thin plane channel, the Poiseuille value kl 1 /(12 ) is often used (e. g., Garagash & Detournay 2000;
Savitski & Detournay 2002; Garagash 2006; Mitchel et al. 2007; Hu & Garagash 2010). Thus the
equation (2.8) becomes:
p
q
kl w3
.
(3.1)
x
The dependence (2.13) between the average opening and pressure is taken in the simplest form
p kr w ,
(3.2)
found from the solution of a plane strain elasticity problem for a crack of the height h;
is the Poisson‟s ratio. Therefore, for a
k r (2 / h) E /(1 2 ) , E is the rock elasticity modulus,
non-negative opening, the pressure is non-negative behind the front. With lag neglected, the condition
(2.15) in view of (3.2) implies that the pressure becomes zero at the liquid front. The opening (as well
as the pressure) should then be positive behind the front and zero ahead of it. Under these assumptions,
there is no need in the fracture criterion (2.14).
In view of (3.1) and (3.2), the continuity equation (2.9) becomes the Nordgren PDE:
2 4
1
w
w
kl k r
0.
(3.3)
2
4
t
x
It is solved under the initial condition of zero opening
(3.4)
w(x,0) 0
for any x along the prospect path of the fracture.
The BC for the partial differential equation (PDF) (3.3) includes the condition of the prescribed
influx q0 at the fracture inlet x = 0:
(3.5)
q(0, t ) q0
and the condition that there is no lag between the crack tip and the liquid front:
w( x , t ) 0 .
(3.6)
The solution should be such that the opening is positive behind the front and zero ahead of it:
w( x, t ) 0 0 x x , w( x, t ) 0 x x .
(3.7)
The Nordgren problem consists in finding the solution of PDE (3.3) under the zero-opening initial
condition (3.4) and the BC (3.5), (3.6). The solution should comply with (3.7).
3.2. Speed equation, self-similar problem formulation, clear evidence that the problem is ill-posed
Nordgren used the conditions (3.7) to find the front propagation, rather than the global mass balance
commonly employed for this purpose (e. g., Howard & Fast 1970; Spence & Sharp 1985; Adachi &
Detournay 2002; Savitski & Detournay 2002; Jamamoto et al. 2004; Garagash 2006; Adachi et al.
2007; Mitchel et al. 2007; Hu & Garagash 2010; Garagash et al. 2011). For our purposes, we employ
the SE (2.12). In the case considered, it becomes:
1
w3
v
kl k r
,
(3.8)
3
x x x
where according to (2.3) v dx / dt .
Introduce dimensionless variables:
x
x
t
xd
, xd
, td
,v d
xn
xn
tn
dx d
dtd
v
, wd
vn
w
, pd
wn
p
, qd
pn
q
, q0d
qn
q0
qn
where xn (kl k r )1 / 5 qn3 / 5t n3 / 5 , v n xn / tn , wn qnt n / xn , pn 4k r wn , and t n , qn are arbitrary
scales of the time and flux respectively. In terms of dimensionless values, the equations (3.1), (3.2)
8
pd
, pd
xd
become, respectively,
read qd
wd 3
wd4
. Then the PDE (3.3) and the BC (3.5), (3.6)
xd
4wd ; hence, qd
2
w4
x2
w4
x
w
t
0,
(3.9)
q0 ,
(3.10)
x 0
w( x , t ) 0 .
(3.11)
From this point on, we omit the subscript d at variables and consider only dimensionless values. The
homogeneous conditions (3.4) and (3.7) do not change their form. Nordgren (1972) solved the problem
by finite differences not using the SE.
In dimensionless variables, the SE (3.8) takes the form:
4 w3
v
.
(3.12)
3 x x x
In compliance with the said in Sec. 2, the SE (3.12) gives a key to proper choice of the unknown
function. Indeed, to have the front velocity finite, the partial derivative w3 / x should not be singular
at the liquid front. This yields that w 3 is an analytical function of x and it may be represented by a
x at any instant t: w3 ( x, t )
power series in x*
j 0
a j (t )( x*
x) j . The zero-opening condition (3.11)
gives a0 (t ) 0 , while the SE (3.12) gives a1 (t ) 0.75v* (t ) . Then at the vicinity of the crack tip we
have
w( x, t )
w3 ( x, t )
a11/ 3 ( x*
a1 (t )( x*
x) O ( x*
x)1/ 3 O ( x*
x) 2
what
means
that
the
opening
behaves
as
x) 2 / 3 . Therefore, near the crack tip, in contrast with w3 , the
derivative w / x of w is singular as ( x*
x)
2/3
.
We see that using w 3 avoids unfavorable asymptotic behavior of w, while the SE (3.12) governs
the linear asymptotic behavior of w 3 . Hence, it is reasonable to use w3 as the unknown function,
rather than w as used by Nordgren (1972) or w 4 : in contrast with the derivatives of w3 , the
2 4
2 3
w
w
4 w w3
w
w
derivatives
and
are singular at the front x x . In terms of w3
2
x
3 x x
x
x2
PDE (3.9) becomes:
2
2
w3
1
w3
1 w3
0.
(3.13)
x
x 2 3w3
4 w3 t
For further discussion we use the fact that the problem is self-similar which serves to reduce PDF
(3.13) to an ordinary differential equation (ODE). We express the variables x and w via automodel
variables
and
function y( )
as x
3
t 4 / 5 , w( x)
t1 / 5 ( xt 4 / 5 ) . Then (3.13) becomes ODE with the unknown
( ):
d2y
where a( y, dy / d , )
(dy / d
dy
3
20
0,
d
0.6 ) /(3 y) . The BC (3.10) and (3.11) read:
2
a( y, dy / d , )
dy
0.75
0
y( )
q0
,
3 y (0)
0,
(3.14)
(3.15)
(3.16)
9
and the SE (3.12) becomes:
dy
0.6 .
(3.17)
Re-write (3.14) by using the expression for a( y, dy / d , ) as
d2y
1 dy
0.6
3
dy
3
y
20
(3.18)
0,
d 2
In limit
, for a solution, satisfying the BC (3.16), ODE (3.18) turns into the SE (3.17).
Hence, for ODE (3.14), at the point
, we have imposed not only the BC (3.16) for unknown
function y , but also the BC (3.17) for its derivative dy/ d . Note that equations (3.16), (3.17) imply
that the factor a in (3.14) is finite at the liquid front: lim a( y, dy / d , )
1 /(3 ) .
y
It is easy to check by direct substitution that if y1 ( 1 ) is the solution of the problem (3.14)-(3.17)
for q0 q01 with
y1 ( 2 k ) / k is the solution of the problem (3.14)-(3.17)
1 , then y 2 ( 2 )
k 5 / 6q01 with
for q02
C
( q0 )
0.6
/
and C0
2
y ( 0) /
1/
2
k ; herein, k is an arbitrary positive number. This implies that
are constants not dependent on the prescribed influx q 0 . As
(q0 ) / C , it is a matter of convenience to prescribe q 0 or . A particular value of q 0 or
may be also taken as convenient. Indeed, with the solution y1 ( 1 ) for q0 q01 , we find the solution
0.6
for any q0 : y( ) y1 ( k ) / k , where k (q01 / q0 ) 6 / 5 ,
1 / k ).
1/ k (
Let us fix . According to (3.16), (3.17), at the point , we have prescribed both the function y
and its derivative dy / d . Thus, for the ODE of the second order (3.14) we have a Cauchy (initial
value) problem. Naturally, its solution defines y(0) and dy / d
0 and consequently the flux q 0 at
0 . Hence, even a small error when prescribing q 0 in (3.15), excludes the existence of the solution
of the BV problem (3.14)-(3.16). Therefore, by Hadamard (1902) definition (see also Tychonoff 1963;
Lavrent‟ev & Savel‟ev 1999), the BV problem (3.14)-(3.16) is ill-posed. It cannot be solved without a
proper regularization (Tychonoff 1963; Lavrent‟ev & Savel‟ev 1999). To make conclusions on the
accuracy of numerical results obtained without and with regularization, it is reasonable to obtain
benchmarks.
4. Benchmark solution
The initial value (Cauchy) problem (3.14), (3.16), (3.17) is well-posed. Thus its solution provides the
needed benchmarks. To solve the system we transformed the problem (3.14), (3.16), (3.17) to the
equivalent problem in two unknowns Y1 ( ) y ( ) and Y2 ( ) dy / d . This yields the system of two
ODE:
dY1
Y2
d
(4.1)
dY2
3
a(Y1 , Y2 , )Y2
d
20
under the Cauchy conditions at the point , corresponding to (3.16) and (3.17),
Y1 ( ) 0 , Y2 ( )
0.6 .
(4.2)
The Cauchy problem (4.1), (4.2) is solved by using the fourth order Runge-Kutta scheme (see, e.g.
Epperson 2002). Calculations were preformed with double precision. For certainty, we set
= 1= 1
10
(calculation with other values of
gave the same results for C and C0 to the seventh significant
digit including). The integration step
was changed from 10-2 to 10-5, and the number of steps was
consequently changed from 102 to 105 to reach the inlet point
0 . The number of iterations for the
nonlinear factor a( y, dy / d , ) was changed from 20 to 1000. We could see that the step 10-5 and the
number of iterations 50 are sufficient to guarantee at least six correct digits. The values of the
constants C and C0 are evaluated to the accuracy of seven digits:
C 0.7570913 , C0 0.5820636.
For values depending on , we shall use the subscript 1 when they correspond to
= 1 = 1.
Thus we have:
q01 C*5 / 3 0.6288984 , 1 (0) 3 C0 =0.8349418.
Values for an arbitrary flux q0 may be obtained as
q0
0.6
/C
1.3208446q0
0.6
( 0)
,
3
C0
2
1.0051356 q00.4 .
(4.3)
1.0073486 ,
For the value q0 2 / , used by Nordgren (1972), equations (4.3) give
(0) 0.8390285 against the values given by this author to the accuracy of about one percent:
1.01 , (0) 0.83. The values of 1 3 y1 and d 13 / d 1 are presented in Table 1 with five
correct digits.
Values of ( ) and d 3 / d for an arbitrary prescribed flux q0 may be obtained from those in
the
Table
1
3
as
y1 (
k)/k ,
d
3
/d
3
1
(d
/ d 1) / k
k
1
with
k
(q01 / q0 )1.2
0.5731872 / q10.2 .
Comment 3. Table 1 shows that the derivative d
3
/ d , defining the particle velocity, is nearly
constant along the entire liquid being close to its limit value at the front
3
3
approximately linear in . Next, from the BC (3.16) we obtain
( ) / (0) 1
approximate analytical solution of the Nordgren problem is given by the equation:
w( x, t ) / w(0, t ) (1 x / x )1/ 3
0.2
3
. This implies that
/
is
. Hence, the
(4.4)
0.8
with w(0, t ) t
are found
(0) and x (t )
t ; for a given flux q0 , the values of (0) and
from (4.3). As clear from Table 1, the error of (4.4) does not exceed one percent. The graph
corresponding to the approximate solution (4.4) is indistinguishable from that given by Nordgren
(1972). Naturally, the asymptotic of the solution (4.4) agrees with the predicted asymptotic behavior
a11 / 3 ( x*
w( x, t )
, and consequently, w 3 is proportional to x
proportional to
The
(d
3
1
accurate
/ d 1) /
k
1
0.6
1.3208446q0 ,
data
k
with k
x.
(0) 1.0051356 q00.4 ,
( )
,
( ) and d
3
/d
0.2
, w( x, t )
d
3
/d
are known, we can find the front location
x (t ) , the front velocity v (t ) , the opening w( x, t ) and the particle velocity
0.8 t
and
0.5731872 / q10.2 serve to estimate errors when solving the Nordgren
problem as a BV problem. When
v (t )
3 is
x)1 / 3 near the crack tip because the conditions (4.2) guarantee that
t 0.2 ( xt
0.8
) and ( x, t )
4 0.2 d
t
3
d
( x, t ) as x (t )
3
xt 0.8
.
5. Straightforward solving self-similar BV problem. Method of regularization
5.1. Straightforward integration by finite differences
t 0.8 ,
11
Forget for a while about the SE and all the said on its influence on a BV problem. Let us see what
happens when solving the BV problem (3.14)-(3.16) in a common way by finite differences.
We performed hundreds of numerical experiments with various numbers of nodal points and
iterations and different values of the prescribed influx q01 at the inlet. Finite difference approximations
of second order for d 2 y / d 2 and dy/ d were combined with iterations for a( y, dy / d , ) . Up to
100 000 nodal points and up to 1500 iterations were used in attempts to reach the accuracy of three
correct digits, at least. The attempts failed: by no means could we have more than two correct digits.
Moreover, the results always strongly deteriorate near the liquid front. The numerical results clearly
demonstrate that the BV problem (3.14)-(3.16) is ill-posed. It cannot be solved accurately without
regularization.
As illustration, the dashed line in Fig. 2 presents a typical graph of d 3 / d , obtained under the
Nordgren boundary value q01 2 / . For comparison, the benchmark values of d 3 / d , calculated
by using the Table 1, are shown by the solid line with markers. Obviously, the results strongly
deteriorate near the liquid front
(in the considered example, the benchmark value of
equals
1.0073486).
Comment 4. Using the variable 3 ( ) , which is linear near the liquid front, removes a suggestion
that the deterioration is caused by singularity of d / d and d 2 / d 2 at the point
.
Comment 5. It is worth noting that the accuracy of two correct digits was obtained at points not too
close to the front even when using a rough mesh with a hundred or even only ten nodes. This indicates
that using a rough mesh may serve to regularize a problem when high accuracy is not needed.
5.2.
- regularization.
The numerical experiments evidently confirm that the considered ill-posed BV problem (3.14)(3.16) cannot be solved accurately without regularization. A regularization method is suggested by the
conditions (3.16), (3.17). Indeed, we may use them together to get the approximate equation
y 0.6 (
) near the front. Hence, instead of prescribing the BC (3.16) at the liquid front
,
where it is implicitly complimented by the SE (3.17), we may impose the boundary condition, which
combines (3.16) and (3.17) at a point
(1 ) at a small relative distance
1
/ from the
front:
0 .6 2 .
(5.1)
The BV problem (3.14), (3.15), (5.1) is well-posed and may be solved by finite differences.
Numerical implementation of this approach shows that with
= 10-3, 10-4 the results for the step
/ = 10-3, 10-4, 10-5, 10-6 coincide with those of the benchmark solution. The time expense is
fractions of a second. The results are stable if and
are not simultaneously too small (both and
-5
are greater than 10 ).
As could be expected, the results deteriorate when both the regularization parameter and the step
become too small. Specifically, when
=
= 10-6, the results are completely wrong. Actually,
in this case, to the accuracy of computer arithmetic, the problem is solved without regularization.
We could also see that with growing step
, the accuracy decreases and for a coarse mesh it
actually does not depend on the regularization parameter. In particular, for a quite coarse mesh with
the step
= 0.1, the accuracy is about one percent, and the results stay the same to this accuracy for
any
from 10-2 to 10-9.
The essence of the suggested regularization consists in using the SE together with a prescribed BC
to formulate a BC at a small distance behind the liquid front rather than on the front itself. We call
such an approach -regularization. The next section contains its extension to the cases when a selfsimilar formulation is not available or is not used.
y(
)
12
6. Straightforward solution of starting BV problem. Regularization
6.1. Straightforward integration by time steps with finite differences on a time step
Forget again about the SE and its influence on numerical solution of a BV problem. Try to solve the
starting Nordgren problem by common finite differences. Nordgren (1972) used straightforward
numerical integration of the problem (3.9)-(3.11) under the zero-opening initial condition with the
conditions that opening is positive behind the liquid front and zero ahead of it. This author applied
Crank-Nicolson finite difference scheme to approximate PDF (3.9) and to meet the BC (3.10), (3.11).
The resulting non-linear tridiagonal system was linearized by employing linear approximation of w 4 .
Nordgren (1972) does not include details of calculations on the initialization, the time step, the number
of nodes in spatial discretization, the number of iterations, stability of numerical results and expected
accuracy. To obtain knowledge on these issues, we also solved the system (3.9)-(3.11) in a
straightforward way by using the Crank-Nicolson scheme. The results are as follows.
Actually performing 20 iterations to account for the non-linear term w 4 is sufficient to reproduce
four digits of the fracture opening, except for close vicinity of the liquid front (Increasing the number
to 100 iterations does not improve the solution for all tested time and spatial steps.). For various time
-2
-3
-4
-2
-3
-4
steps ( t
10 , 10 , 10 ) and different spatial steps ( x 10 , 10 , 10 ) taken in various
combinations, the results are stable along the main part of the interval [0, x (t ) ], but deteriorate and
are unreliable in close vicinity of the front ( 1 x / x (t ) < 0.001). This yields changes in the third digit
of
and (0) , calculated by using x (t ) and w(0, t ) . The asymptotic behavior (4.4), as x x (t ) ,
is reproduced near the front except for its close vicinity. The results coincide with those given by
Nordgren (1972) to the accuracy of two significant digits accepted in his work.
In all the calculations, by no means could we have a correct third digit. Similar to self-similar
solution, fine meshes did not improve the accuracy as compared with a rough mesh having the step
x / x = 0.01. The results clearly show that the problem cannot be solved accurately without
regularization.
Comment 6. As mentioned in Sec. 3, the variable w has a singular partial derivative w / x at the
liquid front. To remove the influence of the singularity, we also solved the problem by using w3 as an
unknown function, because according to the SE (3.12) its spatial derivative is not singular. The
conclusions when using w3 are the same as those above. Again, by no means could we have reliable a
third digit, and results strongly deteriorated at a close vicinity of the liquid front. This shows that the
inaccuracy is caused not by singularity of the derivative w / x at the liquid front. It is caused by the
fact the BV problem, when solved by common finite differences, appears ill-posed.
6.2. Reformulation of PDE to form appropriate for using
- regularization. Numerical results
Extension of - regularization to solve PDE requires the combined use of the BC (3.11) on the front
with the SE (3.12) to impose a BC at a small relative distance from the front. The distance being
relative, we need to count it in the local system with the origin at the front. Hence it is reasonable to
( x x) / x from the front. The relative distance from the inlet is
introduce the relative distance
/ t x const of a function ( x, t ) , which enters PDE
1
x / x (t ) . The partial time derivative
and is evaluated under constant x, should be transformed into the partial time derivative
/ t
const
( x (t ), t ) evaluated under constant . Omitting routine details of the change
of the function ( , t )
of variables, we have the transformation:
v (t )
,
(6.1)
t x const
t const
x (t )
where v (t ) dx / dt . When using the variable
and transformation (6.1), equation (3.13) becomes:
13
2
Y
2
where Y ( , t )
A(Y , Y /
w3 ( x (t ), t ) , A(Y , Y /
Y
,x v )
,x v )
B(Y , x )
Y/
Y
t
0.75 x v
3Y
0,
, B(Y , x )
(6.2)
x2
.
4Y
The BC (3.10), (3.11) in new variables read:
43Y Y
3 x ς
Y ( ,t)
q0 ,
(6.3)
ς 0
1
0.
(6.4)
The SE (3.12) takes the form:
Y
0.75 x v .
(6.5)
1
Prove firstly, that the problem (6.2)-(6.4) is ill posed because like the self-similar formulation, the
SE (6.5) is met identically by a solution of PDF (6.2) under the BC (6.4) of zero opening at the liquid
front. Indeed, re-write (6.2) by using the expression for A(Y , Y / , x v ) as
2
1 Y
Y
Y
(6.6)
0.75x v
0.75x 2
0.
3
t
x ), for a solution, satisfying the BC (6.4), PDE (6.6) turns into the SE (6.5).
In limit
1 (x
Hence, for PDE (6.2), at the point
1 , we have imposed not only the BC (6.4) for unknown function
Y , but also the BC (6.5) for its spatial derivative dY / d . Therefore, we have two rather than one BC
at
1 and the problem appears ill-posed. Consequently, the starting problem (3.9)-(3.11) is ill-posed,
as well, what explains the failure to solve it to the accuracy greater than two correct digits. Note that
equations (6.4), (6.5) imply that the factor A(Y , Y / , x v ) in (6.2) is finite at the liquid front
despite its denominator 3Y turns to zero: lim A(Y , dY / d , x*v* )
1/ 3 .
Y
Y
2
The regularization of the problem (6.2)-(6.4) follows the line used for the self-similar formulation.
Like the self-similar formulation, the BC (6.4) and the SE (6.5) yield the approximate equation near
the liquid front
1:
Y ( , t ) 0.75 x (t )v (t )(1 ) ,
(6.6)
which defines the asymptotic behavior of the solution when
1 . As mentioned, the non-linear
multiplier A(Y , Y / , x v ) in PDF (6.2) is non-singular at the liquid front
1 . The factor
B (Y , x ) is finite except for the liquid front (
1 ), because the opening is positive behind the front.
Hence, similar to (5.1), we may impose the BC at the relative distance from the liquid front:
(6.7)
Y ( , t ) 0.75x (t )v (t ) ,
where
1 . In contrast with the problem (6.2)-(6.4), the problem (6.2), (6.3), (6.7) does not
involve an additional BC. We may expect that it is well-posed and provides the needed regularization.
Extensive numerical tests confirm the expectation.
We solved the problem (6.2), (6.3), (6.7) by using the Crank-Nicolson scheme and iterations for
non-linear multipliers A(Y , Y / , x v ) , B (Y , x ) at a time step. The velocity v (t ) is also iterated
by using the equation following from (6.6):
Y
3
xv .
(6.8)
4
The condition (6.8) expresses (with an accepted tolerance) the continuity of the particle velocity at
the point
. After completing iterations, the final value of v (t ) shows the new coordinate of the
t ) x (t ) v (t ) t . The value x (t
t ) is used on the next time step.
liquid front x (t
14
Initialization. In the considered problem there is no characteristic time and length. Thus, the
starting equation x (0) 0 should be adjusted to this uncertainty. The adjustment concerns the
initialization of time stepping. At t = 0 the liquid front coincides with the inlet. The opening is also
zero. Then for „small‟ time, to have the factor A(Y , Y / , x v ) finite in the entire liquid, it should
x v f ( ) with f ( )
be Y /
0.75 as
1 . Hence for „small‟ time, we have Y x v f ( )
with f (1) 0 . Substitution of Y and Y /
into the BC (6.3), using v dx / dt and integration of
0.6
the resulting ODE yield x (t )
Ct
0.8
, where C 1.25
0.75q0
0.8
3
f (0) f (0)
is a constant. Then
v (t ) 0.8Ct 0.2 and Y 0.8C 2t 0.6 f ( ) . Insertion of these equations into PDF (6.2) gives ODE (3.14)
/ . Similarly, the BC (6.3), (6.4) and the SE (6.5) turn into the
with y 0.8 f and
corresponding equations (3.15), (3.16) and (3.17). The constant C actually represents
,
corresponding to the prescribed flux q0 .
We see that when using time stepping, initialization requires solving the self-similar problem. The
latter being ill-posed, the solution is obtained by - regularization as explained in Sec. 5.2. Having its
solution, the initial data for an arbitrary chosen initialization time t 0 are found as x (t 0 )
t 00.8 ,
2 0.6
0.8 t 0 0.2 , Y ( , t0 )
t0 ( * ) . For certainty, in the following calculations we set q0 =
1, t0 = 0.01. The same regularization parameter
and the same spatial step
were used for both
the initialization and for each of the time steps.
Numerical results. Two objectives were sought in numerical tests. First, we wanted to check the
efficiency of -regularization, that is, its accuracy, stability and robustness. To this end, we used small
relative distance , small spatial step
, small time step t , and a large number of time steps.
Secondly, we checked if the beneficial features of coarse meshes, observed for self-similar
formulation, hold in time steps. Exploratory calculations have shown that fifty iterations in non-linear
terms at a time step are sufficient to reproduce seven significant digits. Thus, in further tests the
number of iterations was set equal to fifty. The benchmark solution served to evaluate the accuracy. In
particular, for q0 = 1, the benchmark values of the front position and the front velocity are
v (t 0 )
x (t ) 1.3208446 t 0.8 and v (t ) 1.0566757 t 0.2 , respectively.
Results for fine meshes and small time steps. Obviously, the accuracy at the first time steps
depends on the accuracy of the data obtained at the initialization stage. We could see that = 0.0001
and
= 0.01 provide the accuracy of 0.026% for . To keep the accuracy on this level for x (t )
and v (t ) at first time steps, the step t should be notably less than t 0 . By taking t = 0.01 t 0 = 10-4,
we had the accuracy of some 0.03% for both x (t ) and v (t ) at first hundred steps. With further
growth of the time, the accuracy of v (t ) stayed on the same level, while the relative error of x (t )
decreased. In particular, at the step m = 1000 (t = 0.11), it is 0.0077%; at the step m = 20000 (t = 2.01)
it is 0.0043%. The decrease of the relative error in x for large time is related to the growth of the
absolute value of x (Recall that x (t )
t 0.8 .). The time expense on a conventional laptop for 20000
steps with 50 iterations at a step is near 15 s. Note that in view of the growth of x (t ) in time, there is
no need to have the time step constant. By using exponentially growing time steps, the number of time
steps may be drastically reduced without loss of the accuracy. For instance, with the same time
expense we could reach time t = 36128 ( x = 5889.4) with no loss of the accuracy. The number of
iterations could be reduced as well.
There were no signs of instability in these and many other specially designed experiments. We
10 4 and
could see that for 10 2
0.01, the results are accurate and stable in a wide range of
15
values of the time step and for very large number (up to 100000) of steps. Therefore - regularization
(6.7) is quite efficient.
Results for coarse mesh and large number of time steps. For the coarse mesh
= 0.0(9) and =
0.1 at the initialization stage, we have one percent accuracy except for close vicinity of the liquid front.
There is no need to take time steps notably less than the initial time. When taking t = 0.25 t0 =
0.0025, we had x with the error not exceeding 1.2% at any time instant for 20000 equal steps. Again,
the relative error decreased with growing x . Using exponentially growing time step serves to obtain
x (t ) and v (t ) with the accuracy of one percent even when x (t ) becomes of order 10000 (recall that
the starting value is x (t0 ) 0.033170). As the number of nodes is small (only 11), the time expense is
fractions of a second. Again, there were no signs of instability. Moreover, increasing the initial time
step 3-10-fold, influenced the accuracy only of the first steps, while with growing time, the accuracy
kept to the mentioned level of about 1%. Hence, use of a coarse mesh efficiently maintains this
accuracy.
7. Extension of the regularization method to 2D fracture propagation
According to the rationale presented in the preceding section, it appears that the strategy of using regularization when tracing 2-D hydrofracture propagation is as follows. At each point of the liquid
front we introduce the local coordinate system moving with the front in the direction normal to the
front. We transform field equations to this system. The prescribed BC at the front and the SE
associated with front points are transformed accordingly. They are combined to formulate a new BC at
a small distance behind the front. Below, we follow this path.
We introduce the local Cartesian coordinates x1 ' O1 ' x2 ' with the origin O1 ' at a point x* at the
liquid front and the axis x1 ' opposite to the direction of external normal to the front at this point. As
the flux q* at the front is co-linear to the normal, its tangential component is zero. For the point x* ,
equation (2.11) written in the system x1 ' O1 ' x2 ' becomes a scalar equation qn*
D ( w, p )
p
x1'
,
x1 ' 0
and the SE (2.12) reads;
1
p
D( w, p)
w*
x1'
v
where v
obtain:
,
(7.1)
x1 ' 0
dxn* / dt is the absolute value of the front velocity. Combining (7.1) with the BC (2.16) we
p
1
D( w, p)dp v* x1 ' .
p0 w
Equation (7.2) serves to impose the BC at a small distance x1 '
p
(7.2)
from the front:
1
(7.3)
D( w, p)dp v* ,
p0 w
where p
. The - regularization consists in using the BC
p( ) is the pressure at the point
(7.3) instead of the BC (2.16).
In numerical calculations with - regularization, it is reasonable to use the aforementioned
advantages of variables which express the particle velocity v q / w . In the local system x1 ' O1 ' x2 '
moving with the front, for points close to the front, the normal component of the velocity is:
1
p
. Evaluating the partial time derivative under constant x1 ' rather than at constant
v
D(w, p)
w
x1 '
16
global coordinate, the PDF (2.9) at points near the front takes the form (2.17). As noted in Sec. 2, it is
reasonable to transform PDF (2.17) to (2.18) by introducing the variable y w1 / , where the
exponent α accounts for asymptotic behavior of the opening at the front.
As
is small, the velocity v( ) is close to v* . This serves to employ the equation
(7.4)
v( ) v *
1 / 3 , p 4w , p0 4w( x* ) 0 ,
for iterations in v* at a time step. For the Nordgren problem,
x* ; then equations (2.18), (7.3) and (7.4) are easily transformed to (6.2), (6.7) and (6.8),
respectively.
8. Conclusions
The paper presents the following conclusions.
(i)
The speed function for fluid flow in a thin channel is given by the ratio of the flux through a
cross section to the channel width at the front. Its specification for hydraulic fracturing
follows from the equation of Poiseuille type connecting the flux with gradient of pressure.
The speed function, as the basis of the theory of propagating interfaces, facilitates
employing such methods as level set methods and fast marching methods. The speed
equation for hydraulic fracturing is a general condition at the liquid front not dependent on
a particular BC defined by a physical situation ahead of the front, liquid properties, leak-off
through the channel walls and/or presence of distributed sources. When there is no lag
between the liquid front and the crack tip, both the flux and opening are zero at the liquid
front; in this case, the SE is fulfilled in the limit when a point behind the front tends to the
front.
(ii)
Using the SE gives a key to proper choice of unknown functions when solving a hydraulic
fracture problem numerically. Specifically, the particle velocity, averaged over the opening,
is a good choice, because it is non-singular at the front and non-zero in entire flow region.
Using the velocity as an unknown may also serve to avoid the stiffness of the system of
differential equations obtained in a conventional way. Another proper unknown is the
opening to the degree 1/α, where α is non-negative exponent, characterizing the asymptotic
behavior of the opening near the front.
(iii) Existence of the SE also discloses the crucial feature of the problem: for zero or small lag,
at the points of the front we actually have prescribed two rather than one BC; the BV
problem appears ill-posed when solved numerically. It requires a proper regularization to
have accurate and stable numerical results.
(iv)
Self-similar formulation of the Nordgren problem, reducing PDE to ODE, unambiguously
demonstrates that the problem is ill-posed when considered as a BV problem. Numerical
experiments confirm this theoretical conclusion: by no means could we obtain more than
two correct digits when solving the problem without regularization, and the results always
strongly deteriorate near the liquid front.
(v)
The solution of the self-similar problem, obtained when solving it as a Cauchy (initial
value) problem by the Runge-Kutta method, provides benchmarks with at least five correct
digits at entire liquid including its front. These numerical results may serve for testing
methods of hydraulic fracture simulation.
(vi)
Studying of the Nordgren problem, accounting for the SE, suggests a means to overcoming
the difficulty by - regularization. It consists in employing the SE together with a
prescribed BC on the front to formulate a new BC at a small distance behind the front
rather than on the front itself. Application of - regularization and comparing the results
with the benchmarks shows that it is robust and provides highly accurate and stable results.
(vii) Numerical experiments also disclose that using coarse spatial meshes is beneficial and
serves as a specific regularization when high accuracy is not needed. Still, although
17
involving analytical work, - regularization is superior in the possibility to guarantee
accurate and stable results and to evaluate the accuracy of calculations; it is competitive in
time expense.
(viii) The suggested - regularization may serve as a means to obtain accurate and stable results
for simulation of hydrofracture with zero or small lag. When tracing 2-D hydrofracture
propagation, - regularization is obtained by writing equations in the local coordinate
system moving with the liquid front.
Acknowledgment. The author appreciates the support of the EU Marie Curie IAPP program (Grant #
251475).
References
Adachi J.I. & Detournay E. 2002 Self-similar solution of plane-strain fracture driven by a power-law
fluid. Int. J. Numer. Anal. Meth. Geomech., 26, 579-604.
Adachi J., Siebrits E. et al. 2007 Computer simulation of hydraulic fractures. Int. J. Rock Mech.
Mining Sci., 44, 739-757.
Batchelor, G.K. 1967 An Introduction to Fluid Dynamics. Cambridge University Press.
Carter R.D. 1957 Derivation of the general equation for estimating the extent of the fractured area,
Appendix to “Optimum fluid characteristics for fracture extension” by G.C. Howard and C.R.
Fast, Drill. and Prod. Prac. API, 261-270.
Crowe C.T., Elger D.F., Williams B.C. & Roberson J.A. 2009 Engineering Fluid Mechanics, 9th ed.,
John Willey & Sons, Inc.
Descroches J., Detournay E. et al. 1994 The crack tip region in hydraulic fracturing. Proc. Roy Soc.
London, Ser. A, 447, 39-48.
Detournay E., Adachi J. & Garagash D. 2002. Asymptotic and intermediate asymptotic behaviour near
the tip of a fluid-driven fracture propagating in a permeable elastic medium. Structural Integrity
and Fracture, Dyskin A., Hu X. and Sahouryeh E. (eds). The Netherlands: Swets & Zeitlinger, 918.
Epperson J.F. 2002 An Introduction to Numerical Methods and Analysis. 556p. New York: John Wiley
& Sons, Inc.
Garagash D.I. 2006 Propagation of a plane-strain hydraulic fracture with a fluid lag: Early time
solution. Int. J. Solids Struct. 43, 5811-5835.
Garagash D.I. & Detournay E. 2000 The tip region of a fluid-driven fracture in an elastic medium.
ASME J. Appl. Mech. 67, 183-192.
Garagash D.I., Detournay E. & Adachi J.I. 2011 Multiscale tip asymptotics in hydraulic fracture with
leak-off. J. Fluid Mech. 669, 260-297.
Geertsma J. & de Klerk F. 1969 A rapid method of predicting width and extent of hydraulically
induced fractures, J. Pet. Tech. 21, 1571-1581.
Hadamard J. 1902 Sur les problemes aux derivees partielles et leur signification physique. Princeton
University Bulletin, 49-52.
Howard G.C. & Fast C.R. 1970 Hydraulic Fracturing. Dallas: Monograph Series Soc. Petrol. Eng.
Hu J. & Garagash D.I., 2010 Plane strain propagation of a fluid-driven crack in a permeable rock with
fracture toughness. ASCE J. Eng. Mech., 136, 1152-1166.
Jamamoto K., Shimamoto T. & Sukemura S. 2004 Multi fracture propagation model for a threedimensional hydraulic fracture simulator. Int. J. Geomech. ASCE, 1, 46-57.
Khristianovich S.A.& Zheltov V.P. 1955 Formation of vertical fractures by means of highly viscous
liquid. Proc. 4-th World Petroleum Congress, Rome, 579-586.
Kovalyshen Y. & Detournay E. 2009 A re-examination of the classical PKN model of hydraulic
fracture. Transport in Porous Media, 81, 317-339.
Kovalyshen Y. 2010 Fluid-Driven Fracture in Poroelastic Medium. PhD Thesis. 215 p. Minnesota
University.
18
Lavrent‟ev M.M. & Savel‟ev L.Ja. 1999 Theory of Operators and Ill-posed Problems. (Novosibirsk:
Institute of Mathematics im. S. L. Sobolev, ), ISBN 5-86134-077-3 [in Russian].
Lenoach B. 1995 The crack tip solution for hydraulic fracturing in a permeable solid. J. Mech. Phys.
Solids, 43, 1025-1043.
Linkov A.M. 2011a Speed equation and its application for solving ill-posed problems of hydraulic
fracturing. Doklady Physics, 56, 436-438.
Linkov A.M. 2011b On numerical simulation of hydraulic fracturing. Proc. XXXVIII Summer SchoolConference, Advanced Problems in Mechanics, Repino, St. Petersburg, 291-296.
Mitchell S.L., Kuske R. & Pierce A.P. 2007 An asymptotic framework for analysis of hydraulic
fracture: the impermeable fracture case. ASME J. Appl. Mech., 74, 365-372.
Nolte K.G. 1988 Fracture design based on pressure analysis. Soc. Pet. Eng. J., Paper SPE 10911, 1-20.
Nordgren R.P. 1972 Propagation of a vertical hydraulic fracture, Soc. Pet. Eng. J., August, 306-314.
Perkins K. & Kern L. F. 1961 Widths of hydraulic fractures, J. Pet. Tech., 13, 937-949.
Pierce A.P. & Siebrits E. 2005 A dual multigrid preconditioner for efficient solution of hydraulically
driven fracture problem. Int. J. Numer. Meth. Eng., 65, 1797-1823.
Savitski A. & Detournay E. 2002 Propagation of a fluid driven penny-shaped fracture in an
impermeable rock: asymptotic solutions. Int. J. Solids Struct., 39, 6311-6337.
Sethian J.A. 1999 Level Set Methods and Fast Marching Methods. Cambridge: Cambridge University
Press, 1999.
Spence D.A. & Sharp P.W. 1985 Self-similar solutions for elastohydrodynamic cavity flow. Proc. Roy
Soc. London, Ser. A, 400, 289-313.
Tychonoff A.N. 1963 Solution of incorrectly formulated problems and the regularization method.
Soviet Mathematics, 4, 1035-1038 [Translation from Russian: А.Н. Тихонов, Доклады АН
СССР, 1963, 151, 501-504].
19
Well
Hydraulic fracture
h
x
w(x,t)
O
x*(t)
y
Fig. 1 Scheme of the problem on hydraulic fracture propagation
dψ 3 /dξ
0.6
0.4
0.2
0
0
0.2
0.4
Fig. 2 Graphs of the normalized velocity d
0.6
3
/d
0.8
1 ξ
in self-similar formulation of
Nordgren‟s problem
benchmark solution
solution of ill-posed boundary value problem
| 5 |
Interference Alignment with Power Splitting Relays
in Multi-User Multi-Relay Networks
Man Chu∗ , Biao He† , Xuewen Liao∗ , Zhenzhen Gao∗ , and Shihua Zhu∗
∗ Department
arXiv:1705.06791v2 [] 23 May 2017
†
of Information and Communication Engineering, Xi’an Jiaotong University, China
The Center for Pervasive Communications and Computing, University of California at Irvine, Irvine, CA 92697, USA
Email: cmcc 1989414@stu.xjtu.edu.cn, biao.he@uci.edu, {yeplos, zhenzhen.gao, szhu}@mail.xjtu.edu.cn
Abstract—In this paper, we study a multi-user multi-relay
interference-channel network, where energy-constrained relays
harvest energy from sources’ radio frequency (RF) signals
and use the harvested energy to forward the information to
destinations. We adopt the interference alignment (IA) technique
to address the issue of interference, and propose a novel transmission scheme with the IA at sources and the power splitting
(PS) at relays. A distributed and iterative algorithm to obtain
the optimal PS ratios is further proposed, aiming at maximizing
the sum rate of the network. The analysis is then validated by
simulation results. Our results show that the proposed scheme
with the optimal design significantly improves the performance
of the network.
I. I NTRODUCTION
Energy harvesting has been envisioned as a promising
technique to provide perpetual power supplies and prolong
the lifetime of energy-constrained wireless networks, since
wireline charging and battery replacement are not always
feasible or unhazardous. In particular, simultaneous wireless
information and power transfer (SWIPT) has drawn a significant amount of attention with its advantage of transporting
both energy and information at the same time through radio
frequency signals.
The concept of SWIPT was first put forward by Varshney
in his pioneer work [1], in which he investigated the tradeoff
between information delivery and energy transfer. Recently
developed practical receiver designs for SWIPT include power
splitting (PS) and time switching (TS) [2]. For TS, the receiver
switches over time between information decoding and energy
harvesting. Differently, the receiver with PS splits the received
signal power between information decoding and energy harvesting. It has been shown that the TS is theoretically a special
case of the PS with binary PS ratios. Thus, the PS usually
outperforms the TS in terms of the rate-energy tradeoffs. With
the in-depth study on SWIPT, the research problem of how
to combine SWIPT with other key technologies has gained
much attention [3]. For the combination of the SWIPT and
relays, both single-relay and multi-relay networks have been
considered to acquire the efficient SWIPT scheme in [4][5].
In addition, the interference channel (IC) has been taken into
account for the SWIPT with relays in, e.g., [6] and [7]. The
authors in [6] analyzed the outage performance for SWIPT
both with and without relay coordination in multi-user IC
networks. In [7], the SWIPT with relays in IC networks was
studied from a game-theoretic perspective. However, none of
the aforementioned studies has directly addressed the issue
of interference for SWIPT with relays in IC networks. In
practice, interference alignment (IA) is recognized as an
efficient technique to address the issue of interference for
wireless transmissions. With the IA, the interferences are
aligned into certain subspaces of the effective channel to the
receiver, and the desired signals are aligned into interferencefree subspaces [8]. Without the consideration of relays, the
authors in [9] studied the antenna selection with IA for the
SWIPT in the IC network. While to the best of authors’
knowledge, few work has been investigated in the literature
on SWIPT with the IA in multiple AF (amplify and forward)
relay networks.
In this paper, we study an IC network where multiple
sources transmit messages to multiple users through multiple AF relays, where the energy-constrained relays harvest
energy from the RF signals and use that harvested energy to
forward the information to destinations. We propose a novel
transmission scheme by adopting the IA and the PS design. We
further propose a distributed and iterative algorithm to obtain
the optimal PS ratios at relays that maximize the sum rate
of the network. Our results show that the proposed scheme
effectively improves the performance of the network, and it
significantly outperforms benchmark schemes.
II. S YSTEM M ODEL
We consider a K × K × K dual-hop IC network consisting
of K sources, K relays, and K destinations, as shown in
Figure 1. Each source Si intends to transmit messages to its
respective destination Di with the help of its dedicated AF
relay Ri , i ∈ {1, · · · , K}. We assume that there are no direct
links between the sources and the destinations. The numbers
of antennas at each source, relay, and destination are denoted
by M , N , and L, respectively. The sets of sources, relays and
destinations are denoted by NS , NR , and ND , respectively. In
addition, the set of all source-relay-destination (S-R-D) links,
Si → Ri → Di , i = 1, · · · K, is denoted by K.
We adopt the quasi-static block Rayleigh fading model and
assume that all channels are independent of each other. The
N × M normalized channel matrix from Si to Rj is denoted
H11
S1
G11
R1
G12
H12
G1K
H1K
G21
H 21
H 22
S2
D1
G22
R2
H 2K
D2
G2K
GK 1
H K1
GK 2
HK2
SK
H KK
RK
GKK
DK
Fig. 1. Illustration of multi-user multi-relay interference MIMO network.
by Hij , and the L × N normalized channel matrix from Ri
to Dj is denoted by Gij , i, j ∈ {1, · · · , K}. We assume that
each node has the local channel state information (CSI) to
enable the distributed IA. Moreover, the distance between Si
and Rj is denoted by rij , and the distance between Ri and Dj
is denoted by mij . The correspondingly path-loss effects are
−τ
modelled by rij
and m−τ
ij , respectively, where τ ≥ 2 denotes
the path-loss exponent [10].
Half-duplex relays are considered in this work, and the two
hop transmissions operate in two time slots. In the first slot, the
sources simultaneously transmit the information and energy to
the relays. In the second slot, the relays forward the received
information to the destinations by the harvested power in the
first slot. We assume that the wireless power transfer is the
only energy access for the relays. Besides, every time slot is
assumed to be unit interval for simplicity.
III. I NTERFERENCE A LIGNMENT WITH P OWER -S PLITTING
R ELAYS
In this section, we proposed the transmission scheme for
the dual-hop IC with the IA and the PS relays.
A. IA Between Sources and Relays
In the first time slot, the IA is employed for the transmissions from sources to relays. That is, all interference signals
from unintended sources are aligned into certain interference
subspaces at any relay Ri .
As per the mechanism of IA, the following conditions
should be satisfied to have the perfect IA:
UH
i Hji Vj
= 0, ∀j 6= i,
rank(UH
i Hii Vi )
= di
M + N ≥ (K + 1) × di ,
(1)
(2)
(3)
where Vi ∈ CM ×di denotes the unitary precoding matrix at
Si , Ui ∈ CN ×di denotes the unitary decoding matrix at Ri ,
and di denotes the number of data streams from Si . Here,
the perfect IA means that the interferences are aligned into
the null space of the effective channel to the relay and can be
eliminated completely. We can interpret (1) as the condition of
having the interference-free space of the desired dimensions,
and (2) and (3) as the conditions that the desired signal is
visible and resolvable within the interference-free space [11].
When the number of links is larger than 3, i.e., K > 3,
the closed-form expressions for the precoding and decoding
matrixes cannot be obtained. Hence, we use the channel
reciprocity characteristic with the distributed and iterative IA
to determine the IA matrices in this work.
The received signal at Ri (without multiplexing the decoding matrix UH
i ) in the first time slot is given by
q
−τ
IA
pi rii
Hii Vi xi
yR
=
i
K
q
X
−τ
+
pj rji
Hji Vj xj + nAi ,
(4)
j=1,j6=i
where pi denotes the transmit power at Si , xi ∈ Cdi ×1 denotes
the normalized desired signal for Di with E{||xi ||2 } = 1, and
nAi denotes the additive white Gaussian noise (AWGN) at the
receiver antennas of Ri .
B. PS Relays
In this work, the PS scheme is adopted at the relays for
the SWIPT. The block diagram of the receiver at the relay is
shown in Figure 2. For the received signal at Ri , a fraction
0 ≤ ρi ≤ 1 of the power is used for information processing
and forwarding, and the remaining fraction 1−ρi of the power
is used for energy harvesting. We refer to ρi as the PS ratio
at Ri . We assume that the processing power required by the
transmit/receive circuits at the relay is negligible compared
with the power used for signal transmission [10].
Then, the signals received by the energy-harvesting unit at
Ri is given by
q
p
−τ
EH
yRi
=
(1 − ρi )
pi rii
Hii Vi xi
+
K
X
q
−τ
pj rji
Hji Vj xj + nAi
,
(5)
j=1,j6=i
and the harvested energy at Ri is given by
K
X
2
−τ
QRi = η (1 − ρi ) pj rji
||Hji Vj || ,
(6)
j=1
where η denotes the energy conversion efficiency. Note that
the harvested energy due to the noise nAi is very small and
thus ignored.
At the information processing unit of Ri , the decoding
matrix UH
i is multiplexed to the received signal to eliminate
(1 − ρi )Xi ρi pi Yii ρi pi b1ii +σ2
γDi (ρ) = PK
j=1,j6=i
(1 − ρj )Xj
ρj pj Yji +cji σ 2
ρj pj bjj +σ 2
1
…
x
RF Unit and
Antenna
EH
x
IP
IF
.
(12)
j=1,j6=i
nA
=
σ2
η
denotes the power normalization coefficient at Ri . The received signal at Di is given by
q
IP 2
QRi m−τ
yDi =
ii Gii βRi yRi
K
q
X
IP 2
+
QRj m−τ
ji Gji βRj yRj + nDi
Power
Splitting
+
2
+ (1 − ρi ) Xi cii ρi pi bσii +σ2 +
U , nR
q
q
−τ H
QRi m−τ
G
β
ρi pi rii
Ui Hii Vi xi
ii
R
i
ii
K
q
X q
−τ H
QRj m−τ
ρj pj rjj
Uj Hjj
+
ji Gji βRj
j=1,j6=i
Fig. 2. Power splitting model at the relay.
Vj xj +
K q
X
QRj m−τ
ji Gji βRj nRj + nDi , (11)
j=1
the interferences. The processed signals at the information
processing unit of Ri is then given by
IP 1
yR
=
i
q
−τ H
ρi pi rii
Ui Hii Vi xi +
√
ρ i UH
i nAi + nRi , (7)
2
where nRi ∼ CN (0, σR
I ) denotes the AWGN introduced
i N
by the information processing unit at the relay. In practice, the
power of antenna noise nAi is usually much smaller than the
processing noise, and hence, the antenna noise has negligible
impact on the signal processing. Then, the antenna noise nAi
is ignored for simplicity in the rest of analysis in the paper [7],
[12], and the processed signal at Ri is simplified as
IP 2
yR
=
i
q
−τ H
ρi pi rii
Ui Hii Vi xi + nRi .
(8)
C. AF Signals to Destinations
In the second time slot, the relays amplify and forward the
processed signals to destinations by the harvested energy in
the first time slot. Since the utilization of IA for the second
hop would significantly increase the system overhead and
the energy consumption at the energy-constrained relays in
practice, the IA is not adopted for the second-hop transmission.
The transmitted signal at Ri in the second time slot is then
given by
y_{R_i}^{F} = \sqrt{Q_{R_i}}\, \beta_{R_i} y_{R_i}^{IP2},   (9)
where QRi denotes the transmit power of Ri as given in (6)
and
\beta_{R_i} = \frac{1}{\sqrt{\rho_i p_i r_{ii}^{-\tau} \|U_i^H H_{ii} V_i\|^2 + \sigma_{R_i}^2}}   (10)

denotes the power normalization coefficient at R_i. The received signal at D_i is given by

y_{D_i} = \sqrt{Q_{R_i} m_{ii}^{-\tau}}\, G_{ii} \beta_{R_i} y_{R_i}^{IP2} + \sum_{j=1, j\neq i}^{K} \sqrt{Q_{R_j} m_{ji}^{-\tau}}\, G_{ji} \beta_{R_j} y_{R_j}^{IP2} + n_{D_i}
      = \sqrt{Q_{R_i} m_{ii}^{-\tau}}\, G_{ii} \beta_{R_i} \sqrt{\rho_i p_i r_{ii}^{-\tau}}\, U_i^H H_{ii} V_i x_i + \sum_{j=1, j\neq i}^{K} \sqrt{Q_{R_j} m_{ji}^{-\tau}}\, G_{ji} \beta_{R_j} \sqrt{\rho_j p_j r_{jj}^{-\tau}}\, U_j^H H_{jj} V_j x_j + \sum_{j=1}^{K} \sqrt{Q_{R_j} m_{ji}^{-\tau}}\, G_{ji} \beta_{R_j} n_{R_j} + n_{D_i},   (11)

where n_{D_i} \sim \mathcal{CN}(0, \sigma_{D_i}^2 I_N) denotes the AWGN at D_i.
In order to simplify the expression, we define X_i = \sum_{j=1}^{K} p_j r_{ji}^{-\tau} \|H_{ji} V_j\|^2, Y_{ii} = m_{ii}^{-\tau} r_{ii}^{-\tau} \|G_{ii} \tilde{H}_{ii}\|^2, Y_{ji} = m_{ji}^{-\tau} r_{ii}^{-\tau} \|G_{ji} \tilde{H}_{jj}\|^2, b_{ii} = r_{ii}^{-\tau} \|\tilde{H}_{ii}\|^2, c_{ji} = m_{ji}^{-\tau} \|G_{ji}\|^2, and the effective channel matrix \tilde{H}_{ii} = U_i^H H_{ii} V_i. In addition, we assume that \sigma_{D_i}^2 = \sigma_{R_i}^2 = \sigma^2 for any i \in \{1, \cdots, K\},
without loss of generality [7]. The signal-to-interference-plus-noise ratio (SINR) at D_i, denoted by \gamma_{D_i}(\rho), is then given by (12) as shown above, where \rho = [\rho_1, \rho_2, \cdots, \rho_K]^T denotes the vector of all relays' PS ratios. Finally, the rate of the transmission to D_i is given by \frac{1}{2} B \log_2(1 + \gamma_{D_i}(\rho)), where B denotes the frequency bandwidth and the factor 1/2 is due to the use of two time slots.
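As a hedged numerical sketch of how (12) and the resulting rate can be evaluated, the snippet below assumes that the channel-dependent scalars X_i, Y_{ji}, b_{ii}, c_{ji} and the common noise variance sigma^2 have already been computed from the channel matrices; the function names and argument layout are illustrative assumptions, not part of the original paper.

import numpy as np

def sinr_destination(i, rho, p, X, Y, b, c, sigma2):
    # Evaluate gamma_{D_i}(rho) following the structure of (12).
    # p[j]: transmit powers; X[j], Y[j][i], b[j], c[j][i]: the channel-dependent
    # constants defined in the text; sigma2: the common noise variance.
    K = len(rho)
    num = (1 - rho[i]) * X[i] * rho[i] * p[i] * Y[i][i] / (rho[i] * p[i] * b[i] + sigma2)
    den = sigma2 + (1 - rho[i]) * X[i] * c[i][i] * sigma2 / (rho[i] * p[i] * b[i] + sigma2)
    for j in range(K):
        if j != i:
            den += (1 - rho[j]) * X[j] * (rho[j] * p[j] * Y[j][i] + c[j][i] * sigma2) \
                   / (rho[j] * p[j] * b[j] + sigma2)
    return num / den

def link_rate(i, rho, p, X, Y, b, c, sigma2, bandwidth):
    # The factor 1/2 accounts for the two time slots of the relaying protocol.
    return 0.5 * bandwidth * np.log2(1 + sinr_destination(i, rho, p, X, Y, b, c, sigma2))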
IV. O PTIMIZATION OF P OWER S PLITTING R ATIO
We note that the PS ratio vector ρ plays a pivotal role in determining the network performance. Thus, we optimize the design of ρ in this section.
A. Problem Formulation
The objective of our design here is to determine a series
of PS ratios, ρ1 , · · · , ρK , to maximize the total transmission
rate of the network. The sum-rate maximization problem is
formulated as:
(P1)  \max_{\rho} R_{sum}(\rho) = \sum_{i=1}^{K} \frac{1}{2} B \log_2(1 + \gamma_{D_i}(\rho)),
      s.t.  \rho \in A,
            U_i U_i^H = I, \; V_i V_i^H = I,
            E\{\|x_i\|^2\} = 1, \; \forall i \in \{1, \cdots, K\},   (13)
where A = {ρ | 0 ≤ ρi ≤ 1, ∀i ∈ {1, · · · , K}} is the feasible
region of all PS ratios.
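For very small K, a brute-force grid search over the feasible region A can serve as a sanity check against the iterative design developed below. The following sketch assumes a caller-supplied function rate_fn(rho) returning R_sum(rho); it is an illustrative baseline, not part of the proposed scheme.

import itertools
import numpy as np

def grid_search_sum_rate(K, rate_fn, grid=np.linspace(0.05, 0.95, 19)):
    # Exhaustive search over a coarse grid of PS ratios; only practical for small K.
    best_rho, best_rate = None, -np.inf
    for combo in itertools.product(grid, repeat=K):
        rho = np.array(combo)
        rate = rate_fn(rho)
        if rate > best_rate:
            best_rho, best_rate = rho, rate
    return best_rho, best_rate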
For later reference, the first-order condition (16) used in Section IV-B and the auxiliary quantities (17) and (18) are displayed here:

\frac{\partial \gamma_{D_i}(\bar{\rho}_i, \rho_i)}{\partial \rho_i} = \frac{(z_2 z_3 + t)\rho_i^3 - (z_3 p_i b_{ii} + z_2 z_3 \sigma^2 - 2t)\rho_i^2 + (2 z_1 z_3 - z_2 z_3 \sigma^2 + t + p_i b_{ii}\sigma^2)\rho_i + z_1 z_3}{\big[z_2 + \eta(1-\rho_i) X_i c_{ii} \sigma^2 (\rho_i p_i b_{ii} + \sigma^2)^{-1}\big]^2} = 0,   (16)

z_1 = \sum_{j=1, j\neq i}^{K} \eta(1-\rho_j) X_j \, p_i b_{ii} + \sigma^2,   (17)

z_2 = \sum_{j=1, j\neq i}^{K} \eta(1-\rho_j) X_j Y_{ji} \frac{\rho_j p_j}{\rho_j p_j b_{jj} + \sigma^2} + \sum_{j=1, j\neq i}^{K} \eta(1-\rho_j) X_j c_{ji} \frac{\sigma^2}{\rho_j p_j b_{jj} + \sigma^2}.   (18)

We find that the optimization problem in (13) is not convex, and the closed-form solution of the globally optimal ρ cannot
be obtained. Thus, we develop a distributed and iterative
scheme, such that each Ri decides its own PS strategies
aiming at maximizing the individual achievable transmission
rate through the ith link, i.e., Si → Ri → Di , and the global
optimal ρ that maximizes the sum rate of all links can be then
obtained iteratively by the cooperation between the relays. The
design problem is formulated as:
K
X
i=1
max
ρ
1
B log2 (1 + γDi (ρ¯i , ρi )),
2
s.t. ρ ∈ A,
H
Ui UH
i = I, Vi Vi = I,
E{||xi ||2 } = 1, ∀i ∈ {1, · · · , K},
(14)
where ρ¯i = {ρj | 1 ≤ j ≤ K, ∀j 6= i} denotes the vector of
all links’ PS ratios except for the ith link.
B. Optimal Design
We first determine the optimal ρi at Ri that maximizes
the received SINR at Di for any given ρ¯i . The problem is
formulated as:
(P2)  \max_{\rho_i} \gamma_{D_i}(\bar{\rho}_i, \rho_i),
      s.t.  \rho \in A,
            U_i U_i^H = I, \; V_i V_i^H = I,
            E\{\|x_i\|^2\} = 1, \; \forall i \in \{1, \cdots, K\}.   (15)

We find that \gamma_{D_i}|_{\rho_i=0} = 0 and \gamma_{D_i}|_{\rho_i=1} = 0, and hence, \gamma_{D_i}(\bar{\rho}_i, \rho_i) is not a monotonic function of \rho_i on [0, 1]. We will later show the uniqueness of the extreme point of \gamma_{D_i}(\bar{\rho}_i, \rho_i) for 0 \le \rho_i \le 1, and verify that the extreme point is the optimal PS ratio maximizing the SINR.

Setting the first-order derivative of \gamma_{D_i}(\bar{\rho}_i, \rho_i) with respect to \rho_i equal to zero, we obtain (16) shown above, where t = X_i c_{ii} \sigma^2, z_3 = X_i p_i Y_{ii}, and z_1 and z_2 are given in (17) and (18), respectively. From (16), we find that \partial \gamma_{D_i}(\bar{\rho}_i,\rho_i)/\partial \rho_i|_{\rho_i=0} > 0 and \partial \gamma_{D_i}(\bar{\rho}_i,\rho_i)/\partial \rho_i|_{\rho_i=1} < 0. Thus, there exists a \rho_i \in [0, 1] that satisfies \partial \gamma_{D_i}(\bar{\rho}_i,\rho_i)/\partial \rho_i = 0. We note that a closed-form solution of \partial \gamma_{D_i}(\bar{\rho}_i,\rho_i)/\partial \rho_i = 0 cannot be obtained. Instead, we propose an iterative algorithm to find the optimal \rho.

The main idea of our iterative algorithm is briefly summarized as follows. Assume that the result of the nth iteration is \rho(n) = [\rho_1(n), \rho_2(n), \cdots, \rho_K(n)]. The (n+1)th PS ratio \rho_i(n+1) is then acquired by using the Newton iteration method with the knowledge of \{\rho_1(n+1), \rho_2(n+1), \cdots, \rho_{i-1}(n+1), \rho_{i+1}(n), \cdots, \rho_K(n)\}. We define a parameter \varepsilon to evaluate whether the algorithm has converged. The iteration terminates when the condition \|\rho(n+1) - \rho(n)\|_2 \le \varepsilon is satisfied.

In what follows, we determine wise initial values for the iteration so as to reduce the number of iterations needed to find the optimal \rho. To this end, we obtain the optimal PS ratios under the asymptotic high-SNR scenario, which serve as the initial values of our iterative algorithm under general scenarios. We note that (12) can be simplified when the received SINRs at the relays are high, i.e., \rho_i p_i b_{ii} \gg \sigma^2 and \rho_j p_j b_{jj} \gg \sigma^2. The second and third terms of the denominator of \gamma_{D_i}(\bar{\rho}_i, \rho_i) approach zero under the high-SINR assumption. Then, \gamma_{D_i}(\bar{\rho}_i, \rho_i) can be rewritten as

\tilde{\gamma}_{D_i}(\bar{\rho}_i, \rho_i) = \frac{\eta(1-\rho_i) X_i \rho_i p_i Y_{ii} \frac{1}{\rho_i p_i b_{ii}+\sigma^2}}{\sum_{j=1, j\neq i}^{K} \eta(1-\rho_j) X_j \rho_j p_j Y_{ji} \frac{1}{\rho_j p_j b_{jj}+\sigma^2} + \sigma^2}.   (19)
The first-order derivative of \tilde{\gamma}_{D_i}(\bar{\rho}_i, \rho_i) with respect to \rho_i is given by

\frac{\partial \tilde{\gamma}_{D_i}(\bar{\rho}_i, \rho_i)}{\partial \rho_i} = \frac{X_i Y_{ii} p_i (s_{ji} + \sigma^2)\left(-\rho_i^2 p_i b_{ii} - 2\rho_i \sigma^2 + \sigma^2\right)}{\left[\rho_i p_i b_{ii}(s_{ji} + \sigma^2) + \sigma^2 (s_{ji} + \sigma^2)\right]^2},   (20)

where s_{ji} = \sum_{j=1, j\neq i}^{K} \eta(1-\rho_j) X_j \rho_j p_j Y_{ji} \frac{1}{\rho_j p_j b_{jj} + \sigma^2}. We find that \partial \tilde{\gamma}_{D_i}(\bar{\rho}_i,\rho_i)/\partial \rho_i|_{\rho_i=0} > 0 and \partial \tilde{\gamma}_{D_i}(\bar{\rho}_i,\rho_i)/\partial \rho_i|_{\rho_i=1} < 0. Thus, there exists a \rho_i \in [0, 1] at which the derivative in (20) is equal to zero. In addition, we need to solve the following equation to acquire the initial PS ratios:
\hat{f}(\rho_i) = X_i Y_{ii} p_i (s_{ji} + \sigma^2)\left(-\rho_i^2 p_i b_{ii} - 2\rho_i \sigma^2 + \sigma^2\right) = 0.   (21)

Since b_{ii} = r_{ii}^{-\tau}\|\tilde{H}_{ii}\|^2, it is always true that p_i b_{ii} > 0. Note that the solution of (21) does not depend on \bar{\rho}_i. Hence, each \rho_i can be calculated independently. We further find that (21) has two real roots, since \Delta = 4\sigma^2(\sigma^2 + p_i b_{ii}) > 0. Moreover, according to the curve of the function depicted in Figure 3, the root \rho_i = \frac{-2\sigma^2 - \sqrt{\Delta}}{2 p_i b_{ii}} < 0 should be discarded, and the optimal PS ratio under the high-SINR condition is given by \rho_i^o = \frac{-2\sigma^2 + \sqrt{\Delta}}{2 p_i b_{ii}}.
TABLE I
ALGORITHM TO SOLVE (P2)
1. Initialization: Set the iteration number n = 0; determine the initial PS ratios \rho(0) = \{\rho_1(0), \rho_2(0), \cdots, \rho_K(0)\} by (22).
2. Iteration: Obtain the (n+1)th iteration result \rho(n+1) by solving (16) for each \rho_i with the Newton iteration method and \{\rho_1(n+1), \cdots, \rho_{i-1}(n+1), \rho_{i+1}(n), \cdots, \rho_K(n)\}.
3. Check whether \|\rho(n+1) - \rho(n)\|_2 \le \varepsilon is satisfied. If it is satisfied, go to Step 4. Otherwise, update the iteration number n = n + 1 and return to Step 2.
4. Obtain the optimal PS ratios as \rho^* = \rho(n+1).

Fig. 3. The curve of \hat{f}(\rho_i) versus \rho_i.

Therefore, the initial PS ratio for the ith link of our iterative algorithm is given by

\rho_i(0) = \rho_i^o = \frac{-2\sigma^2 + \sqrt{\Delta}}{2 p_i b_{ii}}.   (22)
Different from a system of linear equations, it is difficult to provide a general convergence condition for our proposed iterative algorithm. However, it can be found that \rho_i(n+1) is the only root in [0, 1] of the first-order condition on \gamma_{D_i}(\bar{\rho}_i, \rho_i). Thus, every iteration increases the objective function, i.e., \gamma_{D_i}(\rho_i(n+1)) > \gamma_{D_i}(\rho_i(n)). Considering the monotonic increase of the objective function and the fact that the maximum achievable \gamma_{D_i} is finite, the proposed algorithm converges to a certain point, at which the iteration stops. The detailed steps of the proposed algorithm are given in Table I.
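A minimal sketch of the distributed procedure in Table I is given below, assuming a caller-supplied function dgamma_drho(i, rho) that evaluates the left-hand side of (16); the Newton step on that first-order condition uses a finite-difference derivative for brevity, and p and b are arrays holding p_i and b_ii. All names are illustrative assumptions rather than the authors' implementation.

import numpy as np

def optimize_ps_ratios(K, dgamma_drho, p, b, sigma2, eps=1e-3, max_iter=50):
    # High-SNR initialization, rho_i(0) = (-2*sigma2 + sqrt(Delta)) / (2*p_i*b_ii), as in (22).
    delta = 4.0 * sigma2 * (sigma2 + p * b)
    rho = (-2.0 * sigma2 + np.sqrt(delta)) / (2.0 * p * b)
    rho = np.clip(rho, 1e-6, 1.0 - 1e-6)

    for _ in range(max_iter):
        rho_prev = rho.copy()
        for i in range(K):
            # Newton update on g(rho_i) = d gamma_{D_i} / d rho_i = 0,
            # using a finite-difference estimate of g'(rho_i).
            h = 1e-6
            g = dgamma_drho(i, rho)
            rho_h = rho.copy()
            rho_h[i] += h
            g_prime = (dgamma_drho(i, rho_h) - g) / h
            if abs(g_prime) > 1e-12:
                rho[i] = np.clip(rho[i] - g / g_prime, 1e-6, 1.0 - 1e-6)
        if np.linalg.norm(rho - rho_prev) <= eps:
            break
    return rho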
V. N UMERICAL R ESULTS
In this section, we show the performance of the proposed
scheme by simulations. Unless otherwise stated, we set the
distance between each source and its dedicated relay rii = 1,
the distance between each relay and its respective destination
mii = 1, the path-loss exponent τ = 3, the energy conversion efficiency η = 0.5, the variance of the AWGN at the receivers σ² = 0.01, and the convergence criterion ε = 10^{-3}. We randomly generate 10,000 channel realizations to obtain the average sum rate of the network.
Fig. 4. Illustration of the convergence of the proposed algorithm. The system
parameters are M = N = L = 4 and K = 3.
We first demonstrate the convergence speed of the proposed
iterative algorithm. Figure 4 plots the obtained PS ratio versus
the iteration number for different initial values. As shown in
the figure, the algorithm always converges to the same value
of the PS ratio, i.e., the optimal PS ratio, for any given initial
values. Comparing the results for different initial points, we
find that our proposed initialized PS scheme outperforms all
other schemes, in terms of the use of iterations to obtain the
optimal PS ratio.
We now present the performance comparison between our
proposed scheme and benchmark schemes. We consider three benchmark schemes: the transmission with randomly selected PS ratios, the transmission with fixed PS ratios of ρi = 0.5, and the transmission scheme without the IA as
proposed in [7]. Figures 5 and 6 plot the average sum rate versus the transmit power at the source and the number of links,
respectively. As shown in both figures, our proposed scheme
always significantly outperforms all benchmark schemes. We
note that the benchmark scheme without the IA always has
Fig. 5. Sum rate of the network versus transmit power at the source. The
system parameters are M = N = L = K = 4.
the worst performance, which implies that the introduction of the IA improves the network performance even if the PS ratios are not optimally designed. The great advantage of our proposed scheme is due not only to the introduction of the IA to eliminate the interference at the relays but also to the well-designed PS ratios that balance the information processing and the energy harvesting used to forward the information.
We next discuss the impact of the transmit power at the
source on the performance of the network. As depicted in
Figure 5, the average sum rate increases as the transmit power
increases. From all curves, we note that the speed of the
increase of sum rate decreases as the transmit power increases,
and it becomes very slow when the transmit power is high.
Such an observation can be explained as follows. In this work,
we have not employed the IA for the second hop due to
the practical consideration of the system overhead and the
high energy consumption at relays. Then, the increase of
transmit power at relays may harm the network performance
due to the interference. Thus, the relays actually cannot always
increase the transmit power for the second hop, even if the
harvested energy increases. Furthermore, we find that the performance gap between our proposed scheme and the other benchmark schemes increases as the transmit power at the source increases. This is because the proposed scheme wisely takes advantage of the increase of the available energy resource at the sources through the IA and the PS design, while the other schemes cannot take full advantage of the increased resource due to the lack of either interference management or the PS design.
Finally, we evaluate the impact of the number of links on
the performance of the network. As illustrated in Figure 6,
as the number of links increases, the sum rate of the network
increases first until achieving a peak value, and then decreases.
Fig. 6. Sum rate of the network versus number of links. The system
parameters are M = N = L = 4.
We explain this observation as follows. When the number of links is small, increasing the number of links can significantly increase the multiplexing gain of the network with relatively small interference. When the number of links becomes large, further increasing the number of links makes the interference issue more serious, as the IA has not been employed at the relays for the second-hop transmission in the proposed scheme. From the discussions of Figures 5 and 6, we note that adopting the IA at the second hop has a considerable potential to further improve the performance of the network. However, as mentioned previously, the employment of the IA at the relays would considerably increase the system overhead and the energy consumption of the energy-constrained relays. Thus, it is not always feasible to employ the IA at the wireless-powered relays in practice.
VI. C ONCLUSION
In this paper, we have introduced a novel transmission
scheme for multi-user multi-relay IC networks, where the
IA is employed at the sources and the PS is designed at
the AF relays. We have proposed a distributed and iterative
algorithm to determine the optimal PS ratios that maximize
the sum rate of the network. In order to reduce the number
of iterations to obtain the optimal PS ratios, we have further
derived the closed-form expressions for the optimal PS ratios
under the high SNR condition, which serve as the initial values
of the iteration for the proposed algorithm. Our results have
shown that the proposed scheme significantly outperforms the
benchmark schemes to improve the network performance.
ACKNOWLEDGMENT
This work is supported by National Natural Science Foundation of China (NSFC) under grant 61461136001.
R EFERENCES
[1] L. R. Varshney, “Transporting information and energy simultaneously,”
in Proc. IEEE ISIT, pp. 1612–1616, Jul. 2008.
[2] R. Zhang and C. K. Ho, “MIMO broadcasting for simultaneous wireless
information and power transfer,” IEEE Trans. Wireless Commun., vol.
12, no. 5, pp. 1989–2001, Nov. 2013.
[3] D. W. K. Ng, E. Lo, and R. Schober, “Wireless information and power
transfer: Energy efficiency optimization in OFDMA systems,” IEEE
Trans. Wireless Commun., vol. 12, no. 12, pp. 6352–6370, Dec. 2014.
[4] Z. Chen, B. Xia, and H. Liu, “Wireless information and power transfer
in two-way amplify-and-forward relaying channels,” in Proc. IEEE
GLOBECOM, pp. 168–172, Dec. 2015.
[5] L. Hu, C. Zhang, and Z. Ding, “Dynamic Power Splitting Policies for
AF Relay Networks with Wireless Energy Harvesting,” in Proc. IEEE
ICCW, pp. 2035–2039, Jun. 2015.
[6] I. Krikidis, “Simultaneous information and energy transfer in large-scale
networks with/without relaying,” IEEE Trans. Commun., vol. 62, no. 3,
pp. 900–912, Mar. 2014.
[7] H. Chen, Y. Jiang, Y. Li, Y. Ma, and B. Vucetic, “A game-theoretical
model for wireless information and power transfer in relay interference
channels,” in Proc. IEEE ISIT, pp. 1161–1165, Jun. 2014.
[8] H. Zeng, F. Tian, Y. T. Hou, W. Lou, and S. F. Midkiff, “Interference
alignment for multihop wireless networks: challenges and research
directions,” IEEE Network, vol. 30, no. 2, pp. 74–80, Mar.–Apr. 2016.
[9] X. Li, Y. Sun, F. R. Yu, and N. Zhao, “Antenna selection and power
splitting for simultaneous wireless information and power transfer in
interference alignment networks,” in Proc. IEEE GLOBECOM, pp.
2667–2672, Dec. 2014.
[10] A. A. Nasir, X. Zhou, S. Durrani, and R. A. Kennedy, “Relaying
protocols for wireless energy harvesting and information processing,”
IEEE Trans. Wireless Commun., vol. 12, no. 7, pp. 3622–3636, Jul.
2013.
[11] K. Gomadam, V. R. Cadambe, and S. A. Jafar, “Approaching the capacity
of wireless networks through distributed interference alignment,” in
Proc. IEEE GLOBECOM, pp. 1–6, Dec. 2008.
[12] L. Liu, R. Zhang, and K. C. Chua, “Wireless information and power
transfer: A dynamic power splitting approach,” IEEE Trans. Commun.,
vol. 61, no. 9, pp. 3990–4001, Sep. 2013.
arXiv:1702.03152v1 [cs.CC] 10 Feb 2017
A Variation of Levin Search for All Well-Defined
Problems
Fouad B. Chedid
A’Sharqiyah University, Ibra, Oman
f.chedid@asu.edu.om
February 13, 2017
Abstract
In 1973, L.A. Levin published an algorithm that solves any inversion problem π as quickly as the fastest algorithm p∗ computing a solution for π in time bounded by 2^{l(p∗)}·t∗, where l(p∗) is the length of the binary encoding of p∗, and t∗ is the runtime of p∗ plus the time to verify its correctness. In 2002, M. Hutter published an algorithm that solves any well-defined problem π as quickly as the fastest algorithm p∗ computing a solution for π in time bounded by 5·t_p(x) + d_p·time_{t_p}(x) + c_p, where d_p = 40·2^{l(p)+l(t_p)} and c_p = 40·2^{l(f)+1}·O(l(f)^2), where l(f) is the length of the binary encoding of a proof f that produces a pair (p, t_p), where t_p(x) is a provable time bound on the runtime of the fastest program p provably equivalent to p∗. In this paper, we rewrite Levin Search using the ideas of Hutter so that we have a new simple algorithm that solves any well-defined problem π as quickly as the fastest algorithm p∗ computing a solution for π in time bounded by O(l(f)^2)·t_p(x).
keywords: Computational Complexity; Algorithmic Information Theory; Levin
Search.
1
Introduction
We recall that the class N P is the set of all decision problems that can be
solved efficiently on a nondeterministic Turing Machine. Alternatively, N P is
the set of all decision problems whose guessed solutions can be verified efficiently on a deterministic Turing Machine, or equivalently on today's computers. The
interesting part is that the set of all hardest problems in the class N P (also
known as N P -complete problems) have so far defied any efficient solutions on
real computers. This has been a very frustrating challenge given that many
of the N P -complete problems are about real-world applications that we really
would like to be able to solve efficiently.
Leonid Levin [6], independently of Stephen Cook [1], defined the class N P, identified the class of N P-complete problems, and gave a few reductions that show the N P-completeness of some N P problems (independently of the work of Richard Karp [3]). In that same paper [6], Levin gave an algorithm to deal with N P-complete problems provided that someone develops a proof that shows P = N P.
In this case, Levin’s algorithm can be used to develop a polynomial-time solution
for every N P -complete problem. In particular, Levin’s algorithm works with
a very broad class of mathematical problems that can be put in the form of
inverting easily computable functions. For example, suppose we have a function
y = f (x), where we know nothing about f except that it is easily computable.
The challenge is to find x = f −1 (y) and to do so in a minimal amount of time.
The idea of Levin’s algorithm is to go through the space of all algorithms in
search for a fastest algorithm A that knows the secret of f . In general, let π be
an inversion problem and let p∗ be a known fastest algorithm for π that runs in
time t∗ , where t∗ is the runtime of p∗ plus the time to verify its correctness. The
Universal Search algorithm of Levin, also known as Levin Search, is an effective procedure for finding p∗ in time bounded by 2^{l(p∗)}·t∗, where l(p∗) is the length
of the binary encoding of p∗ . Levin Search can also be used with some forms
of optimization and prediction problems, and it is theoretically optimal. The
following is a pseudocode for Levin Search:
Pseudocode Levin Search
t = 2.
for all programs p, in parallel, do
    run p for at most t·2^{-l(p)} steps
    if p is proved to generate a correct solution for π, then return p for p∗ and halt.
endfor
t = 2·t
goto for all programs p
End of Pseudocode
The idea of Levin Search is to search the space of all programs p in increasing order of l(p) + log t, or equivalently in decreasing order of Prob(p)/t, where Prob(p) = 2^{-l(p)} is the probability contributed by p to the overall Solomonoff algorithmic probability [8] of finding a solution for π. By construction, Prob(p) gives short programs a better chance of being successful, on the assumption that short programs are more worthy (Occam's razor). We mention that the main idea of Levin Search was behind the notion of Levin complexity, which defines a computable version of Kolmogorov complexity based on time-bounded Turing machines. Let U be a fixed reference universal Turing machine; the Levin complexity of a finite binary string x is defined as

Kt(x) = min_p {l(p) + log t : U(p) = x in at most t steps}.
2
is assigned a fraction of time proportional to its probability, which basically says
that short programs get to run more often.
The rest of this paper is organized as follows. Section 2 describes a sequential
implementation of Levin Search and shows that it solves any inversion problem
π as quickly as the fastest algorithm p∗ computing a solution for π, save for a
∗
factor of 2l(p ) . Section 3 describes an improvement on Levin Search for all welldefined problems. A problem is designated as well-defined if its solutions can
be formally proved correct and have provably quickly computable time bounds.
This improvement, known as Hutter’s algorithm [2], describes an algorithm that
solves any well-defined problem π as quickly as the fastest algorithm computing
a solution for π, save for a factor of 5. Section 4 contains our contribution. We
rewrite Levin Search using the ideas of Hutter so that we have a new simple
algorithm that solves any well-defined problem π as quickly as the fastest algorithm p∗ computing a solution for π, save for a factor of O(l(f )2 ), where l(f ) is
the length of the binary encoding of a proof f that produces a pair (p, tp ), where
tp is a provable time bound on the runtime of the fastest program p provably
equivalent to p∗ . Section 5 contains some concluding remarks.
2
Sequential Levin Search
The pseudocode of Levin Search from the previous section cannot be readily executed on today’s computers. A sequential version of Levin Search can simulate
the parallel execution of all programs by running programs, each for a fraction
of time, one after another, in increasing order of their length. Hereafter, we
assume that all progams are encoded using prefix-free codes, using for example,
Shannon-Fano coding [9]. Let π be an inversion problem and let p∗ be a known
fastest algorithm for π that runs in time t∗ , where t∗ is the runtime of p∗ plus
the time to verify its correctness. Also, let l(p) be the length of the binary
encoding of program p.
Algorithm Levin Search
t = 2;
for all programs p of length l(p) = 1, 2, . . . , log t do
    run p for at most t·2^{-l(p)} steps.
    if p is proved to generate a correct solution for π, return p for p∗ and halt.
endfor
t = 2·t
goto for all programs p
End of Levin Search
During each time step t, the algorithm runs all programs p of length ≤ log t in increasing order of their length and gives each program p a fraction of time proportional to 2^{-l(p)}.
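The following Python sketch mirrors the sequential procedure above; programs(k), run(p, budget), and is_correct(candidate) are abstract stand-ins (an enumeration of prefix-free programs with their lengths, a step-bounded interpreter, and a solution verifier for π) that the reader must supply.

import math

def levin_search(programs, run, is_correct):
    # Sequential Levin Search: in phase t, every program p with l(p) <= log t
    # receives a budget of t * 2^{-l(p)} steps; the total budget doubles each phase.
    t = 2
    while True:
        for p, lp in programs(int(math.log2(t))):
            candidate = run(p, int(t * 2 ** (-lp)))
            if candidate is not None and is_correct(candidate):
                return p, candidate
        t *= 2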
Time Analysis. The following proof for the runtime complexity of Levin
Search is due to Solomonoff [7]. The “for” loop in the algorithm takes time
\sum_{p: l(p) \le \log t} t \cdot 2^{-l(p)} \le t.

This is true by the Kraft inequality [5]. Now, suppose that Levin Search stops at time t = T. Then, the total runtime of this algorithm is

time_{LevinSearch} \le 2 + 4 + \ldots + \frac{T}{4} + \frac{T}{2} + T = 2T - 2 < 2T.

Moreover, the runtime t∗ of p∗ is

t∗ = T \cdot 2^{-l(p∗)} > \frac{time_{LevinSearch}}{2} \cdot 2^{-l(p∗)}.

Thus,

time_{LevinSearch} < 2^{l(p∗)+1} \cdot t∗ = O(t∗).

This is true because the multiplicative constant 2^{l(p∗)+1} is independent of the instance of the problem π. We mention that it is this huge multiplicative constant
that limits the applicability of Levin Search in practice.
It seems that a possible way to improve the runtime of Levin Search is to
find a way, where not all programs of length ≤ log t get executed, for each value
of t. Ideally, we would like a way that executes at most a single program for
each value of t, preferably a shortest and fastest one, and this is exactly what
we will present in Section 4.
3
Hutter’s Fastest and Shortest Algorithm for
All Well-Defined Problems
In [2], Hutter presented an improvement on Levin Search for problems that are well-defined. By this, Hutter means a problem whose solutions can be formally proved correct and have provably quickly computable time bounds. Let p∗(x) be a known fastest algorithm for some well-defined problem π(x). Then, Hutter's algorithm will construct a solution p(x) that is provably equivalent to p∗(x), for all x, in time proportional to 5·t_p(x) + d·time_{t_p}(x) + c, where t_p(x) is a provable time bound on the runtime of p(x), time_{t_p}(x) is the time needed to compute t_p(x), and d and c are constants which depend on p but not on π(x). This shows that Hutter's algorithm runs in time O(t_p), save for a factor of 5.
Our work in the next section is inspired by the ideas of Hutter, and so, for
completeness, we have included below the details of Hutter’s algorithm as they
appear in [2].
The main idea of Hutter’s algorithm is to search the space of all proofs1 in
some formal axiomatic system, and not the space of all programs. In particular,
the algorithm searches for those proofs that can tell us which programs are
provably equivalent to p∗ and have provably quickly computable time bounds.
This is doable since the set of all proofs in any formal sysem is enumerable.
1 Given a formal logic system F with a set of axioms and inference rules, a proof in F is a sequence of formulas, where each formula is either an axiom in F or is something that can be inferred using F's axioms and inference rules and previously inferred formulas.
Moreover, in [2], Hutter showed how to formalize the notions of provability,
Turing Machines, and computation time. Let U be a fixed reference Universal
Turing machine. For finite binary strings p and tp , we assume that
• U (p, x) computes the output of program p on input x.
• U (tp , x) returns a time bound on the runtime of program p on input x.
Moreover, Hutter defined the following two terms:
• A term u is defined such that the formula [∀y : u(p, y) = u(p∗ , y)] is true
if and only if U (p, x) = U (p∗ , x), ∀x.
• A term tm is defined such that the formula [tm(p, x) = n] is true if and
only if U (p, x) takes n steps; that is, if timep (x) = n.
Then, we say that programs p and p∗ are provably equivalent if the formula
[∀y : u(p, y) = u(p∗ , y)] can be proved.
The algorithm of Hutter is as follows.
Algorithm Hutter(x)
Let L = φ, tf ast = ∞, and pf ast = p∗ .
Run A,B, and C concurrently with 10%, 10%, and 80% of computational resources, respectively.
Algorithm A
{ This algorithm identifies programs p that are provably equivalent to p∗ and
have provably computable time bounds tp }
for i = 1, 2, 3, . . . do
pick the ith proof in the list of all proofs and
if the last formula in the proof is equal to [∀y : u(p∗, y) = u(p, y) and u(t_p, y) ≥ tm(p, y)], for some pair of strings (p, t_p) then
Let L = L ∪ {(p, tp )}.
End A
Algorithm B
{ This algorithm finds the program with the shortest time bound in L.}
for all (p, tp ) in L do
run U on all (t_p, x) in parallel for all t_p with relative computational resources 2^{-l(p)-l(t_p)}.
if U halts for some tp and U (tp , x) < tf ast then
tf ast = U (tp , x) and pf ast = p.
endfor all (p, tp )
End B
Algorithm C
For k = 1, 2, 4, 8, . . . do
pick the currently fastest program p_fast with time bound t_fast.
run U on (p_fast, x) for k steps.
if U halts in less than k steps, then print the result U(p_fast, x) and abort the computation
of A, B, and C.
endfor k
End C
In [2], it was shown that the overall runtime of Hutter's algorithm is

time_{Hutter}(x) ≤ 5·t_{p_fast}(x) + d_p·time_{t_{p_fast}}(x) + c_p,

where d_p = 40·2^{l(p_fast)+l(t_{p_fast})} and c_p = 40·2^{l(proof(p_fast))+1}·O(l(proof(p_fast))^2).
4
A Variation of Levin Search Inspired by the
Fastest and Shortest Algorithm of Hutter
We rewrite Levin Search so that it searches, in parallel, the space of all proofs,
and not the space of all programs. Let p∗ be a known fastest algorithm for some
well-defined problem π. Then, our algorithm will construct a solution p(x), for
all x, that is provably equivalent to p∗ (x) in time O(tp (x)), save for a factor of
O(l(proof (p))2 ), where tp (x) is a provable time bound on the runtime of the
fastest program p provably equivalent to p∗ .
We assume a formal axiomatic system F and the same terms u and tm defined in Hutter's algorithm. Moreover, we assume that all pairs (p, t_p) are encoded using prefix-free codes, and hence, we have \sum_{(p, t_p)} 2^{-l(p)-l(t_p)} \le 1, by the Kraft inequality.
The following is our Modified Levin Search algorithm:
Algorithm Modified Levin Search(x)
t = 2; t_fast = ∞; p_fast = λ; { p_fast is initialized to the empty string }
for all proofs f ∈ F of length l(f) = 1, 2, . . . , log t do
    write down the first t·2^{-l(f)} characters of proof f.
    if the last formula in these characters is equal to [∀y : u(p∗, y) = u(p, y) and u(t_p, y) ≥ tm(p, y)], for some pair of strings (p, t_p) then
        run t_p(x) for at most t·2^{-l(p)-l(t_p)} steps.
        if t_p halts and t_p(x) < t_fast then
            t_fast = t_p(x) and p_fast = p.
endfor
if p_fast ≠ λ then
    run p_fast for at most t steps with time bound t_fast.
    if p_fast(x) halts, then return p_fast for p∗ and halt.
endif
t = 2t.
go to for all proofs f ∈ F
End of Modified Levin Search
We note that during each time step t, Modified Levin Search runs at most a single provably fast program p_fast that is provably equivalent to p∗.
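A structural sketch of Modified Levin Search is given below; it simplifies the per-pair budgets (each pair's share of t is taken with respect to l(f) only) and treats the proof enumeration and the step-bounded interpreter as abstract inputs, so it illustrates the control flow rather than a faithful implementation of the algorithm above.

import math

def modified_levin_search(x, proofs, run_bounded):
    # proofs(k) is assumed to yield triples (l_f, p, t_p) for every valid proof f
    # with l(f) <= k whose last formula certifies that p is equivalent to p* and
    # that t_p is a computable time bound for p; run_bounded(prog, x, budget)
    # runs prog on x for at most `budget` steps and returns its output, or None.
    t = 2
    t_fast, p_fast = math.inf, None
    while True:
        for l_f, p, t_p in proofs(int(math.log2(t))):
            # Evaluate the provable time bound t_p(x) within this pair's share of t.
            bound = run_bounded(t_p, x, int(t * 2 ** (-l_f)))
            if bound is not None and bound < t_fast:
                t_fast, p_fast = bound, p
        if p_fast is not None:
            result = run_bounded(p_fast, x, min(t, t_fast))
            if result is not None:
                return p_fast, result
        t *= 2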
Running time calculation: Following [2], let naxioms be the finite number
of axioms in F. Then, each proof f ∈ F is a sequence F1 F2 . . . Fn , for some
positive integer n, where each Fi (1 ≤ i ≤ n) is either an axiom or a formula in
F. It takes O(naxioms .l(Fi )) time to check if Fi is an axiom, and O(l(f )) time to
check if Fi is a formula that can be inferred from other formulas Fj ∈ f , j < i.
Thus, checking the validity of all formulas in f , and hence the validity of the
proof f , takes O(l(f )2 ) time.
The "for" loop in Modified Levin Search takes time

t_1 = \sum_{f: l(f) \le \log t} \left[ O(t \cdot 2^{-l(f)}) + O\left((t \cdot 2^{-l(f)})^2\right) + t \cdot 2^{-l(p)-l(t_p)} \right] \le t + \sum_{f: l(f) \le \log t} \left[ O(l(f)) + O(l(f)^2) \right].

This is true because \sum_{(p, t_p)} 2^{-l(p)-l(t_p)} \le 1, by the Kraft inequality. We next have

t_1 \le t + \sum_{f: l(f) \le \log t} O(l(f)^2) < t + 2^{\log t + 1} \cdot O(l(f')^2) < t + 2t \cdot O(l(f')^2) < O(l(f')^2) \cdot t,

where f' is the proof that produced the pair (p, t_p) with the shortest provable time bound t_p(x), among all proofs of length ≤ log t.
Thus, the runtime of Modified Levin Search is equal to

time_{ModifiedLevinSearch} = \sum_{t = 2, 4, 8, \ldots} \left[ O(l(f')^2) \cdot t + t \right] = \sum_{t = 2, 4, 8, \ldots} O(l(f')^2) \cdot t < \sum_{t = 2, 4, 8, \ldots} O(l(f^*)^2) \cdot t,

where l(f^*) is the length of the binary encoding of the proof that produced the resultant pair (p_fast, t_{p_fast}).

Suppose that Modified Levin Search stops at time t = T ≤ t_{p_fast}(x). Then

time_{ModifiedLevinSearch}(x) = O(l(f^*)^2) \cdot (2 + 4 + \ldots + T/2 + T) = O(l(f^*)^2) \cdot (2T - 2) < O(l(f^*)^2) \cdot 2T = O(l(f^*)^2) \cdot T \le O(l(f^*)^2) \cdot t_{p_fast}(x).

This shows that Modified Levin Search runs in time O(t_{p_fast}), save for a factor of O(l(f^*)^2).
5
Concluding Remarks
We recall that Hutter’s algorithm was designed for well-defined problems. These
are problems the solutions of which can be formally proved correct and have
provably quickly computable time bounds. For programs provably correct (they
halt and compute the correct answer), but for which no quickly computable time bounds exist (for example, the traveling salesman problem), Hutter [2] explained that an obvious time bound for a program p is its actual running time time_p(·). By replacing t_p(·) with time_p(·) in the runtime of Hutter's algorithm, and by noticing that, in this case, time_{t_p}(x) becomes time_{time_p}(x) ≤ time_p(x), we have

time_{Hutter}(x) ≤ 5·t_p(x) + d_p·time_{t_p}(x) + c_p
             ≤ 5·time_p(x) + d_p·time_p(x) + c_p
             ≤ (5 + d_p)·time_p(x) + c_p
             ≤ (5 + 40·2^{l(p)})·time_p(x) + 40·2^{l(f)+1}·O(l(f)^2),

where l(p) is the length of the binary encoding of p, and l(f) is the length of the binary encoding of a proof f that produces the program p that is provably equivalent to the fastest known algorithm p∗ for the problem at hand. Thus, for such programs, Hutter's algorithm is optimal, save for a huge constant multiplicative term and a huge constant additive term.

We next calculate the runtime of our algorithm in the case that we are dealing with programs that can be proved correct, but for which no quickly computable time bounds exist. Suppose that our algorithm stops at time t = T. Then, we have

time_{ModifiedLevinSearch} < O(l(f)^2)·T.

We also have time_p(x) = T·2^{-l(p)}. Thus,

time_{ModifiedLevinSearch} < O(l(f)^2)·2^{l(p)}·time_p(x).

Thus, for such programs, the runtime of our algorithm also suffers from a huge constant multiplicative term, but with no additional huge additive terms.
We conclude this paper by recalling an approach, proposed by Solomonoff in 1985 [7], to manage the huge multiplicative constant in the big-oh notation of the running time of Levin Search. We do so because we think that the importance of that work of Solomonoff is not widely appreciated. Solomonoff argued that if machines are going to have a problem-solving capability similar to that of humans, then machines cannot start from scratch every time they attempt to solve a new problem. We, humans, rely on our previous knowledge of solutions to other problems to figure out a solution for a new problem. The basic idea of Solomonoff is that we should be able to construct p∗ via a list of references to previously discovered solutions for other related problems. We can imagine writing the code for p∗ as p_1 p_2 . . . p_n, where p_i is a reference to a solution for problem π_i stored on a separate work tape of a Kolmogorov-Uspensky machine [4]. This way, the solution p∗ is a sequence of calls to other solutions stored on the work tapes. The length of p∗ would then be made significantly smaller than the sum of the lengths of the p_i, and the saving in the length of p∗ would exponentially decrease the multiplicative constant in the big-oh notation of Levin Search.
References
[1] S. A. Cook, The complexity of theorem-proving procedures, Proceedings
of the Third Annual ACM Symposium on Theory of Computing. (1971)
151–158.
[2] M. Hutter, The fastest and shortest algorithm for all well-defined problems,
International Journal of Foundations of Computer Science. 13:3 (2002)
431–443.
[3] R. M. Karp, Reducibility among combinatorial problems, In R.E. Miller
and J.W. Thatcher (editors). Complexity of Computer Computations.
(1972) 85–103.
[4] A.N. Kolmogorov and V.A. Uspenski, On the definition of an algorithm,
Uspehi Mat. Nauk. 13 (1958), 3–28. AMS Transl. 2nd ser.29 (1963) 217–
245.
[5] L.G. Kraft, A device for quantizing, grouping, and coding amplitude modulated pulses, MS Thesis, Electrical Engineering Department, MIT.
[6] L. A. Levin, Universal sequential search problems, Problemy Peredaci Informacii. 9 (1973) 115–116. Translated in Problems of Information Transmission. 9 (1973) 265–266.
[7] R. J. Solomonoff, Optimum sequential search. (1985) 1–13.
[8] R. J. Solomonoff, A formal theory of inductive inference: Part 1 and 2,
Inform. Control. 7 (1964) 1–22, 224–254.
[9] C.E. Shannon, A mathematical theory of communication, Bell System
Technical Journal. 27 (1948) 379–423.
arXiv:1510.00872v2 [math.NT] 9 Dec 2016
Compactifications of S-arithmetic quotients for the
projective general linear group
Takako Fukaya, Kazuya Kato, Romyar Sharifi
December 12, 2016
Dedicated to Professor John Coates on the occasion of his 70th birthday
Abstract
Let F be a global field, let S be a nonempty finite set of places of F which contains the archimedean places of F, let d ≥ 1, and let X = \prod_{v \in S} X_v, where X_v is the symmetric
space (resp., Bruhat-Tits building) associated to PGLd (Fv ) if v is archimedean (resp., nonarchimedean). In this paper, we construct compactifications Γ\X̄ of the quotient spaces
Γ\X for S-arithmetic subgroups Γ of PGLd (F ). The constructions make delicate use of the
maximal Satake compactification of X v (resp., the polyhedral compactification of X v of
Gérardin and Landvogt) for v archimedean (resp., non-archimedean). We also consider a
variant of X̄ in which we use the standard Satake compactification of X v (resp., the compactification of X v due to Werner).
1
Introduction
1.1. Let d ≥ 1, and let X = PGL_d(R)/PO_d(R) ≅ SL_d(R)/SO_d(R). The Borel-Serre space (resp.,
reductive Borel-Serre space) X̄ contains X as a dense open subspace [3] (resp., [26]). If Γ is a
subgroup of PGLd (Z) of finite index, this gives rise to a compactification Γ\X̄ of Γ\X .
1.2. Let F be a global field, which is to say either a number field or a function field in one
variable over a finite field. For a place v of F , let Fv be the local field of F at v . Fix d ≥ 1.
In this paper, we will consider the space X v of all homothety classes of norms on Fvd and a
certain space X̄ F,v which contains X v as a dense open subset. For F = Q and v the real place,
X v is identified with PGLd (R)/ POd (R), and X̄ F,v is identified with the reductive Borel-Serre
space associated to PGLd (Fv ). We have the following analogue of 1.1.
Theorem 1.3. Let F be a function field in one variable over a finite field, let v be a place of F ,
and let O be the subring of F consisting of all elements which are integral outside v . Then for
any subgroup Γ of PGLd (O) of finite index, the quotient Γ\X̄ F,v is a compact Hausdorff space
which contains Γ\X v as a dense open subset.
1.4. Our space X̄ F,v is not a very new object. In the case that v is non-archimedean, X v is
identified as a topological space with the Bruhat-Tits building of PGLd (Fv ). In this case, X̄ F,v is
similar to the polyhedral compactification of X v of Gérardin [7] and Landvogt [19], which we
denote by X̄ v . To each element of X̄ v is associated a parabolic subgroup of PGLd ,Fv . We define
X̄ F,v as the subset of X̄ v consisting of all elements for which the associated parabolic subgroup
is F -rational. We endow X̄ F,v with a topology which is different from its topology as a subspace
of X̄ v .
In the case d = 2, the boundary X̄ v \ X v of X̄ v is P1 (Fv ), whereas the boundary X̄ F,v \ X v of
X̄ F,v is P1 (F ). Unlike X̄ v , the space X̄ F,v is not compact, but the arithmetic quotient as in 1.1
and 1.3 is compact (see 1.6).
1.5. In §4, we derive the following generalization of 1.1 and 1.3.
Let F be a global field. For a nonempty finite set S of places of F, let X̄_{F,S} be the subspace of \prod_{v \in S} X̄_{F,v} consisting of all elements (x_v)_{v \in S} such that the F-parabolic subgroup associated to x_v is independent of v. Let X_S denote the subspace \prod_{v \in S} X_v of X̄_{F,S}.
Let S 1 be a nonempty finite set of places of F containing all archimedean places of F , let S 2
be a finite set of places of F which is disjoint from S 1 , and let S = S 1 ∪ S 2 . Let OS be the subring
of F consisting of all elements which are integral outside S.
Our main result is the following theorem (see Theorem 4.1.4).
Theorem 1.6. Let Γ be a subgroup of PGLd (OS ) of finite index. Then the quotient Γ\(X̄ F,S 1 × X S 2 )
is a compact Hausdorff space which contains Γ\X S as a dense open subset.
1.7. If F is a number field and S 1 coincides with the set of archimedean places of F , then the
space X̄ F,S 1 is the maximal Satake space of the Weil restriction of PGLd ,F from F to Q. In this
case, the theorem is known for S = S 1 through the work of Satake [23] and in general through
the work of Ji, Murty, Saper, and Scherk [14, 4.4].
♭
♭
1.8. We also consider a variant X̄ F,v
of X̄ F,v and a variant X̄ F,S
of X̄ F,S with continuous surjections
♭
X̄ F,v → X̄ F,v
,
♭
X̄ F,S → X̄ F,S
.
♭
In the case v is non-archimedean (resp., archimedean), X̄ F,v
is the part with “F -rational bound-
ary” in Werner’s compactification (resp., the standard Satake compactification) X̄ v♭ of X v [24,
2
25] (resp., [22]), endowed with a new topology. We will obtain an analogue of 1.6 for this variant.
To grasp the relationship with the Borel-Serre compactification [3], we also consider a vari♯
♯
ant X̄ F,v of X̄ F,v which has a continuous surjection X̄ F,v → X̄ F,v , and we show that in the case
♯
that F = Q and v is the real place, X̄ Q,v coincides with the Borel-Serre space associated to
♯
PGLd ,Q (3.7.4). If v is non-archimedean, the space X̄ F,v is not Hausdorff (3.7.6) and does not
seem useful.
1.9. What we do in this paper is closely related to what Satake did in [22] and [23]. In [22], he
defined a compactification of a symmetric Riemannian space. In [23], he took the part of this
compactification with “rational boundary” and endowed it with the Satake topology. Then he
showed that the quotient of this part by an arithmetic group is compact. We take the part X̄ F,v
of X̄ v with “F -rational boundary” to have a compact quotient by an arithmetic group. So, the
main results and their proofs in this paper might be evident to the experts in the theory of
Bruhat-Tits buildings, but we have not found them in the literature.
1.10. We intend to apply the compactification 1.3 to the construction of toroidal compactifications of the moduli space of Drinfeld modules of rank d in a forthcoming paper. In Section
4.7, we give a short explanation of this plan, along with two other potential applications, to
asymptotic behavior of heights of motives and to modular symbols over function fields.
1.11. We plan to generalize the results of this paper from PGLd to general reductive groups in
another forthcoming paper. The reason why we separate the PGLd -case from the general case
is as follows. For PGLd , we can describe the space X̄ F,v via norms on finite-dimensional vector
spaces over Fv (this method is not used for general reductive groups), and these norms play
an important role in the analytic theory of toroidal compactifications.
1.12. In §2, we review the compactifications of Bruhat-Tits buildings in the non-archimedean
setting and symmetric spaces in the archimedean setting. In §3 and §4, we discuss our compactifications.
1.13. We plan to apply the results of this paper to the study of Iwasawa theory over a function
field F . We dedicate this paper to John Coates, who has played a leading role in the development of Iwasawa theory.
Acknowledgments. The work of the first two authors was supported in part by the National
Science Foundation under Grant No. 1001729. The work of the third author was partially supported by the National Science Foundation under Grant Nos. 1401122/1661568 and 1360583,
and by a grant from the Simons Foundation (304824 to R.S.).
3
2
Spaces associated to local fields
In this section, we briefly review the compactification of the symmetric space (resp., of the
Bruhat-Tits building) associated to PGLd of an archimedean (resp., non-archimedean) local
field. See the papers of Satake [22] and Borel-Serre [3] (resp., Gérardin [7], Landvogt [19], and
Werner [24, 25]) for details.
Let E be a local field. This means that E is a locally compact topological field with a nondiscrete topology. That is, E is isomorphic to R, C, a finite extension of Qp for some prime
number p , or Fq ((T )) for a finite field Fq .
Let | |: E → R≥0 be the normalized absolute value. If E ∼
= R, this is the usual absolute value.
∼
If E = C, this is the square of the usual absolute value. If E is non-archimedean, this is the
unique multiplicative map E → R≥0 such that |a | = ♯(OE /aOE )−1 if a is a nonzero element of
the valuation ring OE of E .
Fix a positive integer d and a d -dimensional E -vector space V .
2.1 Norms
2.1.1. We recall the definitions of norms and semi-norms on V .
A norm (resp., semi-norm) on V is a map µ: V → R≥0 for which there exist an E -basis
(e i )1≤i ≤d of V and an element (ri )1≤i ≤d of Rd>0 (resp., Rd≥0 ) such that
µ(a_1 e_1 + · · · + a_d e_d) = \begin{cases} (r_1^2|a_1|^2 + · · · + r_d^2|a_d|^2)^{1/2} & \text{if } E ≅ R, \\ r_1|a_1| + · · · + r_d|a_d| & \text{if } E ≅ C, \\ \max(r_1|a_1|, . . . , r_d|a_d|) & \text{otherwise,} \end{cases}

for all a_1, . . . , a_d ∈ E.
2.1.2. We will call the norm (resp., semi-norm) µ in the above, the norm (resp., semi-norm)
given by the basis (e i )i and by (ri )i .
2.1.3. We have the following characterizations of norms and semi-norms.
(1) If E ∼
= C), then there is a one-to-one correspondence between semi-norms
= R (resp., E ∼
on V and symmetric bilinear (resp., Hermitian) forms ( , ) on V such that (x ,x ) ≥ 0 for
all x ∈ V . The semi-norm µ corresponding to ( , ) is given by µ(x ) = (x ,x )1/2 (resp.,
µ(x ) = (x ,x )). This restricts to a correspondence between norms and forms that are
positive definite.
(2) If E is non-archimedean, then (as in [9]) a map µ: V → R≥0 is a norm (resp., semi-norm)
if and only if µ satisfies the following (i)–(iii) (resp., (i) and (ii)):
4
(i) µ(a x ) = |a |µ(x ) for all a ∈ E and x ∈ V ,
(ii) µ(x + y ) ≤ max(µ(x ), µ(y )) for all x , y ∈ V , and
(iii) µ(x ) > 0 if x ∈ V \ {0}.
These well-known facts imply that if µ is a norm (resp., semi-norm) on V and V ′ is an E subspace of V , then the restriction of µ to V ′ is a norm (resp., semi-norm) on V ′ .
2.1.4. We say that two norms (resp., semi-norms) µ and µ′ on V are equivalent if µ′ = c µ for
some c ∈ R>0 .
2.1.5. The group GLV (E ) acts on the set of all norms (resp., semi-norms) on V : for g ∈ GLV (E )
and a norm (resp., semi-norm) µ on V , g µ is defined as µ ◦ g −1 . This action preserves the
equivalence in 2.1.4.
2.1.6. Let V ∗ be the dual space of V . Then there is a bijection between the set of norms on V
and the set of norms on V ∗ . That is, for a norm µ on V , the corresponding norm µ∗ on V ∗ is
given by
µ^∗(ϕ) = \sup\left\{ \frac{|ϕ(x)|}{µ(x)} \;\middle|\; x ∈ V \setminus \{0\} \right\}

for ϕ ∈ V^∗.
For a norm µ on V associated to a basis (e i )i of V and (ri )i ∈ Rd>0 , the norm µ∗ on V ∗ is associated to the dual basis (e i∗ )i of V ∗ and (ri−1 )i . This proves the bijectivity.
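As an illustration (not taken from the original text), suppose that E is non-archimedean, d = 2, and µ is the norm given by the basis (e_1, e_2) and (r_1, r_2), so that µ(a_1 e_1 + a_2 e_2) = \max(r_1|a_1|, r_2|a_2|). For ϕ = e_1^∗ one has |ϕ(x)|/µ(x) = |a_1|/\max(r_1|a_1|, r_2|a_2|) ≤ r_1^{-1}, with equality when a_2 = 0, so µ^∗(e_1^∗) = r_1^{-1}, consistent with the description of µ^∗ just given.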
2.1.7. For a norm µ on V and for g ∈ GLV (E ), we have
(µ ◦ g )∗ = µ∗ ◦ (g ∗ )−1 ,
which is to say (g µ)∗ = (g ∗ )−1 µ∗ , where g ∗ ∈ GLV ∗ (E ) is the transpose of g .
2.2 Definitions of the spaces
2.2.1. Let X V denote the set of all equivalence classes of norms on V (as in 2.1.4). We endow
X V with the quotient topology of the subspace topology on the set of all norms on V inside
RV .
2.2.2. In the case that E is archimedean, we have
PGLd (R)/ POd (R) ∼
= SLd (R)/ SOd (R) if E
XV ∼
=
PGL (C)/ PU(d ) ∼
if E
= SLd (C)/ SU(d )
d
∼
=R
∼
= C.
In the case E is non-archimedean, X V is identified with (a geometric realization of) the BruhatTits building associated to PGLV [4] (see also [5, Section 2]).
5
2.2.3. Recall that for a finite-dimensional vector space H 6= 0 over a field I , the following four
objects are in one-to-one correspondence:
(i) a parabolic subgroup of the algebraic group GLH over I ,
(ii) a parabolic subgroup of the algebraic group PGLH over I ,
(iii) a parabolic subgroup of the algebraic group SLH over I , and
(iv) a flag of I -subspaces of H (i.e., a set of subspaces containing {0} and H and totally ordered under inclusion).
The bijections (ii) 7→ (i) and (i) 7→ (iii) are the taking of inverse images. The bijection (i) 7→ (iv)
sends a parabolic subgroup P to the set of all P-stable I -subspaces of H , and the converse
map takes a flag to its isotropy subgroup in GLH .
2.2.4. Let X̄ V be the set of all pairs (P, µ), where P is a parabolic subgroup of the algebraic
group PGLV over E and, if
0 = V−1 ( V0 ( · · · ( Vm = V
denotes the flag corresponding to P (2.2.3), then µ is a family (µi )0≤i ≤m , where µi is an equivalence class of norms on Vi /Vi −1 .
We have an embedding X V ,→ X̄ V which sends µ to (PGLV , µ).
2.2.5. Let X̄ V♭ be the set of all equivalence classes of nonzero semi-norms on the dual space V ∗
of V (2.1.4). We have an embedding X V ,→ X̄ V♭ which sends µ to µ∗ (2.1.6).
This set X̄ V♭ is also identified with the set of pairs (W, µ) with W a nonzero E -subspace of V
and µ an equivalence class of a norm on W . In fact, µ corresponds to an equivalence class µ∗
of a norm on the dual space W ∗ of W (2.1.6), and µ∗ is identified via the projection V ∗ → W ∗
with an equivalence class of semi-norms on V ∗ .
We call the understanding of X̄ V♭ as the set of such pairs (W, µ) the definition of X̄ V♭ in the
second style. In this interpretation of X̄ V♭ , the above embedding X V → X̄ V♭ is written as µ 7→
(V, µ).
2.2.6. In the case that E is non-archimedean, X̄ V is the polyhedral compactification of the
Bruhat-Tits building X V by Gérardin [7] and Landvogt [19] (see also [11, Proposition 19]), and
X̄ V♭ is the compactification of X V by Werner [24, 25]. In the case that E is archimedean, X̄ V is
the maximal Satake compactification, and X̄ V♭ is the minimal Satake compactification for the
standard projective representation of PGLV (E ), as constructed by Satake in [22] (see also [2,
1.4]). The topologies of X̄ V and X̄ V♭ are reviewed in Section 2.3 below.
6
2.2.7. We have a canonical surjection X̄ V → X̄ V♭ which sends (P, µ) to (V0 , µ0 ), where V0 is as
in 2.2.4, and where we use the definition of X̄ V♭ of the second style in 2.2.5. This surjection is
compatible with the inclusion maps from X V to these spaces.
2.2.8. We have the natural actions of PGLV (E ) on X V , X̄ V and X̄ V♭ by 2.1.5. These actions are
compatible with the canonical maps between these spaces.
2.3 Topologies
2.3.1. We define a topology on X̄ V .
Take a basis (e i )i of V . We have a commutative diagram
PGL_V(E) × R^{d-1}_{>0} ⟶ X_V
PGL_V(E) × R^{d-1}_{≥0} ⟶ X̄_V .
Here the upper arrow is (g, t) ↦ gµ, where µ is the class of the norm on V associated to ((e_i)_i, (r_i)_i) with r_i = \prod_{1 ≤ j < i} t_j^{-1}, and where gµ is defined by the action of PGL_V(E) on X_V
(2.2.8). The lower arrow is (g , t ) 7→ g (P, µ), where (P, µ) ∈ X̄ V is defined as follows, and g (P, µ) is
then defined by the action of PGLV (E ) on X̄ V (2.2.8). Let
I = {j | t j = 0} ⊂ {1, . . . , d − 1},
and write
I = {c (i ) | 0 ≤ i ≤ m − 1},
where m = ♯I and 1 ≤ c (0) < · · · < c (m − 1) ≤ d − 1. If we also let c (−1) = 0 and c (m ) = d , then
the set of
c (i )
X
Vi =
Fej
j =1
with −1 ≤ i ≤ m forms a flag in V , and P is defined to be the corresponding parabolic sub-
group of PGL_V (2.2.3). For 0 ≤ i ≤ m, we take µ_i to be the equivalence class of the norm on V_i/V_{i-1} given by the basis (e_j)_{c(i-1) < j ≤ c(i)} and the sequence (r_j)_{c(i-1) < j ≤ c(i)} with r_j = \prod_{c(i-1) < k < j} t_k^{-1}.
Both the upper and the lower horizontal arrows in the diagram are surjective, and the
topology on X V coincides with the topology as a quotient space of PGLV (E ) × Rd>0−1 via the
upper horizontal arrow. The topology on X̄ V is defined as the quotient topology of the topol-
ogy on PGLV (E ) × Rd≥0−1 via the lower horizontal arrow. It is easily seen that this topology is
independent of the choice of the basis (e i )i .
7
2.3.2. The space X̄^♭_V has the following topology: the space of all nonzero semi-norms on V^∗ has a topology as a subspace of the product R^{V^∗}, and X̄^♭_V has a topology as a quotient of it.
2.3.3. Both X̄ V and X̄ V♭ are compact Hausdorff spaces containing X V as a dense open subset.
This is proved in [7, 19, 24, 25] in the case that E is non-archimedean and in [22, 2] in the
archimedean case.
2.3.4. The topology on X̄ V♭ coincides with the image of the topology on X̄ V . In fact, it is easily
seen that the canonical map X̄ V → X̄ V♭ is continuous (using, for instance, [25, Theorem 5.1]).
Since both spaces are compact Hausdorff and this continuous map is surjective, the topology
on X̄ V♭ is the image of that of X̄ V .
3
Spaces associated to global fields
Let F be a global field, which is to say, either a number field or a function field in one variable
over a finite field. We fix a finite-dimensional F -vector space V of dimension d ≥ 1. For a place
v of F , let Vv = Fv ⊗F V . We set X v = X Vv and X v♭ = X V♭ v for brevity. If v is non-archimedean, we
let Ov , k v , qv , ̟v denote the valuation ring of Fv , the residue field of Ov , the order of k v , and a
fixed uniformizer in Ov , respectively.
In this section, we define sets X̄^⋆_{F,v} containing X_v for ⋆ ∈ {♯, , ♭}, which serve as our rational partial compactifications. Here, X̄_{F,v} (resp., X̄^♭_{F,v}) is defined as a subset of X̄_v (resp., X̄^♭_v), and X̄^♯_{F,v} has X̄_{F,v} as a quotient. In §3.2, by way of example, we describe these sets and various topologies on them in the case that d = 2, F = Q, and v is the real place. For ⋆ ≠ ♯, we construct more generally sets X̄^⋆_{F,S} for a nonempty finite set S of places of F. In §3.1, we describe X̄^⋆_{F,S} as a subset of \prod_{v \in S} X̄^⋆_{F,v}.
a subset of v ∈S X̄ F,v .
In §3.3 and §3.4, we define topologies on these sets. That is, in §3.3, we define the “Borel-
Serre topology”, while in §3.4, we define the “Satake topology” on X̄ F,v and, assuming S con♭
tains all archimedean places of F , on X̄ F,S
. In §3.5, we prove results on X̄ F,v . In §3.6, we com♭
pare the following topologies on X̄ F,v (resp., X̄ F,v
): the Borel-Serre topology, the Satake topol-
ogy, and the topology as a subspace of X̄ v (resp., X̄ v♭ ). In §3.7, we describe the relationship
between these spaces and Borel-Serre and reductive Borel-Serre spaces.
3.1 Definitions of the spaces
3.1.1. Let X̄ F,v = X̄ V,F,v be the subset of X̄ v consisting of all elements (P, µ) such that P is F rational. If P comes from a parabolic subgroup P ′ of PGLV over F , we also denote (P, µ) by
(P ′ , µ).
8
♭
3.1.2. Let X̄ F,v
be the subset of X̄ v♭ consisting of all elements (W, µ) such that W is F -rational
(using the definition of X̄ v♭ in the second style in 2.2.5). If W comes from an F -subspace W ′ of
V , we also denote (W, µ) by (W ′ , µ).
♯
3.1.3. Let X̄^♯_{F,v} be the set of all triples (P, µ, s) such that (P, µ) ∈ X̄_{F,v} and s is a splitting

s : \bigoplus_{i=0}^{m} (V_i/V_{i-1})_v \xrightarrow{\ \sim\ } V_v
over Fv of the filtration (Vi )−1≤i ≤m of V corresponding to P.
♯
We have an embedding X v ,→ X̄ F,v that sends µ to (PGLV , µ, s ), where s is the identity map
of Vv .
3.1.4. We have a diagram with a commutative square
♯
X̄ F,v
X̄ F,v
♭
X̄ F,v
X̄ v
X̄ v♭ .
Here, the first arrow in the upper row forgets the splitting s , and the second arrow in the upper
row is (P, µ) 7→ (V0 , µ0 ), as is the lower arrow (2.2.7).
♯
♭
3.1.5. The group PGLV (F ) acts on the sets X̄ F,v , X̄ F,v
and X̄ F,v in the canonical manner.
3.1.6. Now let S be a nonempty finite set of places of F .
Q
• Let X̄ F,S be the subset of v ∈S X̄ F,v consisting of all elements (x v )v ∈S such that the parabolic
subgroup of G = PGLV associated to x v is independent of v .
Q
♭
♭
• Let X̄ F,S
be the subset of v ∈S X̄ F,v
consisting of all elements (x v )v ∈S such that the F subspace of V associated to x v is independent of v .
We will denote an element of X̄ F,S as (P, µ), where P is a parabolic subgroup of G and µ ∈
Q
♭
v ∈S,0≤i ≤m X (Vi /Vi −1 )v with (Vi )i the flag corresponding to P. We will denote an element of X̄ F,S
Q
as (W, µ), where W is a nonzero F -subspace of V and µ ∈ v ∈S X Wv . We have a canonical
surjective map
♭
X̄ F,S → X̄ F,S
which commutes with the inclusion maps from X S to these spaces.
9
3.2 Example: Upper half-plane
3.2.1. Suppose that F = Q, v is the real place, and d = 2.
♯
♭
, and X̄ Q,v are described by using the upper
In this case, the sets X v , X̄ v = X̄ v♭ , X̄ Q,v = X̄ Q,v
half-plane. In §2, we discussed topologies on the first two spaces. The remaining spaces also
♯
have natural topologies, as will be discussed in §3.3 and §3.4: the space X̄ Q,v is endowed with
the Borel-Serre topology, and X̄ Q,v has two topologies, the Borel-Serre topology and Satake
topology, which are both different from its topology as a subspace of X̄ v . In this section, as a
prelude to §3.3 and §3.4, we describe what the Borel-Serre and Satake topologies look like in
this special case.
3.2.2. Let H = {x + y i | x , y ∈ R, y > 0} be the upper half-plane. Fix a basis (e i )i =1,2 of V . For
z ∈ H, let µz denote the class of the norm on V corresponding to the class of the norm on
V ∗ given by a e 1∗ + b e 2∗ 7→ |a z + b | for a ,b ∈ R. Here (e i∗ )1≤i ≤d is the dual basis of (e i )i , and
| | denotes the usual absolute value (not the normalized absolute value) on C. We have a
homeomorphism
∼
H −→ X v ,
z 7→ µz
which is compatible with the actions of SL2 (R).
For the square root i ∈ H of −1, the norm a e 1 +b e 2 7→ (a 2 +b 2)1/2 has class µi . For z = x +y i ,
we have
µ_z = \begin{pmatrix} y & x \\ 0 & 1 \end{pmatrix} µ_i.

The action of \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} ∈ GL_2(R) on X_v corresponds to x + yi ↦ −x + yi on H.
3.2.3. The inclusions
X v ⊂ X̄ Q,v ⊂ X̄ v
can be identified with
H ⊂ H ∪ P1(Q) ⊂ H ∪ P1 (R).
Here z ∈ P1 (R) = R ∪ {∞} corresponds to the class in X̄ v♭ = X̄ v of the semi-norm a e 1∗ + b e 2∗ 7→
|a z +b | (resp., a e 1∗ +b e 2∗ 7→ |a |) on V ∗ if z ∈ R (resp., z = ∞). These identifications are compatible with the actions of PGLV (Q).
The topology on X̄ v of 2.3.1 is the topology as a subspace of P1 (C).
10
3.2.4. Let B be the Borel subgroup of PGLV consisting of all upper triangular matrices for the
basis (e_i)_i, and let 0 = V_{−1} ⊊ V_0 = Qe_1 ⊊ V_1 = V be the corresponding flag. Then ∞ ∈ P¹(Q) is understood as the point (B, µ) of X̄_{Q,v}, where µ is the unique element of X_{(V_0)_v} × X_{(V/V_0)_v}.
Let X̄_{Q,v}(B) = H ∪ {∞} ⊂ X̄_{Q,v} and let X̄^♯_{Q,v}(B) be the inverse image of X̄_{Q,v}(B) in X̄^♯_{Q,v}. Then for the Borel-Serre topology defined in §3.3, we have a homeomorphism
X̄^♯_{Q,v}(B) ≅ {x + yi | x ∈ R, 0 < y ≤ ∞} ⊃ H.
Here x + ∞i corresponds to (B, µ, s), where s is the splitting of the filtration (V_{i,v})_i given by the embedding (V/V_0)_v → V_v that sends the class of e_2 to x e_1 + e_2.
The Borel-Serre topology on X̄^♯_{Q,v} is characterized by the properties that
(i) the action of the discrete group GL_V(Q) on X̄^♯_{Q,v} is continuous,
(ii) the subset X̄^♯_{Q,v}(B) is open, and
(iii) as a subspace, X̄^♯_{Q,v}(B) is homeomorphic to {x + yi | x ∈ R, 0 < y ≤ ∞} as above.
3.2.5. The Borel-Serre and Satake topologies on X̄ Q,v (defined in §3.3 and §3.4) are characterized by the following properties:
(i) The subspace topology on X v ⊂ X̄ Q,v coincides with the topology on H.
(ii) The action of the discrete group GLV (Q) on X̄ Q,v is continuous.
(iii) The following sets (a) (resp., (b)) form a base of neighborhoods of ∞ for the Borel-Serre
(resp., Satake) topology:
(a) the sets U f = {x + y i ∈ H | y ≥ f (x )} ∪ {∞} for continuous f : R → R,
(b) the sets Uc = {x + y i ∈ H | y ≥ c } ∪ {∞} with c ∈ R>0 .
The Borel-Serre topology on X̄_{Q,v} is the image of the Borel-Serre topology on X̄^♯_{Q,v}.
3.2.6. For example, the set {x +y i ∈ H | y > x }∪{∞} is a neighborhood of ∞ for the Borel-Serre
topology, but it is not a neighborhood of ∞ for the Satake topology.
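(To see this: the set contains U_f for the continuous function f(x) = max(x, 0) + 1, but it contains no U_c, since for any c ∈ R_{>0} the point 2c + ci lies in U_c and has y = c ≤ x = 2c.)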
3.2.7. For any subgroup Γ of PGL2 (Z) of finite index, the Borel-Serre and Satake topologies
induce the same topology on the quotient space X (Γ) = Γ\X̄ Q,v . Under this quotient topology,
X (Γ) is compact Hausdorff. If Γ is the image of a congruence subgroup of SL2 (Z), then this is
the usual topology on the modular curve X (Γ).
3.3 Borel-Serre topology
3.3.1. For a parabolic subgroup P of PGL_V, let X̄_{F,v}(P) (resp., X̄^♯_{F,v}(P)) be the subset of X̄_{F,v} (resp., X̄^♯_{F,v}) consisting of all elements (Q, µ) (resp., (Q, µ, s)) such that Q ⊃ P.
The action of PGL_V(F_v) on X̄_v induces an action of P(F_v) on X̄_{F,v}(P). We also have an action of P(F_v) on X̄^♯_{F,v}(P) given by
g(α, s) = (gα, g ∘ s ∘ g^{−1})
for g ∈ P(F_v), α ∈ X̄_{F,v}(P), and s a splitting of the filtration.
3.3.2. Fix a basis (e i )i of V . Let P be a parabolic subgroup of PGLV such that
• if 0 = V_{−1} ⊊ V_0 ⊊ ⋯ ⊊ V_m = V denotes the flag of F-subspaces corresponding to P, then
each Vi is generated by the e j with 1 ≤ j ≤ c (i ), where c (i ) = dim(Vi ).
This condition on P is equivalent to the condition that P contains the Borel subgroup B of
PGLV consisting of all upper triangular matrices with respect to (e i )i . Where useful, we will
identify PGLV over F with PGLd over F via the basis (e i )i .
Let
∆(P) = {dim(V_j) | 0 ≤ j ≤ m − 1} ⊂ {1, …, d − 1},
and let ∆′(P) be the complement of ∆(P) in {1, …, d − 1}. Let R^{d−1}_{≥0}(P) be the open subset of R^{d−1}_{≥0} given by
R^{d−1}_{≥0}(P) = {(t_i)_{1≤i≤d−1} ∈ R^{d−1}_{≥0} | t_i > 0 for all i ∈ ∆′(P)}.
In particular, we have
R^{d−1}_{≥0}(P) ≅ R^{∆′(P)}_{>0} × R^{∆(P)}_{≥0}.
3.3.3. With P as in 3.3.2, the map PGL_V(F_v) × R^{d−1}_{≥0} → X̄_v in 2.3.1 induces a map
π̄_{P,v}: P(F_v) × R^{d−1}_{≥0}(P) → X̄_{F,v}(P),
which restricts to a map
π_{P,v}: P(F_v) × R^{d−1}_{>0} → X_{F,v}.
The map π̄_{P,v} is induced by a map
π̄^♯_{P,v}: P(F_v) × R^{d−1}_{≥0}(P) → X̄^♯_{F,v}(P)
defined as (g, t) ↦ g(P, µ, s), where (P, µ) is as in 2.3.1 and s is the splitting of the filtration (V_i)_{−1≤i≤m} defined by the basis (e_i)_i. For this splitting s, we set
V^{(i)} = s(V_i/V_{i−1}) = ∑_{c(i−1)<j≤c(i)} F e_j
for 0 ≤ i ≤ m, so that V_i = V_{i−1} ⊕ V^{(i)} and V = ⨁_{i=0}^m V^{(i)}. If P = B, then we will often omit the subscript B from our notation for these maps.
3.3.4. We review the Iwasawa decomposition. For v archimedean (resp., non-archimedean), let A_v ≤ PGL_d(F_v) be the subgroup of elements that lift to diagonal matrices in GL_d(F_v) with positive real entries (resp., with entries that are powers of ϖ_v). Let K_v denote the standard maximal compact subgroup of PGL_d(F_v) given by
K_v = PO_d(R) if v is real,   K_v = PU_d if v is complex,   K_v = PGL_d(O_v) otherwise.
Let B u denote the upper-triangular unipotent matrices in the standard Borel B . The Iwasawa
decomposition is given by the equality
PGLd (Fv ) = B u (Fv )A v K v .
3.3.5. If v is archimedean, then the expression of a matrix in PGLd (Fv ) as a product in the
Iwasawa decomposition is unique.
3.3.6. If v is non-archimedean, then the Bruhat decomposition is PGLd (k v ) = B (k v )S d B (k v ),
where the symmetric group S d of degree d is viewed as a subgroup of PGLd over any field via
the permutation representation on the standard basis. This implies that PGLd (Ov ) = B (Ov )S d Iw(Ov ),
where Iw(O_v) is the Iwahori subgroup consisting of those matrices in PGL_d(O_v) with upper triangular image in PGL_d(k_v). Combining this with the Iwasawa decomposition (in the notation of 3.3.4),
we have
PGLd (Fv ) = B u (Fv )A v S d Iw(Ov ).
This decomposition is not unique, since B u (Fv ) ∩ Iw(Ov ) = B u (Ov ).
3.3.7. If v is archimedean, then there is a bijection R^{d−1}_{>0} ≅ A_v given by
t = (t_k)_{1≤k≤d−1} ↦ a = diag(r_1, …, r_d)^{−1} if v is real,   a = diag(r_1^{1/2}, …, r_d^{1/2})^{−1} if v is complex,
where r_i = ∏_{k=1}^{i−1} t_k^{−1} as in 2.3.1.
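For instance, taking d = 3 and v real, the tuple t = (t_1, t_2) ∈ R²_{>0} gives r = (1, t_1^{−1}, (t_1t_2)^{−1}) and hence a = diag(1, t_1, t_1t_2) ∈ A_v.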
Proposition 3.3.8.
(1) Let P be a parabolic subgroup of PGL_V as in 3.3.2. Then the maps
π̄_{P,v}: P(F_v) × R^{d−1}_{≥0}(P) → X̄_{F,v}(P)   and   π̄^♯_{P,v}: P(F_v) × R^{d−1}_{≥0}(P) → X̄^♯_{F,v}(P)
of 3.3.3 are surjective.
(2) For the Borel subgroup B of 3.3.2, the maps
π_v: B_u(F_v) × R^{d−1}_{>0} → X_v,   π̄_v: B_u(F_v) × R^{d−1}_{≥0} → X̄_{F,v}(B),   and   π̄^♯_v: B_u(F_v) × R^{d−1}_{≥0} → X̄^♯_{F,v}(B)
of 3.3.3 are all surjective.
(3) If v is archimedean, then π_v and π̄^♯_v are bijective.
(4) If v is non-archimedean, then π̄_v induces a bijection
(B_u(F_v) × R^{d−1}_{≥0})/∼ → X̄_{F,v}(B),
where (g, (t_i)_i) ∼ (g′, (t′_i)_i) if and only if
(i) t_i = t′_i for all i, and
(ii) |(g^{−1}g′)_{ij}| ≤ (∏_{i≤k<j} t_k)^{−1} for all 1 ≤ i < j ≤ d, considering any c ∈ R to be less than 0^{−1} = ∞.
Proof. If π̄^♯_v is surjective, then for any parabolic P containing B, the restriction of π̄^♯_v to B_u(F_v) × R^{d−1}_{≥0}(P) has image X̄^♯_{F,v}(P). Since B_u(F_v) ⊂ P(F_v), this forces the surjectivity of π̄^♯_{P,v}, hence of π̄_{P,v} as well. So, we turn our attention to (2)–(4). If r ∈ R^d_{>0}, we let µ^{(r)} ∈ X_v denote the class of the norm attached to the basis (e_i)_i and r.
Suppose first that v is archimedean. By the Iwasawa decomposition 3.3.4, and noting 3.3.5 and 2.2.2, we see that B_u(F_v)A_v → X_v given by g ↦ gµ^{(1)} is bijective, where µ^{(1)} denotes the class of the norm attached to (e_i)_i and 1 = (1, …, 1) ∈ R^d_{>0}. For t ∈ R^{d−1}_{>0}, let a ∈ A_v be its image under the bijection in 3.3.7. Since p a µ^{(1)} = p µ^{(r)} for p ∈ B_u(F_v) and r as in 2.3.1, we have the bijectivity of π_v.
Consider the third map in (2). For t ∈ R^{d−1}_{≥0}, let P be the parabolic that contains B and is determined by the set ∆(P) of k ∈ {1, …, d−1} for which t_k = 0. Let (V_i)_{−1≤i≤m} be the corresponding flag. Let M denote the Levi subgroup of P. (It is the quotient of ∏_{i=0}^m GL_{V^{(i)}} < GL_V by scalars, where V = ⨁_{i=0}^m V^{(i)} as in 3.3.3, and M ∩ B_u is isomorphic to the product of the upper-triangular unipotent matrices in each PGL_{V^{(i)}}.) The product of the first maps in (2) for the blocks of M is a bijection
(M ∩ B_u)(F_v) × R^{∆′(P)}_{>0} ≅ ∏_{i=0}^m X_{V^{(i)}_v} ⊂ X̄_{F,v}(P),
such that (g, t′) is sent to (P, gµ) in X̄_{F,v}, where µ is the sequence of classes of norms determined by t′ and the standard basis. The stabilizer of µ in B_u is the unipotent radical P_u of P, and this P_u acts simply transitively on the set of splittings for the graded quotients (V_i/V_{i−1})_v. Since B_u = (M ∩ B_u)P_u, and this decomposition is unique, we have the desired bijectivity of π̄^♯_v, proving (3).
Suppose next that v is non-archimedean. We prove the surjectivity of the first map in (2).
Using the natural actions of A_v and the symmetric group S_d on R^d_{>0}, we see that any norm on V_v can be written as gµ^{(r)}, where g ∈ PGL_d(F_v) and r = (r_i)_i ∈ R^d_{>0}, with r satisfying
r_1 ≤ r_2 ≤ ⋯ ≤ r_d ≤ q_v r_1.
For such an r, the class µ^{(r)} is invariant under the action of Iw(O_v). Hence for such an r, any element of S_d Iw(O_v)µ^{(r)} = S_d µ^{(r)} is of the form µ^{(r′)}, where r′ = (r_{σ(i)})_i for some σ ∈ S_d. Hence, any element of A_v S_d Iw(O_v)µ^{(r)} for such an r is of the form µ^{(r′)} for some r′ = (r′_i)_i ∈ R^d_{>0}. This proves the surjectivity of the first map of (2). The surjectivity of the other maps in (2) is then shown using this, similarly to the archimedean case.
Finally, we prove (4). It is easy to see that the map π̄_v factors through the quotient by the equivalence relation. We can deduce the bijectivity in question from the bijectivity of (B_u(F_v) × R^{d−1}_{>0})/∼ → X_v, replacing V by V_i/V_{i−1} as in the above arguments for the archimedean case. Suppose that π_v(g, t) = π_v(1, t′) for g ∈ B_u(F_v) and t, t′ ∈ R^{d−1}_{>0}. We must show that (g, t) ∼ (1, t′). Write π_v(g, t) = gµ^{(r)} and π_v(1, t′) = µ^{(r′)} with r = (r_i)_i and r′ = (r′_i)_i ∈ R^d_{>0} such that r_1 = 1 and r_j/r_i = (∏_{i≤k<j} t_k)^{−1} for all 1 ≤ i < j ≤ d, and similarly for r′ and t′. It then suffices to check that r′ = r and r_i|g_{ij}| ≤ r_j for all i < j. Since µ^{(r)} = g^{−1}µ^{(r′)}, there exists c ∈ R_{>0} such that
max{r_i|x_i| | 1 ≤ i ≤ d} = c · max{r′_i|(gx)_i| | 1 ≤ i ≤ d}
for all x = (x_i)_i ∈ F_v^d. Taking x = e_1, we have gx = e_1 as well, so c = 1. Taking x = e_i, we obtain r_i ≥ r′_i, and taking x = g^{−1}e_i, we obtain r_i ≤ r′_i. Thus r = r′, and taking x = e_j yields r_j = max{r_i|g_{ij}| | 1 ≤ i ≤ j}, which tells us that r_j ≥ r_i|g_{ij}| for i < j.
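For instance, when d = 2 and g = ( 1 x ; 0 1 ) with x ∈ F_v, condition (ii) says that (g, t) ∼ (1, t) if and only if |x| ≤ t_1^{−1}; this reflects the fact that the class of the norm a e_1 + b e_2 ↦ max(|a|, t_1^{−1}|b|) is fixed by exactly those g.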
Proposition 3.3.9. There is a unique topology on X̄_{F,v} (resp., X̄^♯_{F,v}) satisfying the following conditions (i) and (ii).
(i) For every parabolic subgroup P of PGL_V, the set X̄_{F,v}(P) (resp., X̄^♯_{F,v}(P)) is open in X̄_{F,v} (resp., X̄^♯_{F,v}).
(ii) For every parabolic subgroup P of PGL_V and basis (e_i)_i of V such that P contains the Borel subgroup with respect to (e_i)_i, the topology on X̄_{F,v}(P) (resp., X̄^♯_{F,v}(P)) is the topology as a quotient of P(F_v) × R^{d−1}_{≥0}(P) under the surjection of 3.3.8(1).
This topology is also characterized by (i) and the following (ii)′.
(ii)′ If B is a Borel subgroup of PGL_V consisting of upper triangular matrices with respect to a basis (e_i)_i of V, then the topology on X̄_{F,v}(B) (resp., X̄^♯_{F,v}(B)) is the topology as a quotient of B_u(F_v) × R^{d−1}_{≥0} under the surjection of 3.3.8(2).
Proof. The uniqueness is clear if we have existence of a topology satisfying (i) and (ii). Let
(e i )i be a basis of V , let B be the Borel subgroup of PGLV with respect to this basis, and let
P be a parabolic subgroup of PGLV containing B . It suffices to prove that for the topology
on X̄_{F,v}(B) (resp., X̄^♯_{F,v}(B)) as a quotient of B_u(F_v) × R^{d−1}_{≥0}(B), the subspace topology on X̄_{F,v}(P) (resp., X̄^♯_{F,v}(P)) coincides with the quotient topology from P(F_v) × R^{d−1}_{≥0}(P). For this, it is enough to show that the action of the topological group P(F_v) on X̄_{F,v}(P) (resp., X̄^♯_{F,v}(P)) is continuous with respect to the topology on X̄_{F,v}(P) (resp., X̄^♯_{F,v}(P)) as a quotient of B_u(F_v) × R^{d−1}_{≥0}(P). We
must demonstrate this continuity.
Let (V_i)_{−1≤i≤m} be the flag corresponding to P, and let c(i) = dim(V_i). For 0 ≤ i ≤ m, we regard GL_{V^{(i)}} as a subgroup of GL_V via the decomposition V = ⨁_{i=0}^m V^{(i)} of 3.3.3.
Suppose first that v is archimedean. For 0 ≤ i ≤ m, let K_i be the compact subgroup of GL_{V^{(i)}}(F_v) that is the isotropy group of the norm on V^{(i)} given by the basis (e_j)_{c(i−1)<j≤c(i)} and (1, …, 1) ∈ ∏_{c(i−1)<j≤c(i)} R_{>0}. We identify R^{d−1}_{>0} with A_v as in 3.3.7. By the Iwasawa decomposition 3.3.4 and its uniqueness in 3.3.5, the product on P(F_v) induces a homeomorphism
(a, b, c): P(F_v) ≅ B_u(F_v) × R^{d−1}_{>0} × (∏_{i=0}^m K_i)/{z ∈ F_v^× | |z| = 1}.
We also have a product map φ: P(F_v) × B_u(F_v) × R^{∆′(P)}_{>0} → P(F_v), where we identify t′ ∈ R^{∆′(P)}_{>0} with the diagonal matrix diag(r_1, …, r_d)^{−1} if v is real and diag(r_1^{1/2}, …, r_d^{1/2})^{−1} if v is complex, with r_j^{−1} = ∏_{c(i−1)<k<j} t′_k for c(i−1) < j ≤ c(i) as in 2.3.1. These maps fit in a commutative diagram

  P(F_v) × R^{∆(P)}_{≥0}  ←(φ, id)−  P(F_v) × B_u(F_v) × R^{∆′(P)}_{>0} × R^{∆(P)}_{≥0}  −(id, π̄_{P,v})→  P(F_v) × X̄_{F,v}(P)
            ↓                                                                                                        ↓
  B_u(F_v) × R^{d−1}_{≥0}(P)  −−−−−−−−−−−−−−−−−−−− π̄_{P,v} −−−−−−−−−−−−−−−−−−−−→  X̄_{F,v}(P)

in which the right vertical arrow is the action of P(F_v) on X̄_{F,v}(P), and the left vertical arrow is the continuous map
(u, t) ↦ (a(u), b(u)·(1, t)),   (u, t) ∈ P(F_v) × R^{∆(P)}_{≥0},
for (1, t) the element of R^{d−1}_{≥0}(P) with R^{∆′(P)}_{>0}-component 1 and R^{∆(P)}_{≥0}-component t. (To see the commutativity, note that c(u) commutes with the block-scalar matrix determined by (1, t).)
We also have a commutative diagram of the same form for X̄^♯_{F,v}. Since the surjective horizontal arrows are quotient maps, we have the continuity of the action of P(F_v).
Next, we consider the case that v is non-archimedean. For 0 ≤ i ≤ m , let S (i ) be the group
of permutations of the set
I i = {j ∈ Z | c (i − 1) < j ≤ c (i )},
and regard it as a subgroup of GLV (i ) (F ). Let A v be the subgroup of the diagonal torus of
PGLV (Fv ) with respect to the basis (e i )i with entries powers of a fixed uniformizer, as in 3.3.4.
Consider the action of A_v ∏_{i=0}^m S^{(i)} ⊂ P(F_v) on R^{d−1}_{≥0}(P) that is compatible with the action of P(F_v) on X̄_{F,v}(P) via the embedding R^{d−1}_{≥0}(P) → X̄_{F,v}(P). This action is described as follows. Any matrix a = diag(a_1, …, a_d) ∈ A_v sends t ∈ R^{d−1}_{≥0}(P) to (t_j|a_{j+1}||a_j|^{−1})_j ∈ R^{d−1}_{≥0}(P). The action of ∏_{i=0}^m S^{(i)} on R^{d−1}_{≥0}(P) is the unique continuous action which is compatible with the evident action of ∏_{i=0}^m S^{(i)} on R^d_{>0} via the map R^d_{>0} → R^{d−1}_{≥0}(P) that sends (r_i)_i to (t_j)_j, where t_j = r_j/r_{j+1}. That is, for
σ = (σ_i)_{0≤i≤m} ∈ ∏_{i=0}^m S^{(i)},
let f ∈ S_d be the unique permutation with f|_{I_i} = σ_i^{−1} for all i. Then σ sends t ∈ R^{d−1}_{≥0}(P) to the element t′ = (t′_j)_j given by
t′_j = ∏_{f(j)≤k<f(j+1)} t_k   if f(j) < f(j+1),   and   t′_j = ∏_{f(j+1)≤k<f(j)} t_k^{−1}   if f(j+1) < f(j).
Let C be the compact subset of R^{d−1}_{≥0}(P) given by
C = {t = (t_j)_j ∈ R^{d−1}_{≥0}(P) ∩ [0, 1]^{d−1} | ∏_{c(i−1)<j<c(i)} t_j ≥ q_v^{−1} for all 0 ≤ i ≤ m}.
We claim that for each x ∈ R^{d−1}_{≥0}(P), there is a finite family (h_k)_k of elements of A_v ∏_{i=0}^m S^{(i)} such that the union ⋃_k h_k C is a neighborhood of x. This is quickly reduced to the following claim.
Claim. Consider the natural action of H = A_v S_d ⊂ PGL_V on the quotient space R^d_{>0}/R_{>0}, with the class of (a_j)_j in A_v acting as multiplication by (|a_j|)_j. Let C be the image of
{r ∈ R^d_{>0} | r_1 ≤ r_2 ≤ ⋯ ≤ r_d ≤ q_v r_1}
in R^d_{>0}/R_{>0}. Then for each x ∈ R^d_{>0}/R_{>0}, there is a finite family (h_k)_k of elements of H such that ⋃_k h_k C is a neighborhood of x.
Proof of Claim. This is a well-known statement in the theory of Bruhat-Tits buildings: the
quotient Rd>0 /R>0 is called the apartment of the Bruhat-Tits building X v of PGLV , and the set
C is a (d − 1)-simplex in this apartment. Any (d − 1)-simplex in this apartment has the form
hC for some h ∈ H , for any x ∈ Rd>0 /R>0 there are only finitely many (d − 1)-simplices in this
apartment which contain x , and the union of these is a neighborhood of x in Rd>0 /R>0 .
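As an illustration in the case d = 2 (with the identifications sketched here): the apartment R²_{>0}/R_{>0} is identified with R via the class of r ↦ log_{q_v}(r_2/r_1), the set C becomes the unit interval [0, 1], and H acts through integer translations (from A_v) and the reflection t ↦ −t (from S_2). The 1-simplices are then the intervals [n, n+1] with n ∈ Z; each point lies in one or two of them, and their union is a neighborhood of the point.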
By compactness of C, the topology on the neighborhood ⋃_k h_k C of x is the quotient topology from ∐_k h_k C. Thus, it is enough to show that for each h ∈ A_v ∏_{i=0}^m S^{(i)}, the composition
P(F_v) × B_u(F_v) × hC −(id, π̄_{P,v})→ P(F_v) × X̄_{F,v}(P) → X̄_{F,v}(P)
(where the second map is the action) and its analogue for X̄^♯_{F,v} are continuous.
For 0 ≤ i ≤ m , let Iwi be the Iwahori subgroup of GLV (i ) (Fv ) for the basis (e j )c (i −1)<j ≤c (i ).
By the Iwasawa and Bruhat decompositions as in 3.3.6, the product on P(F_v) induces a continuous surjection
B_u(F_v) × A_v ∏_{i=0}^m S^{(i)} × ∏_{i=0}^m Iw_i → P(F_v),
and it admits continuous sections locally on P(F_v). (Here, the middle group A_v ∏_{i=0}^m S^{(i)} has the discrete topology.) Therefore, there exist an open covering (U_λ)_λ of P(F_v) and, for each λ, a subset of the above product mapping homeomorphically to U_λ, together with a continuous map
(a_λ, b_λ, c_λ): U_λ → B_u(F_v) × A_v ∏_{i=0}^m S^{(i)} × ∏_{i=0}^m Iw_i
such that its composition with the above product map is the identity map of U_λ. Let U′_λ denote the inverse image of U_λ under
P(F_v) × B_u(F_v) → P(F_v),   (g, g′) ↦ gg′h,
so that (U′_λ)_λ is an open covering of P(F_v) × B_u(F_v). For any γ in the indexing set of the cover, let U′_{λ,γ} be the inverse image of U′_λ in U_γ × B_u(F_v). Then the images of the U′_{λ,γ} form an open cover of P(F_v) × B_u(F_v) as well. Let (a′_{λ,γ}, b′_{λ,γ}) be the composition
U′_{λ,γ} → U_λ −(a_λ, b_λ)→ B_u(F_v) × A_v ∏_{i=0}^m S^{(i)}.
As ∏_{i=0}^m Iw_i fixes every element of C under its embedding in X̄_{F,v}(P), we have a commutative
diagram

  U′_{λ,γ} × hC  ⟶  P(F_v) × B_u(F_v) × hC
        ↓                              ↓
  B_u(F_v) × R^{d−1}_{≥0}(P)  ⟶  X̄_{F,v}(P)

in which the left vertical arrow is
(u, hx) ↦ (a′_{λ,γ}(u), b′_{λ,γ}(u)x)
for x ∈ C. We also have a commutative diagram of the same form for X̄^♯_{F,v}. This proves the continuity of the action of P(F_v).
3.3.10. We call the topology on X̄_{F,v} (resp., X̄^♯_{F,v}) in 3.3.9 the Borel-Serre topology. The Borel-Serre topology on X̄_{F,v} coincides with the quotient topology of the Borel-Serre topology on X̄^♯_{F,v}. This topology on X̄_{F,v} is finer than the subspace topology from X̄_v.
We define the Borel-Serre topology on X̄^♭_{F,v} as the quotient topology of the Borel-Serre topology of X̄_{F,v}. This topology on X̄^♭_{F,v} is finer than the subspace topology from X̄^♭_v.
For a nonempty finite set S of places of F, we define the Borel-Serre topology on X̄_{F,S} (resp., X̄^♭_{F,S}) as the subspace topology for the product topology on ∏_{v∈S} X̄_{F,v} (resp., ∏_{v∈S} X̄^♭_{F,v}) for the Borel-Serre topology on each X̄_{F,v} (resp., X̄^♭_{F,v}).
3.4 Satake topology
3.4.1. For a nonempty finite set of places S of F, we define the Satake topology on X̄_{F,S} and, under the assumption that S contains all archimedean places, on X̄^♭_{F,S}.
The Satake topology is coarser than the Borel-Serre topology of 3.3.10. On the other hand,
the Satake topology and the Borel-Serre topology induce the same topology on the quotient
space by an arithmetic group (4.1.8). Thus, the Hausdorff compactness of this quotient space
can be formulated without using the Satake topology (i.e., using only the Borel-Serre topology). However, arguments involving the Satake topology appear naturally in the proof of this
property. One nice aspect of the Satake topology is that each point has an explicit base of
neighborhoods (3.2.5, 3.4.9, 4.4.9).
3.4.2. Let H be a finite-dimensional vector space over a local field E. Let H′ and H″ be E-subspaces of H such that H′ ⊃ H″. Then a norm µ on H induces a norm ν on H′/H″ as
follows. Let µ′ be the restriction of µ to H ′ . Let (µ′ )∗ be the norm on (H ′ )∗ dual to µ′ . Let ν ∗ be
the restriction of (µ′ )∗ to the subspace (H ′ /H ′′ )∗ of (H ′ )∗ . Let ν be the dual of ν ∗ . This norm ν is
given on x ∈ H ′ /H ′′ by
ν(x ) = inf {µ(x̃ ) | x̃ ∈ H ′ such that x̃ + H ′′ = x }.
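For example, if E is non-archimedean, H = E², µ(a e_1 + b e_2) = max(|a|, |b|), H′ = H, and H″ = E e_1, then the induced norm satisfies ν(b e_2 + H″) = inf_a max(|a|, |b|) = |b|.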
3.4.3. For a parabolic subgroup P of PGLV , let (Vi )−1≤i ≤m be the corresponding flag. Set
X̄ F,S (P) = {(P ′ , µ) ∈ X̄ F,S | P ′ ⊃ P}.
For a place v of F , let us set
Z_{F,v}(P) = ∏_{i=0}^m X_{(V_i/V_{i−1})_v}   and   Z_{F,S}(P) = ∏_{v∈S} Z_{F,v}(P).
We let P(Fv ) act on ZF,v (P) through P(Fv )/Pu (Fv ), using the PGL(Vi /Vi −1 )v (Fv )-action on X (Vi /Vi −1)v
for 0 ≤ i ≤ m . We define a P(Fv )-equivariant map
φP,v : X̄ F,v (P) → ZF,v (P)
with the product of these over v ∈ S giving rise to a map φP,S : X̄ F,S (P) → ZF,S (P).
Let (P′, µ) ∈ X̄_{F,v}(P). Then the spaces in the flag 0 = V′_{−1} ⊊ V′_0 ⊊ ⋯ ⊊ V′_{m′} = V corresponding to P′ form a subset of {V_i | −1 ≤ i ≤ m}. The image ν = (ν_i)_{0≤i≤m} of (P′, µ) under φ_{P,v} is as follows: there is a unique j with 0 ≤ j ≤ m′ such that
V′_j ⊃ V_i ⊋ V_{i−1} ⊃ V′_{j−1},
and νi is the norm induced from µ j on the subquotient (Vi /Vi −1 )v of (Vj ′ /Vj ′−1 )v , in the sense
of 3.4.2. The P(Fv )-equivariance of φP,v is easily seen using the actions on norms of 2.1.5 and
2.1.7.
Though the following map is not used in this subsection, we introduce it here by way of comparison between X̄_{F,S} and X̄^♭_{F,S}.
3.4.4. Let W be a nonzero F -subspace of V , and set
X̄^♭_{F,S}(W) = {(W′, µ) ∈ X̄^♭_{F,S} | W′ ⊃ W}.
For a place v of F, we have a map
φ^♭_{W,v}: X̄^♭_{F,v}(W) → X_{W_v}
which sends (W′, µ) ∈ X̄^♭_{F,v}(W) to the restriction of µ to W_v. The map φ^♭_{W,v} is P(F_v)-equivariant, for P the parabolic subgroup of PGL_V consisting of all elements that preserve W. Setting Z^♭_{F,S}(W) = ∏_{v∈S} X_{W_v}, the product of these maps over v ∈ S provides a map φ^♭_{W,S}: X̄^♭_{F,S}(W) → Z^♭_{F,S}(W).
3.4.5. For a finite-dimensional vector space H over a local field E , a basis e = (e i )1≤i ≤d of H ,
and a norm µ on H , we define the absolute value |µ : e | ∈ R>0 of µ relative to e as follows.
Suppose that µ is defined by a basis e ′ = (e i′ )1≤i ≤d and a tuple (ri )1≤i ≤d ∈ Rd>0 . Let h ∈ GLH (E )
be the element such that e ′ = he . We then define
|µ : e| = |det(h)|^{−1} ∏_{i=1}^d r_i.
This is independent of the choice of e ′ and (ri )i . Note that we have
|g µ : e | = | det(g )|−1 |µ : e |
for all g ∈ GLH (E ).
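For instance, if µ is defined by the basis e itself and the tuple (r_i)_i, then h = 1 and |µ : e| = ∏_{i=1}^d r_i; if instead µ is defined by the basis e′ = (c e_1, e_2, …, e_d) for some c ∈ E^× and the same tuple, then h = diag(c, 1, …, 1) and |µ : e| = |c|^{−1} ∏_{i=1}^d r_i.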
3.4.6. Let P and (Vi )i be as in 3.4.3, and let v be a place of F . Fix a basis e (i ) of (Vi /Vi −1 )v for
each 0 ≤ i ≤ m . Then we have a map
φ′_{P,v}: X̄_{F,v}(P) → R^m_{≥0},   (P′, µ) ↦ (t_i)_{1≤i≤m},
where (t i )1≤i ≤m is defined as follows. Let (Vj ′ )−1≤j ≤m ′ be the flag associated to P ′ . Let 1 ≤ i ≤
m . If Vi −1 belongs to (Vj ′ ) j , let t i = 0. If Vi −1 does not belong to the last flag, then there is a
unique j such that Vj ′ ⊃ Vi ⊃ Vi −2 ⊃ Vj ′−1 . Let µ̃ j be a norm on (Vj ′ /Vj ′−1 )v which belongs to the
class µ j , and let µ̃ j ,i and µ̃ j ,i −1 be the norms induced by µ j on the subquotients (Vi /Vi −1 )v and
(Vi −1 /Vi −2 )v , respectively. We then let
t i = |µ̃ j ,i −1 : e (i −1) |1/d i −1 · |µ̃ j ,i : e (i ) |−1/d i ,
where d i := dim(Vi /Vi −1 ).
The map φ′_{P,v} is P(F_v)-equivariant for the following action of P(F_v) on R^m_{≥0}. For g ∈ P(F_v), let g̃ ∈ GL_V(F_v) be a lift of g, and for 0 ≤ i ≤ m, let g_i ∈ GL_{V_i/V_{i−1}}(F_v) be the element induced by g̃. Then g ∈ P(F_v) sends t ∈ R^m_{≥0} to t′ ∈ R^m_{≥0}, where
t′_i = |det(g_i)|^{1/d_i} · |det(g_{i−1})|^{−1/d_{i−1}} · t_i.
If we have two families e = (e^{(i)})_i and f = (f^{(i)})_i of bases e^{(i)} and f^{(i)} of (V_i/V_{i−1})_v, and if the map φ′_{P,v} defined by e (resp., f) sends an element to t (resp., t′), then the same formula also describes the relationship between t and t′, in this case taking g_i to be the element of GL_{V_i/V_{i−1}} such that e^{(i)} = g_i f^{(i)}.
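For example, if g is the image of a block-scalar matrix acting as the scalar b_i ∈ F_v^× on V_i/V_{i−1} for each i, then det(g_i) = b_i^{d_i} and the action sends t to t′ with t′_i = |b_i| |b_{i−1}|^{−1} t_i.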
3.4.7. Fix a basis e (i ) of Vi /Vi −1 for each 0 ≤ i ≤ m . Then we have a map
φ′_{P,S}: X̄_{F,S}(P) → R^m_{≥0},   (P′, µ) ↦ (t_i)_{1≤i≤m},
where t_i = ∏_{v∈S} t_{v,i}, with (t_{v,i})_i the image of (P′, µ_v) under the map φ′_{P,v} of 3.4.6.
3.4.8. We define the Satake topology on X̄ F,S as follows.
For a parabolic subgroup P of PGLV , consider the map
ψ_{P,S} := (φ_{P,S}, φ′_{P,S}): X̄_{F,S}(P) → Z_{F,S}(P) × R^m_{≥0}
from 3.4.3 and 3.4.7, which we recall depends on a choice of bases of the V_i/V_{i−1}. We say that a subset of X̄_{F,S}(P) is P-open if it is the inverse image of an open subset of Z_{F,S}(P) × R^m_{≥0}. By 3.4.6,
the property of being P-open is independent of the choice of bases.
We define the Satake topology on X̄ F,S to be the coarsest topology for which every P-open
set for each parabolic subgroup P of PGLV is open.
By this definition, we have:
3.4.9. Let a ∈ X̄ F,S be of the form (P, µ) for some µ. As U ranges over neighborhoods of the
image (µ, 0) of a in Z_{F,S}(P) × R^m_{≥0}, the inverse images of the U in X̄_{F,S}(P) under ψ_{P,S} form a base
of neighborhoods of a in X̄ F,S .
3.4.10. In §3.5 and §3.6, we explain that the Satake topology on X̄ F,S is strictly coarser than the
Borel-Serre topology for d ≥ 2.
3.4.11. The Satake topology on X̄ F,S can differ from the subspace topology of the product
topology for the Satake topology on each X̄ F,v with v ∈ S.
Example. Let F be a real quadratic field, let V = F 2 , and let S = {v 1 , v 2 } be the set of real places
of F . Consider the point (∞, ∞) ∈ (H ∪ {∞}) × (H ∪ {∞}) ⊂ X̄ v 1 × X̄ v 2 (see §3.2), which we regard
as an element of X̄ F,S . Then the sets
Uc := {(x 1 + y 1 i ,x 2 + y 2 i ) ∈ H × H | y 1 y 2 ≥ c } ∪ {(∞, ∞)}
with c ∈ R>0 form a base of neighborhoods of (∞, ∞) in X̄ F,S for the Satake topology, whereas
the sets
Uc′ := {(x 1 + y 1 i ,x 2 + y 2 i ) ∈ H × H | y 1 ≥ c , y 2 ≥ c } ∪ {(∞, ∞)}
for c ∈ R>0 form a base of neighborhoods of (∞, ∞) in X̄ F,S for the topology induced by the
product of Satake topologies on X̄ F,v 1 and X̄ F,v 2 .
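(Indeed, for every c ∈ R_{>0} one has U′_{c^{1/2}} ⊂ U_c, so the product topology is at least as fine at (∞, ∞); conversely, no U_{c′} is contained in any U′_c, since U_{c′} contains points with y_2 arbitrarily small. Hence the product topology is strictly finer at (∞, ∞).)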
3.4.12. Let G = PGLV , and let Γ be a subgroup of G (F ).
• For a parabolic subgroup P of G , let Γ(P) be the subgroup of Γ ∩ P(F ) consisting of all
elements with image in the center of (P/Pu )(F ).
• For a nonzero F -subspace W of V , let Γ(W ) denote the subgroup of elements of Γ that
can be lifted to elements of GLV (F ) which fix every element of W .
3.4.13. We let A_F denote the adeles of F, let A^S_F denote the adeles of F outside of S, and let A_{F,S} = ∏_{v∈S} F_v, so that A_F = A^S_F × A_{F,S}. Assume that S contains all archimedean places of F. Let G = PGL_V, let K be a compact open subgroup of G(A^S_F), and let Γ_K < G(F) be the inverse image of K under G(F) → G(A^S_F).
The following proposition will be proved in 3.5.15.
Proposition 3.4.14. For S, G , K and ΓK as in 3.4.13, the Satake topology on X̄ F,S is the coarsest
topology such that for every parabolic subgroup P of G , a subset U of X̄ F,S (P) is open if
(i) it is open for Borel-Serre topology, and
(ii) it is stable under the action of ΓK ,(P) (see 3.4.12).
The following proposition follows easily from the fact that for any two compact open subgroups K and K ′ of G (ASF ), the intersection ΓK ∩ ΓK ′ is of finite index in both ΓK and ΓK ′ .
Proposition 3.4.15. For S, K and Γ_K as in 3.4.13, consider the coarsest topology on X̄^♭_{F,S} such that for every nonzero F-subspace W, a subset U of X̄^♭_{F,S}(W) is open if
(i) it is open for the Borel-Serre topology, and
(ii) it is stable under the action of Γ_{K,(W)} (see 3.4.12).
Then this topology is independent of the choice of K.
3.4.16. We call the topology in 3.4.15 the Satake topology on X̄^♭_{F,S}.
Proposition 3.4.17.
(1) Let P be a parabolic subgroup of PGLV . For both the Borel-Serre and Satake topologies
on X̄ F,S , the set X̄ F,S (P) is open in X̄ F,S , and the action of the topological group P(AF,S ) on
X̄ F,S (P) is continuous.
(2) The actions of the discrete group PGL_V(F) on the following spaces are continuous: X̄_{F,S} and X̄^♭_{F,S} with their Borel-Serre topologies, X̄_{F,S} with the Satake topology, and, assuming S contains all archimedean places, X̄^♭_{F,S} with the Satake topology.
Proposition 3.4.18. Let W be a nonzero F-subspace of V. Then for the Borel-Serre topology, and for the Satake topology if S contains all archimedean places of F, the subset X̄^♭_{F,S}(W) is open in X̄^♭_{F,S}.
Part (1) of 3.4.17 was shown in §3.3 for the Borel-Serre topology, and the result for the
Satake topology on X̄ F,S follows from it. The rest of 3.4.17 and 3.4.18 is easily proven.
3.5 Properties of X̄ F,S
Let S be a nonempty finite set of places of F .
3.5.1. Let P and (Vi )−1≤i ≤m be as before. Fix a basis e (i ) of Vi /Vi −1 for each i . Set
Y_0 = (R^S_{>0} ∪ {(0)_{v∈S}})^m ⊂ (R^S_{≥0})^m.
The maps ψ_{P,v} := (φ_{P,v}, φ′_{P,v}): X̄_{F,v}(P) → Z_{F,v}(P) × R^m_{≥0} of 3.4.3 and 3.4.6 for v ∈ S combine to give
the map
ψP,S : X̄ F,S (P) → ZF,S (P) × Y0 .
3.5.2. In addition to the usual topology on Y0 , we consider the weak topology on Y0 that is the
product topology for the topology on RS>0 ∪ {(0)v ∈S } which extends the usual topology on RS>0
by taking the sets
{(t_v)_{v∈S} ∈ R^S_{>0} | ∏_{v∈S} t_v ≤ c} ∪ {(0)_{v∈S}}
for c ∈ R_{>0} as a base of neighborhoods of (0)_{v∈S}. In the case that S consists of a single place, we have Y_0 = R^m_{≥0}, and the natural topology and the weak topology on Y_0 coincide.
Proposition 3.5.3. The map ψP,S of 3.5.1 induces a homeomorphism
P_u(A_{F,S})\X̄_{F,S}(P) ≅ Z_{F,S}(P) × Y_0
for the Borel-Serre topology (resp., Satake topology) on X̄ F,S and the usual (resp., weak) topology
on Y0 . This homeomorphism is equivariant for the action of P(AF,S ), with the action of P(AF,S )
on Y0 being that of 3.4.6.
This has the following corollary, which is also the main step in the proof.
Corollary 3.5.4. For any place v of F , the map
P_u(F_v)\X̄_{F,v}(P) → Z_{F,v}(P) × R^m_{≥0}
is a homeomorphism for both the Borel-Serre and Satake topologies on X̄ F,v .
We state and prove preliminary results towards the proof of 3.5.3.
3.5.5. Fix a basis (e i )i of V and a parabolic subgroup P of PGLV which satisfies the condition
in 3.3.2 for this basis. Let (Vi )−1≤i ≤m be the flag corresponding to P, and for each i , set c (i ) =
dim(Vi ). We define two maps
ξ, ξ⋆: P_u(F_v) × Z_{F,v}(P) × R^m_{≥0} → X̄_{F,v}(P).
3.5.6. First, we define the map ξ.
Set ∆(P) = {c(0), …, c(m−1)}. Let ∆_i = {j ∈ Z | c(i−1) < j < c(i)} for 0 ≤ i ≤ m. We then clearly have
{1, …, d−1} = ∆(P) ∐ ∐_{i=0}^m ∆_i.
For 0 ≤ i ≤ m, let V^{(i)} = ∑_{c(i−1)<j≤c(i)} F e_j, so V_i = V_{i−1} ⊕ V^{(i)}. We have
R^{d−1}_{≥0}(P) = R^{∆(P)}_{≥0} × ∏_{i=0}^m R^{∆_i}_{>0} ≅ R^m_{≥0} × ∏_{i=0}^m R^{∆_i}_{>0}.
Let B be the Borel subgroup of PGL_V consisting of all upper triangular matrices for the basis (e_i)_i. Fix a place v of F. We consider two surjections
B_u(F_v) × R^{d−1}_{≥0}(P) ↠ X̄_{F,v}(P)   and   B_u(F_v) × R^{d−1}_{≥0}(P) ↠ P_u(F_v) × Z_{F,v}(P) × R^m_{≥0}.
The first is induced by the surjection π̄v of 3.3.8.
The second map is obtained as follows. For 0 ≤ i ≤ m, let B_i be the image of B in PGL_{V^{(i)}} under P → PGL_{V_i/V_{i−1}} ≅ PGL_{V^{(i)}}. Then B_i is a Borel subgroup of PGL_{V^{(i)}}, and we have a canonical bijection
P_u(F_v) × ∏_{i=0}^m B_{i,u}(F_v) ≅ B_u(F_v).
By 3.3.8, we have surjections B_{i,u}(F_v) × R^{∆_i}_{>0} ↠ X_{(V_i/V_{i−1})_v} for 0 ≤ i ≤ m. The second (continuous) surjection is then the composite
B_u(F_v) × R^{d−1}_{≥0}(P) ≅ P_u(F_v) × ∏_{i=0}^m B_{i,u}(F_v) × R^m_{≥0} × ∏_{i=0}^m R^{∆_i}_{>0} ↠ P_u(F_v) × ∏_{i=0}^m X_{(V_i/V_{i−1})_v} × R^m_{≥0} = P_u(F_v) × Z_{F,v}(P) × R^m_{≥0}.
Proposition 3.5.7. There is a unique surjective continuous map
ξ: P_u(F_v) × Z_{F,v}(P) × R^m_{≥0} ↠ X̄_{F,v}(P)
for the Borel-Serre topology on X̄_{F,v}(P) that is compatible with the surjections from B_u(F_v) × R^{d−1}_{≥0}(P) to these sets. This map induces a homeomorphism
Z_{F,v}(P) × R^m_{≥0} ≅ P_u(F_v)\X̄_{F,v}(P)
that restricts to a homeomorphism of Z_{F,v}(P) × R^m_{>0} with P_u(F_v)\X_v.
This follows from 3.3.8.
3.5.8. Next, we define the map ξ⋆ .
For g ∈ P_u(F_v), (µ_i)_i ∈ (X_{(V_i/V_{i−1})_v})_{0≤i≤m}, and (t_i)_{1≤i≤m} ∈ R^m_{≥0}, we let
ξ⋆ (g , (µi )i , (t i )i ) = g (P ′ , ν),
where P ′ and ν are as in (1) and (2) below, respectively.
(1) Let J = {c(i−1) | 1 ≤ i ≤ m, t_i = 0}. Write J = {c′(0), …, c′(m′−1)} with c′(0) < ⋯ < c′(m′−1). Let c′(−1) = 0 and c′(m′) = d. For −1 ≤ i ≤ m′, let
V′_i = ∑_{j=1}^{c′(i)} F e_j ⊂ V.
Let P′ ⊃ P be the parabolic subgroup of PGL_V corresponding to the flag (V′_i)_i.
(2) For 0 ≤ i ≤ m′, set
J_i = {j | c′(i−1) < c(j) ≤ c′(i)} ⊂ {1, …, m}.
We identify V′_i/V′_{i−1} with ⨁_{j∈J_i} V^{(j)} via the basis (e_k)_{c′(i−1)<k≤c′(i)}. We define a norm ν̃_i on V′_i/V′_{i−1} as follows. Let µ̃_j be the unique norm on V^{(j)} which belongs to µ_j and satisfies |µ̃_j : (e_k)_{c(j−1)<k≤c(j)}| = 1. For x = ∑_{j∈J_i} x_j with x_j ∈ V^{(j)}, set
ν̃_i(x) = (∑_{j∈J_i} (r_j µ̃_j(x_j))²)^{1/2}   if v is real,
ν̃_i(x) = ∑_{j∈J_i} r_j µ̃_j(x_j)   if v is complex,
ν̃_i(x) = max_{j∈J_i} (r_j µ̃_j(x_j))   if v is non-archimedean,
where for j ∈ J_i, we set
r_j = ∏_{ℓ∈J_i, ℓ<j} t_ℓ^{−1}.
Let ν_i ∈ X_{(V′_i/V′_{i−1})_v} be the class of the norm ν̃_i.
We omit the proofs of the following two lemmas.
Lemma 3.5.9. The composition
P_u(F_v) × Z_{F,v}(P) × R^m_{≥0} −ξ⋆→ X̄_{F,v}(P) −ψ_{P,v}→ Z_{F,v}(P) × R^m_{≥0}
coincides with the canonical projection. Here, the definition of the second arrow uses the basis
(e j mod Vi −1 )c (i −1)<j ≤c (i ) of Vi /Vi −1 .
Lemma 3.5.10. We have a commutative diagram
  P_u(F_v) × Z_{F,v}(P) × R^m_{≥0}  −ξ→   X̄_{F,v}(P)
              ↓                                 ∥
  P_u(F_v) × Z_{F,v}(P) × R^m_{≥0}  −ξ⋆→  X̄_{F,v}(P)

in which the left vertical arrow is (u, µ, t) ↦ (u, µ, t′), for t′ defined as follows. Let I_i: X_{(V_i/V_{i−1})_v} → R^{∆_i}_{>0} be the unique continuous map for which the composition
B_{i,u}(F_v) × R^{∆_i}_{>0} → X_{(V_i/V_{i−1})_v} −I_i→ R^{∆_i}_{>0}
is projection onto the second factor, and for j ∈ ∆_i, let I_{i,j}: X_{(V_i/V_{i−1})_v} → R_{>0} denote the composition of I_i with projection onto the factor of R^{∆_i}_{>0} corresponding to j. Then
t′_i = t_i · ∏_{j∈∆_{i−1}} I_{i−1,j}(µ_{i−1})^{(j−c(i−2))/(c(i−1)−c(i−2))} · ∏_{j∈∆_i} I_{i,j}(µ_i)^{(c(i)−j)/(c(i)−c(i−1))}
for 1 ≤ i ≤ m.
3.5.11. Proposition 3.5.3 is quickly reduced to Corollary 3.5.4, which now follows from 3.5.7,
3.5.9 and 3.5.10.
3.5.12. For two topologies T1 , T2 on a set Z , we use T1 ≥ T2 to denote that the identity map of Z
is a continuous map from Z with T_1 to Z with T_2, and T_1 > T_2 to denote that T_1 ≥ T_2 and T_1 ≠ T_2.
In other words, T1 ≥ T2 if T1 is finer than T2 and T1 > T2 if T1 is strictly finer than T2 .
By 3.5.3, the map ψP,S : X̄ F,S (P) → ZF,S (P) × Y0 is continuous for the Borel-Serre topology on
X̄ F,S and usual topology on Y0 . On X̄ F,S , we therefore have
Borel-Serre topology ≥ Satake topology.
Corollary 3.5.13. For any nonempty finite set S of places of F, the map φ^♭_{W,S}: X̄^♭_{F,S}(W) → Z^♭_{F,S}(W) of 3.4.4 is continuous for the Borel-Serre topology on X̄^♭_{F,S}. If S contains all archimedean places
of F , it is continuous for the Satake topology.
Proof. The continuity for the Borel-Serre topology follows from the continuity of ψP,S , noting
that the Borel-Serre topology on X̄^♭_{F,S} is the quotient topology of the Borel-Serre topology on X̄_{F,S}. Suppose that S contains all archimedean places. As φ^♭_{W,S} is Γ_{K,(W)}-equivariant, and Γ_{K,(W)}
acts trivially on Z♭F,S (W ), the continuity for the Satake topology is reduced to the continuity for
the Borel-Serre topology.
Remark 3.5.14. We remark that the map φ_{P,v}: X̄_{F,v}(P) → Z_{F,v}(P) of 3.4.3 need not be continuous for the topology on X̄_{F,v} as a subspace of X̄_v. Similarly, the map φ^♭_{W,v}: X̄^♭_{F,v}(W) → X_{W_v} of 3.4.4 need not be continuous for the subspace topology on X̄^♭_{F,v} ⊂ X̄^♭_v. See 3.6.6 and 3.6.7.
3.5.15. We prove Proposition 3.4.14.
Proof. Let α = (P, µ) ∈ X̄ F,S . Let U be a neighborhood of α for the Borel-Serre topology which is
stable under the action of ΓK ,(P). By 3.5.12, it is sufficient to prove that there is a neighborhood
W of α for the Satake topology such that W ⊂ U .
Let (Vi )−1≤i ≤m be the flag corresponding to P, and let V (i ) be as before. Let Γ1 = ΓK ∩ Pu (F ),
and let Γ0 be the subgroup of ΓK consisting of the elements that preserve V (i ) and act on V (i )
as a scalar for all i . Then Γ1 is a normal subgroup of ΓK ,(P) and Γ1 Γ0 is a subgroup of ΓK ,(P) of
finite index.
Let
Y_1 = {(a_v)_{v∈S} ∈ R^S_{>0} | ∏_{v∈S} a_v = 1}^m,
and set s = ♯S. We have a surjective continuous map
R^m_{≥0} × Y_1 ↠ Y_0,   (t, t′) ↦ (t_i^{1/s} t′_{v,i})_{v,i}.
The composition R^m_{≥0} × Y_1 → Y_0 → R^m_{≥0}, where the second arrow is (t_{v,i})_{v,i} ↦ (∏_{v∈S} t_{v,i})_i, coincides with projection onto the first coordinate.
Let
Φ = P_u(A_{F,S}) × Y_1   and   Ψ = Z_{F,S}(P) × R^m_{≥0}.
Consider the composite map
f: Φ × Ψ → P_u(A_{F,S}) × Z_{F,S}(P) × Y_0 −(ξ⋆_v)_{v∈S}→ X̄_{F,S}(P).
The map f is Γ_1Γ_0-equivariant for the trivial action on Ψ and the following action on Φ: for (g, t) ∈ Φ, γ_1 ∈ Γ_1 and γ_0 ∈ Γ_0, we have
γ_1γ_0 · (g, t) = (γ_1γ_0 g γ_0^{−1}, γ_0 t),
where γ_0 acts on Y_1 via the embedding Γ_K → P(A_{F,S}) and the actions of the P(F_v) described in 3.4.6. The composition
Φ × Ψ −f→ X̄_{F,S}(P) −ψ_{P,S}→ Ψ
coincides with the canonical projection.
There exists a compact subset C of Φ such that Φ = Γ_1Γ_0 C for the above action of Γ_1Γ_0 on Φ. Let β = (µ, 0) ∈ Ψ be the image of α under ψ_{P,S}. For x ∈ Φ, we have f(x, β) = α. Hence, there is an open neighborhood U′(x) of x in Φ and an open neighborhood U″(x) of β in Ψ such that U′(x) × U″(x) ⊂ f^{−1}(U). Since C is compact, there is a finite subset R of C such that C ⊂ ⋃_{x∈R} U′(x).
Let U″ be the open subset ⋂_{x∈R} U″(x) of Ψ, which contains β. The P-open set W = ψ_{P,S}^{−1}(U″) ⊂ X̄_{F,S}(P) is by definition an open neighborhood of α in the Satake topology on X̄_{F,S}. We show that W ⊂ U. Since the map Φ × Ψ → X̄_{F,S}(P) is surjective, it is sufficient to prove that the inverse image Φ × U″ of W in Φ × Ψ is contained in f^{−1}(U). For this, we note that
Φ × U″ = Γ_1Γ_0 C × U″ = Γ_1Γ_0 (⋃_{x∈R} U′(x) × U″) ⊂ Γ_1Γ_0 f^{−1}(U) = f^{−1}(U),
the last equality by the stability of U under the action of Γ_{K,(P)} ⊃ Γ_1Γ_0 and the Γ_1Γ_0-equivariance of f.
3.5.16. In the case d = 2, the canonical surjection X̄_{F,S} → X̄^♭_{F,S} is bijective. It is a homeomorphism for the Borel-Serre topology. If S contains all archimedean places of F, it is a homeomorphism for the Satake topology by 3.4.14.
3.6 Comparison of the topologies
When considering X̄^♭_{F,v}, we assume that all places of F other than v are non-archimedean.
3.6.1. For X̄_{F,v} (resp., X̄^♭_{F,v}), we have introduced several topologies: the Borel-Serre topology,
the Satake topology, and the subspace topology from X̄ v (resp., X̄ v♭ ), which we call the weak
topology. We compare these topologies below; note that we clearly have
Borel-Serre topology ≥ Satake topology and Borel-Serre topology ≥ weak topology.
3.6.2. For both X̄_{F,v} and X̄^♭_{F,v}, the following hold:
(1) Borel-Serre topology > Satake topology if d ≥ 2,
(2) Satake topology > weak topology if d = 2,
(3) Satake topology ≱ weak topology if d > 2.
We do not give full proofs of these statements. Instead, we describe some special cases
that give clear pictures of the differences between these topologies. The general cases can be
proven in a similar manner to these special cases.
Recall from 3.5.16 that in the case d = 2, the sets X̄_{F,v} and X̄^♭_{F,v} are equal, their Borel-Serre topologies coincide, and their Satake topologies coincide.
3.6.3. We describe the case d = 2 of 3.6.2(1).
Take a basis (e i )i =1,2 of V . Consider the point α = (B, µ) of X̄ F,v , where B is the Borel
subgroup of upper triangular matrices with respect to (e i )i , and µ is the unique element of
ZF,v (B ) = X Fv e 1 × X Vv /Fv e 1 .
Let π̄v be the surjection of 3.3.8(2), and identify B u (Fv ) with Fv in the canonical manner.
The images of the sets
{(x , t ) ∈ Fv × R≥0 | t ≤ c } ⊂ B u (Fv ) × R≥0
in X̄ F,v (B ) for c ∈ R>0 form a base of neighborhoods of α for the Satake topology. Thus, while
the image of the set
{(x , t ) ∈ Fv × R≥0 | t < |x |−1 }
is a neighborhood of α for Borel-Serre topology, it is not a neighborhood of α for the Satake
topology.
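(To check the last assertion: for any c ∈ R_{>0}, the image of (x, c) with |x| > c^{−1} lies in the Satake-basic neighborhood coming from {(x, t) | t ≤ c} but not in the image of {(x, t) | t < |x|^{−1}}; hence that image is not a Satake neighborhood of α.)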
3.6.4. We prove 3.6.2(2) in the case that v is non-archimedean. The proof in the archimedean
case is similar. Since all boundary points of X̄_{F,v} = X̄^♭_{F,v} are PGL_V(F)-conjugate, to show 3.6.2(2),
it is sufficient to consider any one boundary point. We consider α of 3.6.3 for a fixed basis
(e i )i =1,2 of V .
For x ∈ Fv and y ∈ R>0 , let µy ,x be the norm on Vv defined by
µy ,x (a e 1 + b e 2) = max(|a − x b |, y |b |).
The class of µy ,x is the image of (x , y −1 ) ∈ B u (Fv ) × R>0 . Any element of X v is the class of the
norm µy ,x for some x , y . If we vary x ∈ F∞ and y ∈ R>0 , the classes of µy ,x in X̄ F,v converge
under the Satake topology to the point α if and only if y approaches ∞. In X̄ v , the point α is
the class of the semi-norm ν on Vv∗ defined by ν(a e 1∗ + b e 2∗ ) = |a |. By 2.1.7,
µ*_{y,x} = (µ_{y,0} ∘ ( 1 −x ; 0 1 ))* = µ*_{y,0} ∘ ( 1 0 ; x 1 ),
from which we see that
µ∗y ,x (a e 1∗ + b e 2∗) = max(|a |, y −1 |x a + b |).
Then µ∗y ,x is equivalent to the norm νy ,x on Vv∗ defined by
νy ,x (a e 1∗ + b e 2∗ ) = min(1, y |x |−1 ) max(|a |, y −1 |x a + b |),
and the classes of the νy ,x converge in X̄ v to the class of the semi-norm ν as y → ∞. Therefore,
the Satake topology is finer than the weak topology.
Now, the norm µ*_{1,x} is equivalent to the norm ν_{1,x} on V_v^* defined above, which for sufficiently large x satisfies
ν_{1,x}(a e_1^* + b e_2^*) = max(|a/x|, |a + (b/x)|).
Thus, as |x | → ∞, the sequence µ1,x converges in X̄ v = X̄ v♭ to the class of the semi-norm ν.
However, the sequence of classes of the norms µ_{1,x} does not converge to α in X̄_{F,v} = X̄^♭_{F,v} for
the Satake topology, so the Satake topology is strictly finer than the weak topology.
3.6.5. We explain the case d = 3 of 3.6.2(3) in the non-archimedean case.
Take a basis (e i )1≤i ≤3 of V . For y ∈ R>0 , let µy be the norm on Vv defined by
µy (a e 1 + b e 2 + c e 3 ) = max(|a |, y |b |, y 2 |c |).
For x ∈ F_v, consider the norm µ_y ∘ g_x, where
g_x = ( 1 0 0 ; 0 1 x ; 0 0 1 ).
If we vary x ∈ F∞ and let y ∈ R_{>0} approach ∞, then the class of µ_y ∘ g_x in X_v converges under the Satake topology to the class α ∈ X̄_{F,v} of the pair that is the Borel subgroup of upper triangular matrices and the unique element of ∏_{i=0}^2 X_{(V_i/V_{i−1})_v}, where (V_i)_{−1≤i≤2} is the corresponding flag. The quotient topology on X̄^♭_{F,v} of the Satake topology on X̄_{F,v} is finer than the Satake topology on X̄^♭_{F,v} by 3.4.14 and 3.4.15. Thus, if the Satake topology is finer than the weak topology on X̄_{F,v} or X̄^♭_{F,v}, then the composite µ_y ∘ g_x should converge in X̄^♭_v to the class of the semi-norm ν
on Vv∗ that satisfies ν(a e 1∗ + b e 2∗ + c e 3∗ ) = |a |. However, if y → ∞ and y −2 |x | → ∞, then the class
of µy ◦ g x in X v converges in X̄ v♭ to the class of the semi-norm a e 1∗ + b e 2∗ + c e 3∗ 7→ |b |. In fact, by
2.1.7 we have
(µ_y ∘ g_x)*(a e_1^* + b e_2^* + c e_3^*) = µ*_y ∘ (g_x^*)^{−1}(a e_1^* + b e_2^* + c e_3^*) = max(|a|, y^{−1}|b|, y^{−2}|−bx + c|) = y^{−2}|x| ν_{y,x}(a e_1^* + b e_2^* + c e_3^*),
where νy ,x is the norm
a e 1∗ + b e 2∗ + c e 3∗ 7→ max(y 2 |x |−1 |a |, y |x |−1 |b |, |−b + x −1 c |)
on Vv∗ . The norms νy ,x converge to the semi-norm a e 1∗ + b e 2∗ + c e 3∗ 7→ |b |.
3.6.6. Let W be a nonzero subspace of V. We demonstrate that the map φ^♭_{W,v}: X̄^♭_{F,v}(W) → X_{W_v}
of 3.4.4 given by restriction to Wv need not be continuous for the weak topology, even though
by 3.5.13, it is continuous for the Borel-Serre topology and (if all places other than v are nonarchimedean) for the Satake topology.
For example, suppose that v is non-archimedean and d = 3. Fix a basis (e i )1≤i ≤3 of V , and
let W = F e 1 + F e 2 . Let µ be the class of the norm
a e 1 + b e 2 7→ max(|a |, |b |)
on W_v, and consider the element (W, µ) ∈ X̄^♭_{F,v}. For x ∈ F_v and ε ∈ R_{>0}, let µ_{x,ε} ∈ X_v be the class
of the norm
a e 1 + b e 2 + c e 3 7→ max(|a |, |b |, ε−1 |c + b x |)
on Vv . Then µx∗ ,ε is the class of the norm
a e 1∗ + b e 2∗ + c e 3∗ 7→ max(|a |, |b − x c |, ε|c |)
on Vv∗ . When x → 0 and ε → 0, the last norm converges to the semi-norm
a e 1∗ + b e 2∗ + c e 3∗ 7→ max(|a |, |b |)
on Vv∗ , and this implies that µx ,ε converges to (W, µ) for the weak topology. However, the restriction of µx ,ε to Wv is the class of the norm
a e 1 + b e 2 7→ max(|a |, |b |, ε−1 |x ||b |).
If x → 0 and ε = r −1 |x | → 0 for a fixed r > 1, then the latter norms converge to the norm
a e 1 + b e 2 7→ max(|a |, r |b |), the class of which does not coincide with µ.
3.6.7. Let P be a parabolic subgroup of PGLV (F ). We demonstrate that the map φP,v : X̄ F,v (P) →
ZF,v (P) of 3.4.3 is not necessarily continuous for the weak topology, though by 3.5.4, it is con-
tinuous for the Borel-Serre topology and for the Satake topology.
Let d = 3 and W be as in 3.6.6, and let P be the parabolic subgroup of PGLV corresponding
to the flag
0 = V−1 ⊂ V0 = W ⊂ V1 = V.
In this case, the canonical map X̄_{F,v}(P) → X̄^♭_{F,v}(W) is a homeomorphism for the weak topology on both spaces. It is also a homeomorphism for the Borel-Serre topology, and for the Satake topology if all places other than v are non-archimedean. Since Z_{F,v}(P) = X_{(V_0)_v} × X_{(V/V_0)_v} ≅ X_{W_v},
the argument of 3.6.6 shows that φP,v is not continuous for the weak topology.
3.6.8. For d ≥ 3, the Satake topology on X̄^♭_{F,v} does not coincide with the quotient topology for
the Satake topology on X̄ F,v , which is strictly finer. This is explained in 4.4.12.
3.7 Relations with Borel-Serre spaces and reductive Borel-Serre spaces
3.7.1. In this subsection, we describe the relationship between our work and the theory of
Borel-Serre and reductive Borel-Serre spaces (see Proposition 3.7.4). We also show that X̄^♯_{F,v} is
not Hausdorff if v is a non-archimedean place.
3.7.2. Let G be a semisimple algebraic group over Q. We recall the definitions of the Borel-Serre and reductive Borel-Serre spaces associated to G from [3] and [26, p. 190], respectively.
Let Y be the space of all maximal compact subgroups of G (R). Recall from [3, Proposition
1.6] that for K ∈ Y , the Cartan involution θK of G R := R⊗Q G corresponding to K is the unique
homomorphism G R → G R such that
K = {g ∈ G (R) | θK (g ) = g }.
Let P be a parabolic subgroup of G , let S P be the largest Q-split torus in the center of P/Pu ,
and let A P be the connected component of the topological group S P (R) containing the origin.
We have
A_P ≅ R^r_{>0} ⊂ S_P(R) ≅ (R^×)^r
for some integer r . We define an action of A P on Y as follows (see [3, Section 3]). For K ∈ Y ,
we have a unique subtorus S P,K of PR = R ⊗Q P over R such that the projection P → P/Pu
induces an isomorphism
S_{P,K} ≅ (S_P)_R := R ⊗_Q S_P
and such that the Cartan involution θK : G R → G R of K satisfies θK (t ) = t −1 for all t ∈ S P,K (R).
For t ∈ A P , let t K ∈ S P,K (R) be the inverse image of t . Then A P acts on Y by
AP × Y → Y ,
(t , K ) 7→ t K K t K−1 .
The Borel-Serre space is the set of pairs (P,Z ) such that P is a parabolic subgroup of G
and Z is an A P -orbit in Y . The reductive Borel-Serre space is the quotient of the Borel-Serre
space by the equivalence relation under which two elements (P,Z ) and (P ′ ,Z ′ ) are equivalent
if (P ′ ,Z ′ ) = g (P,Z ) (that is, P = P ′ and Z ′ = g Z ) for some g ∈ Pu (R).
3.7.3. Now assume that F = Q and G = PGLV . Let v be the archimedean place of Q.
We have a bijection between X v and the set Y of all maximal compact subgroups of G (R),
whereby an element of X v corresponds to its isotropy group in G (R), which is a maximal compact subgroup.
Suppose that K ∈ Y corresponds to µ ∈ X v , with µ the class of a norm that in turn cor-
responds to a positive definite symmetric bilinear form ( , ) on Vv . The Cartan involution
θK : G R → G R is induced by the unique homomorphism θK : GLVv → GLVv satisfying
(g x , θK (g )y ) = (x , y )
for all g ∈ GLV (R) and x , y ∈ Vv .
For a parabolic subgroup P of G corresponding to a flag (Vi )−1≤i ≤m , we have
S_P = (∏_{i=0}^m G_{m,Q})/G_{m,Q},
where the i th term in the product is the group of scalars in GLVi /Vi −1 , and where the last Gm,Q is
embedded diagonally in the product. The above description of θK shows that S P,K is the lifting
of (S P )R to PR obtained through the orthogonal direct sum decomposition
V_v ≅ ⨁_{i=0}^m (V_i/V_{i−1})_v
with respect to ( , ).
Proposition 3.7.4. If v is the archimedean place of Q, then X̄^♯_{Q,v} (resp., X̄_{Q,v}) is the Borel-Serre
space (resp., reductive Borel-Serre space) associated to PGLV .
Proof. Denote the Borel-Serre space by (X̄^♯_{Q,v})′ in this proof. We define a canonical map
X̄^♯_{Q,v} → (X̄^♯_{Q,v})′,   (P, µ, s) ↦ (P, Z),
where Z is the subset of Y corresponding to the following subset Z′ of X_v. Let (V_i)_{−1≤i≤m} be the flag corresponding to P. Recall that s is an isomorphism
s: ⨁_{i=0}^m (V_i/V_{i−1})_v ≅ V_v.
Then Z′ is the subset of X_v consisting of classes of the norms
µ̃(s): x ↦ (∑_{i=0}^m µ̃_i(s^{−1}(x)_i)²)^{1/2}
on Vv , where s −1 (x )i ∈ (Vi /Vi −1 )v denotes the i th component of s −1 (x ) for x ∈ Vv , and µ̃ =
(µ̃i )0≤i ≤m ranges over all families of norms µ̃i on (Vi /Vi −1 )v with class equal to µi . It follows
from the description of S P,K in 3.7.3 that Z is an A P -orbit.
For a parabolic subgroup P of G, let
(X̄^♯_{Q,v})′(P) = {(Q, Z) ∈ (X̄^♯_{Q,v})′ | Q ⊃ P}.
By [3, 7.1], the subset (X̄^♯_{Q,v})′(P) is open in (X̄^♯_{Q,v})′.
Take a basis of V, and let B denote the Borel subgroup of PGL_V of upper-triangular matrices for this basis. By 3.3.8(3), we have a homeomorphism
B_u(R) × R^{d−1}_{≥0} ≅ X̄^♯_{Q,v}(B).
It follows from [3, 5.4] that the composition
B_u(R) × R^{d−1}_{≥0} → (X̄^♯_{Q,v})′(B)
induced by the above map
X̄^♯_{Q,v}(B) → (X̄^♯_{Q,v})′(B),   (P, µ, s) ↦ (P, Z)
is also a homeomorphism. This proves that the map X̄^♯_{Q,v} → (X̄^♯_{Q,v})′ restricts to a homeomorphism X̄^♯_{Q,v}(B) ≅ (X̄^♯_{Q,v})′(B). Therefore, X̄^♯_{Q,v} → (X̄^♯_{Q,v})′ is a homeomorphism as well. It then follows directly from the definitions that the reductive Borel-Serre space is identified with X̄_{Q,v}.
3.7.5. Suppose that F is a number field, let S be the set of all archimedean places of F , and let
G be the Weil restriction ResF /Q PGLV , which is a semisimple algebraic group over Q. Then Y
is identified with X F,S , and X̄ F,S is related to the reductive Borel-Serre space associated to G but
does not always coincide with it. We explain this below.
Let (X̄^♯_{F,S})′ and X̄′_{F,S} be the Borel-Serre space and the reductive Borel-Serre space associated to G, respectively. Let X̄^♯_{F,S} be the subspace of ∏_{v∈S} X̄^♯_{F,v} consisting of all elements (x_v)_{v∈S} such that the parabolic subgroup of G associated to x_v is independent of v. Then by similar arguments to the case F = Q, we see that Y is canonically homeomorphic to X_{F,S} and this homeomorphism extends uniquely to surjective continuous maps
(X̄^♯_{F,S})′ → X̄^♯_{F,S},   X̄′_{F,S} → X̄_{F,S}.
However, these maps are not bijective unless F is Q or imaginary quadratic. We illustrate the
differences between the spaces in the case that F is a real quadratic field and d = 2.
Fix a basis (e i )i =1,2 of V . Let P̃ be the Borel subgroup of upper triangular matrices in PGLV
for this basis, and let P be the Borel subgroup Res_{F/Q} P̃ of G. Then P/P_u ≅ Res_{F/Q} G_{m,F} and S_P = G_{m,Q} ⊂ P/P_u. We have the natural identifications Y = X_{F,S} = H × H. For a ∈ R_{>0}, the set
Z_a := {(yi, ayi) ∈ H × H | y ∈ R_{>0}}
is an A_P-orbit. If a ≠ b, the images of (P, Z_a) and (P, Z_b) in (X̄^♯_{F,S})′ do not coincide. On the other hand, both the images of (P, Z_a) and (P, Z_b) in X̄^♯_{F,S} coincide with (x_v)_{v∈S}, where x_v = (P, µ_v, s_v)
with µv the unique element of X Fv e 1 × X Vv /Fv e 1 and s v the splitting given by e 2.
In the case that v is non-archimedean, the space X̄^♯_{F,v} is not good in the following sense.
Proposition 3.7.6. If v is non-archimedean, then X̄^♯_{F,v} is not Hausdorff.
Proof. Fix a, b ∈ B_u(F_v) with a ≠ b, for a Borel subgroup B of PGL_V. When t ∈ R^{d−1}_{>0} is sufficiently near to 0 = (0, …, 0), the images of (a, t) and (b, t) in X_v coincide by 3.3.8(4) applied to B_u(F_v) × R^{d−1}_{>0} → X_v. We denote this element of X_v by c(t). The images f(a) of (a, 0) and f(b) of (b, 0) in X̄^♯_{F,v} are different. However, c(t) converges to both f(a) and f(b) as t tends to 0. Thus, X̄^♯_{F,v} is not Hausdorff.
3.7.7. Let F be a number field, S its set of archimedean places, and G = ResF /Q PGLV , as in
3.7.5. Then X̄ F,S may be identified with the maximal Satake space for G of [23]. Its Satake
topology was considered by Satake (see also [2, III.3]), and its Borel-Serre topology was considered by Zucker [27] (see also [14, 2.5]). The space X̄^♭_{F,S} is also a Satake space corresponding
to the standard projective representation of G on V viewed as a Q-vector space.
4 Quotients by S-arithmetic groups
As in §3, fix a global field F and a finite-dimensional vector space V over F .
4.1 Results on S-arithmetic quotients
4.1.1. Fix a nonempty finite set S 1 of places of F which contains all archimedean places of F ,
fix a finite set S 2 of places of F which is disjoint from S 1 , and let S = S 1 ∪ S 2 .
4.1.2. In the following, we take X̄ to be one of the following two spaces:
(i) X̄ := X̄_{F,S_1},
(ii) X̄ := X̄^♭_{F,S_1}.
We endow X̄ with either the Borel-Serre or the Satake topology.
4.1.3. Let G = PGLV , and let K be a compact open subgroup of G (ASF ), with ASF as in 3.4.13.
We consider the two situations in which (X, X̄) is taken to be one of the following pairs of
spaces (for either choice of X̄ ):
(I) X := X S × G (ASF )/K ⊂ X̄ := X̄ × X S 2 × G (ASF )/K ,
(II) X := X S ⊂ X̄ := X̄ × X S 2 .
We now come to the main result of this paper.
Theorem 4.1.4. Let the situations and notation be as in 4.1.1–4.1.3.
(1) Assume we are in situation (I). Let Γ be a subgroup of G (F ). Then the quotient space Γ\X̄
is Hausdorff. It is compact if Γ = G (F ).
(2) Assume we are in situation (II). Let ΓK ⊂ G (F ) be the inverse image of K under the canon-
ical map G (F ) → G (ASF ), and let Γ be a subgroup of ΓK . Then the quotient space Γ\X̄ is
Hausdorff. It is compact if Γ is of finite index in ΓK .
4.1.5. The case Γ = {1} of Theorem 4.1.4 shows that X̄_{F,S} and X̄^♭_{F,S} are Hausdorff.
4.1.6. Let OS be the subring of F consisting of all elements which are integral outside S. Take
an O_S-lattice L in V. Then PGL_L(O_S) coincides with Γ_K for the compact open subgroup K = ∏_{v∉S} PGL_L(O_v) of G(A^S_F). Hence Theorem 1.6 of the introduction follows from Theorem 4.1.4.
4.1.7. In the case that F is a number field and S (resp., S 1 ) is the set of all archimedean places
of F , Theorem 4.1.4 in situation (II) is a special case of results of Satake [23] (resp., of Ji, Murty,
Saper, and Scherk [14, Proposition 4.2]).
4.1.8. If in Theorem 4.1.4 we take Γ = G (F ) in part (1), or Γ of finite index in ΓK in part (2), then
the Borel-Serre and Satake topologies on X̄ induce the same topology on the quotient space
Γ\X̄. This can be proved directly, but it also follows from the compact Hausdorff property.
4.1.9. We show that some modifications of Theorem 4.1.4 are not good.
Consider the case F = Q, S = {p, ∞} for a prime number p, and V = Q², and consider the S-arithmetic group PGL_2(Z[1/p]). Note that PGL_2(Z[1/p])\(X̄_{Q,∞} × X_p) is compact Hausdorff, as is well known (and follows from Theorem 4.1.4). We show that some similar spaces are not Hausdorff. That is, we prove the following statements:
(1) PGL_2(Z[1/p])\(X̄_{Q,p} × X_∞) is not Hausdorff.
(2) PGL_2(Z[1/p])\(X̄_{Q,∞} × PGL_2(Q_p)) is not Hausdorff.
(3) PGL_2(Q)\(X̄_{Q,∞} × PGL_2(A^∞_Q)) is not Hausdorff.
Statement (1) shows that it is important to assume in 4.1.4 that S_1, not only S, contains all archimedean places. Statement (3) shows that it is important to take the quotient G(A^S_F)/K in situation (I) of 4.1.3.
Our proofs of these statements rely on the facts that the quotient spaces Z[1/p]\R, Z[1/p]\Q_p, and Q\A^∞_Q are not Hausdorff.
Proof of statements (1)–(3). For an element x of a ring R, let
g_x = ( 1 x ; 0 1 ) ∈ PGL_2(R).
In (1), for b ∈ R, let h_b be the point i + b of the upper half plane H = X_∞. In (2), for b ∈ Q_p, let h_b = g_b ∈ PGL_2(Q_p). In (3), for b ∈ A^∞_Q, let h_b = g_b ∈ PGL_2(A^∞_Q). In (1) (resp., (2) and (3)), let ∞ ∈ X̄_{Q,p} (resp., X̄_{Q,∞}) be the boundary point corresponding to the Borel subgroup of upper triangular matrices.
In (1) (resp., (2), resp., (3)), take an element b of R (resp., Q_p, resp., A^∞_Q) which does not belong to Z[1/p] (resp., Z[1/p], resp., Q). Then the images of (∞, h_0) and (∞, h_b) in the quotient space are different, but they are not separated. Indeed, in (1) and (2) (resp., (3)), some sequence of elements x of Z[1/p] (resp., Q) will converge to b, in which case g_x(∞, h_0) converges to (∞, h_b) since g_x∞ = ∞.
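(Here we use that Z[1/p] is dense in R and in Q_p, and that Q is dense in A^∞_Q by strong approximation; this is what makes the quotients Z[1/p]\R, Z[1/p]\Q_p, and Q\A^∞_Q non-Hausdorff.)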
4.2 Review of reduction theory
We review important results in the reduction theory of algebraic groups: 4.2.2, 4.2.4, and a
variant 4.2.6 of 4.2.2. More details may be found in the work of Borel [1] and Godement [8] in
the number field case and Harder [12, 13] in the function field case.
Fix a basis (e i )1≤i ≤d of V . Let B be the Borel subgroup of G = PGLV consisting of all upper
triangular matrices for this basis. Let S be a nonempty finite set of places of F containing all
archimedean places.
4.2.1. For b = (b_v) ∈ A^×_F, set |b| = ∏_v |b_v|. Let A_v be as in 3.3.4. We let a ∈ ∏_v A_v denote the image of a diagonal matrix diag(a_1, …, a_d) in GL_d(A_F). The ratios a_i a_{i+1}^{−1} are independent of the choice. For c ∈ R_{>0}, we let B(c) = B_u(A_F)A(c), where
A(c) = {a ∈ (∏_v A_v) ∩ PGL_d(A_F) | |a_i a_{i+1}^{−1}| ≥ c for all 1 ≤ i ≤ d − 1}.
Let K^0 = ∏_v K^0_v < G(A_F), where K^0_v is identified via (e_i)_i with the standard maximal compact subgroup of PGL_d(F_v) of 3.3.4. Note that B_u(F_v)A_v K^0_v = B(F_v)K^0_v = G(F_v) for all v.
We recall the following known result in reduction theory: see [8, Theorem 7] and [12, Satz
2.1.1].
Lemma 4.2.2. For sufficiently small c ∈ R>0 , one has G (AF ) = G (F )B (c )K 0 .
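The following illustration is added here and is not used in the sequel. For F = Q and d = 2, a real diagonal matrix diag(a_1 , a_2 ) with a_1 , a_2 > 0 sends i ∈ H = X ∞ to (a_1 a_2^{−1})i, so the condition |a_1 a_2^{−1}| ≥ c singles out points of large imaginary part at the archimedean place; Lemma 4.2.2 may thus be viewed as an adelic counterpart of the classical covering H = SL2 (Z)·{z ∈ H | Im(z ) ≥ √3/2} recalled in 4.4.10 below, which holds for any bound ≤ √3/2.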
4.2.3. Let the notation be as in 4.2.1. For a subset I of {1, . . . , d − 1}, let PI be the parabolic subgroup of G corresponding to the flag consisting of 0, the F -subspaces ∑_{1≤j≤i} F e_j for i ∈ I , and V . Hence PI ⊃ B for all I , with P∅ = G and P{1,...,d−1} = B .
For c ′ ∈ R>0 , let B I (c , c ′ ) = B u (AF )A I (c , c ′ ), where
A I (c , c ′ ) = {a ∈ A(c ) | |a_i a_{i+1}^{−1}| ≥ c ′ for all i ∈ I }.
Note that B I (c , c ′ ) = B (c ) if c ≥ c ′ .
The following is also known [12, Satz 2.1.2] (see also [8, Lemma 3]):
Lemma 4.2.4. Fix c ∈ R>0 and a subset I of {1, . . . , d − 1}. Then there exists c ′ ∈ R>0 such that
{γ ∈ G (F ) | B I (c , c ′ )K 0 ∩ γ^{−1} B (c )K 0 ≠ ∅} ⊂ PI (F ).
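For orientation, here is an added classical analogue for F = Q and d = 2 (it is not used later). If z ∈ H and γ ∈ SL2 (Z) has rows (p, q) and (r, s), then Im(γz ) = Im(z )/|rz + s|^2 and, when r ≠ 0, |rz + s|^2 ≥ r^2 Im(z )^2 ≥ Im(z )^2, so Im(γz ) ≤ 1/Im(z ); hence Im(z ) > 1 and Im(γz ) > 1 force r = 0, i.e. γ is upper triangular. Lemma 4.2.4 plays the analogous role adelically: an element moving one Siegel-type region into another must lie in the corresponding parabolic subgroup.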
4.2.5. We will use the following variant of 4.2.3.
Let A S = ∏_{v∈S} A v . For c ∈ R>0 , let
A(c )S = A S ∩ A(c ) and B (c )S = B u (AF,S )A(c )S .
For c 1 , c 2 ∈ R>0 , set
A(c 1 , c 2 )S = {a ∈ A S | for all v ∈ S and 1 ≤ i ≤ d − 1, |a_{v,i} a_{v,i+1}^{−1}| ≥ c 1 and |a_{v,i} a_{v,i+1}^{−1}| ≥ c 2 |a_{w,i} a_{w,i+1}^{−1}| for all w ∈ S}.
Note that A(c 1 , c 2 )S is empty if c 2 > 1. For a compact subset C of B u (AF,S ), we then set
B (C ; c 1, c 2 )S = C · A(c 1 , c 2 )S .
Let DS = ∏_{v∈S} D v , where D v = K v0 < G (Fv ) if v is archimedean, and D v < G (Fv ) is identified with S d Iw(Ov ) < PGLd (Fv ) using the basis (e i )i otherwise. Here, S d is the symmetric group of degree d and Iw(Ov ) is the Iwahori subgroup of PGLd (Ov ), as in 3.3.6.
Lemma 4.2.6. Let K be a compact open subgroup of G (ASF ), let ΓK be the inverse image of K
under G (F ) → G (ASF ), and let Γ be a subgroup of ΓK of finite index. Then there exist c 1 , c 2 ,C as
above and a finite subset R of G (F ) such that
G (AF,S ) = ΓR · B (C ; c 1 , c 2 )S DS .
Proof. This can be deduced from 4.2.2 by standard arguments in the following manner. By the
Iwasawa decomposition 3.3.4, we have G (ASF ) = B (ASF )K 0,S where K 0,S is the non-S-component
of K 0 . Choose a set E of representatives in B (ASF ) of the finite set
B (F )\B (ASF )/(B (ASF ) ∩ K 0,S ).
Let D 0 = DS × K 0,S , and note that since A S ∩DS = 1, we can (by the Bruhat decomposition 3.3.6)
replace K 0 by D 0 in Lemma 4.2.2. Using the facts that E is finite, |a | = 1 for all a ∈ F × , and D 0
is compact, we then have that there exists c ∈ R>0 such that
G (AF ) = G (F )(B (c )S × E )D 0.
For any finite subset R of G (F ) consisting of one element from each of those sets G (F )∩K 0,S e −1
with e ∈ E that are nonempty, we obtain from this that
G (AF,S ) = ΓK R · B (c )S DS .
As ΓK is a finite union of right Γ-cosets, we may enlarge R and replace ΓK by Γ. Finally, we can
replace B (c )S by C · A(c )S for some C by the compactness of the image of
B u (AF,S ) → Γ\G (AF,S )/DS
and then by B (C ; c 1 , c 2 )S for some c 1 , c 2 ∈ R>0 by the compactness of the cokernel of
Γ ∩ B (AF,S ) → (B /B u )(AF,S )1 ,
where (B /B u )(AF,S )1 denotes the kernel of the homomorphism
(B /B u )(AF,S ) → R^{d−1}_{>0} ,   aB u (AF,S ) ↦ ( ∏_{v∈S} a_{v,i} / a_{v,i+1} )_{1≤i≤d−1} .
4.3 X̄ F,S and reduction theory
4.3.1. Let S be a nonempty finite set of places of F containing all archimedean places. We consider X̄ F,S . From the results 4.2.6 and 4.2.4 of reduction theory, we will deduce results 4.3.4 and
4.3.10 on X̄ F,S , respectively. We will also discuss other properties of X̄ F,S related to reduction
theory. Let G , (e i )i , and B be as in §4.2.
For c 1 , c 2 ∈ R>0 with c 2 ≥ 1, we define a subset T(c 1 , c 2 ) of (RS≥0 )d −1 by
T(c 1 , c 2 ) = {t ∈ (RS≥0 )d −1 | t v,i ≤ c 1 , t v,i ≤ c 2 t w,i for all v, w ∈ S and 1 ≤ i ≤ d − 1}.
Let Y0 = (RS>0 ∪ {(0)v ∈S })d −1 as in 3.5.1 (for the parabolic B ), and note that T(c 1 , c 2 ) ⊂ Y0 . Define
the subset S(c 1 , c 2 ) of X̄ F,S (B ) as the image of B u (AF,S ) × T(c 1 , c 2 ) under the map
πS = (πv )v ∈S : B u (AF,S ) × Y0 → X̄ F,S (B ),
with πv as in 3.3.3. For a compact subset C of B u (AF,S ), we let S(C ; c 1 , c 2 ) ⊂ S(c 1 , c 2 ) denote
the image of C × T(c 1 , c 2 ) under πS .
4.3.2. We give an example of the sets of 4.3.1.
Example. Consider the case that F = Q, the set S contains only the real place, and d = 2, as in
§3.2. Fix a basis (e i )1≤i ≤2 of V . Identify B u (R) with R in the natural manner. We have
S(C ; c 1 , c 2 ) = {x + y i ∈ H | x ∈ C , y ≥ c_1^{−1} } ∪ {∞},
which is contained in
S(c 1 , c 2 ) = {x + y i ∈ H | x ∈ R, y ≥ c_1^{−1} } ∪ {∞}.
4.3.3. Fix a compact open subgroup K of G (ASF ), and let ΓK ⊂ G (F ) be the inverse image of K
under G (F ) → G (ASF ).
Proposition 4.3.4. Let Γ be a subgroup of ΓK of finite index. Then there exist c 1 , c 2 ,C as in 4.3.1
and a finite subset R of G (F ) such that
X̄ F,S = ΓR · S(C ; c 1 , c 2 ).
Proof. It suffices to prove the weaker statement that there are c 1 , c 2 ,C and R such that
X S = ΓR · (X S ∩ S(C ; c 1 , c 2 )).
Indeed, we claim that the proposition follows from this weaker statement for the spaces in the product ∏_{v∈S} X_{(Vi /Vi−1 )v} , where PI is as in 4.2.3 for a subset I of {1, . . . , d − 1} and (Vi )−1≤i ≤m is
the corresponding flag. To see this, first note that there is a finite subset R ′ of G (F ) such that
every parabolic subgroup of G has the form γPI γ−1 for some I and γ ∈ ΓR ′ . It then suffices to
consider a = (P, µ) ∈ X̄ F,S , where P = PI for some I , and µ ∈ ZF,S (P). We use the notation of 3.5.6
and 3.5.1. By Proposition 3.5.7, the set X̄ F,S (P) ∩ S(C ; c 1 , c 2 ) is the image under ξ of the image
of C × T(c 1 , c 2 ) in Pu (AF,S ) × ZF,S (P) × Y0 . Note that a has image (1, µ, 0) in the latter set (for 1
the identity matrix of Pu (AF,S )), and ξ(1, µ, 0) = a . Since the projection of T(c 1 , c 2 ) (resp., C ) to
(RS>0 )∆i (resp., B i ,u (AF,S )) is the analogous set for c 1 and c 2 (resp., a compact subset), the claim
follows.
For v ∈ S, we define subsets Q v and Q v′ of X v as follows. If v is archimedean, let Q v = Q v′ be
the one point set consisting of the element of X v given by the basis (e i )i and (ri )i with ri = 1
for all i . If v is non-archimedean, let Q v (resp., Q v′ ) be the subset of X v consisting of elements
given by (e i )i and (ri )i such that 1 = r1 ≤ · · · ≤ rd ≤ qv (resp., r1 = 1 and 1 ≤ ri ≤ qv for 1 ≤ i ≤ d ).
Then X v = G (Fv )Q v for each v ∈ S. Hence by 4.2.6, there exist c 1′ , c 2′ ,C as in 4.3.1 and a finite
subset R of G (F ) such that
X S = ΓR · B (C ; c 1′ , c 2′ )S · DSQ S ,
where Q S = ∏_{v∈S} Q v .
We have DSQ S = Q S′ for Q S′ = ∏_{v∈S} Q v′ , noting for archimedean (resp., non-archimedean) v that K v0 (resp., Iw(Ov )) stabilizes all elements of Q v . We have B (C ; c 1′ , c 2′ )SQ S′ ⊂ S(C ; c 1 , c 2 ), where
c 1 = max{qv | v ∈ S f }(c 1′ )^{−1} and c 2 = max{q_v^2 | v ∈ S f }(c 2′ )^{−1} ,
with S f the set of all non-archimedean places in S (and taking the maxima to be 1 if S f = ∅).
4.3.5. For v ∈ S and 1 ≤ i ≤ d − 1, let t v,i : S(c 1 , c 2 ) → R≥0 be the map induced by φ′B,v : X̄ F,v (B ) → R^{d−1}_{≥0} (see 3.4.6) and the i th projection R^{d−1}_{≥0} → R≥0 . Note that t v,i is continuous.
4.3.6. Fix a subset I of {1, . . . , d − 1}, and let PI be the parabolic subgroup of G defined in 4.2.3.
For c 1 , c 2 , c 3 ∈ R>0 , let
SI (c 1 , c 2 , c 3 ) = {x ∈ S(c 1 , c 2 ) | min{t v,i (x ) | v ∈ S} ≤ c 3 for each i ∈ I }.
4.3.7. For an element a ∈ X̄ F,S , we define the parabolic type of a to be the subset
{dim(Vi ) | 0 ≤ i ≤ m − 1}
of {1, . . . , d − 1}, where (Vi )−1≤i ≤m is the flag corresponding to the parabolic subgroup of G
associated to a .
Lemma 4.3.8. Let a ∈ X̄ F,S (B ), and let J be the parabolic type of a . Then the parabolic subgroup
of G associated to a is PJ .
This is easily proved.
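As an added illustration of 4.3.7 and 4.3.8 (not in the original text), take d = 3 and a ∈ X̄ F,S (B ). If the parabolic subgroup associated to a is B itself, the corresponding flag is 0 ⊂ F e 1 ⊂ F e 1 + F e 2 ⊂ V and the parabolic type of a is {1, 2}; if the associated parabolic is PI with I = {1} (the stabilizer of the line F e 1 ), the flag is 0 ⊂ F e 1 ⊂ V and the type is {1}. Type ∅ corresponds to the associated parabolic being all of G , consistent with P∅ = G and P{1,...,d−1} = B in 4.2.3.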
4.3.9. In the following, we will often consider subsets of G (F ) of the form R 1 ΓK R 2 , ΓK R, or
RΓK , where R 1 , R 2 , R are finite subsets of G (F ). These three types of cosets are essentially the
same thing when we vary K . For finite subsets R 1, R 2 of G (F ), we have R 1 ΓK R 2 = R ′ ΓK ′ = ΓK ′′ R ′′
for some compact open subgroups K ′ and K ′′ of G (ASF ) contained in K and finite subsets R ′
and R ′′ of G (F ).
Proposition 4.3.10. Given c 1 ∈ R>0 and finite subsets R 1 , R 2 of G (F ), there exists c 3 ∈ R>0 such
that for all c 2 ∈ R>0 we have
{γ ∈ R 1 ΓK R 2 | γSI (c 1 , c 2 , c 3 ) ∩ S(c 1 , c 2 ) ≠ ∅} ⊂ PI (F ).
Proof. First we prove the weaker version that c 3 exists if the condition on γ ∈ R 1 ΓK R 2 is replaced by γSI (c 1 , c 2 , c 3 ) ∩ S(c 1 , c 2 ) ∩ X S ≠ ∅.
Let Q v′ for v ∈ S and Q S′ be as in the proof of 4.3.4.
Claim 1. If c 1′ ∈ R>0 is sufficiently small (independent of c 2 ), then we have
X S ∩ S(c 1 , c 2 ) ⊂ B (c 1′ )SQ S′ .
Proof of Claim 1. Any x ∈ X S ∩ S(c 1 , c 2 ) satisfies t v,i (x ) ≤ c 1 for 1 ≤ i ≤ d − 1. Moreover, if ∏_{v∈S} t v,i (x ) is sufficiently small relative to (c 1′ )^{−1} for all such i , then x ∈ B (c 1′ )SQ S′ . The claim follows.
Let C v denote the compact set
C v = {g ∈ G (Fv ) | gQ v′ ∩ Q v′ ≠ ∅}.
If v is archimedean, then C v is the standard maximal compact subgroup K v0 of 4.2.1. Set C S = ∏_{v∈S} C v . We use the decomposition G (AF ) = G (AF,S ) × G (ASF ) to write elements of G (AF ) as pairs.
Claim 2. Fix c 1′ ∈ R>0 . The subset B (c 1′ )S C S × R 1 K R 2 of G (AF ) is contained in B (c )K 0 for
sufficiently small c ∈ R>0 .
Proof of Claim 2. This follows from the compactness of the C v for v ∈ S and the Iwasawa decomposition G (AF ) = B (AF )K 0 .
Claim 3. Let c 1′ be as in Claim 1, and let c ≤ c 1′ . Let c ′ ∈ R>0 . If c 3 ∈ R>0 is sufficiently small
(independent of c 2 ), we have
X S ∩ SI (c 1 , c 2 , c 3 ) ⊂ B I (c , c ′ )SQ S′ ,
where B I (c , c ′ )S = B (AF,S ) ∩ B I (c , c ′ ).
Proof of Claim 3. An element x ∈ B (c )SQ S′ lies in B I (c , c ′ )SQ S′ if ∏_{v∈S} t v,i (x ) ≤ (c ′ )^{−1} for all i ∈ I . An element x ∈ X S ∩ S(c 1 , c 2 ) lies in X S ∩ SI (c 1 , c 2 , c 3 ) if min{t v,i (x ) | v ∈ S} ≤ c 3 for all i ∈ I . In this case, x will lie in B I (c , c ′ )SQ S′ if c 3 ≤ (c ′ )^{−1} c_1^{1−s} , with s = ♯S.
Let c 1′ be as in Claim 1, take c of Claim 2 for this c 1′ such that c ≤ c 1′ , and let c ′ ∈ R>0 . Take
c 3 satisfying the condition of Claim 3 for these c 1′ , c , and c ′ .
Claim 4. If X S ∩ SI (c 1 , c 2 , c 3 ) ∩ γ−1 S(c 1 , c 2 ) is nonempty for some γ ∈ R 1ΓR 2 ⊂ G (F ), then
B I (c , c ′ ) ∩ γ−1 B (c )K 0 contains an element of G (AF,S ) × {1}.
Proof of Claim 4. By Claim 3, any x ∈ X S ∩ SI (c 1 , c 2 , c 3 ) ∩ γ−1 S(c 1 , c 2 ) lies in gQ S′ for some g ∈
B I (c , c ′ )S . By Claim 1, we have γx ∈ g ′Q S′ for some g ′ ∈ B (c 1′ )S . Since γx ∈ γgQ S′ ∩ g ′Q S′ , we have
(g ′ )−1 γg ∈ C S . Hence γg ∈ B (c 1′ )S C S , and therefore γ(g , 1) = (γg , γ) ∈ B (c )K 0 by Claim 2.
We prove the weaker version of 4.3.10: let x ∈ X S ∩ SI (c 1 , c 2 , c 3 ) ∩ γ−1 S(c 1 , c 2 ) for some
γ ∈ R 1 ΓR 2 . Then by Claim 4 and Lemma 4.2.4, with c ′ satisfying the condition of 4.2.4 for the
given c , we have γ ∈ PI (F ).
We next reduce the proposition to the weaker version, beginning with the following.
Claim 5. Let c 1 , c 2 ∈ R>0 . If γ ∈ G (F ) and x ∈ S(c 1 , c 2 ) ∩ γ−1S(c 1 , c 2 ), then γ ∈ PJ (F ), where J
is the parabolic type of x .
Proof of Claim 5. By Lemma 4.3.8, the parabolic subgroup associated to x is PJ and that associated to γx is PJ . Hence γPJ γ−1 = PJ . Since a parabolic subgroup coincides with its normalizer, we have γ ∈ PJ (F ).
Fix J ⊂ {1, . . . , d − 1}, ξ ∈ R 1, and η ∈ R 2 .
Claim 6. There exists c 3 ∈ R>0 such that if γ ∈ ΓK and x ∈ SI (c 1 , c 2 , c 3 ) ∩ (ξγη)−1S(c 1 , c 2 ) is
of parabolic type J , then ξγη ∈ PI (F ).
Proof of Claim 6. Let (Vi )−1≤i ≤m be the flag corresponding to PJ . Suppose that we have γ0 ∈ ΓK
and x 0 ∈ S(c 1 , c 2 )∩(ξγ0η)−1 S(c 1 , c 2 ) of parabolic type J . By Claim 5, we have ξγ0 η, ξγη ∈ PJ (F ).
Hence
ξγη = ξ(γγ_0^{−1})ξ^{−1} · ξγ_0 η ∈ ΓK ′ η′ ,
where K ′ is the compact open subgroup ξK ξ^{−1} ∩ PJ (ASF ) of PJ (ASF ), and η′ = ξγ_0 η ∈ PJ (F ). The
claim follows from the weaker version of the proposition in which V is replaced by Vi /Vi −1 (for
0 ≤ i ≤ m ), the group G is replaced by PGLVi /Vi −1 , the compact open subgroup K is replaced
by the image of ξK ξ−1 ∩ PJ (ASF ) in PGLVi /Vi −1 (ASF ), the set R 1 is replaced by {1}, the set R 2 is
replaced by the image of {η′ } in PGLVi /Vi −1 (F ), and PI (F ) is replaced by the image of PI (F )∩PJ (F )
in PGLVi /Vi −1 (F ).
By Claim 6 for all J , ξ, and η, the result is proven.
Lemma 4.3.11. Let 1 ≤ i ≤ d − 1, and let V ′ = ∑_{j=1}^{i} F e_j . Let x ∈ X v for some v ∈ S, and let g ∈ GLV (Fv ) be such that g V ′ = V ′ . Then we have
∏_{j=1}^{d−1} ( t v,j (g x ) / t v,j (x ) )^{e(i,j)} = |det(g_v : Vv′ → Vv′ )|^{(d−i)/i} / |det(g_v : Vv /Vv′ → Vv /Vv′ )| ,
where
e (i , j ) = j (d − i )/i if j ≤ i , and e (i , j ) = d − j if j ≥ i .
Proof. By the Iwasawa decomposition 3.3.4 and 3.4.5, it suffices to check this in the case that g is represented by a diagonal matrix diag(a 1 , . . . , a d ). It follows from the definitions that t v,j (g x )t v,j (x )^{−1} = |a_j a_{j+1}^{−1}|, and the rest of the verification is a simple computation.
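To spell out the "simple computation" in the smallest case (a check added here for convenience): for d = 2 and i = 1 one has e (1, 1) = 1, and for g = diag(a 1 , a 2 ) the left-hand side is |a_1 a_2^{−1}| while the right-hand side is |a_1 |^{(2−1)/1} |a_2 |^{−1} , so the two sides agree. More generally, for diagonal g the exponent of |a_k | on the left-hand side is e (i , k ) − e (i , k − 1) (with the conventions e (i , 0) = e (i , d ) = 0), which equals (d − i )/i for k ≤ i and −1 for k > i , giving |a_1 · · · a_i |^{(d−i)/i} |a_{i+1} · · · a_d |^{−1} as claimed.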
Lemma 4.3.12. Let i and V ′ be as in 4.3.11. Let R 1 and R 2 be finite subsets of G (F ). Then
there exist A, B ∈ R>0 such that for all γ ∈ GLV (F ) with image in R 1 ΓK R 2 ⊂ G (F ) and for which
γV ′ = V ′ , we have
A ≤ ∏_{v∈S} |det(γ : Vv′ → Vv′ )|^{(d−i)/i} / |det(γ : Vv /Vv′ → Vv /Vv′ )| ≤ B .
Proof. We may assume that R 1 and R 2 are one point sets {ξ} and {η}, respectively. Suppose
that an element γ0 with the stated properties of γ exists. Then for any such γ, the image of
γγ_0^{−1} in G (F ) belongs to ξΓK ξ^{−1} ∩ P_{\{i\}} (F ), and hence the image of γγ_0^{−1} in G (ASF ) belongs to the compact subgroup ξK ξ^{−1} ∩ P_{\{i\}} (ASF ) of P_{\{i\}} (ASF ). Hence
|det(γγ_0^{−1} : Vv′ → Vv′ )| = |det(γγ_0^{−1} : Vv /Vv′ → Vv /Vv′ )| = 1
for every place v of F which does not belong to S. By Lemma 4.3.11 and the product formula, we have
∏_{v∈S} ( |det(γγ_0^{−1} : Vv′ → Vv′ )|^{(d−i)/i} · |det(γγ_0^{−1} : Vv /Vv′ → Vv /Vv′ )|^{−1} ) = 1,
so the value of the product in the statement is constant under our assumptions, proving the result.
Proposition 4.3.13. Fix c 1 , c 2 ∈ R>0 and finite subsets R 1 , R 2 of G (F ). Then there exists A > 1
such that if x ∈ S(c 1 , c 2 ) ∩ γ−1S(c 1 , c 2 ) for some γ ∈ R 1ΓK R 2, then
A −1 t v,i (x ) ≤ t v,i (γx ) ≤ At v,i (x )
for all v ∈ S and 1 ≤ i ≤ d − 1.
Proof. By a limit argument, it is enough to consider x ∈ X S ∩ S(c 1 , c 2 ). Fix v ∈ S. For x ,x ′ ∈
X S ∩ S(c 1 , c 2 ) and 1 ≤ i ≤ d − 1, let s i (x ,x ′ ) = t v,i (x ′ )t v,i (x )−1 .
For each 1 ≤ i ≤ d − 1, take c 3 (i ) ∈ R>0 satisfying the condition in 4.3.10 for the set I = {i }
and both pairs of finite subsets R 1 , R 2 and R 2−1 , R 1−1 of G (F ). Let
c 3 = min{c 3 (i ) | 1 ≤ i ≤ d − 1}.
For a subset I of {1, . . . , d − 1}, let Y (I ) be the subset of (X S ∩ S(c 1 , c 2 ))2 consisting of all pairs
(x ,x ′ ) such that x ′ = γx for some γ ∈ R 1 ΓK R 2 and such that
I = {1 ≤ i ≤ d − 1 | min(t v,i (x ), t v,i (x ′ )) ≤ c 3 }.
For the proof of 4.3.13, it is sufficient to prove the following statement (S d ), fixing I .
(S d ) There exists A > 1 such that A −1 ≤ s i (x ,x ′ ) ≤ A for all (x ,x ′ ) ∈ Y (I ) and 1 ≤ i ≤ d − 1.
By Proposition 4.3.10, if γ ∈ R 1ΓK R 2 is such that there exists x ∈ X S with (x , γx ) ∈ Y (I ), then
γ ∈ P{i } (F ) for all i ∈ I . Lemmas 4.3.11 and 4.3.12 then imply the following for all i ∈ I , noting
that c 2−1 t w,i (y ) ≤ t v,i (y ) ≤ c 2 t w,i (y ) for all w ∈ S and y ∈ X S ∩ S(c 1 , c 2 ).
(Ti ) There exists B i > 1 such that for all (x ,x ′ ) ∈ Y (I ), we have
B_i^{−1} ≤ ∏_{j=1}^{d−1} s_j (x , x ′ )^{e(i,j)} ≤ B_i ,
where e (i , j ) is as in 4.3.11.
We prove the following statement (S i ) for 0 ≤ i ≤ d − 1 by induction on i .
(S i ) There exists A i > 1 such that A_i^{−1} ≤ s_j (x , x ′ ) ≤ A i for all (x , x ′ ) ∈ Y (I ) and all j such that 1 ≤ j ≤ i and j is not the largest element of I ∩ {1, . . . , i } (if it is nonempty).
That (S 0 ) holds is clear. Assume that (S i−1 ) holds for some i ≥ 1. If i ∉ I , then since c 3 ≤ t v,i (x ) ≤ c 1 and c 3 ≤ t v,i (x ′ ) ≤ c 1 , we have
c_3 /c_1 ≤ s_i (x , x ′ ) ≤ c_1 /c_3 ,
and hence (S i ) holds with A i := max(A i−1 , c 1 c_3^{−1} ).
Assume that i ∈ I . If I ∩ {1, . . . , i − 1} = ∅, then (S i ) is evidently true with A i := A i −1 . If
I ∩ {1, . . . , i − 1} ≠ ∅, then let i ′ be the largest element of this intersection. We compare (Ti ) and (Ti ′ ). We have e (i , j ) = e (i ′ , j ) if j ≥ i and e (i , j ) < e (i ′ , j ) if j < i , so taking the quotient of the inequalities in (Ti ′ ) and (Ti ), we have
(B_i B_{i′} )^{−1} ≤ ∏_{j=1}^{i−1} s_j (x , x ′ )^{e(i′,j)−e(i,j)} ≤ B_i B_{i′} .
Since (S i−1 ) is assumed to hold, there then exists a ∈ R>0 such that
(B_i B_{i′} )^{−1} A_{i−1}^{−a} ≤ s_{i′} (x , x ′ )^{e(i′,i′)−e(i,i′)} ≤ B_i B_{i′} A_{i−1}^{a} .
As the exponent e (i ′ , i ′ ) − e (i , i ′ ) is nonzero, this implies that (S i ) holds.
By induction, we have (S d −1 ). To deduce (S d ) from it, we may assume that I is nonempty,
and let i be the largest element of I . Then (S d −1 ) and (Ti ) imply (S d ).
Proposition 4.3.14. Let c 1 , c 2 ∈ R>0 and a ∈ X̄ F,S . Let I be the parabolic type (4.3.7) of a . Fix a
finite subset R of G (F ) and 1 ≤ i ≤ d − 1.
(1) If i ∈ I , then for any ε > 0, there exists a neighborhood U of a in X̄ F,S for the Satake
topology such that max{t v,i (x ) | v ∈ S} < ε for all x ∈ (ΓK R)−1U ∩ S(c 1 , c 2 ).
(2) If i ∉ I , then there exist a neighborhood U of a in X̄ F,S for the Satake topology and c ∈ R>0 such that min{t v,i (x ) | v ∈ S} ≥ c for all x ∈ (ΓK R)−1U ∩ S(c 1 , c 2 ).
Proof. The first statement is clear by continuity of t v,i and the fact that t v,i (γ−1 a ) = 0 for all
γ ∈ G (F ), and the second follows from 4.3.13, noting 4.3.4.
Proposition 4.3.15. Let a ∈ X̄ F,S , and let P be the parabolic subgroup of PGLV associated to a .
Let ΓK ,(P) ⊂ ΓK be as in 3.4.12. Then there are c 1 , c 2 ∈ R>0 and ϕ ∈ G (F ) such that ΓK ,(P)ϕ S(c 1 , c 2 )
is a neighborhood of a in X̄ F,S for the Satake topology.
Proof. This holds by definition of the Satake topology with ϕ = 1 if a ∈ X̄ F,S (B ). In general, let I
be the parabolic type of a . Then the parabolic subgroup associated to a has the form ϕPI ϕ −1
for some ϕ ∈ G (F ). We have ϕ −1 a ∈ X̄ F,S (PI ) ⊂ X̄ F,S (B ). By that already proven case, there exists
γ ∈ ΓK ,(PI ) such that ΓK ,(P) ϕγS(c 1 , c 2 ) is a neighborhood of a for the Satake topology.
The following result can be proved in the manner of 4.4.8 for X̄ ♭F,S below, replacing R by {ϕ}, and ΓK ,(W ) by ΓK ,(P) .
Lemma 4.3.16. Let the notation be as in 4.3.15. Let U ′ be a neighborhood of ϕ −1 a in X̄ F,S for the
Satake topology. Then there is a neighborhood U of a in X̄ F,S for the Satake topology such that
U ⊂ ΓK ,(P)ϕ(S(c 1 , c 2 ) ∩ U ′ ).
4.4 X̄ ♭F,S and reduction theory
4.4.1. Let S be a finite set of places of F containing the archimedean places. In this subsection, we consider X̄ ♭F,S . Fix a basis (e i )1≤i ≤d of V . Let B ⊂ G = PGLV be the Borel subgroup of upper triangular matrices for (e i )i . Let K be a compact open subgroup of G (ASF ).
4.4.2. Let c 1 , c 2 ∈ R>0 . We let S♭ (c 1 , c 2 ) denote the image of S(c 1 , c 2 ) under X̄ F,S → X̄ ♭F,S . For r ∈ {1, . . . , d − 1}, we then define
S♭r (c 1 , c 2 ) = {(W, µ) ∈ S♭ (c 1 , c 2 ) | dim(W ) ≥ r }.
Then the maps t v,i of 4.3.5 for v ∈ S and 1 ≤ i ≤ r induce maps
t v,i : S♭r (c 1 , c 2 ) → R>0 (1 ≤ i ≤ r − 1) and t v,r : S♭r (c 1 , c 2 ) → R≥0 .
For c 3 ∈ R>0 , we also set
S♭r (c 1 , c 2 , c 3 ) = {x ∈ S♭r (c 1 , c 2 ) | min{t v,r (x ) | v ∈ S} ≤ c 3 }.
Proposition 4.4.3. Fix c 1 ∈ R>0 and finite subsets R 1 , R 2 of G (F ). Then there exists c 3 ∈ R>0 such that for all c 2 ∈ R>0 , we have
{γ ∈ R 1 ΓK R 2 | γS♭r (c 1 , c 2 , c 3 ) ∩ S♭r (c 1 , c 2 ) ≠ ∅} ⊂ P{r } .
Proof. Take (W, µ) ∈ γS♭r (c 1 , c 2 , c 3 ) ∩ S♭r (c 1 , c 2 ), and let r ′ = dim W . Let P be the parabolic subgroup of G corresponding to the flag (Vi )−1≤i ≤d−r ′ with Vi = W + ∑_{j=r′+1}^{r′+i} F e_j for 0 ≤ i ≤ d − r ′ . Let µ′ ∈ ZF,S (P) be the unique element such that a = (P, µ′ ) ∈ X̄ F,S maps to (W, µ). Then a ∈ γS{r } (c 1 , c 2 , c 3 ) ∩ S(c 1 , c 2 ), so we can apply 4.3.10.
Proposition 4.4.4. Fix c 1 , c 2 ∈ R>0 and finite subsets R 1 , R 2 of G (F ). Then there exists A > 1 such
that if x ∈ S♭r (c 1 , c 2 ) ∩ γ−1S♭r (c 1 , c 2 ) for some γ ∈ R 1 ΓK R 2 , then
A −1 t v,i (x ) ≤ t v,i (γx ) ≤ At v,i (x )
for all v ∈ S and 1 ≤ i ≤ r .
Proof. This follows from 4.3.13.
We also have the following easy consequence of Lemma 4.3.8.
Lemma 4.4.5. Let a be in the image of X̄ F,S (B ) → X̄ ♭F,S , and let r be the dimension of the F -subspace of V associated to a . Then the F -subspace of V associated to a is ∑_{i=1}^{r} F e_i .
Proposition 4.4.6. Let a ∈ X̄ ♭F,S and let r be the dimension of the F -subspace of V associated to a . Let c 1 , c 2 ∈ R>0 . Fix a finite subset R of G (F ).
(1) For any ε > 0, there exists a neighborhood U of a in X̄ ♭F,S for the Satake topology such that max{t v,r (x ) | v ∈ S} < ε for all x ∈ (ΓK R)−1U ∩ S♭r (c 1 , c 2 ).
(2) If 1 ≤ i < r , then there exist a neighborhood U of a in X̄ ♭F,S for the Satake topology and c ∈ R>0 such that min{t v,i (x ) | v ∈ S} ≥ c for all x ∈ (ΓK R)−1U ∩ S♭r (c 1 , c 2 ).
Proof. This follows from 4.4.4, as in the proof of 4.3.14.
Proposition 4.4.7. Let W be an F -subspace of V of dimension r ≥ 1. Let Φ be the set of ϕ ∈ G (F ) such that ϕ(∑_{i=1}^{r} F e_i ) = W .
(1) There exists a finite subset R of Φ such that for any a ∈ X̄ ♭F,S (W ), there exist c 1 , c 2 ∈ R>0 for which the set ΓK ,(W ) R S♭r (c 1 , c 2 ) is a neighborhood of a in the Satake topology.
(2) For any ϕ ∈ Φ and a ∈ X̄ ♭F,S with associated subspace W , there exist c 1 , c 2 ∈ R>0 such that a ∈ ϕ S♭r (c 1 , c 2 ) and ΓK ,(W ) ϕ S♭r (c 1 , c 2 ) is a neighborhood of a in the Satake topology.
Proof. We may suppose without loss of generality that W = ∑_{j=1}^{r} F e_j , in which case Φ = G (F )(W ) (see 3.4.12). Consider the set 𝒬 of all parabolic subgroups Q of G such that W is contained in the smallest nonzero subspace of V preserved by Q. Any Q ∈ 𝒬 has the form Q = ϕPI ϕ −1 for some ϕ ∈ G (F )(W ) and subset I of J := {i ∈ Z | r ≤ i ≤ d − 1}. There exists a finite subset R of G (F )(W ) such that we may always choose ϕ ∈ ΓK ,(W ) R.
By 4.4.5, an element of X̄ F,S (B ) has image in X̄ ♭F,S (W ) if and only if the parabolic subgroup associated to it has the form PI for some I ⊂ J . The intersection of the image of X̄ F,S (B ) → X̄ ♭F,S with X̄ ♭F,S (W ) is the union of the S♭r (c 1 , c 2 ) with c 1 , c 2 ∈ R>0 . By the above, for any a ∈ X̄ ♭F,S (W ), we may choose ξ ∈ ΓK ,(W ) R such that ξ−1 a is in this intersection, and part (1) follows. Moreover, if W is the subspace associated to a , then ϕ −1 a ∈ X̄ ♭F,S (W ) is in the image of X̄ F,S (B ) for all ϕ ∈ G (F )(W ) , from which (2) follows.
Lemma 4.4.8. Let W , Φ, R be as in 4.4.7, fix a ∈ X̄ ♭F,S (W ), and let c 1 , c 2 ∈ R>0 be as in 4.4.7(1) for this a . For each ϕ ∈ R, let Uϕ be a neighborhood of ϕ −1 a in X̄ ♭F,S for the Satake topology. Then there is a neighborhood U of a in X̄ ♭F,S for the Satake topology such that
U ⊂ ⋃_{ϕ∈R} ΓK ,(W ) ϕ(S♭r (c 1 , c 2 ) ∩ Uϕ ).
Proof. We may assume that each ϕ(Uϕ ) is stable under the action of ΓK ,(W ) . Let
U = ΓK ,(W ) R S♭r (c 1 , c 2 ) ∩ ⋂_{ϕ∈R} ϕ(Uϕ ).
Then U is a neighborhood of a by 4.4.7(1). Let x ∈ U . Take γ ∈ ΓK ,(W ) and ϕ ∈ R such that
x ∈ γϕ S♭r (c 1 , c 2 ). Since ϕ(Uϕ ) is ΓK ,(W ) -stable, γ−1x ∈ ϕ(Uϕ ) and hence ϕ −1 γ−1x ∈ S♭r (c 1 , c 2 ) ∩
Uϕ .
Proposition 4.4.9. Let a = (W, µ) ∈ X̄ ♭F,S , and let r = dim(W ). Take ϕ ∈ G (F ) and c 1 , c 2 ∈ R>0 as in 4.4.7(2) such that ΓK ,(W ) ϕ S♭r (c 1 , c 2 ) is a neighborhood of a . Let φ♭W,S : X̄ ♭F,S (W ) → Z♭F,S (W ) be as in 3.4.4. For any neighborhood U of µ = φ♭W,S (a ) in Z♭F,S (W ) and any ε ∈ R>0 , set
Φ(U , ε) = (φ♭W,S )^{−1} (U ) ∩ ΓK ,(W ) ϕ{x ∈ S♭r (c 1 , c 2 ) | t v,r (x ) < ε for all v ∈ S}.
Then the set of all Φ(U , ε) forms a base of neighborhoods of a in X̄ ♭F,S under the Satake topology.
Proof. We may suppose that W = ∑_{i=1}^{r} F e_i without loss of generality, in which case ϕ ∈ G (F )(W ) . Let P be the smallest parabolic subgroup containing B with flag (Vi )−1≤i ≤m such that V0 = W and m = d − r . Let Q be the parabolic subgroup consisting of all elements that preserve W . We then have G ⊃ Q ⊃ P ⊃ B . Let B ′ be the Borel subgroup of PGLV /W that is the image of P and which we regard as a subgroup of G using (e r +i )1≤i ≤m to split V → V /W .
Let
f v : Q u (Fv ) × X̄ V /W,F,v (B ′ ) × X Wv × R≥0 → X̄ F,v (P)
be the unique surjective continuous map such that ξ = f v ◦ h, where ξ is as in 3.5.6 and h is defined as the composition
Pu (Fv ) × X Wv × R^m_{≥0} → Q u (Fv ) × B u′ (Fv ) × X Wv × R≥0 × R^{m−1}_{≥0} → Q u (Fv ) × X̄ V /W,F,v (B ′ ) × X Wv × R≥0
of the map induced by the isomorphism Pu (Fv ) ≅ Q u (Fv ) × B u′ (Fv ) and the map induced by the surjection π̄ B ′,v : B u′ (Fv ) × R^{m−1}_{≥0} → X̄ V /W,F,v (B ′ ) of 3.3.8(2). The existence of f v follows from 3.3.8(4).
Set Y0 = RS>0 ∪ {(0)v ∈S }, and let
f S : Q u (AF,S ) × X̄ V /W,F,S (B ′ ) × Z♭F,S (W ) × Y0 → X̄ F,S (P)
be the product of the maps f v . Let t v,r : X̄ F,v (P) → R≥0 denote the composition
X̄ F,v (P) → X̄ F,v (B ) → R^{d−1}_{≥0} → R≥0 ,
in which the second arrow is φ′B,v and the last arrow is the r th projection. The composition of f S with (t v,r )v ∈S is projection
onto Y0 by 3.5.7 and 3.4.6.
Let X̄ F,S (W ) denote the inverse image of X̄ ♭F,S (W ) under the canonical surjection ΠS : X̄ F,S → X̄ ♭F,S . Combining f S with the action of G (F )(W ) , we obtain a surjective map
f S′ : G (F )(W ) × (Q u (AF,S ) × X̄ V /W,F,S (B ′ ) × Z♭F,S (W ) × Y0 ) → X̄ F,S (W ), f S′ (g , z ) = g f S (z ).
The composition of f S′ with φ♭W,S ◦ ΠS is projection onto Z♭F,S (W ) by 3.5.9 and 3.5.10.
Applying 4.3.4 with V /W in place of V , there exists a compact subset C of Q u (AF,S ) ×
X̄ V /W,F,S (B ′ ) and a finite subset R of G (F )(W ) such that f S′ (ΓK ,(W ) R × C × Z♭F,S (W ) × Y0 ) = X̄ F,S (W ).
Consider the restriction of ΠS ◦ f S′ to a surjective map
λS : ΓK ,(W ) R × C × Z♭F,S (W ) × Y0 → X̄ ♭F,S (W ).
We may suppose that R contains ϕ, since it lies in G (F )(W ) .
Now, let U ′ be a neighborhood of a in X̄ ♭F,S (W ) for the Satake topology. It is sufficient to
prove that there exist an open neighborhood U of µ in Z♭F,S (W ) and ε ∈ R>0 such that Φ(U , ε) ⊂
U ′ . For ε ∈ R>0 , set Yε = {(t v )v ∈S ∈ Y0 | t v < ε for all v ∈ S}.
For any x ∈ C , we have λS (α, x , µ, 0) = (W, µ) ∈ U ′ for all α ∈ R. By the continuity of λS , there exist a neighborhood D(x ) ⊂ Q u (AF,S ) × X̄ V /W,F,S (B ′ ) of x , a neighborhood U (x ) ⊂ Z♭F,S (W ) of µ, and ε(x ) ∈ R>0 such that
λS (R × D(x ) × U (x ) × Yε(x ) ) ⊂ U ′ .
Since C is compact, some finite collection of the sets D(x ) covers C . Thus, there exist a neighborhood U of µ in Z♭F,S (W ) and ε ∈ R>0 such that λS (R × C × U × Yε ) ⊂ U ′ . Since U ′ is ΓK ,(W ) -stable by 3.4.15, we have λS (ΓK ,(W ) R × C × U × Yε ) ⊂ U ′ .
Let y ∈ Φ(U , ε), and write y = g x with g ∈ ΓK ,(W ) ϕ and x ∈ S♭r (c 1 , c 2 ) such that t v,r (x ) < ε
for all v ∈ S. Since Φ(U , ε) ⊂ X̄ ♭F,S (W ), we may by our above remarks write y = λS (g , c , ν, t ) = g ΠS (f S (c , ν, t )), where c ∈ C , ν = φ♭W,S (y ), and t = (t v,r (x ))v ∈S . Since ν ∈ U and t ∈ Yε by
definition, y is contained in U ′ . Therefore, we have Φ(U , ε) ⊂ U ′ .
Example 4.4.10. Consider the case F = Q, S = {v } with v the archimedean place, and d = 3. We construct a base of neighborhoods of a point in X̄ ♭Q,v for the Satake topology.
Fix a basis (e i )1≤i ≤3 of V . Let a = (W, µ) ∈ X̄ ♭Q,v , where W = Qe 1 , and µ is the unique element of X Wv .
For c ∈ R>0 , let Uc be the subset of X v = PGL3 (R)/PO3 (R) consisting of the elements
\begin{pmatrix} 1 & x_{12} & x_{13} \\ 0 & 1 & x_{23} \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & γ \end{pmatrix} \begin{pmatrix} y_1 y_2 & 0 & 0 \\ 0 & y_2 & 0 \\ 0 & 0 & 1 \end{pmatrix}
such that γ ∈ PGL2 (Z), x_{i j} ∈ R, y_1 ≥ c , and y_2 ≥ √3/2. When γ, x_{i j} and y_2 are fixed and y_1 → ∞, these elements converge to a in X̄ ♭Q,v under the Satake topology. When γ, x_{i j} , and y_1 are fixed and y_2 → ∞, they converge in the Satake topology to
µ(γ, x_{12} , y_1 ) := \begin{pmatrix} 1 & x_{12} & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & γ \end{pmatrix} µ(y_1 ),
where µ(y_1 ) is the class in X̄ ♭Q,v of the semi-norm a_1 e_1^∗ + a_2 e_2^∗ + a_3 e_3^∗ ↦ (a_1^2 y_1^2 + a_2^2 )^{1/2} on Vv^∗ .
The sets
Ūc := {a } ∪ {µ(γ, x , y ) | γ ∈ PGL2 (Z), x ∈ R, y ≥ c } ∪ Uc
form a base of neighborhoods of a in X̄ ♭Q,v under the Satake topology. Note that H = SL2 (Z){z ∈ H | Im(z ) ≥ √3/2}, which is the reason for the appearance of √3/2. It can of course be replaced by any b ∈ R>0 such that b ≤ √3/2.
4.4.11. We continue with Example 4.4.10. Under the canonical surjection X̄ Q,v → X̄ ♭Q,v , the inverse image of a = (W, µ) in X̄ Q,v is canonically homeomorphic to X̄ (V /W )v = H ∪ P1 (Q) under the Satake topology on both spaces. This homeomorphism sends x + y_2 i ∈ H (x ∈ R, y_2 ∈ R>0 ) to the limit for the Satake topology of
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & x \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} y_1 y_2 & 0 & 0 \\ 0 & y_2 & 0 \\ 0 & 0 & 1 \end{pmatrix} ∈ PGL3 (R)/PO3 (R)
as y_1 → ∞. (This limit in X̄ Q,v depends on x and y_2 , but the limit in X̄ ♭Q,v is a .)
4.4.12. In the example of 4.4.10, we explain that the quotient topology on X̄ ♭Q,v of the Satake topology on X̄ Q,v is different from the Satake topology on X̄ ♭Q,v .
For a map
f : PGL2 (Z)/\begin{pmatrix} 1 & Z \\ 0 & 1 \end{pmatrix} → R>0 ,
define a subset U f of X v as in the definition of Uc but replacing the condition on γ, x_{i j} , y_i by γ ∈ PGL2 (Z), x_{i j} ∈ R, y_1 ≥ f (γ), and y_2 ≥ √3/2. Let
Ū f = {a } ∪ {µ(γ, x , y ) | γ ∈ PGL2 (Z), x ∈ R, y ≥ f (γ)} ∪ U f .
When f varies, the Ū f form a base of neighborhoods of a in X̄ ♭Q,v for the quotient topology of the Satake topology on X̄ Q,v . On the other hand, if inf{ f (γ) | γ ∈ PGL2 (Z)} = 0, then Ū f is not a neighborhood of a for the Satake topology on X̄ ♭Q,v .
4.5 Proof of the main theorem
In this subsection, we prove Theorem 4.1.4. We begin with the quasi-compactness asserted
therein. Throughout this subsection, we set Z = X S 2 × G (ASF )/K in situation (I) and Z = X S 2 in
situation (II), so X̄ = X̄ × Z .
Proposition 4.5.1. In situation (I) of 4.1.3, the quotient G (F )\X̄ is quasi-compact. In situation
(II), the quotient Γ\X̄ is quasi-compact for any subgroup Γ of ΓK of finite index.
Proof. We may restrict to case (i) of 4.1.2 that X̄ = X̄ F,S 1 , as X̄ ♭F,S 1 of case (ii) is a quotient of X̄ F,S 1 (under the Borel-Serre topology). In situation (I), we claim that there exist c 1 , c 2 ∈ R>0 , a
compact subset C of B u (AF,S ), and a compact subset C ′ of Z such that X̄ = G (F )(S(C ; c 1 , c 2 ) ×
C ′ ). In situation (II), we claim that there exist c 1 , c 2 ,C ,C ′ as above and a finite subset R of G (F )
such that X̄ = ΓR(S(C ; c 1 , c 2 )×C ′ ). It follows that in situation (I) (resp., (II)), there is a surjective
continuous map from the compact space C × T(c 1 , c 2 ) × C ′ (resp., R × C × T(c 1 , c 2 ) × C ′ ) onto
the quotient space under consideration, which yields the proposition.
For any compact open subgroup K ′ of G (ASF1 ), the set G (F )\G (ASF1 )/K ′ is finite. Each X v for
v ∈ S 2 may be identified with the geometric realization of the Bruhat-Tits building for PGLVv ,
the set of i -simplices of which for a fixed i can be identified with G (Fv )/K v′ for some K ′ . So,
we see that in situation (I) (resp., (II)), there is a compact subset D of Z such that Z = G (F )D
(resp., Z = ΓD).
Now fix such a compact open subgroup K ′ of G (ASF1 ). By 4.3.4, there are c 1 , c 2 ∈ R>0 , a
compact subset C of Pu (AF,S 1 ), and a finite subset R ′ of G (F ) such that X̄ F,S = ΓK ′ R ′ S(C ; c 1 , c 2 ).
We consider the compact subset C ′ := (R ′ )−1 K ′ D of Z .
Let (x , y ) ∈ X̄, where x ∈ X̄ F,S and y ∈ Z . Write y = γz for some z ∈ D and γ ∈ G (F ) (resp.,
γ ∈ Γ) in situation (I) (resp., (II)). In situation (II), we write ΓΓK ′ R ′ = ΓR for some finite subset
R of G (F ). Write γ−1x = γ′ ϕs where γ′ ∈ ΓK ′ , ϕ ∈ R ′ , s ∈ S(C ; c 1 , c 2 ). We have
(x , y ) = γ(γ−1x , z ) = γ(γ′ ϕs , z ) = (γγ′ ϕ)(s , ϕ −1 (γ′ )−1 z ).
As γγ′ ϕ lies in G (F ) in situation (I) and in ΓR in situation (II), we have the claim.
4.5.2. To prove Theorem 4.1.4, it remains only to verify the Hausdorff property. For this, it is
sufficient to prove the following.
Proposition 4.5.3. Let Γ = G (F ) in situation (I) of 4.1.3, and let Γ = ΓK in situation (II). For
every a , a ′ ∈ X̄, there exist neighborhoods U of a and U ′ of a ′ such that if γ ∈ Γ and γU ∩U ′ ≠ ∅,
then γa = a ′ .
In the rest of this subsection, let the notation be as in 4.5.3. It is sufficient to prove 4.5.3 for
the Satake topology on X̄. In 4.5.4–4.5.8, we prove 4.5.3 in situation (II) for S = S 1 . That is, we
suppose that X̄ = X̄ . In 4.5.9 and 4.5.10, we deduce 4.5.3 in general from this case.
Lemma 4.5.4. Assume that X̄ = X̄ F,S . Suppose that a , a ′ ∈ X̄ have distinct parabolic types (4.3.7).
Then there exist neighborhoods U of a and U ′ of a ′ such that γU ∩ U ′ = ∅ for all γ ∈ Γ.
Proof. Let I (resp., I ′ ) be the parabolic type of a (resp., a ′ ). We may assume that there exists
an i ∈ I with i ∉ I ′ .
By 4.3.15, there exist ϕ, ψ ∈ G (F ) and c 1 , c 2 ∈ R>0 such that ΓK ϕ S(c 1 , c 2 ) is a neighbor-
hood of a and ΓK ψS(c 1 , c 2 ) is a neighborhood of a ′ . By 4.3.14(2), there exist a neighborhood
U ′ ⊂ ΓK ψS(c 1 , c 2 ) of a ′ and c ∈ R>0 with the property that min{t v,i (x ) | v ∈ S} ≥ c for all
x ∈ (ΓK ψ)−1U ′ ∩ S(c 1 , c 2 ). Let A ∈ R>1 be as in 4.3.13 for these c 1 , c 2 for R 1 = {ϕ −1 } and R 2 = {ψ}.
Take ε ∈ R>0 such that Aε ≤ c . By 4.3.14(1), there exists a neighborhood U ⊂ ΓK ϕ S(c 1 , c 2 ) of a
such that max{t v,i (x ) | v ∈ S} < ε for all x ∈ (ΓK ϕ)−1U ∩ S(c 1 , c 2 ).
We prove that γU ∩ U ′ = ∅ for all γ ∈ ΓK . If x ∈ γU ∩ U ′ , then we may take δ, δ′ ∈ ΓK such
that (δϕ)−1 γ−1x ∈ S(c 1 , c 2 ) and (δ′ ψ)−1x ∈ S(c 1 , c 2 ). Since
(δϕ)−1 γ−1x = ϕ −1 (δ−1 γ−1 δ′ )ψ(δ′ ψ)−1x ∈ ϕ −1 ΓK ψ · (δ′ ψ)−1x ,
we have by 4.3.13 that
c ≤ t v,i ((δ′ ψ)−1x ) ≤ At v,i ((δϕ)−1 γ−1x ) < Aε,
for all v ∈ S and hence c < Aε, a contradiction.
Lemma 4.5.5. Assume that X̄ = X̄ ♭F,S . Let a , a ′ ∈ X̄ and assume that the dimension of the F -
subspace associated to a is different from that of a ′ . Then there exist neighborhoods U of a and
U ′ of a ′ such that γU ∩ U ′ = ∅ for all γ ∈ Γ.
Proof. The proof is similar to that of 4.5.4. In place of 4.3.13, 4.3.14, and 4.3.15, we use 4.4.4,
4.4.6, and 4.4.7, respectively.
Lemma 4.5.6. Let P be a parabolic subgroup of G . Let a , a ′ ∈ ZF,S (P) (see 3.4.3), and let R 1 and
R 2 be finite subsets of G (F ). Then there exist neighborhoods U of a and U ′ of a ′ in ZF,S (P) such
that γa = a ′ for every γ ∈ R 1ΓK R 2 ∩ P(F ) for which γU ∩ U ′ ≠ ∅.
Proof. For each ξ ∈ R 1 and η ∈ R 2 , the set ξΓK η ∩ P(F ) is a ξΓK ξ−1 ∩ P(F )-orbit for the left action of ξΓK ξ−1 . Hence its image in ∏_{i=0}^{m} PGL_{Vi /Vi−1} (AF,S ) is discrete, for (Vi )−1≤i ≤m the flag corresponding to P, and thus the image of R 1 ΓK R 2 ∩ P(F ) in ∏_{i=0}^{m} PGL_{Vi /Vi−1} (AF,S ) is discrete as well. On the other hand, for any compact neighborhoods U of a and U ′ of a ′ , the set
{ g ∈ ∏_{i=0}^{m} PGL_{Vi /Vi−1} (AF,S ) | g U ∩ U ′ ≠ ∅ }
is compact. Hence the intersection M := {γ ∈ R 1 ΓK R 2 ∩ P(F ) | γU ∩ U ′ ≠ ∅} is finite. If γ ∈ M and γa ≠ a ′ , then replacing U and U ′ by smaller neighborhoods of a and a ′ , respectively, we have γU ∩ U ′ = ∅. Hence for sufficiently small neighborhoods U and U ′ of a and a ′ , respectively, we have that if γ ∈ M , then γa = a ′ .
Lemma 4.5.7. Let W be an F -subspace of V . Let a , a ′ ∈ Z♭F,S (W ) (see 3.4.4), and let R 1 and R 2
be finite subsets of G (F ). Let P be the parabolic subgroup of G consisting of all elements which
preserve W . Then there exist neighborhoods U of a and U ′ of a ′ in Z♭F,S (W ) such that γa = a ′ for
every γ ∈ R 1ΓK R 2 ∩ P(F ) for which γU ∩ U ′ ≠ ∅.
Proof. This is proven in the same way as 4.5.6.
4.5.8. We prove 4.5.3 in situation (II), supposing that S = S 1 .
In case (i) (that is, X̄ = X̄ = X̄ F,S ), we may assume by 4.5.4 that a and a ′ have the same parabolic type I . In case (ii) (that is, X̄ = X̄ = X̄ ♭F,S ), we may assume by 4.5.5 that the dimension
r of the F -subspace of V associated to a coincides with that of a ′ . In case (i) (resp., (ii)), take
c 1 , c 2 ∈ R>0 and elements ϕ and ψ (resp., finite subsets R and R ′ ) of G (F ) such that c 1 , c 2 , ϕ
(resp., c 1 , c 2 , R) satisfy the condition in 4.3.15 (resp., 4.4.7) for a and c 1 , c 2 , ψ (resp., c 1 , c 2 , R ′ )
satisfy the condition in 4.3.15 (resp., 4.4.7) for a ′ . In case (i), we set R = {ϕ} and R ′ = {ψ}.
Fix a basis (e i )1≤i ≤d of V . In case (i) (resp., (ii)), denote S(c 1 , c 2 ) (resp., S♭r (c 1 , c 2 )) by S. In case (i), let P = PI , and let (Vi )−1≤i ≤m be the associated flag. In case (ii), let W = ∑_{i=1}^{r} F e_i , and let P be the parabolic subgroup of G consisting of all elements which preserve W .
Note that in case (i) (resp., (ii)), for all ϕ ∈ R and ψ ∈ R ′ , the parabolic subgroup P is
associated to ϕ −1 a and to ψ−1 a ′ (resp., W is associated to ϕ −1 a and to ψ−1 a ′ ) and hence
these elements are determined by their images in ZF,S (P) (resp. Z♭F,S (W )).
In case (i) (resp., case (ii)), apply 4.5.6 (resp., 4.5.7) to the images of ϕ −1 a and ψ−1 a ′ for
ϕ ∈ R, ψ ∈ R ′ in ZF,S (P) (resp., Z♭F,S (W )). By this, and by 4.3.10 for case (i) and 4.4.3 for case
(ii), we see that there exist neighborhoods Uϕ of ϕ −1 a for each ϕ ∈ R and Uψ′ of ψ−1 a ′ for each
ψ ∈ R ′ for the Satake topology with the following two properties:
(A) {γ ∈ (R ′ )−1 ΓK R | γ(S ∩ Uϕ ) ∩ (S ∩ Uψ′ ) ≠ ∅ for some ϕ ∈ R, ψ ∈ R ′ } ⊂ P(F ),
(B) if γ ∈ (R ′ )−1 ΓK R ∩ P(F ) and γUϕ ∩ Uψ′ ≠ ∅ for ϕ ∈ R and ψ ∈ R ′ , then γϕ −1 a = ψ−1 a ′ .
In case (i) (resp., (ii)), take a neighborhood U of a satisfying the condition in 4.3.16 (resp.,
4.4.8) for (Uϕ )ϕ∈R , and take a neighborhood U ′ of a ′ satisfying the condition in 4.3.16 (resp.,
4.4.8) for (Uψ′ )ψ∈R ′ . Let γ ∈ ΓK and assume γU ∩ U ′ ≠ ∅. We prove γa = a ′ . Take x ∈ U and
x ′ ∈ U ′ such that γx = x ′ . By 4.3.16 (resp., 4.4.8), there are ϕ ∈ R, ψ ∈ R ′ , and ε ∈ ΓK ,(ϕPϕ −1)
and δ ∈ ΓK ,(ψPψ−1) in case (i) (resp., ε ∈ ΓK ,(ϕW ) and δ ∈ ΓK ,(ψW ) in case (ii)) such that ϕ −1 ε−1 x ∈
S ∩ Uϕ and ψ−1 δ−1x ′ ∈ S ∩ Uψ′ . Since
(ψ−1 δ−1 γεϕ)ϕ −1 ε−1x = ψ−1 δ−1x ′ ,
we have ψ−1 δ−1 γεϕ ∈ P(F ) by property (A). By property (B), we have
(ψ−1 δ−1 γεϕ)ϕ −1 a = ψ−1 a ′ .
Since εa = a and δa ′ = a ′ , this proves γa = a ′ .
We have proved 4.5.3 in situation (II) under the assumption S = S 1 . In the following 4.5.9
and 4.5.10, we reduce the general case to that case.
Lemma 4.5.9. Let a , a ′ ∈ Z . In situation (I) (resp., (II)), let H = G (ASF1 ) (resp., H = G (AF,S 2 )).
Then there exist neighborhoods U of a and U ′ of a ′ in Z such that g a = a ′ for all g ∈ H for
which g U ∩ U ′ ≠ ∅.
Proof. For any compact neighborhoods U of a and U ′ of a ′ , the set M := {g ∈ H | g U ∩U ′ ≠ ∅} is compact. By definition of Z , there exist a compact open subgroup N of H and a compact neighborhood U of a such that g x = x for all g ∈ N and x ∈ U . For such a choice of U , the set M is stable under the right translation by N , and M /N is finite because M is compact and N is an open subgroup of H . If g ∈ M and if g a ≠ a ′ , then by shrinking the neighborhoods U and U ′ , we have that g U ∩U ′ = ∅. As M /N is finite, we have sufficiently small neighborhoods U and U ′ such that if g ∈ M and g U ∩ U ′ ≠ ∅, then g a = a ′ .
4.5.10. We prove Proposition 4.5.3.
Let H be as in Lemma 4.5.9. Write a = (a S 1 , a Z ) and a ′ = (a S′ 1 , a Z′ ) as elements of X̄ × Z . By
4.5.9, there exist neighborhoods UZ of a Z and UZ′ of a Z′ in Z such that if g ∈ H and g UZ ∩ UZ′ ≠ ∅, then g a Z = a Z′ . The set K ′ := {g ∈ H | g a Z = a Z } is a compact open subgroup of H . Let Γ′ be
the inverse image of K ′ under Γ → H , where Γ = G (F ) in situation (I). In situation (II), the
group Γ′ is of finite index in the inverse image of the compact open subgroup K ′ × K under G (F ) → G (A^{S 1}_F ). In both situations, the set M := {γ ∈ Γ | γa Z = a Z′ } is either empty or a Γ′ -torsor
for the right action of Γ′ .
Assume first that M ≠ ∅, in which case we may choose θ ∈ Γ such that M = θ Γ′ . Since we have proven 4.5.3 in situation (II) for S 1 = S, there exist neighborhoods US 1 of a S 1 and US′ 1 of θ −1 a S′ 1 such that if γ ∈ Γ′ satisfies γUS 1 ∩ US′ 1 ≠ ∅, then γa S 1 = θ −1 a S′ 1 . Let U = US 1 × UZ and U ′ = θUS′ 1 × UZ′ , which are neighborhoods of a and a ′ in X̄, respectively. Suppose that γ ∈ Γ satisfies γU ∩U ′ ≠ ∅. Then, since γUZ ∩UZ′ ≠ ∅, we have γa Z = a Z′ and hence γ = θ γ′ for some γ′ ∈ Γ′ . Since θ γ′US 1 ∩ θUS′ 1 ≠ ∅, we have γ′US 1 ∩US′ 1 ≠ ∅, and hence γ′ a S 1 = θ −1 a S′ 1 . That is, we have γa S 1 = a S′ 1 , so γa = a ′ .
In the case that M = ∅, take any neighborhoods US 1 of a S 1 and US′ 1 of a S′ 1 , and set U = US 1 × UZ and U ′ = US′ 1 × UZ′ . Any γ ∈ Γ such that γU ∩ U ′ ≠ ∅ is contained in M , so no such γ
exists.
4.6 Supplements to the main theorem
We use the notation of §4.1 throughout this subsection. We suppose that Γ = G (F ) in situation
(I), and we let Γ be a subgroup of ΓK of finite index in situation (II). For a ∈ X̄, let Γa < Γ denote
the stabilizer of a .
Theorem 4.6.1. For a ∈ X̄ (with either the Borel-Serre or the Satake topology), there is an open
neighborhood U of the image of a in Γa \X̄ such that the image U ′ of U under the quotient map
Γa \X̄ → Γ\X̄ is open and the map U → U ′ is a homeomorphism.
Proof. By the case a = a ′ of Proposition 4.5.3, there is an open neighborhood U ′′ ⊂ X̄ of a such
that if γ ∈ ΓK and γU ′′ ∩ U ′′ ≠ ∅, then γa = a . Then the subset U := Γa \Γa U ′′ of Γa \X̄ is open
and has the desired property.
Proposition 4.6.2. Suppose that S = S 1 , and let a ∈ X̄.
(1) Take X̄ = X̄ F,S , and let P be the parabolic subgroup associated to a . Then Γ(P) (as in 3.4.12)
is a normal subgroup of Γa of finite index.
56
♭
(2) Take X̄ = X̄ F,S
, and let W be the F -subspace of V associated to a . Then Γ(W ) (as in 3.4.12)
is a normal subgroup of Γa of finite index.
Proof. We prove (1), the proof of (2) being similar. Let (Vi )−1≤i ≤m be the flag corresponding
Qm
to P. The image of Γ ∩ P(F ) in i =0 PGLVi /Vi −1 (AF,S ) is discrete. On the other hand, the stabiQm
lizer in i =0 PGLVi /Vi −1 (AF,S ) of the image of a in ZF,S (P) is compact. Hence the image of Γa in
Qm
PGLVi /Vi −1 (F ), which is isomorphic to Γa /Γ(P) , is finite.
i =0
Theorem 4.6.3. Assume that F is a function field and X̄ = X̄ F,S 1 , where S 1 consists of a single
place v . Then the inclusion map Γ\X ,→ Γ\X̄ is a homotopy equivalence.
Proof. Let a ∈ X̄. In situation (I) (resp., (II)), write a = (a v , a^v ) with a v ∈ X̄ F,v and a^v ∈ X S 2 × G (ASF )/K (resp., X S 2 ). Let K ′ be the isotropy subgroup of a^v in G (AvF ) (resp., ∏_{w∈S 2} G (Fw )), and let Γ′ < Γ be the inverse image of K ′ under the map Γ → G (AvF ) (resp., Γ → ∏_{w∈S 2} G (Fw )).
Let P be the parabolic subgroup associated to a . Let Γa be the isotropy subgroup of a in Γ,
which is contained in P(F ) and equal to the isotropy subgroup Γ′a v of a v in Γ′ . In situation (I)
(resp., (II)), take a Γa -stable open neighborhood D of a^v in X S 2 × G (ASF )/K (resp., X S 2 ) that has
compact closure.
Claim 1. The subgroup ΓD := {γ ∈ Γa | γx = x for all x ∈ D} of Γa is normal of finite index.
Proof of Claim 1. Normality follows from the Γa -stability of D. For any x in the closure D̄ of D,
there exists an open neighborhood Vx of x and a compact open subgroup N x of G (AvF ) (resp., ∏_{w∈S 2} G (Fw )) in situation (I) (resp., (II)) such that g y = y for all g ∈ N x and y ∈ Vx . For a finite subcover {Vx 1 , . . . , Vx n } of D̄, the group ΓD is the inverse image in Γa of ⋂_{i=1}^{n} N x i , so is of finite index.
Claim 2. The subgroup H := ΓD ∩ Pu (F ) of Γa is normal of finite index.
Proof of Claim 2. Normality is immediate from Claim 1 as Pu (F ) is normal in P(F ). Let H ′ =
Γ′(P) ∩ Pu (F ), which has finite index in Γ′(P) and equals Γ′ ∩ Pu (F ) by definition of Γ′(P) . Since
Γ′(P) ⊂ Γ′a v ⊂ Γ′ and Γ′a v = Γa , we have H ′ = Γa ∩ Pu (F ) as well. By Claim 1, we then have that H ′
contains H with finite index, so H has finite index in Γ′(P) . Proposition 4.6.2(1) tells us that Γ′(P)
is of finite index in Γ′a v = Γa .
Let (Vi )−1≤i ≤m be the flag corresponding to P. By Corollary 3.5.4, we have a homeomorphism
χ : Pu (Fv )\X̄ F,v (P) → ZF,v (P) × R^m_{≥0}
on quotient spaces arising from the P(Fv )-equivariant homeomorphism ψP,v = (φP,v , φ′P,v ) of 3.5.1 (see 3.4.3 and 3.4.6).
Claim 3. For a sufficiently small open neighborhood U of 0 = (0, . . . , 0) in R^m_{≥0} , the map χ induces a homeomorphism
χU : H \X̄ F,v (P)U → ZF,v (P) × U ,
where X̄ F,v (P)U denotes the inverse image of U under φ′P,v : X̄ F,v (P) → R^m_{≥0} .
Proof of Claim 3. By definition, χ restricts to a homeomorphism
Pu (Fv )\X̄ F,v (P)U → ZF,v (P) × U
for any open neighborhood U of 0. For a sufficiently large compact open subset C of Pu (Fv ),
we have Pu (Fv ) = HC . For U sufficiently small, every g ∈ C fixes all x ∈ X̄ F,v (P)U , which yields
the claim.
Claim 4. The map χU and the identity map on D induce a homeomorphism
χU ,a : Γa \(X̄ F,v (P)U × D) → (Γa \(ZF,v (P) × D)) × U .
Proof of Claim 4. The quotient group Γa /H is finite by Claim 2. Since the determinant of an automorphism of Vi /Vi −1 of finite order has trivial absolute value at v , the Γa -action on R^m_{≥0} is
trivial. Since H acts trivially on D, the claim follows from Claim 3.
Now let c ∈ R^m_{>0} , and set U = {t ∈ R^m_{≥0} | t_i < c_i for all 1 ≤ i ≤ m }. Set (X v )U = X v ∩ X̄ F,v (P)U . If c is sufficiently small, then
(Γa \(ZF,v (P) × D)) × (U ∩ R^m_{>0} ) ↪ (Γa \(ZF,v (P) × D)) × U
is a homotopy equivalence, and we can apply χ^{−1}_{U,a} to both sides to see that the inclusion map
Γa \((X v )U × D) ↪ Γa \(X̄ F,v (P)U × D)
is also a homotopy equivalence. By Theorem 4.6.1, this proves Theorem 4.6.3.
Remark 4.6.4. Theorem 4.6.3 may be viewed as a function field analogue of the homotopy equivalence for Borel-Serre spaces of [3].
4.6.5. Theorem 4.1.4 remains true if we replace G = PGLV by G = SLV in 4.1.3 and 4.1.4. It
also remains true if we replace G = PGLV by G = GLV and we replace X̄ in 4.1.4 in situation (I)
(resp., (II)) by
X̄ × X S 2 × (R^S_{>0} × G (ASF )/K )^1 (resp., X̄ × X S 2 × (R^S_{>0} )^1 ),
where ( )^1 denotes the kernel of
((a v )v ∈S , g ) ↦ |det(g )| ∏_{v∈S} a v (resp., (a v )v ∈S ↦ ∏_{v∈S} a v ),
and γ ∈ GLV (F ) (resp., γ ∈ ΓK ) acts on this kernel by multiplication by ((| det(γ)|v )v ∈S , γ) (resp.,
(|det(γ)|v )v ∈S ).
Theorems 4.6.1 and 4.6.3 also remain true under these modifications. These modified ver-
sions of the results are easily reduced to the original case G = PGLV .
4.7 Subjects related to this paper
4.7.1. In this subsection, as possibilities of future applications of this paper, we describe connections with the study of
• toroidal compactifications of moduli spaces of Drinfeld modules (4.7.2–4.7.5),
• the asymptotic behavior of Hodge structures and p -adic Hodge structures associated to
a degenerating family of motives over a number field (4.7.6, 4.7.7), and
• modular symbols over function fields (4.7.8, 4.7.9).
4.7.2. In [21], Pink constructed a compactification of the moduli space of Drinfeld modules
that is similar to the Satake-Baily-Borel compactification of the moduli space of polarized
abelian varieties. In a special case, it had been previously constructed by Kapranov [15].
In [20], Pink sketched a method for constructing a compactification of the moduli space
of Drinfeld modules that is similar to the toroidal compactification of the moduli space of
polarized abelian varieties (in part, using ideas of K. Fujiwara). However, the details of the
construction have not been published. Our plan for constructing toroidal compactifications
seems to be different from that of [20].
4.7.3. We give a rough outline of the relationship that we envision between this paper and the
analytic theory of toroidal compactifications. Suppose that F is a function field, and fix a place
v of F . Let O be the ring of all elements of F which are integral outside v . In [6], the notion of
a Drinfeld O-module of rank d is defined, and the moduli space of such Drinfeld modules is
constructed.
Let Cv be the completion of an algebraic closure of Fv and let | |: Cv → R≥0 be the absolute
value which extends the normalized absolute value of Fv . Let Ω ⊂ Pd−1 (Cv ) be the (d − 1)-dimensional Drinfeld upper half-space consisting of all points (z 1 : · · · : z d ) ∈ Pd−1 (Cv ) such
that (z i )1≤i ≤d is linearly independent over Fv .
For a compact open subgroup K of GLd (AvF ), the set of Cv -points of the moduli space M K
of Drinfeld O-modules of rank d with K -level structure is expressed as
M K (Cv ) = GLd (F )\(Ω × GLd (AvF )/K )
(see [6]).
Consider the case V = F d in §3 and §4. We have a map Ω → X v which sends (z 1 : · · · : z d ) ∈ Ω to the class of the norm Vv = F^d_v → R≥0 given by (a 1 , . . . , a d ) ↦ |∑_{i=1}^{d} a_i z_i | for a_i ∈ Fv . This
map induces a canonical continuous map
(1) M K (Cv ) = GLd (F )\(Ω × GLd (AvF )/K ) → GLd (F )\(X v × GLd (AvF )/K ).
The map (1) extends to a canonical continuous map
(2) M̄^KP_K (Cv ) → GLd (F )\(X̄ ♭F,v × GLd (AvF )/K ),
where M̄^KP_K denotes the compactification of Kapranov and Pink of M K . In particular, M̄^KP_K is related to X̄ ♭F,v . On the other hand, the toroidal compactifications of M K should be related to X̄ F,v . If we denote by M̄^tor_K the projective limit of all toroidal compactifications of M K , then the map of (1) should extend to a canonical continuous map
(3) M̄^tor_K (Cv ) → GLd (F )\(X̄ F,v × GLd (AvF )/K )
that fits in a commutative diagram
M̄^tor_K (Cv ) → GLd (F )\(X̄ F,v × GLd (AvF )/K )
  ↓                                ↓
M̄^KP_K (Cv ) → GLd (F )\(X̄ ♭F,v × GLd (AvF )/K ).
4.7.4. The expected map of 4.7.3(3) is the analogue of the canonical continuous map from
the projective limit of all toroidal compactifications of the moduli space of polarized abelian
varieties to the reductive Borel-Serre compactification (see [10, 16]).
From the point of view of our study, the reductive Borel-Serre compactification and X̄ F,v
are enlargements of spaces of norms. A polarized abelian variety A gives a norm on the polarized Hodge structure associated to A (the Hodge metric). This relationship between a polarized abelian variety and a norm forms the foundation of the relationship between the toroidal
compactifications of a moduli space of polarized abelian varieties and the reductive Borel-Serre compactification. This is similar to the relationship between M K and the space of norms X v given by the map of 4.7.3(1), as well as the relationship between M̄^tor_K and X̄ F,v given by
4.7.3(3).
4.7.5. In the usual theory of toroidal compactifications, cone decompositions play an important role. In the toroidal compactifications of 4.7.3, the simplices of Bruhat-Tits buildings
(more precisely, the simplices contained in the fibers of X̄ F,v → X̄ ♭F,v ) should play the role of the
cones in cone decompositions.
4.7.6. We are preparing a paper in which our space X̄ F,S with F a number field and with S
containing a non-archimedean place appears in the following way.
Let F be a number field, and let Y be a polarized projective smooth variety over F . Let m ≥ 0, and let V = H^m_{dR} (Y ) be the de Rham cohomology. For a place v of F , let Vv = Fv ⊗F V . For an archimedean place v of F , it is well known that Vv has a Hodge metric. For v non-archimedean, we can under certain assumptions define a Hodge metric on Vv by the method illustrated in the example of 4.7.7 below. The [Fv : Qv ]-powers of these Hodge metrics for v ∈ S are norms and therefore provide an element of ∏_{v∈S} X_{Vv} . When Y degenerates, this element of ∏_{v∈S} X_{Vv} can converge to a boundary point of X̄ F,S .
4.7.7. Let Y be an elliptic curve over a number field F , and take m = 1.
Let v be a non-archimedean place of F , and assume that Fv ⊗F Y is a Tate elliptic curve
of q -invariant qv ∈ Fv× with |qv | < 1. Then the first log-crystalline cohomology group D of the
special fiber of this elliptic curve is a free module of rank 2 over the Witt vectors W (k v ) with a
basis (e 1 , e 2 ) on which the Frobenius ϕ acts as ϕ(e 1 ) = e 1 and ϕ(e 2 ) = p e 2 , where p = char k v .
The first ℓ-adic étale cohomology group of this elliptic curve is a free module of rank 2 over Zℓ with a basis (e 1,ℓ , e 2,ℓ ) such that the inertia subgroup of Gal(F̄v /Fv ) fixes e 1,ℓ . The monodromy
operator N satisfies
N e 2 = ξ′v e 1 ,
N e 1 = 0,
N e 2,ℓ = ξ′v e 1,ℓ ,
N e 1,ℓ = 0
where ξ′v = ordFv (qv ) > 0. The standard polarization 〈 , 〉 of the elliptic curve satisfies 〈e 1 , e 2 〉 =
1 and hence
〈N e 2, e 2 〉 = ξ′v ,
〈e 1 , N −1 e 1 〉 = 1/ξ′v
〈N e 2,ℓ, e 2,ℓ 〉 = ξ′v ,
〈e 1,ℓ , N −1 e 1,ℓ 〉 = 1/ξ′v .
For V = H^1_{dR} (Y ), we have an isomorphism Vv ≅ Fv ⊗W (k v ) D. The Hodge metric | |v on Vv is defined by
|a 1 e 1 + a 2 e 2 |v = max(ξ_v^{−1/2} |a 1 |p , ξ_v^{1/2} |a 2 |p )
for a 1 , a 2 ∈ Fv , where | |p denotes the absolute value on Fv satisfying |p |p = p −1 and
ξv := −ξ′v log(|̟v |p ) = − log(|qv |p ) > 0,
where ̟v is a prime element of Fv . That is, to define the Hodge metric on Vv , we modify the
naive metric (coming from the integral structure of the log-crystalline cohomology) by using
ξv which is determined by the polarization 〈 , 〉, the monodromy operator N , and the integral structures of the log-crystalline and ℓ-adic cohomology groups (for ℓ ≠ p ).
This is similar to what happens at an archimedean place v . We have Y (C) ≅ C^×/q_v^Z with qv ∈ F_v^× . Assume for simplicity that we can take |qv | < e^{−2π} where | | denotes the usual absolute value. Then qv is determined by Fv ⊗F Y uniquely. Let ξv := − log(|qv |) > 2π. If v is real, we further assume that qv > 0 and that we have an isomorphism Y (Fv ) ≅ F_v^×/q_v^Z which is compatible with Y (C) ≅ C^×/q_v^Z . Then in the case v is real (resp., complex), there is a basis (e 1 , e 2 ) of Vv such that (e 1 , (2πi )^{−1} e 2 ) is a Z-basis of H 1 (Y (C), Z) and such that the Hodge metric | |v on Vv satisfies |e 1 |v = ξ_v^{−1/2} and |e 2 |v = ξ_v^{1/2} (resp., | |e 2 |v − ξ_v^{1/2} | ≤ πξ_v^{−1/2} ).
Consider for example the elliptic curves y^2 = x (x − 1)(x − t ) with t ∈ F = Q, t ≠ 0, 1. As t approaches 1 ∈ Qv for all v ∈ S, the elliptic curves Fv ⊗F Y satisfy the above assumptions for all v ∈ S, and each qv approaches 0, so ξv tends to ∞. The corresponding elements of ∏_{v∈S} X_{Vv} defined by the classes of the | |v for v ∈ S converge to the unique boundary point of X̄ F,S with associated parabolic equal to the Borel subgroup of upper triangular matrices in PGLV for the basis (e 1 , e 2 ).
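As a small added check of the displayed formula for ξv in the non-archimedean case above: since |qv |p = |̟v |_p^{ord_{Fv}(qv )} = |̟v |_p^{ξ′_v}, taking −log of both sides gives −log(|qv |p ) = −ξ′_v log(|̟v |p ), so the two expressions for ξv indeed agree.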
We hope that this aspect of X̄ F,S will prove an interesting direction for further study. It may be related to the asymptotic behavior of heights of motives in degeneration studied in [18].
4.7.8. Suppose that F is a function field and let v be a place of F . Let Γ be as in 1.3.
Kondo and Yasuda [17] proved that the image of H_{d−1}(Γ\X v , Q) → H^{BM}_{d−1}(Γ\X v , Q) is generated by modular symbols, where H^{BM}_∗ denotes Borel-Moore homology. Our hope is that the compactification Γ\X̄ F,v of Γ\X v is useful in further studies of modular symbols.
Let ∂ := X̄ F,v \ X v . Then we have an isomorphism
H^{BM}_∗ (Γ\X v , Q) ≅ H_∗ (Γ\X̄ F,v , Γ\∂ , Q)
and an exact sequence
· · · → H_i (Γ\X̄ F,v , Q) → H_i (Γ\X̄ F,v , Γ\∂ , Q) → H_{i−1} (Γ\∂ , Q) → H_{i−1} (Γ\X̄ F,v , Q) → · · · .
Since Γ\X v → Γ\X̄ F,v is a homotopy equivalence by Theorem 4.6.3, we have
H_∗ (Γ\X v , Q) ≅ H_∗ (Γ\X̄ F,v , Q).
Hence the result of Kondo and Yasuda shows that the kernel of
H^{BM}_{d−1} (Γ\X v , Q) ≅ H_{d−1} (Γ\X̄ F,v , Γ\∂ , Q) → H_{d−2} (Γ\∂ , Q)
is generated by modular symbols.
If we want to prove that H^{BM}_{d−1} (Γ\X v , Q) is generated by modular symbols, it is now sufficient to prove that the kernel of H_{d−2} (Γ\∂ , Q) → H_{d−2} (Γ\X̄ F,v , Q) is generated by the images (i.e., boundaries) of modular symbols.
4.7.9. In 4.7.8, assume d = 2. Then we can prove that H^{BM}_1 (Γ\X v , Q) is generated by modular symbols. In this case, the map H_0 (Γ\∂ , Q) = Map(Γ\∂ , Q) → H_0 (Γ\X̄ F,v , Q) = Q is just summation, and it is clear that its kernel is generated by the boundaries of modular symbols.
References
[1] BOREL, A., Some finiteness properties of adele groups over number fields, Publ. Math. Inst.
Hautes Études Sci. 16 (1963), 5–30.
[2] BOREL, A., JI, L., Compactifications of symmetric and locally symmetric spaces, Mathematics: Theory & Applications, Birkhäuser, Boston, MA, 2006.
[3] BOREL, A., SERRE, J.-P., Corners and arithmetic groups, Comment. Math. Helv. 48 (1973), 436–491.
[4] BRUHAT, F., TITS, J., Groupes réductifs sur un corps local: I. Données radicielles valuées,
Publ. Math. Inst. Hautes Études Sci. 41 (1972), 5–251.
[5] DELIGNE, P., HUSEMÖLLER, D., Survey of Drinfel’d modules, Current trends in arithmetical
algebraic geometry (Arcata, Calif., 1985), Contemp. Math. 67, Amer. Math. Soc., Providence, RI, 1987, 25–91.
[6] DRINFELD, V.G., Elliptic modules, Mat. Sb. (N.S.) 94(136) (1974), 594–627, 656 (Russian),
English translation: Math. USSR-Sb. 23 (1974), 561–592.
[7] GÉRARDIN, P., Harmonic functions on buildings of reductive split groups, Operator algebras and group representations, Monogr. Stud. Math. 17, Pitman, Boston, MA, 1984, 208–
221.
[8] GODEMENT, R., Domaines fondamentaux des groupes arithmétiques, Séminaire Bourbaki
8, no. 257 (1962–64), 201–205.
[9] GOLDMAN, O., IWAHORI, N., The space of p -adic norms, Acta Math. 109 (1963), 137–177.
[10] GORESKY, M., TAI, Y., Toroidal and reductive Borel-Serre compactifications of locally symmetric spaces, Amer. J. Math. 121 (1999), 1095–1151.
[11] GUIVARC’H, Y., RÉMY, B., Group-theoretic compactification of Bruhat-Tits buildings, Ann.
Sci. Éc. Norm. Supér. 39 (2006), 871–920.
[12] HARDER, G. Minkowskische Reduktionstheorie über Funktionenkörpern, Invent. Math. 7
(1969) 33–54.
[13] HARDER, G., Chevalley groups over function fields and automorphic forms, Ann. of Math.
100 (1974), 249–306.
[14] JI, L., MURTY, V.K., SAPER, L., SCHERK, J. The fundamental group of reductive Borel-Serre
and Satake compactifications, Asian J. Math. 19 (2015), 465–485.
[15] KAPRANOV, M.M., Cuspidal divisors on the modular varieties of elliptic modules, Izv. Akad.
Nauk SSSR Ser. Mat. 51 (1987), 568–583, 688, English translation: Math. USSR-Izv. 30
(1988), 533–547.
[16] KATO, K., USUI, S., Classifying spaces of degenerating polarized Hodge structures, Ann. of
Math. Stud., Princeton Univ. Press, 2009.
[17] KONDO, S., YASUDA, S., The Borel-Moore homology of an arithmetic quotient of the Bruhat-Tits building of PGL of a non-archimedean local field in positive characteristic and modular symbols, preprint, arXiv:1406.7047.
[18] KOSHIKAWA, T., On heights of motives with semistable reduction, preprint, arXiv:1505.01873.
[19] LANDVOGT, E., A compactification of the Bruhat-Tits building, Lecture Notes in Math.
1619, Springer-Verlag, Berlin, 1996.
[20] PINK, R., On compactification of Drinfeld moduli schemes, Sûrikaisekikenkyûsho
Kôkyûroku 884 (1994), 178–183.
[21] PINK, R., Compactification of Drinfeld modular varieties and Drinfeld modular forms of
arbitrary rank, Manuscripta Math. 140 (2013), 333–361.
[22] SATAKE, I., On representations and compactifications of symmetric Riemannian spaces,
Ann. of Math. 71 (1960), 77–110.
[23] SATAKE, I., On compactifications of the quotient spaces for arithmetically defined discontinuous groups, Ann. of Math. 72 (1960), 555–580.
[24] WERNER, A., Compactification of the Bruhat-Tits building of PGL by lattices of smaller
rank, Doc. Math. 6 (2001), 315–341.
[25] WERNER, A., Compactification of the Bruhat-Tits building of PGL by semi-norms, Math. Z.
248 (2004), 511–526.
[26] ZUCKER, S., L 2 -cohomology of warped products and arithmetic groups, Invent. Math. 70
(1982), 169–218.
[27] ZUCKER, S., Satake compactifications, Comment. Math. Helv. 58 (1983), 312–343.
Learning to learn by gradient descent
by gradient descent
arXiv:1606.04474v2 [] 30 Nov 2016
Marcin Andrychowicz1 , Misha Denil1 , Sergio Gómez Colmenarejo1 , Matthew W. Hoffman1 ,
David Pfau1 , Tom Schaul1 , Brendan Shillingford1,2 , Nando de Freitas1,2,3
1
Google DeepMind
2
University of Oxford
3
Canadian Institute for Advanced Research
marcin.andrychowicz@gmail.com
{mdenil,sergomez,mwhoffman,pfau,schaul}@google.com
brendan.shillingford@cs.ox.ac.uk, nandodefreitas@google.com
Abstract
The move from hand-designed features to learned features in machine learning has
been wildly successful. In spite of this, optimization algorithms are still designed
by hand. In this paper we show how the design of an optimization algorithm can be
cast as a learning problem, allowing the algorithm to learn to exploit structure in
the problems of interest in an automatic way. Our learned algorithms, implemented
by LSTMs, outperform generic, hand-designed competitors on the tasks for which
they are trained, and also generalize well to new tasks with similar structure. We
demonstrate this on a number of tasks, including simple convex problems, training
neural networks, and styling images with neural art.
1
Introduction
Frequently, tasks in machine learning can be expressed as the problem of optimizing an objective
function f (θ) defined over some domain θ ∈ Θ. The goal in this case is to find the minimizer
θ∗ = arg minθ∈Θ f (θ). While any method capable of minimizing this objective function can be
applied, the standard approach for differentiable functions is some form of gradient descent, resulting
in a sequence of updates
θt+1 = θt − αt ∇f (θt ) .
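As a concrete reference point, the following is a minimal sketch of this update rule; it assumes NumPy and a toy objective of my own choosing, since the paper itself contains no code.
```python
# A minimal sketch (NumPy; not code from the paper) of the hand-designed update
# theta_{t+1} = theta_t - alpha_t * grad f(theta_t) with a constant step size,
# applied to a toy convex objective chosen only for illustration.
import numpy as np

def gradient_descent(grad_f, theta0, alpha=0.01, steps=100):
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(steps):
        theta = theta - alpha * grad_f(theta)   # first-order, hand-designed rule
    return theta

# toy objective f(theta) = ||theta - 1||^2, whose gradient is 2 (theta - 1)
theta_star = gradient_descent(lambda th: 2.0 * (th - 1.0), np.zeros(3))
```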
The performance of vanilla gradient descent, however, is hampered by the fact that it only makes use
of gradients and ignores second-order information. Classical optimization techniques correct this
behavior by rescaling the gradient step using curvature information, typically via the Hessian matrix
of second-order partial derivatives—although other choices such as the generalized Gauss-Newton
matrix or Fisher information matrix are possible.
Much of the modern work in optimization is based around designing update rules tailored to specific
classes of problems, with the types of problems of interest differing between different research
communities. For example, in the deep learning community we have seen a proliferation of optimization methods specialized for high-dimensional, non-convex optimization problems. These include
momentum [Nesterov, 1983, Tseng, 1998], Rprop [Riedmiller and Braun, 1993], Adagrad [Duchi
et al., 2011], RMSprop [Tieleman and Hinton, 2012], and ADAM [Kingma and Ba, 2015]. More
focused methods can also be applied when more structure of the optimization problem is known
[Martens and Grosse, 2015]. In contrast, communities who focus on sparsity tend to favor very
different approaches [Donoho, 2006, Bach et al., 2012]. This is even more the case for combinatorial
optimization for which relaxations are often the norm [Nemhauser and Wolsey, 1988].
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
This industry of optimizer design allows different communities to create optimization methods which exploit structure in their problems of interest at the expense of potentially poor performance on problems outside of that scope. Moreover the No Free Lunch Theorems for Optimization [Wolpert and Macready, 1997] show that in the setting of combinatorial optimization, no algorithm is able to do better than a random strategy in expectation. This suggests that specialization to a subclass of problems is in fact the only way that improved performance can be achieved in general.
Figure 1: The optimizer (left) is provided with performance of the optimizee (right) and proposes updates to increase the optimizee's performance. [photos: Bobolas, 2009, Maley, 2011]
In this work we take a different tack and instead propose to replace hand-designed update rules with a learned update rule, which we call the optimizer g, specified by its own set of parameters φ. This results in updates to the optimizee f of the form
θ_{t+1} = θ_t + g_t(∇f(θ_t), φ).   (1)
A high level view of this process is shown in Figure 1. In what follows we will explicitly model
the update rule g using a recurrent neural network (RNN) which maintains its own state and hence
dynamically updates as a function of its iterates.
1.1
Transfer learning and generalization
The goal of this work is to develop a procedure for constructing a learning algorithm which performs
well on a particular class of optimization problems. Casting algorithm design as a learning problem
allows us to specify the class of problems we are interested in through example problem instances.
This is in contrast to the ordinary approach of characterizing properties of interesting problems
analytically and using these analytical insights to design learning algorithms by hand.
It is informative to consider the meaning of generalization in this framework. In ordinary statistical
learning we have a particular function of interest, whose behavior is constrained through a data set of
example function evaluations. In choosing a model we specify a set of inductive biases about how
we think the function of interest should behave at points we have not observed, and generalization
corresponds to the capacity to make predictions about the behavior of the target function at novel
points. In our setting the examples are themselves problem instances, which means generalization
corresponds to the ability to transfer knowledge between different problems. This reuse of problem
structure is commonly known as transfer learning, and is often treated as a subject in its own right.
However, by taking a meta-learning perspective, we can cast the problem of transfer learning as one
of generalization, which is much better studied in the machine learning community.
One of the great success stories of deep-learning is that we can rely on the ability of deep networks to
generalize to new examples by learning interesting sub-structures. In this work we aim to leverage
this generalization power, but also to lift it from simple supervised learning to the more general
setting of optimization.
1.2
A brief history and related work
The idea of using learning to learn or meta-learning to acquire knowledge or inductive biases has a
long history [Thrun and Pratt, 1998]. More recently, Lake et al. [2016] have argued forcefully for
its importance as a building block in artificial intelligence. Similarly, Santoro et al. [2016] frame
multi-task learning as generalization, however unlike our approach they directly train a base learner
rather than a training algorithm. In general these ideas involve learning which occurs at two different
time scales: rapid learning within tasks and more gradual, meta learning across many different tasks.
Perhaps the most general approach to meta-learning is that of Schmidhuber [1992, 1993]—building
on work from [Schmidhuber, 1987]—which considers networks that are able to modify their own
weights. Such a system is differentiable end-to-end, allowing both the network and the learning
algorithm to be trained jointly by gradient descent with few restrictions. However this generality
comes at the expense of making the learning rules very difficult to train. Alternatively, the work
of Schmidhuber et al. [1997] uses the Success Story Algorithm to modify its search strategy rather
than gradient descent; a similar approach has been recently taken in Daniel et al. [2016] which uses
reinforcement learning to train a controller for selecting step-sizes.
Bengio et al. [1990, 1995] propose to learn updates which avoid back-propagation by using simple
parametric rules. In relation to the focus of this paper the work of Bengio et al. could be characterized
as learning to learn without gradient descent by gradient descent. The work of Runarsson and
Jonsson [2000] builds upon this work by replacing the simple rule with a neural network.
Cotter and Conwell [1990], and later Younger et al. [1999], also show fixed-weight recurrent neural
networks can exhibit dynamic behavior without need to modify their network weights. Similarly this
has been shown in a filtering context [e.g. Feldkamp and Puskorius, 1998], which is directly related
to simple multi-timescale optimizers [Sutton, 1992, Schraudolph, 1999].
Finally, the work of Younger et al. [2001] and Hochreiter et al. [2001] connects these different threads
of research by allowing for the output of backpropagation from one network to feed into an additional
learning network, with both networks trained jointly. Our approach to meta-learning builds on this
work by modifying the network architecture of the optimizer in order to scale this approach to larger
neural-network optimization problems.
2
Learning to learn with recurrent neural networks
In this work we consider directly parameterizing the optimizer. As a result, in a slight abuse of notation
we will write the final optimizee parameters θ∗ (f, φ) as a function of the optimizer parameters φ and
the function in question. We can then ask the question: What does it mean for an optimizer to be
good? Given a distribution of functions f we will write the expected loss as
L(φ) = E_f [ f( θ∗(f, φ) ) ].   (2)
As noted earlier, we will take the update steps gt to be the output of a recurrent neural network m,
parameterized by φ, whose state we will denote explicitly with ht . Next, while the objective function
in (2) depends only on the final parameter value, for training the optimizer it will be convenient to
have an objective that depends on the entire trajectory of optimization, for some horizon T,
L(φ) = E_f [ Σ_{t=1}^{T} w_t f(θ_t) ]   where   θ_{t+1} = θ_t + g_t ,   (g_t, h_{t+1}) = m(∇_t, h_t, φ).   (3)
Here wt ∈ R≥0 are arbitrary weights associated with each time-step and we will also use the notation
∇t = ∇θ f (θt ). This formulation is equivalent to (2) when wt = 1[t = T ], but later we will describe
why using different weights can prove useful.
We can minimize the value of L(φ) using gradient descent on φ. The gradient estimate ∂L(φ)/∂φ can
be computed by sampling a random function f and applying backpropagation to the computational
graph in Figure 2. We allow gradients to flow along the solid edges in the graph, but gradients
along the dashed edges are dropped. Ignoring gradients along the dashed edges amounts to making
the assumption that the gradients of the optimizee do not depend on the optimizer parameters, i.e. ∂∇_t/∂φ = 0. This assumption allows us to avoid computing second derivatives of f .
Examining the objective in (3) we see that the gradient is non-zero only for terms where w_t ≠ 0. If
we use wt = 1[t = T ] to match the original problem, then gradients of trajectory prefixes are zero
and only the final optimization step provides information for training the optimizer. This renders
Backpropagation Through Time (BPTT) inefficient. We solve this problem by relaxing the objective
such that wt > 0 at intermediate points along the trajectory. This changes the objective function, but
allows us to train the optimizer on partial trajectories. For simplicity, in all our experiments we use
wt = 1 for every t.
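To make this training procedure concrete, here is a self-contained sketch under several assumptions of my own: PyTorch in place of the authors' Torch7, a single-layer LSTMCell plus a linear readout standing in for the two-layer optimizer m, a 10-dimensional quadratic optimizee, w_t = 1, and one short unroll per meta-step. Names such as `opt_net`, `readout`, and `meta_opt` are illustrative, not from the paper.
```python
# A sketch of minimizing L(phi) = E_f[ sum_t w_t f(theta_t) ] by gradient
# descent on phi, with the "dashed edge" gradients dropped as described above.
import torch

T, dim, hidden = 20, 10, 20
opt_net = torch.nn.LSTMCell(1, hidden)        # coordinatewise optimizer state
readout = torch.nn.Linear(hidden, 1)          # maps hidden state to the update g_t
meta_params = list(opt_net.parameters()) + list(readout.parameters())
meta_opt = torch.optim.Adam(meta_params, lr=1e-3)

for meta_step in range(100):
    W, y = torch.randn(dim, dim), torch.randn(dim)   # sample an optimizee f
    f = lambda th: ((W @ th - y) ** 2).sum()
    theta = torch.zeros(dim, requires_grad=True)
    state = (torch.zeros(dim, hidden), torch.zeros(dim, hidden))
    loss = 0.0
    for t in range(T):
        ft = f(theta)
        loss = loss + ft                              # w_t = 1 for every t
        # dashed edges dropped: no create_graph, so gradients of the optimizee
        # are treated as independent of the optimizer parameters
        grad, = torch.autograd.grad(ft, theta, retain_graph=True)
        state = opt_net(grad.unsqueeze(1), state)     # one LSTM step per coordinate
        g = readout(state[0]).squeeze(1)
        theta = theta + g                             # theta_{t+1} = theta_t + g_t
    meta_opt.zero_grad()
    loss.backward()
    meta_opt.step()
```
The real setup additionally splits long optimizations into truncated unrolls and carries the optimizer state across them; this sketch compresses that into a single short unroll per sampled function.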
Figure 2: Computational graph used for computing the gradient of the optimizer.
2.1
Coordinatewise LSTM optimizer
One challenge in applying RNNs in our setting is that we want to be able to optimize at least tens of
thousands of parameters. Optimizing at this scale with a fully connected RNN is not feasible as it
would require a huge hidden state and an enormous number of parameters. To avoid this difficulty we
will use an optimizer m which operates coordinatewise on the parameters of the objective function,
similar to other common update rules like RMSprop and ADAM. This coordinatewise network
architecture allows us to use a very small network that only looks at a single coordinate to define the
optimizer and share optimizer parameters across different parameters of the optimizee.
Different behavior on each coordinate is achieved by using separate activations for each objective
function parameter. In addition to allowing us to use a small network for this optimizer, this setup has
the nice effect of making the optimizer invariant to the order of parameters in the network, since the
same update rule is used independently on each coordinate.
We implement the update rule for each coordinate using a two-layer Long Short Term Memory (LSTM) network [Hochreiter and Schmidhuber, 1997], using the now-standard forget gate architecture. The network takes as input the optimizee gradient for a single coordinate as well as the previous hidden state and outputs the update for the corresponding optimizee parameter. We will refer to this architecture, illustrated in Figure 3, as an LSTM optimizer.
The use of recurrence allows the LSTM to learn dynamic update rules which integrate information from the history of gradients, similar to momentum. This is known to have many desirable properties in convex optimization [see e.g. Nesterov, 1983] and in fact many recent learning procedures—such as ADAM—use momentum in their updates.
Figure 3: One step of an LSTM optimizer. All LSTMs have shared parameters, but separate hidden states.
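The following is a sketch of this coordinatewise two-layer LSTM optimizer, assuming PyTorch (the paper used Torch7) and class and method names of my own choosing; each coordinate is one element of the batch, so the optimizer weights are shared across coordinates while every coordinate keeps its own hidden state.
```python
import torch

class CoordinatewiseLSTMOptimizer(torch.nn.Module):
    def __init__(self, hidden_size=20):
        super().__init__()
        self.hidden_size = hidden_size
        self.cell1 = torch.nn.LSTMCell(1, hidden_size)
        self.cell2 = torch.nn.LSTMCell(hidden_size, hidden_size)
        self.out = torch.nn.Linear(hidden_size, 1)

    def init_state(self, n_coords):
        z = lambda: torch.zeros(n_coords, self.hidden_size)
        return (z(), z()), (z(), z())

    def forward(self, grad, state):
        # grad: shape (n_coords,), one scalar gradient entry per coordinate
        (h1, c1), (h2, c2) = state
        h1, c1 = self.cell1(grad.unsqueeze(1), (h1, c1))
        h2, c2 = self.cell2(h1, (h2, c2))
        update = self.out(h2).squeeze(1)      # proposed update g_t per coordinate
        return update, ((h1, c1), (h2, c2))

opt = CoordinatewiseLSTMOptimizer()
state = opt.init_state(n_coords=100)
update, state = opt(torch.randn(100), state)  # one proposed update per coordinate
```
Because the coordinates enter only through the batch dimension, the number of optimizer parameters does not grow with the size of the optimizee, which is what makes optimizing tens of thousands of parameters tractable.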
Preprocessing and postprocessing Optimizer inputs and outputs can have very different magnitudes depending on the class of function being optimized, but neural networks usually work robustly
only for inputs and outputs which are neither very small nor very large. In practice rescaling inputs
and outputs of an LSTM optimizer using suitable constants (shared across all timesteps and functions
f ) is sufficient to avoid this problem. In Appendix A we propose a different method of preprocessing the optimizer's inputs which is more robust and gives slightly better performance.
Figure 4: Comparison between the performance of learned and hand-crafted optimizers. Learned optimizers are shown with solid lines and hand-crafted optimizers are shown with dashed lines. Units for the
y axis in the MNIST plots are logits. Left: Performance of different optimizers on randomly sampled
10-dimensional quadratic functions. Center: the LSTM optimizer outperforms standard methods
training the base network on MNIST. Right: Learning curves for steps 100-200 by an optimizer
trained to optimize for 100 steps (continuation of center plot).
3
Experiments
In all experiments the trained optimizers use two-layer LSTMs with 20 hidden units in each layer.
Each optimizer is trained by minimizing Equation 3 using truncated BPTT as described in Section 2.
The minimization is performed using ADAM with a learning rate chosen by random search.
We use early stopping when training the optimizer in order to avoid overfitting the optimizer. After
each epoch (some fixed number of learning steps) we freeze the optimizer parameters and evaluate its
performance. We pick the best optimizer (according to the final validation loss) and report its average
performance on a number of freshly sampled test problems.
We compare our trained optimizers with standard optimizers used in Deep Learning: SGD, RMSprop,
ADAM, and Nesterov’s accelerated gradient (NAG). For each of these optimizers and each problem
we tuned the learning rate, and report results with the rate that gives the best final error for each
problem. When an optimizer has more parameters than just a learning rate (e.g. decay coefficients for
ADAM) we use the default values from the optim package in Torch7. Initial values of all optimizee
parameters were sampled from an IID Gaussian distribution.
3.1
Quadratic functions
In this experiment we consider training an optimizer on a simple class of synthetic 10-dimensional
quadratic functions. In particular we consider minimizing functions of the form
f(θ) = ‖W θ − y‖_2^2
for different 10x10 matrices W and 10-dimensional vectors y whose elements are drawn from an IID
Gaussian distribution. Optimizers were trained by optimizing random functions from this family and
tested on newly sampled functions from the same distribution. Each function was optimized for 100
steps and the trained optimizers were unrolled for 20 steps. We have not used any preprocessing, nor
postprocessing.
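For concreteness, here is a small sketch (NumPy; the helper names are mine, not the paper's) of sampling from this family and running a fixed-step gradient-descent baseline on one sampled function.
```python
# Sampling f(theta) = ||W theta - y||_2^2 with Gaussian W and y, then running
# a plain fixed-step-size baseline on it for 100 steps.
import numpy as np

def sample_quadratic(dim=10, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    W = rng.standard_normal((dim, dim))
    y = rng.standard_normal(dim)
    f = lambda theta: float(np.sum((W @ theta - y) ** 2))
    grad = lambda theta: 2.0 * W.T @ (W @ theta - y)
    return f, grad

f, grad = sample_quadratic()
theta = np.zeros(10)
for _ in range(100):                        # 100 optimization steps, as in the experiment
    theta = theta - 0.005 * grad(theta)     # baseline: a small, hand-tuned step size
print(f(theta))
```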
Learning curves for different optimizers, averaged over many functions, are shown in the left plot of
Figure 4. Each curve corresponds to the average performance of one optimization algorithm on many
test functions; the solid curve shows the learned optimizer performance and dashed curves show
the performance of the standard baseline optimizers. It is clear the learned optimizers substantially
outperform the baselines in this setting.
3.2
Training a small neural network on MNIST
In this experiment we test whether trainable optimizers can learn to optimize a small neural network
on MNIST, and also explore how the trained optimizers generalize to functions beyond those they
were trained on. To this end, we train the optimizer to optimize a base network and explore a series
of modifications to the network architecture and training procedure at test time.
Figure 5: Comparison between the performance of learned and hand-crafted optimizers. Units for the
y axis are logits. Left: Generalization to the different number of hidden units (40 instead of 20).
Center: Generalization to the different number of hidden layers (2 instead of 1). This optimization
problem is very hard, because the hidden layers are very narrow. Right: Training curves for an MLP
with 20 hidden units using ReLU activations. The LSTM optimizer was trained on an MLP with
sigmoid activations.
Figure 6: Systematic study of final MNIST performance as the optimizee architecture is varied,
using sigmoid non-linearities. The vertical dashed line in the left-most plot denotes the architecture
at which the LSTM is trained and the horizontal line shows the final performance of the trained
optimizer in this setting.
In this setting the objective function f (θ) is the cross entropy of a small MLP with parameters θ.
The values of f as well as the gradients ∂f (θ)/∂θ are estimated using random minibatches of 128
examples. The base network is an MLP with one hidden layer of 20 units using a sigmoid activation
function. The only source of variability between different runs is the initial value θ0 and randomness
in minibatch selection. Each optimization was run for 100 steps and the trained optimizers were
unrolled for 20 steps. We used input preprocessing described in Appendix A and rescaled the outputs
of the LSTM by the factor 0.1.
Learning curves for the base network using different optimizers are displayed in the center plot of
Figure 4. In this experiment NAG, ADAM, and RMSprop exhibit roughly equivalent performance, while the
LSTM optimizer outperforms them by a significant margin. The right plot in Figure 4 compares the
performance of the LSTM optimizer if it is allowed to run for 200 steps, despite having been trained
to optimize for 100 steps. In this comparison we re-used the LSTM optimizer from the previous
experiment, and here we see that the LSTM optimizer continues to outperform the baseline optimizers
on this task.
Generalization to different architectures Figure 5 shows three examples of applying the LSTM
optimizer to train networks with different architectures than the base network on which it was trained.
The modifications are (from left to right) (1) an MLP with 40 hidden units instead of 20, (2) a
network with two hidden layers instead of one, and (3) a network using ReLU activations instead of
sigmoid. In the first two cases the LSTM optimizer generalizes well, and continues to outperform
the hand-designed baselines despite operating outside of its training regime. However, changing
the activation function to ReLU makes the dynamics of the learning procedure sufficiently different
that the learned optimizer is no longer able to generalize. Finally, in Figure 6 we show the results
of systematically varying the tested architecture; for the LSTM results we again used the optimizer
trained using 1 layer of 20 units and sigmoid non-linearities.
Figure 7: Optimization performance on the CIFAR-10 dataset and subsets. Shown on the left is the
LSTM optimizer versus various baselines trained on CIFAR-10 and tested on a held-out test set. The
two plots on the right are the performance of these optimizers on subsets of the CIFAR labels. The
additional optimizer LSTM-sub has been trained only on the heldout labels and is hence transferring
to a completely novel dataset.
Note that in this setting, where the test-set problems are similar enough to those in the training set, we see even better generalization than
the baseline optimizers.
3.3
Training a convolutional network on CIFAR-10
Next we test the performance of the trained neural optimizers on optimizing classification performance
for the CIFAR-10 dataset [Krizhevsky, 2009]. In these experiments we used a model with both
convolutional and feed-forward layers. In particular, the model used for these experiments includes
three convolutional layers with max pooling followed by a fully-connected layer with 32 hidden units;
all non-linearities were ReLU activations with batch normalization.
The coordinatewise network decomposition introduced in Section 2.1—and used in the previous
experiment—utilizes a single LSTM architecture with shared weights, but separate hidden states,
for each optimizee parameter. We found that this decomposition was not sufficient for the model
architecture introduced in this section due to the differences between the fully connected and convolutional layers. Instead we modify the optimizer by introducing two LSTMs: one proposes parameter
updates for the fully connected layers and the other updates the convolutional layer parameters. Like
the previous LSTM optimizer we still utilize a coordinatewise decomposition with shared weights
and individual hidden states, however LSTM weights are now shared only between parameters of the
same type (i.e. fully-connected vs. convolutional).
The performance of this trained optimizer compared against the baseline techniques is shown in
Figure 7. The left-most plot displays the results of using the optimizer to fit a classifier on a held-out
test set. The additional two plots on the right display the performance of the trained optimizer on
modified datasets which only contain a subset of the labels, i.e. the CIFAR-2 dataset only contains
data corresponding to 2 of the 10 labels. Additionally we include an optimizer LSTM-sub which was
only trained on the held-out labels.
In all these examples we can see that the LSTM optimizer learns much more quickly than the baseline
optimizers, with significant boosts in performance for the CIFAR-5 and especially CIFAR-2 datasets. We also see that the optimizer trained only on a disjoint subset of the data is hardly affected by this difference and transfers well to the additional dataset.
3.4
Neural Art
The recent work on artistic style transfer using convolutional networks, or Neural Art [Gatys et al.,
2015], gives a natural testbed for our method, since each content and style image pair gives rise to a
different optimization problem. Each Neural Art problem starts from a content image, c, and a style
image, s, and is given by
f (θ) = αLcontent (c, θ) + βLstyle (s, θ) + γLreg (θ)
The minimizer of f is the styled image. The first two terms try to match the content and style of
the styled image to that of their first argument, and the third term is a regularizer that encourages
smoothness in the styled image. Details can be found in [Gatys et al., 2015].
Figure 8: Optimization curves for Neural Art. Content images come from the test set, which was not
used during the LSTM optimizer training. Note: the y-axis is in log scale and we zoom in on the
interesting portion of this plot. Left: Applying the training style at the training resolution. Right:
Applying the test style at double the training resolution.
Figure 9: Examples of images styled using the LSTM optimizer. Each triple consists of the content
image (left), style (right) and image generated by the LSTM optimizer (center). Left: The result of
applying the training style at the training resolution to a test image. Right: The result of applying a
new style to a test image at double the resolution on which the optimizer was trained.
We train optimizers using only 1 style and 1800 content images taken from ImageNet [Deng et al.,
2009]. We randomly select 100 content images for testing and 20 content images for validation of
trained optimizers. We train the optimizer on 64x64 content images from ImageNet and one fixed
style image. We then test how well it generalizes to a different style image and higher resolution
(128x128). Each image was optimized for 128 steps and trained optimizers were unrolled for 32
steps. Figure 9 shows the result of styling two different images using the LSTM optimizer. The
LSTM optimizer uses the input preprocessing described in Appendix A and no postprocessing. See
Appendix C for additional images.
Figure 8 compares the performance of the LSTM optimizer to standard optimization algorithms. The
LSTM optimizer outperforms all standard optimizers if the resolution and style image are the same
as the ones on which it was trained. Moreover, it continues to perform very well when both the
resolution and style are changed at test time.
Finally, in Appendix B we qualitatively examine the behavior of the step directions generated by the
learned optimizer.
4
Conclusion
We have shown how to cast the design of optimization algorithms as a learning problem, which
enables us to train optimizers that are specialized to particular classes of functions. Our experiments
have confirmed that learned neural optimizers compare favorably against state-of-the-art optimization
methods used in deep learning. We witnessed a remarkable degree of transfer, with for example the
LSTM optimizer trained on 12,288 parameter neural art tasks being able to generalize to tasks with
49,152 parameters, different styles, and different content images all at the same time. We observed
similar impressive results when transferring to different architectures in the MNIST task.
The results on the CIFAR image labeling task show that the LSTM optimizers outperform hand-engineered optimizers when transferring to datasets drawn from the same data distribution.
References
F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations
and Trends in Machine Learning, 4(1):1–106, 2012.
S. Bengio, Y. Bengio, and J. Cloutier. On the search for new learning rules for ANNs. Neural Processing Letters,
2(4):26–30, 1995.
Y. Bengio, S. Bengio, and J. Cloutier. Learning a synaptic learning rule. Université de Montréal, Département
d’informatique et de recherche opérationnelle, 1990.
Y. Bengio, N. Boulanger-Lewandowski, and R. Pascanu. Advances in optimizing recurrent networks. In
International Conference on Acoustics, Speech and Signal Processing, pages 8624–8628. IEEE, 2013.
F. Bobolas. brain-neurons, 2009. URL https://www.flickr.com/photos/fbobolas/3822222947. Creative Commons Attribution-ShareAlike 2.0 Generic.
N. E. Cotter and P. R. Conwell. Fixed-weight networks can learn. In International Joint Conference on Neural
Networks, pages 553–559, 1990.
C. Daniel, J. Taylor, and S. Nowozin. Learning step size controllers for robust neural network training. In
Association for the Advancement of Artificial Intelligence, 2016.
J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database.
In Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
D. L. Donoho. Compressed sensing. Transactions on Information Theory, 52(4):1289–1306, 2006.
J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization.
Journal of Machine Learning Research, 12:2121–2159, 2011.
L. A. Feldkamp and G. V. Puskorius. A signal processing framework based on dynamic neural networks
with application to problems in adaptation, filtering, and classification. Proceedings of the IEEE, 86(11):
2259–2277, 1998.
L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. arXiv Report 1508.06576, 2015.
A. Graves, G. Wayne, and I. Danihelka. Neural Turing machines. arXiv Report 1410.5401, 2014.
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
S. Hochreiter, A. S. Younger, and P. R. Conwell. Learning to learn using gradient descent. In International
Conference on Artificial Neural Networks, pages 87–94. Springer, 2001.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning
Representations, 2015.
A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman. Building machines that learn and think like
people. arXiv Report 1604.00289, 2016.
T. Maley. neuron, 2011. URL https://www.flickr.com/photos/taylortotz101/6280077898. Creative
Commons Attribution 2.0 Generic.
J. Martens and R. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In
International Conference on Machine Learning, pages 2408–2417, 2015.
G. L. Nemhauser and L. A. Wolsey. Integer and combinatorial optimization. John Wiley & Sons, 1988.
Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). In Soviet
Mathematics Doklady, volume 27, pages 372–376, 1983.
J. Nocedal and S. Wright. Numerical optimization. Springer Science & Business Media, 2006.
M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: The RPROP
algorithm. In International Conference on Neural Networks, pages 586–591, 1993.
T. P. Runarsson and M. T. Jonsson. Evolution and design of distributed learning rules. In IEEE Symposium on
Combinations of Evolutionary Computation and Neural Networks, pages 59–63. IEEE, 2000.
A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. Meta-learning with memory-augmented
neural networks. In International Conference on Machine Learning, 2016.
J. Schmidhuber. Evolutionary principles in self-referential learning; On learning how to learn: The meta-meta-...
hook. PhD thesis, Institut f. Informatik, Tech. Univ. Munich, 1987.
J. Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks.
Neural Computation, 4(1):131–139, 1992.
J. Schmidhuber. A neural network that embeds its own meta-levels. In International Conference on Neural
Networks, pages 407–412. IEEE, 1993.
J. Schmidhuber, J. Zhao, and M. Wiering. Shifting inductive bias with success-story algorithm, adaptive levin
search, and incremental self-improvement. Machine Learning, 28(1):105–130, 1997.
N. N. Schraudolph. Local gain adaptation in stochastic gradient descent. In International Conference on
Artificial Neural Networks, volume 2, pages 569–574, 1999.
R. S. Sutton. Adapting bias by gradient descent: An incremental version of delta-bar-delta. In Association for
the Advancement of Artificial Intelligence, pages 171–176, 1992.
S. Thrun and L. Pratt. Learning to learn. Springer Science & Business Media, 1998.
T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent
magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012.
P. Tseng. An incremental gradient (-projection) method with momentum term and adaptive stepsize rule. Journal
on Optimization, 8(2):506–531, 1998.
D. H. Wolpert and W. G. Macready. No free lunch theorems for optimization. Transactions on Evolutionary
Computation, 1(1):67–82, 1997.
A. S. Younger, P. R. Conwell, and N. E. Cotter. Fixed-weight on-line learning. Transactions on Neural Networks,
10(2):272–283, 1999.
A. S. Younger, S. Hochreiter, and P. R. Conwell. Meta-learning with backpropagation. In International Joint
Conference on Neural Networks, 2001.
Figure 10: Updates proposed by different optimizers at different optimization steps for two different
coordinates.
A
Gradient preprocessing
One potential challenge in training optimizers is that different input coordinates (i.e. the gradients
w.r.t. different optimizee parameters) can have very different magnitudes. This is indeed the case e.g.
when the optimizee is a neural network and different parameters correspond to weights in different
layers. This can make training an optimizer difficult, because neural networks naturally disregard
small variations in input signals and concentrate on bigger input values.
To this end we propose to preprocess the optimizer’s inputs. One solution would be to give the optimizer (log(|∇|), sgn(∇)) as an input, where ∇ is the gradient in the current timestep. The problem is that log(|∇|) diverges for ∇ → 0. Therefore, we use the following preprocessing formula
∇^k → ( log(|∇|)/p , sgn(∇) )   if |∇| ≥ e^{−p},
       ( −1 , e^p ∇ )           otherwise,
where p > 0 is a parameter controlling how small gradients are disregarded (we use p = 10 in all our
experiments).
We noticed that just rescaling all inputs by an appropriate constant instead also works fine, but the
proposed preprocessing seems to be more robust and gives slightly better results on some problems.
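Here is a sketch of this preprocessing applied elementwise to a gradient vector, assuming NumPy; the tiny epsilon inside the logarithm is my addition solely to avoid log(0) warnings on entries that the mask routes to the small-gradient branch anyway.
```python
# Two-channel gradient preprocessing: (log|grad|/p, sign(grad)) for large
# entries, (-1, e^p * grad) for entries with |grad| < e^{-p}.
import numpy as np

def preprocess(grad, p=10.0):
    grad = np.asarray(grad, dtype=float)
    large = np.abs(grad) >= np.exp(-p)
    first = np.where(large, np.log(np.abs(grad) + 1e-300) / p, -1.0)
    second = np.where(large, np.sign(grad), np.exp(p) * grad)
    return np.stack([first, second], axis=-1)   # two input channels per coordinate

print(preprocess([1e-3, -2.0, 0.0]))
```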
B
Visualizations
Visualizing optimizers is inherently difficult because their proposed updates are functions of the full
optimization trajectory. In this section we try to peek into the decisions made by the LSTM optimizer,
trained on the neural art task.
Histories of updates We select a single optimizee parameter (one color channel of one pixel in the
styled image) and trace the updates proposed to this coordinate by the LSTM optimizer over a single
trajectory of optimization. We also record the updates that would have been proposed by both SGD
and ADAM if they followed the same trajectory of iterates. Figure 10 shows the trajectory of updates
for two different optimizee parameters. From the plots it is clear that the trained optimizer makes
bigger updates than SGD and ADAM. It is also visible that it uses some kind of momentum, but its
updates are more noisy than those proposed by ADAM which may be interpreted as having a shorter
time-scale momentum.
Proposed update as a function of current gradient Another way to visualize the optimizer
behavior is to look at the proposed update gt for a single coordinate as a function of the current
gradient evaluation ∇t . We follow the same procedure as in the previous experiment, and visualize
the proposed updates for a few selected time steps.
These results are shown in Figures 11–13. In these plots, the x-axis is the current value of the gradient
for the chosen coordinate, and the y-axis shows the update that each optimizer would propose should
the corresponding gradient value be observed. The history of gradient observations is the same for all
methods and follows the trajectory of the LSTM optimizer.
The shape of this function for the LSTM optimizer is often step-like, which is also the case for
ADAM. Surprisingly the step is sometimes in the opposite direction as for ADAM, i.e. the bigger the
gradient, the bigger the update.
C
Neural Art
Shown below are additional examples of images styled using the LSTM optimizer. Each triple
consists of the content image (left), style (right) and image generated by the LSTM optimizer (center).
D
Information sharing between coordinates
In previous sections we considered a coordinatewise architecture, which corresponds by analogy
to a learned version of RMSprop or ADAM. Although diagonal methods are quite effective in
practice, we can also consider learning more sophisticated optimizers that take the correlations
between coordinates into effect. To this end, we introduce a mechanism allowing different LSTMs to
communicate with each other.
Global averaging cells
The simplest solution is to designate a subset of the cells in each LSTM layer for communication.
These cells operate like normal LSTM cells, but their outgoing activations are averaged at each step
across all coordinates. These global averaging cells (GACs) are sufficient to allow the networks to
implement L2 gradient clipping [Bengio et al., 2013] assuming that each LSTM can compute the
square of the gradient. This architecture is denoted as an LSTM+GAC optimizer.
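As an illustration, here is a sketch of the averaging step, assuming PyTorch and that the first n_gac entries of each layer's hidden state are the designated communication cells; the function name is mine.
```python
# Global averaging cells: a chosen slice of each layer's hidden activations is
# replaced by its mean over all coordinates, so every per-coordinate LSTM sees
# a summary of the others.
import torch

def apply_gac(h, n_gac):
    # h: (n_coords, hidden_size) hidden state of one LSTM layer
    gac = h[:, :n_gac].mean(dim=0, keepdim=True)          # average across coordinates
    return torch.cat([gac.expand(h.shape[0], -1), h[:, n_gac:]], dim=1)

h = torch.randn(5, 8)
print(apply_gac(h, n_gac=2).shape)   # torch.Size([5, 8])
```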
NTM-BFGS optimizer
We also consider augmenting the LSTM+GAC architecture with an external memory that is shared
between coordinates. Such a memory, if appropriately designed could allow the optimizer to learn
algorithms similar to (low-memory) approximations to Newton’s method, e.g. (L-)BFGS [see Nocedal
and Wright, 2006]. The reason for this interpretation is that such methods can be seen as a set of
independent processes working coordinatewise, but communicating through the inverse Hessian
approximation stored in the memory.
Figure 11: The proposed update direction for a single coordinate over 32 steps.
Figure 12: The proposed update direction for a single coordinate over 32 steps.
Figure 13: The proposed update direction for a single coordinate over 32 steps.
Figure 14: Left: NTM-BFGS read operation. Right: NTM-BFGS write operation.
We designed a memory architecture that, in theory, allows the network to simulate (L-)BFGS, motivated by the approximate Newton method BFGS, named for
Broyden, Fletcher, Goldfarb, and Shanno. We call this architecture an NTM-BFGS optimizer, because
its use of external memory is similar to the Neural Turing Machine [Graves et al., 2014]. The pivotal
differences between our construction and the NTM are (1) our memory allows only low-rank updates;
(2) the controller (including read/write heads) operates coordinatewise.
In BFGS an explicit estimate of the full (inverse) Hessian is built up from the sequence of observed
gradients. We can write a skeletonized version of the BFGS algorithm, using Mt to represent the
inverse Hessian approximation at iteration t, as follows
gt = read(Mt , θt )
θt+1 = θt + gt
Mt+1 = write(Mt , θt , gt ) .
Here we have packed up all of the details of the BFGS algorithm into the suggestively named read
and write operations, which operate on the inverse Hessian approximation Mt . In BFGS these
operations have specific forms, for example read(Mt , θt ) = −Mt ∇f (θt ) is a specific matrix-vector
multiplication and the BFGS write operation corresponds to a particular low-rank update of Mt .
In this work we preserve the structure of the BFGS updates, but discard their particular form. More
specifically the read operation remains a matrix-vector multiplication but the form of the vector
used is learned. Similarly, the write operation remains a low-rank update, but the vectors involved
are also learned. Conveniently, this structure of interaction with a large dynamically updated state
corresponds in a fairly direct way to the architecture of a Neural Turing Machine (NTM), where Mt
corresponds to the NTM memory [Graves et al., 2014].
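For concreteness, here is a sketch of the learned read and low-rank write operations on the external memory M_t, assuming PyTorch; the controller that produces the vectors r, a, and b is omitted, and all names are illustrative.
```python
import torch

def read(M, r):
    # matrix-vector product, as in BFGS, but with a learned read vector r
    return M @ r

def write(M, a, b):
    # low-rank update of the memory by the outer product of learned vectors
    return M + torch.outer(a, b)

n = 4
M = torch.zeros(n, n)
r, a, b = torch.randn(n), torch.randn(n), torch.randn(n)
g = read(M, r)          # plays the role of the proposed update direction
M = write(M, a, b)
```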
Our NTM-BFGS optimizer uses an LSTM+GAC as a controller; however, instead of producing the
update directly we attach one or more read and write heads to the controller. Each read head produces
a read vector rt which is combined with the memory to produce a read result it which is fed back
into the controller at the following time step. Each write head produces two outputs, a left write
vector at and a right write vector bt . The two write vectors are used to update the memory state by
accumulating their outer product. The read and write operation for a single head is diagrammed in
Figure 14 and the way read and write heads are attached to the controller is depicted in Figure 15.
It can be shown that NTM-BFGS with one read head and 3 write heads can simulate inverse Hessian
BFGS assuming that the controller can compute arbitrary (coordinatewise) functions and have access
to 2 GACs.
NTM-L-BFGS optimizer
In cases where memory is constrained we can follow the example of L-BFGS and maintain a low
rank approximation of the full memory (vis. inverse Hessian). The simplest way to do this is to store a
sliding history of the left and right write vectors, allowing us to form the matrix vector multiplication
required by the read operation efficiently.
Figure 15: Left: Interaction between the controller and the external memory in NTM-BFGS. The
controller is composed of replicated coordinatewise LSTMs (possibly with GACs), but the read and
write operations are global across all coordinates. Right: A single LSTM for the kth coordinate in
the NTM-BFGS controller. Note that here we have dropped the time index t to simplify notation.
arXiv:1501.03411v4 [] 10 Dec 2015
PERINORMALITY – A GENERALIZATION OF KRULL
DOMAINS
NEIL EPSTEIN AND JAY SHAPIRO
Abstract. We introduce a new class of integral domains, the perinormal domains, which fall strictly between Krull domains and weakly normal domains. We establish basic properties of the class, and in the case
of universally catenary domains we give equivalent characterizations of
perinormality. (Later on, we point out some subtleties that occur only in
the non-Noetherian context.) We also introduce and explore briefly the
related concept of global perinormality, including a relationship with divisor class groups. Throughout, we provide illuminating examples from
algebra, geometry, and number theory.
1. Introduction
Motivated in part by the classical concept of a ring extension satisfying
going-down from Cohen and Seidenberg [CS46], the concept of the goingdown domain has been fruitful in non-Noetherian commutative ring theory
(see for example [Dob73, Dob74, DP76]); for Noetherian rings it merely
coincides with domains of dimension ≤ 1 [Dob73, Proposition 7]. By definition, a ring extension R ⊆ S satisfies going-down if whenever p ⊂ q are
prime ideals of R and Q ∈ Spec S with Q ∩ R = q, there is some prime
ideal P ∈ Spec S with P ⊂ Q and P ∩ R = p (a condition that is satisfied
whenever S is flat over R). Then an integral domain R is a going-down
domain if for every (local) overring S of R, the inclusion R ⊆ S satisfies
going-down. (In fact by [DP76, Theorem 1] it doesn’t matter whether one
specifies ‘local’ or not.)
It is natural to ask which overrings of an integral domain R satisfy goingdown over it. It is classical that any flat R-algebra (hence any flat overring)
will satisfy going-down over R [Mat86, Theorem 9.5]. Moreover, the flat
local overrings are precisely the rings Rp where p is a prime ideal of R
[Ric65, Theorem 2]. In this context, since going-down domains have proven
to be a useful concept, it makes sense to explore the orthogonal concept:
• When does it happen that the only local overrings that satisfy goingdown over R are the localizations at prime ideals?
Date: December 11, 2015.
2010 Mathematics Subject Classification. 13B21, 13F05, 13F45.
Key words and phrases. Krull domain, going down, perinormal, globally perinormal,
universally catenary.
We call such a ring perinormal, and it is the subject of this paper. (The
related concept of global perinormality stipulates that the only overrings,
local or not, that satisfy going-down over the base are localizations of the
base ring at multiplicative sets.)
It turns out that the class of perinormal rings is closely related to Krull
domains (and so Noetherian normal domains) and weakly normal (hence
seminormal) domains in that Krull domain ⇒ perinormal ⇒ weakly normal and (R1 ), with neither implication reversible. Moreover, for universally
catenary domains (and somewhat more generally), we can characterize perinormal domains as those domains R such that no prime localization Rp has
an overring that induces a bijection on prime spectra. When R is smooth
in codimension 1, we can restrict our attention to integral overrings of these
Rp (cf. Theorem 4.7). On the other hand, the only perinormal going-down
domains are Prüfer domains.
The structure of the paper is as follows. We start by establishing some
basic facts in Section 2, including Proposition 2.5 which shows that perinormality is a local property and Proposition 2.4, which reframes perinormality
in terms of flatness. Section 3 explores the relationship of perinormality to
(generalized) Krull domains, weakly normal domains, and (R1 ) domains.
Theorem 3.10, Corollary 3.4, and Proposition 3.2 respectively show that
perinormality is implied by the first and implies the latter two properties. We also exhibit some sharpening examples. Section 4 is dedicated to
Theorem 4.7, which gives the two characterizations of perinormal domains
among the Noetherian domains mentioned above. In Section 5, we find Theorem 5.2, which exhibits a method for producing perinormal domains that
are not integrally closed. Section 6 is devoted to the related notion of global
perinormality; in particular, we give a partial characterization (see Theorem 6.4) of which Krull domains may be globally perinormal, along with
examples relevant to algebraic number theory. It turns out that the theory
of perinormality is a bit different when one includes non-Noetherian rings;
in Section 7, we point out the subtleties in a series of examples, including
the fact that not every integrally closed domain is perinormal (unlike in the
Noetherian case). We end with a list of interesting questions in Section 8.
Conventions: All rings are commutative with identity, and ring homomorphisms and containments preserve the multiplicative identity. The term
local means only that the ring has a unique maximal ideal. An overring of
an integral domain R is a ring sitting between R and its fraction field.
2. First properties
Definition 2.1. Let R be an integral domain. We say R is perinormal if
whenever S is a local overring of R such that the inclusion R ⊆ S satisfies
going-down, it follows that S is a localization of R (necessarily at a prime
ideal).
We say that R is globally perinormal if the same conclusion holds when
the condition on S being local is dropped (so that this time, the localization
is just at a multiplicative set).
Remark. The term perinormal is meant to reflect several aspects of the
property: (1) It is closely related to the properties of seminormality and
weak normality (cf. Corollary 3.4). (2) It is closely related to the concept of
normality in the Noetherian case (cf. Theorem 3.10), which is usually the
only situation where the word “normal” is used for integral closedness in the
literature. (3) Perinormality is not a weakening of the property of integral
closedness in general (cf. Example 7.1), whence the prefix peri- (unlike weak
and semi-, which both imply weakenings).
Lemma 2.2. A homomorphism of commutative rings R → S satisfies goingdown if and only if for all P ∈ Spec S, the induced map Spec (SP ) →
Spec (RP ∩R ) is surjective.
Proof. This follows immediately from the definition.
Recall the following theorem of Richman.
Theorem 2.3 ([Ric65, Theorem 2]). Let A be an integral domain and B
an overring of A. Then B is flat over A if and only if Bm = Am∩A for all
maximal ideals m of B.
This theorem allows us to characterize perinormality in terms of flatness:
Proposition 2.4. A domain R is perinormal if and only if every overring
of R that satisfies going-down is flat over R.
Proof. Suppose R is perinormal. Let S be an overring of R that satisfies
going-down over R. Let m be a maximal ideal of R. Clearly Sm satisfies
going-down over R (since going-downness is transitive and Sm is going-down
over S), so by perinormality, Sm is a localization of R, so that necessarily,
Sm = Rm∩R . Then by Theorem 2.3, S is flat over R.
Conversely, suppose every overring of R that satisfies going-down is flat
over R. Let (S, m) be a local overring of R that satisfies going-down. Then
it is flat over R, so again by Theorem 2.3, S = Rm∩R is a localization of R.
Thus, R is perinormal.
Next, we show that perinormality is a local property.
Proposition 2.5. If R is perinormal, so is RW for every multiplicative set
W . Conversely, if Rm is perinormal for all maximal ideals m of R, then so
is R.
Proof. First suppose R is perinormal. Let S be a local overring of RW that
satisfies going-down. Then S satisfies going-down over R (since no prime
ideal of R lain over by a prime of S can intersect W ) and so S = RV for some
multiplicatively closed subset V of R. But then V is also a multiplicatively
closed subset of RW , and S = (RW )V . Therefore RW is perinormal.
Conversely, suppose that Rm is perinormal for all maximal ideals m of R.
Let (S, n) be a local overring of R such that the inclusion R ⊆ S satisfies
going-down. Let m be a maximal ideal of R such that n∩R ⊆ m. Then Rm ⊆
S satisfies going-down, so that by perinormality of Rm , S = (Rm )n∩Rm =
Rn∩R . Thus R is perinormal.
Example 2.6. Any valuation domain R is globally perinormal because every
overring of R is a localization, as is easily shown. It then follows from
Proposition 2.5 that every Prüfer domain is perinormal.
3. (R1 ) domains, weakly normal domains, and generalized
Krull domains
In this section, we fit perinormality into the context of three known important classes of integral domains. Namely, generalized Krull =⇒ Krull =⇒
perinormal =⇒ weakly normal and (R1 ), with neither arrow reversible.
Definition 3.1. We say that a commutative ring R satisfies (R1 ) if RP is
a valuation domain whenever P is a height one prime of R.
Remark. It seems that in the literature, the term (R1 ) is only used for
Noetherian rings (cf. [Mat86, p. 183]). Here we have extended it to arbitrary commutative rings in a way that both coincides with the established
definition in the Noetherian case and suits our purpose in the general case.
Proposition 3.2. Any perinormal domain R satisfies (R1 ).
Proof. Let p be a height one prime of R. Let (V, m) be a valuation overring
of R such that m ∩ R = p. (If R is Noetherian, we can choose V to be
Noetherian as well.) Then the map R → V trivially satisfies going-down.
Thus, V is a localization of R, whence V = Rm∩R = Rp , completing the
proof that R satisfies (R1 ).
Proposition 3.3. If (R, m) is a local perinormal domain, then for any integral overring S of R such that Spec S → Spec R is a bijection, we have
R = S.
Proof. Let S be an integral overring of R such that Spec S → Spec R is
bijective. By integrality of the extension, some prime ideal n of S lies over
m; by bijectivity, there can be only one such prime; since fibers of Spec maps
on integral extensions are antichains, n is maximal, and the unique maximal
ideal of S.
Now, let p ⊂ q be a chain of primes in R, and let Q ∈ Spec S with
Q ∩ R = q. Since the Spec map is surjective, there is some P ∈ Spec S with
P ∩ R = p. By the ‘going up’ property of integral extensions, there is some
Q′ ∈ Spec S such that P ⊆ Q′ and Q′ ∩ R = q. But then by injectivity
of the Spec map, Q′ = Q. This shows that the inclusion R ⊆ S satisfies
going-down; hence S is a localization of R since R is perinormal. But since
the map R → S is a local homomorphism of local rings, the only way S can
be a localization of R is if R = S.
Recall that an integral domain R is weakly normal¹ if the following holds: for any integral
overring S of R such that the map Spec S → Spec R is a bijection and, for
all P ∈ Spec S (where we set p := P ∩ R), the corresponding field extension
Rp /pRp → SP /P SP is purely inseparable, it follows that R = S.
A domain R is seminormal [Swa80] if whenever x is an element of the
fraction field with x², x³ ∈ R, we have x ∈ R. However, it is equivalent
to say that for any integral overring S such that Spec S → Spec R is a
bijection and the corresponding field extensions Rp /pRp → SP /P SP are
isomorphisms, then R = S. From this, it is clear that every weakly normal
domain is seminormal, and that for a domain that contains Q, the converse
holds.
Recall that both weak normality and seminormality are local properties
in the sense of Proposition 2.5. Also every normal domain is weakly normal.
For all this and more, cf. Vitulli’s survey article on weak normality and
seminormality [Vit11].
Corollary 3.4. If R is perinormal, then it is weakly normal (hence seminormal).
Proof. Since both perinormality and weak normality are local properties,
we may assume R is local. Now let S be an integral overring of R where
Spec S → Spec R is a bijection such that for any P ∈ Spec S, the field
extension Rp /pRp → SP /P SP is purely inseparable (where p = P ∩ R).
Then by Proposition 3.3, R = S. It follows that R is weakly normal.
We next present two examples to show that the converse to Corollary 3.4
is false, even under some additional restrictions.
Example 3.5. Not all weakly normal (resp. seminormal) domains are perinormal, even in dimension 1. For example, A = ℝ[x, ix] is seminormal, even
weakly normal, without being perinormal. Failure of perinormality arises
from the fact that ℂ[x] is going-down over A (with the same fraction field
ℂ(x)) without being a localization of it. To see seminormality, merely observe that A consists of those polynomials whose constant term is real, and
if f ∈ ℂ[x] is such that its square and cube have real constant term, it follows that the constant term of f has its square and cube in ℝ, whence the
constant term of f is in ℝ already.
Example 3.6. (Thanks to Karl Schwede for this example.) Even for finitely
generated algebras over algebraically closed fields, weakly normal (R1 ) domains are not necessarily perinormal. For an example, consider R = k[x, y, xz, yz, z 2 ]
where k is any field of characteristic not equal to 2. Let A = k[x, y, z], and
note that A is the integral closure of R. Hence every prime ideal of R is
contracted from A. Let P ∈ Spec A. If P ⊉ (x, y), then z ∈ RP ∩R , whence
RP ∩R = AP is regular. Therefore, RP ∩R is normal, weakly normal, and
perinormal. This also shows that R satisfies (R1 ).
¹This is not the original definition [AB69], but it is equivalent [Yan83, Remark 1].
Further, Yanagihara [Yan83, Proposition 1] has shown that an arbitrary
pullback of a weakly normal inclusion is also a weakly normal inclusion.
Hence, we may conclude that R is weakly normal, as R/I is a subring of
A/I, where I = (x, y, xz, yz)R = (x, y)A, and the extension k[z²] ↪ k[z]
is weakly normal, since char k ≠ 2. (One may similarly show the ring is
seminormal even when char k = 2 by using [GT80, 4.3] in place of [Yan83,
Proposition 1], which shows the analogous fact for seminormal inclusions.)
However, the ring R(x,y)∩R = k(z 2 )[x, y, xz, yz](x,y,xz,yz) is not perinormal.
Its integral closure is A(x,y) = k(z)[x, y](x,y) . Then the map R(x,y)∩R →
A(x,y) induces a bijection on spectra because for all other primes, we have
an isomorphism, whereas the localness of the integral closure shows that we
also have bijectivity at the maximal ideal. But the two rings are unequal
because z ∉ R(x,y)∩R . Then since R(x,y)∩R is not perinormal, neither is R.
Lemma 3.7. Let R be an integral domain, S an overring of R, and p ∈
Spec S such that V := Rp∩R is a valuation domain of dimension 1. Then
Rp∩R = Sp as subrings of the fraction field K of R, and ht p = 1.
Proof. We have V = Rp∩R ⊆ Sp ⊆ K. But Sp ≠ K, since p ≠ 0. On the
other hand, V is a valuation domain, so every overring is a localization at a
prime ideal. Since V has only two primes, the only possibilities are V and
K. Since Sp ≠ K, it follows that Sp = V . Finally, ht p = dim Sp = dim V =
1.
Definition 3.8. For a commutative ring R, Spec 1 (R) denotes the set of all
height one primes of R.
Proposition 3.9. Let R be an (R1 ) domain and let S be an overring such
that the extension R ⊆ S satisfies going-down. Then S satisfies (R1 ), and
the map Spec S → Spec R induces an injective map Spec 1 (S) → Spec 1 (R)
whose image consists of those height one primes p of R such that pS ≠ S.
Proof. First we need to show that given a height one prime Q of S, q := Q∩R
is a height one prime of R. We have q ≠ 0 because R ⊆ S is an essential
extension of R-modules; hence ht q ≥ 1. On the other hand, suppose there
is some p ∈ Spec R with 0 ⊊ p ⊊ q. Then by going-down, there is some
P ∈ Spec S with P ∩ R = p. But then P ≠ 0 (again by essentiality of the
extension), whence 0 ⊊ P ⊊ Q is a chain of primes in S, so that ht Q ≥ 2,
a contradiction. Then by Lemma 3.7, S satisfies (R1 ).
Next, let p ∈ Spec 1 (R). If pS = S, then no prime of S can lie over p.
On the other hand, if pS ≠ S, then there is some maximal ideal Q of S
with pS ⊆ Q. Then the going-down property implies that there is some
P ∈ Spec S with P ∩ R = p. Moreover, Lemma 3.7 along with the (R1 )-ness
of R implies that SP = Rp and ht P = 1. Finally, if there is some other
prime ideal P ′ of S with P ′ ∩ R = p, then we have SP = Rp = SP ′ . But
different prime ideals of a ring always give rise to different localizations, so
P = P ′ , finishing the proof that the map of Spec 1 ’s is injective.
Consider the following properties that an integral domain R may have:
(1) R = ⋂_{p∈Spec 1 (R)} Rp .
(2) For any nonzero element r ∈ R, the set {p ∈ Spec 1 (R) | r ∈ p} is
finite.
(3) Rp is a DVR for all p ∈ Spec 1 (R).
One says R is a Krull domain (resp. generalized Krull domain) if it satisfies
properties (1–3) (resp. properties (1), (2), and (R1 )).
Recall the Mori-Nagata theorem (cf. [Fos73, Theorem 4.3]), which says
that the integral closure of any Noetherian domain is Krull (though not
necessarily Noetherian); hence every Noetherian normal domain is Krull.
Theorem 3.10. If R is a generalized Krull domain (e.g. Noetherian normal),
then R is perinormal.
Proof. Let (S, m) be a local overring of R such that the inclusion R ⊆ S
satisfies going-down. Let Q = m ∩ R; RQ is then also a generalized Krull
domain [Gil72, Corollary 43.6]. Note that the going-down condition implies
that the map Spec S → Spec RQ is surjective. Hence by Proposition 3.9, we
get a bijective map Spec 1 (S) → Spec 1 (RQ ), and for each P ∈ Spec 1 (S) and
corresponding p = P ∩ R ∈ Spec 1 (RQ ), we have (RQ )p = SP by Lemma 3.7.
Therefore
RQ ⊆ S ⊆ ⋂_{P ∈Spec 1 (S)} SP = ⋂_{p∈Spec 1 (RQ )} (RQ )p = RQ .
That is, S = RQ , so R is perinormal.
4. Local characterizations of perinormality
In this section, after a preliminary exploration of how (R1 ) domains interact with overrings and the special relationship that occurs between two
rings that share a nonzero ideal, we provide two surprising characterizations
of perinormal domains within a large class of integral domains.
Lemma 4.1. Let R be an (R1 ) integral domain whose integral closure R′
is a generalized Krull domain and such that for all P ∈ Spec 1 R′ , P ∩ R ∈
Spec 1 R. If there is a maximal ideal of R that contains all the height one
primes of R, then R is local.
Proof. Let m be a maximal ideal of R, and suppose that m contains all height
one primes of R. By Lemma 3.7, since R is an (R 1 ) domain, if p ∈ Spec 1 R
and P ∈ Spec 1 R′ lies over p, then Rp = (R′ )P . As height one primes of R′
contract to height one primes of R, we have
Rm ⊆ ⋂_{p∈Spec 1 (Rm )} (Rm )p = ⋂_{P ⊆m, ht P =1} RP = ⋂_{P ∈Spec 1 R′} (R′ )P = R′ ,
where the last equality follows since R′ is generalized Krull. We have shown
that Rm is integral over R, which can only happen if R = Rm .
Lemma 4.2. Let (R, m) be an (R1 ) local domain whose integral closure R′
is a generalized Krull domain and such that for all P ∈ Spec 1 R′ , P ∩ R ∈
Spec 1 R. Let S be an integral overring of R that satisfies going-down over
R. Then S is local.
Proof. By Proposition 3.9, S satisfies (R1 ) and the map Spec S → Spec R
induces a bijection Spec 1 (S) → Spec 1 (R). Now let n be a maximal ideal of S
that contains mS; it exists because S is integral over R. Then the extension
R ⊆ Sn is going-down and mSn ≠ Sn , so Proposition 3.9 applies again to
produce a bijection Spec 1 (Sn ) → Spec 1 (R). This bijection composes with
the inverse of the previous bijection to give a bijection of Spec 1 (Sn ) with
Spec 1 (S). Hence, for all height one primes p of S, we have pSn 6= Sn –
that is, p ⊆ n. Moreover, R and S have the same integral closure, which
is generalized Krull by hypothesis. As height one primes of R′ contract to
height one primes of R, one can show using the properties of integrality that
the same holds for all intermediate rings. Thus by Lemma 4.1, S must be
local.
Next we give conditions on a domain A that ensure that height one primes
of A′ contract to height one primes of A. We first need to recall some
definitions. A ring A is called catenary if given a pair p1 ⊂ p2 of prime ideals
of A such that there exists a saturated chain of prime ideals between the
two, then all such saturated chains have the same length. We say that A is
universally catenary if it is Noetherian and every finitely generated A-algebra
is catenary. It is clear that being catenary and hence being universally
catenary is closed under localization.
Let A ⊆ B be integral domains. Then tr.degA B denotes the transcendence degree of the fraction field of B over that of A. Recall that the ring
extension is said to satisfy the dimension (or altitude) formula if the following equality holds for all P ∈ Spec B:
htP + tr.degA/p B/P = ht p + tr.degA B
where p = P ∩ A (see for example [Mat86, p. 119]). We note that if in
addition B is integral over A, then tr.degA B = 0 = tr.degA/p B/P in which
case the height of a prime of B is invariant under contraction to A.
Lemma 4.3. Let R be a universally catenary integral domain with integral
closure R′ . Then every height one prime ideal of R′ contracts to a height
one prime ideal of R.
Proof. Since R is Noetherian, by [Rat80, Corollary 2.5] it will suffice to
show that if f ∈ R[x]′ , then the height of a prime ideal of R[x, f ] is invariant
under contraction to R[x]. Since R[x] is also universally catenary and R[x, f ]
is module finite over R[x] (in particular algebra finite), it follows that the
extension R[x] ⊆ R[x, f ] satisfies the dimension formula (see for example
[Mat86, Theorem 15.6]). Since it is also an integral extension, we have that
the height is invariant under contraction as desired.
Remark. The universal catenarity assumption is not particularly restrictive, as almost every Noetherian ring that arises in algebraic geometry,
number theory, and everyday commutative algebra is universally catenary.
Indeed, the class of universally catenary rings is closed under localization
and finitely generated algebra extensions. Moreover, it includes Cohen-Macaulay rings (including Dedekind domains, fields, and regular local rings;
see [Mat86, Theorem 17.9]) and complete Noetherian local rings (e.g. power
series rings in finitely many variables over a field or over the p-adics; see
[Mat86, Theorem 29.4]).
The following three results are well-known to experts, and some of their
statements appear (without proofs) in [Fon80]. However, we include them
here for completeness and to make the paper self-contained.
Lemma 4.4. Let R ⊆ T be an inclusion of commutative rings, and let I
be an ideal that is common to R and T . (That is, I is an ideal of R and
IT = I.) Let W be a multiplicatively closed subset of T , set V := W ∩R, and
suppose that I ∩ W ≠ ∅. Then the natural map RV → TW is an isomorphism.
Proof. Let z ∈ I ∩ W . To see injectivity, let r/v ∈ RV (with r ∈ R, v ∈ V )
such that r/v = 0 in TW . Then for some w ∈ W , we have wr = 0. Moreover,
zw ∈ I ∩ W ⊆ R ∩ W = V and (zw)r = 0, whence r/v = 0 in RV .
To see surjectivity, let t/w ∈ TW (with t ∈ T , w ∈ W ). Then zt ∈ IT ⊆ R
and zw ∈ I ∩ W ⊆ R ∩ W = V , so that t/w = zt/(zw) ∈ RV .
Corollary 4.5. Let R ⊆ T be an inclusion of commutative rings, and let I
be an ideal common to R and T . Let z ∈ I, and let P ∈ Spec T with I ⊈ P .
Then the natural maps Rz → Tz and RP ∩R → TP are isomorphisms.
Proof. In the first case, apply Lemma 4.4 with V = W = {z n | n ∈ N}. In
the second case, apply the same lemma with W = T \ P .
Corollary 4.6. Let R ⊆ T be integral domains that share a common nonzero
ideal I. Then the induced map of fraction fields is an isomorphism.
Proof. Apply Lemma 4.4 with W = T \ {0}.
Theorem 4.7. Let R be a universally catenary integral domain with fraction
field K. The following are equivalent.
(a) R is perinormal.
(b) For each p ∈ Spec R, Rp is the only ring S between Rp and K such
that the induced map Spec S → Spec Rp is an order-reflecting bijection.
(c) R satisfies (R1 ), and for each p ∈ Spec R, Rp is the only ring
S between Rp and its integral closure such that the induced map
Spec S → Spec Rp is an order-reflecting bijection.
Proof. We note that we only need the universal catenarity condition for the
implication (c) =⇒ (a).
(a) =⇒ (b): Since perinormality localizes, we may assume that (R, p) is
local. Now let S be a ring between R and K such that Spec S → Spec R is
an order-reflecting bijection. Thus S satisfies going-down over R. Since S
is local, the perinormality assumption on R implies that S is a localization
of R. As the Spec map is onto, we conclude that R = S.
(b) =⇒ (c): To see that R satisfies (R1 ), let p be a height one prime
of R. Let V be a valuation ring centered on p. Then all nonzero prime
ideals of V contract to p, and their intersection q is also a prime ideal of V .
Since q contains no prime ideals other than itself and (0), we have ht q = 1.
Now, the map Rp → Vq induces a bijection on Spec, so Rp = Vq , a valuation
domain. On the other hand, the second condition in (c) follows trivially
from (b).
(c) =⇒ (a): Let (S, n) be a going-down local overring of R. Let p =
n ∩ R. Note that Rp satisfies (R1 ), so that by Proposition 3.9, the map
Spec S → Spec Rp induces a bijection Spec 1 (S) → Spec 1 (Rp ) where by
Lemma 3.7, the corresponding localizations of S and Rp coincide. Hence S
is (R1 ). Since Rp is also universally catenary it follows by integrality and
Lemma 4.3 that the Spec map Spec (Rp )′ → Spec Rp induces a bijection
Spec 1 (Rp )′ → Spec 1 Rp , where again the corresponding localizations of Rp
and (Rp )′ coincide. Thus,
Rp ⊆ S ⊆ ⋂_{Q∈Spec 1 (S)} SQ = ⋂_{P ∈Spec 1 (Rp )} (Rp )P = (Rp )′ ,
where the last equality follows since (Rp )′ is a Krull domain. Hence S is
integral over Rp .
Next, we claim that the map Spec S → Spec Rp is injective. To see this,
let Q be a prime ideal of Rp , and let W := Rp \ Q. Then the inclusion
(Rp )Q ⊆ SW is integral, it satisfies going-down, and QSW ≠ SW . Moreover,
the integral closure of Rp is (R′ )R\p , a Krull domain. Thus by Lemmas 4.2
and 4.3, SW is local. But this means that only one prime of S lies over Q,
whence the map Spec S → Spec Rp is injective.
However, the map is also surjective, since S is integral over Rp . Therefore
the map is bijective.
Finally, we must show that the map is order-reflecting – that is, if q1 ⊆
q2 in Rp , then the corresponding primes in S are also so ordered. So let
Qj ∈ Spec S with Qj ∩ Rp = qj , j = 1, 2. By going-down, there is some
P ∈ Spec S with P ⊆ Q2 and P ∩ Rp = q1 = Q1 ∩ Rp . But then by the
injectivity of the Spec map, P = Q1 , whence Q1 ⊆ Q2 . Hence, condition (c)
applies and Rp = S, whence R is perinormal.
Recall [FO70] that a ring extension A ⊆ B is called minimal if there are
no rings properly between A and B.
Corollary 4.8. Let (R, m) be a universally catenary local domain. Assume
that dim R ≥ 2 and that the map R → S is a minimal ring extension, where
S is the integral closure of R. Then R is perinormal if and only if S is not
local.
Proof. By [FO70, Theorem 2.2], m is also an ideal of S. Now let p ∈ Spec R
with p ≠ m. Let P, P ′ ∈ Spec S with P ∩ R = P ′ ∩ R = p. Then by
Corollary 4.5, SP = Rp = SP ′ , whence P = P ′ . Also, by integrality any
maximal ideal of S must contract to m. Hence, there is a bijection between
the nonmaximal primes of R and those of S.
Suppose S is local. The only possibility of non-bijection of Spec happens
at the maximal ideals, but it is clear that the unique maximal ideal of S
contracts to m. Thus, R → S induces a bijection on Spec even though
R ≠ S. Then by the implication (a) =⇒ (b) of Theorem 4.7, R cannot be
perinormal.
On the other hand, if S is not local, then by minimality of the extension,
there is no local integral overring of R = Rm other than R itself. Also, for
any p ∈ Spec R \ {m}, Rp is integrally closed (because it equals SP , where
P contracts to p), so again there is no local integral overring. The same
observation shows that R satisfies (R1 ), since none of the height one primes
of R are maximal. Then by the implication (c) =⇒ (a) of Theorem 4.7, R
is perinormal.
We close this section by presenting an example that shows that Theorem 4.7 and Corollary 4.8 are in some sense best possible.
Example 4.9. The fact that the last two results are false for arbitrary
Noetherian rings can be demonstrated by [Nag62, Appendix, Example 2]
with m = 0. This example consists of a Noetherian normal ring S with
exactly two maximal ideals m1 and m2 where ht m1 = 1 and ht m2 = 2 and
a field k ⊂ S such that the canonical map k → S/mi is an isomorphism for
i = 1, 2. If R = k + J, where J = m1 ∩ m2 , then Nagata shows that S is the
integral closure of R.
We next claim that the set Spec 1 R is in natural bijection with the set X of
height one primes contained in m2 , and that the corresponding localizations
are equal. To see this, let p ∈ Spec 1 R. Then by integrality, there is some
P ∈ Spec 1 S with P ∩ R = p. But all height one primes of S are in m2
except m1 , and m1 ∩ R = J ⊋ p. Thus, P ⊂ m2 . Hence, contraction gives a
surjective map X ↠ Spec 1 R. Finally, if P, P ′ ∈ X with P ∩ R = P ′ ∩ R = p,
then SP = Rp = SP ′ (by Corollary 4.5), whence P = P ′ .
Hence R satisfies (R1 ) and the ring Sm2 satisfies going-down over R. However, the ring Sm2 cannot be a localization of R as the maximal ideal of the
former contracts to the maximal ideal of R. Therefore R is not perinormal.
On the other hand, S is a minimal ring extension of R by [PPL06, Theorem
3.3(b)]. Hence there are no local rings strictly between R and R′ = S and
so condition (c) of Theorem 4.7 is satisfied. Thus both Theorem 4.7 and
Corollary 4.8 are false without some assumption on R.
5. Gluing points of generalized Krull domains in high dimension
In this section, we exhibit a method for constructing perinormal domains
out of pre-existing generalized Krull domains, such that the new domains
enjoy an arbitrary degree of branching-like behavior. We explain how to
interpret this construction either in the algebraic context of pullbacks or the
geometric context of gluing points.
We begin with the following result, which may be known, but we include
a proof for the convenience of the reader.
Lemma 5.1. Let R ⊆ S ⊆ T be ring extensions. Let X := {P ∈ Spec S |
P ∩ R is a maximal ideal}. Suppose that the induced map (Spec S \ X) →
(Spec R\Max R) is injective and the inclusion R ⊆ S satisfies INC. If R ⊆ T
satisfies going-down, so does S ⊆ T .
Proof. Let P1 ⊂ P2 be a chain of two prime ideals of S such that there exists
Q2 ∈ Spec T with Q2 ∩ S = P2 . Then setting pj := Pj ∩ R, j = 1, 2, we have
p1 ≠ p2 (by INC) and Q2 ∩ R = p2 . Then by the going-down hypothesis
on the extension R ⊆ T , there is a prime ideal Q1 of T with Q1 ⊆ Q2 and
Q1 ∩ R = p1 . But then we have P1 ∩ R = p1 = Q1 ∩ R = (Q1 ∩ S) ∩ R, so
by injectivity of the map in question (since p1 is a non-maximal ideal of R),
we have P1 = Q1 ∩ S, completing the proof.
Theorem 5.2. Let S be a semilocal generalized Krull domain and let m1 , . . . , mn
be its maximal ideals. Assume that n ≥ 2, and ht mj ≥ 2 for all 1 ≤ j ≤ n.
Further suppose that the fields S/mi are all isomorphic to the same field k.
For each i = 1, 2, . . . , n fix an isomorphism αi : k → S/mi . Let R be the
pullback in the diagram
        f
   R ------> S
   |         |
  g|         |p
   ↓         ↓
   k ------> S/J
        h
where J := m1 ∩ · · · ∩ mn = ∏_{j=1}^n mj , p is the canonical projection, and h is
the composition of the map k → ∏_{i=1}^n S/mi (given by λ ↦ (α1 (λ), . . . , αn (λ)))
and the isomorphism between ∏_{i=1}^n S/mi and S/J (given by the Chinese Remainder Theorem). Then R is local and perinormal. Also, R is globally
perinormal if S is. But R is not integrally closed, because its integral closure is S.
Proof. We first note that it follows from the properties of a pullback that
as h is an injection (resp. p is a surjection), f is an injection (resp. g is
a surjection). Thus we can view R as a subring of S where J = ker g is
a common nonzero ideal of both rings. Then it follows from Corollary 4.6
that R and S have the same field of fractions.
Next we show that S is integral over R (and hence equals the integral
closure of R). To see this, let s ∈ S. Since J is a common ideal of R and S,
we have
k ≅ R/J ↪ S/J = S/(m1 ∩ · · · ∩ mn ) ≅ ∏_{j=1}^n (S/mj ) ≅ k × · · · × k,
where the composite map is just the diagonal embedding. Now k × · · · × k
is integral over k, which means that S/J is integral over R/J. In particular,
there is some monic g ∈ (R/J)[X] such that g(s̄) = 0. But then g lifts to a
monic polynomial G ∈ R[X] such that G(s) ∈ J. Say G(s) = j ∈ J. Then
H(X) := G(X) − j is a monic polynomial over R such that H(s) = 0. It
follows that the integral closure of R is S.
Now we claim that R is local. This will follow if we can show that J is
the Jacobson radical of R, since we already have that J is a maximal ideal
of R. To this end, it suffices to show that for each j ∈ J, 1 − j is a unit of
R. If not, then 1 − j ∈ p for some prime ideal of R, so that 1 − j ∈ pS. But
since 1− j is a unit of S (since J is the Jacobson radical of S), it follows that
pS = S, which contradicts the lying over property of the integral extension
R ⊆ S. This contradiction proves the claim.
Before showing that R is perinormal, we collect some observations about
the relationship between Spec R and Spec S. Let P ∈ Spec S and p = P ∩ R.
By integrality P is a nonmaximal ideal of S if and only if p is a nonmaximal
ideal of R. Furthermore in this case by Corollary 4.5, we have Rp = SP ,
whence ht P = ht p. Since we are assuming that no maximal ideal of S has
height 1, each height one prime of S must contract to a height one prime of
R. Moreover by integrality each height one prime of R is lain over by a prime
of S. Thus the Spec map induces a bijection Spec 1 (S) → Spec 1 (R) where
the corresponding localizations coincide. In particular R satisfies (R1 ).
Now let T be an overring of R such that R ⊆ T satisfies going-down.
Case 1: Suppose JT = T . Then S ⊆ T . To see this, let n be a maximal
ideal of T . Since J * n, we have n ∩ R = p ( J. Then there is some
nonmaximal P ∈ Spec S with P ∩ R = p (since S is integral over R), whence
we have Rp = SP . Hence, S ⊆ SP = Rp ⊆ Tp ⊆ Tn . Since n was an arbitrary
maximal ideal of T , it follows that S ⊆ ⋂_{n∈Max T} Tn = T .
Next, since the map Spec S → Spec R is injective on non-maximal ideals,
S is integral over R, and R ⊆ T satisfies going-down, it follows from
Lemma 5.1 that the extension S ⊆ T satisfies going-down. Thus, if T is
local or S is globally perinormal, we have that T = SW for some multiplicative subset W of S. On the other hand, for any maximal ideal mi of S, we
have SW = T = JT ⊆ mi T = mi SW , so mi ∩ W ≠ ∅. Let zi ∈ mi ∩ W , and
let z := ∏_{i=1}^n zi . Note that z ∈ J and that z is a unit in T . Thus by Corollary 4.5, T = SW = (Sz )V = (Rz )V = RV ′ for appropriate multiplicative
sets V and V ′ , so that T is a localization of R.
Case 2: On the other hand if JT ≠ T , then by Proposition 3.9, the
map Spec T → Spec R induces a bijection Spec 1 (T ) → Spec 1 (R), and by
Lemma 3.7 the corresponding localizations are equal. Since we have a similar
bijection between Spec 1 (S) and Spec 1 (R), we get
R ⊆ T ⊆ ⋂_{P ∈Spec 1 (T )} TP = ⋂_{Q∈Spec 1 (S)} SQ = S.
where the last equality follows from S being a generalized Krull domain. It
follows that T is integral over R and that J is a common ideal to R, S, and
T , so we have
k ≅ R/J ⊆ T /J ⊆ S/J ≅ k × · · · × k.
Thus, T /J must be isomorphic to a product of some finite number of copies
of k. But by Lemma 4.2, T is local. Hence, T /J is local as well. Therefore,
T /J ≅ k, whence T = R.
Example 5.3. For a geometrically relevant example of the above, let B =
k[X, Y ], let pj = (xj , yj ) ∈ k2 be distinct ordered pairs (points of k2 ) for
1 ≤ j ≤ t, let nj := (X − xj , Y − yj ) (the maximal ideal corresponding to
pj ), J := ⋂_{j=1}^t nj , and A := k + ⋂_{j=1}^t nj . Note that J is a maximal ideal
of A. Then by the above theorem, the ring AJ is perinormal (even globally
perinormal!), but it isn’t normal unless t = 1 (since there are t maximal
ideals lying over JAJ in the integral closure of AJ .)
By [Fer03, Théorème 5.1], Spec A can be seen, quite precisely, as the
algebro-geometric result of gluing the points p1 , . . . , pt of A²_k together, and Spec AJ is the (geometric) localization at the resulting singular
point.
6. Global perinormality
Next, we explore the related but quite distinct concept of global perinormality. In particular, for Krull domains, there is a strong and surprising
relationship to the divisor class group. We illustrate with examples from
algebraic number theory.
Proposition 6.1. Let R be a globally perinormal domain, and let W be a
multiplicative subset of R. Then RW is globally perinormal as well.
Proof. Let S be an overring of RW that satisfies going-down. Let Q ∈
Spec S. Then by Lemma 2.2, the map Spec SQ → Spec (RW )Q∩RW is surjective. But (RW )Q∩RW = RQ∩R canonically, so that the map Spec SQ →
Spec RQ∩R is surjective. Since Q ∈ Spec S was arbitrarily chosen, Lemma 2.2
applies again to show that the map R → S satisfies going-down, whence since
R is globally perinormal, S must be a localization of R. That is, S = RV for
some multiplicative subset V of R. But since RW ⊆ S, we have W ⊆ V , so
that V ′ := V RW is a multiplicative subset of RW , and S = RV = (RW )V ′ ,
finishing the proof that RW is globally perinormal.
We next give a result analogous to Proposition 2.4.
Proposition 6.2. Let R be a perinormal domain. Then R is globally perinormal if and only if every flat overring of R is a localization of R.
Proof. Suppose R is globally perinormal. Let S be a flat overring of R.
Then S satisfies going-down over R (by flatness), so by global perinormality,
S = RW for some multiplicative set W ⊆ R.
Conversely, suppose every flat overring of R is a localization of R. Let
S be an overring that satisfies going-down over R. By perinormality and
Proposition 2.4, S is flat over R, whence by assumption S is a localization
of R. Hence, R is globally perinormal.
Proposition 6.3. Let R be a generalized Krull domain and let S be a going-down overring of R. Then
S = ⋂_{p∈∆} Rp =: R∆ ,
where ∆ := {p ∈ Spec 1 (R) | pS ≠ S}.
Proof. For any maximal ideal m of S, Sm is a local overring of R such that
R ⊆ Sm satisfies going-down. Hence by Theorem 3.10, Sm is a localization of
R – i.e., Sm = Rm∩R . Now, for every p ∈ ∆, there is some such m ∈ Max S
with pS ⊆ m, whereas when p ∈ Spec 1 (R) \ ∆, there is no such m. Also,
every such Rm∩R is generalized Krull, by [Gil72, Corollary 43.6]. Thus:
S = ⋂_{m∈Max S} Sm = ⋂_{m∈Max S} Rm∩R = ⋂_{m∈Max S} ⋂_{P ∈Spec 1 (Rm∩R )} (Rm∩R )P = ⋂_{m∈Max S} ⋂_{p∈Spec 1 (R), pS⊆m} Rp = ⋂_{p∈∆} Rp = R∆ .
The next theorem involves the divisor class group; hence we restrict our
attention to Krull domains (rather than generalized Krull domains), which
is where the theory of the divisor class group is most well developed.
Theorem 6.4. Let R be a Krull domain.
(1) If the divisor class group Cl(R) of R is torsion, then R is globally
perinormal.
(2) The converse holds when dim R = 1.
Proof. To prove part (1), it is enough to show that every flat overring of R is
a localization of R, due to Proposition 6.2 and Theorem 3.10. So let S be a
flat overring of R. Then by Theorem 2.3, Sm = Rm∩R for all maximal ideals
m of S. In particular, S is an intersection of localizations of R at prime
ideals of R. But recall that Heinzer and Roitman prove [HR04, Corollary
2.9] that for a Krull domain with torsion divisor class group, any intersection
of localizations of R at prime ideals is in fact a localization of R. Thus, S
is a localization of R, whence R is globally perinormal.
As for part (2), the following statement was proved independently in
[Dav64, Theorem 2], [Gol64, Corollary (1)], and [GO64, Corollary 2.6]:
Let R be a Dedekind domain. Then the class group Cl(R) of
R is torsion if and only if every overring of R is a localization
of R.
But any 1-dimensional Krull domain is a Dedekind domain [Mat86, Theorem
12.5]. Hence, (2) follows.
Example 6.5. The ring of integers OK of any finite algebraic extension of
K of Q is globally perinormal. This is because OK is a Dedekind domain
(hence Krull) with finite (hence torsion) class group (cf. [FT93, Theorem
31]). The result then follows from Theorem 6.4.
Example 6.6. If Rm is globally perinormal for all maximal ideals m, it does
not follow that R is globally perinormal, even when R is a Dedekind domain
finitely generated over a field. To see this, let E be any elliptic curve, with
Weierstraß equation f = 0, considered as an affine curve in A2C . Then as a
group, E = E(C) is analytically isomorphic (as an algebraic group) to C/Λ
for some lattice Λ [Sil86, Corollary VI.5.1.1], which in turn is abstractly
isomorphic (as a group) to R/Z × R/Z. The latter has uncountably many
non-torsion elements (namely, whenever either of the two coordinates is irrational). On the other hand, E(C) is isomorphic to a particular subgroup
(the so-called degree 0 part) of the divisor class group of the Dedekind domain R = C[X, Y ]/(f ) [Sil86, Proposition III.3.4], as the latter is the affine
coordinate ring of E(C). Thus, Cl(R) contains (uncountably many) nontorsion elements, so by Theorem 6.4(2), R is not globally perinormal. But
Rm is a DVR for any m ∈ Max R (since R is a Dedekind domain), so Rm is
globally perinormal.
We have also constructed a chart tracking many of the dependencies discussed so far. Note that none of the arrows are reversible, and that a crossed-out arrow indicates a specific non-implication.
[Chart: an implication diagram relating the classes Dedekind; Noetherian normal; OK , where K is an algebraic number field; Krull with torsion Cl(R); globally perinormal; Krull; generalized Krull; integrally closed; Prüfer; perinormal; weakly normal; (R1 ); and seminormal domains.]
7. Some subtleties of the non-Noetherian case
As usual, nuances exist for general commutative rings that do not come
up when one assumes all rings involved are Noetherian. We explore some of
these in the current section.
Example 7.1. There is a non-Noetherian one-dimensional local integrally
closed domain that isn’t perinormal. In fact, any integrally closed one-dimensional local domain that isn’t a valuation domain will suffice. For
example, let K/F be a purely transcendental field extension, let X be an
analytic indeterminate over K, let V := K[[X]], and then let R = F + XV .
Then R is easily seen to be local with maximal ideal XV and integrally
closed (but not completely integrally closed) in its fraction field K((X)). To
see that R has dimension 1, let p ∈ Spec R with 0 ( p ( XV . Then by
Lemma 4.4, the map Rp → VR\p is an isomorphism, so that VR\p is local
and hence equals either V or K((X)). The former is impossible since every
nonunit of R is a nonunit of V , and the latter means that p = 0, which
contradicts the assumption. Hence, Spec R = {0, XV }, whence dim R = 1.
But V is a going-down local overring of R that is not a localization.
Example 7.2. There exist non-Prüfer, integrally closed integral domains
(necessarily non-Noetherian) that are perinormal but not generalized Krull.
For a concrete example, let k be a field, x, y indeterminates over k, and
R = k[x, y, y/x, y/x2 , y/x3 , . . .], considered as a subring of k(x, y). If m = xR,
then m is a maximal ideal of R of height two (see below). If P is any other
maximal ideal of R, then x ∉ P . Thus R ⊆ k[x, y]p , where p = P ∩ k[x, y].
Hence RP = k[x, y]p , and so in particular RP is a Krull domain, whence
perinormal. It is also now clear that R is not Prüfer. On the other hand, it
is known (though apparently not written down) that Rm is a valuation ring.
Specifically, Rm is the valuation ring associated to the valuation ν on k(x, y)
with value group Z × Z (ordered lexicographically), where ν(x) = (0, 1) and
ν(y) = (1, 0). We give a brief outline as to why Rm = V , where V is the
valuation ring of ν.
Clearly R ⊂ V , since y/x ∈ V . Moreover the maximal ideal of V is
generated by x, and so Rm ⊆ V . For the reverse containment we can write
an arbitrary element of V as f /g, where f, g ∈ k[x, y]. Evidently we can
assume that g is not divisible by y. Thus ν(g) = (0, m) for some nonnegative
integer m. We can then write g = xm h(x) + yp(x, y), where h(x) ∈ k[x] and
p(x, y) ∈ k[x, y]. Since ν(f ) ≥ ν(g), v(f ) = (n, t) where either n > 0 or
n = 0 and t ≥ m. In either case one can show that f /g can be written in
the form F/G, where F, G ∈ k[x, y] and G ∈
/ m. Thus f /g ∈ Rm , showing
the two rings are equal.
Hence, R is integrally closed. Finally to complete the example we must
know that R is not a generalized Krull domain. However, m = xR is a
principal prime ideal of height two. Thus x−1 ∈ (⋂_{p∈Spec 1 (R)} Rp ) \ R, contradicting the definition of a generalized Krull domain.
Example 7.3. Let V be any rank 1 valuation ring and n ∈ N. Recall
that generalized Krull domains are closed under finite polynomial extension [Gil72, Theorem 43.11(3)]. Thus, V [X1 , . . . , Xn ] is a generalized Krull
domain (since V is obviously generalized Krull), hence perinormal (by Theorem 3.10). This provides a large class of examples of perinormal domains,
of arbitrary Krull dimension, that are neither Krull nor Prüfer, even locally.
8. Questions
We close with an incomplete but intriguing list of questions suitable for
further research on perinormality and global perinormality.
Question 1. Let k be a field, and let X, Y, Z, W be indeterminates over that
field. Is the normal hypersurface R = k[X, Y, Z, W ]/(XW − Y Z) globally
perinormal or not? Note that its divisor class group is well-known to be
infinite cyclic [Fos73, Proposition 14.8]. If “yes”, this answer would mean
that the converse to Theorem 6.4(1) is false in dimension 3. If “no”, this
answer would provide evidence that the converse may be true.
Question 2. Let R be a perinormal domain, K its fraction field, L a (finite?)
extension field of K, and S the integral closure of R in L. Is S perinormal?
In the non-Noetherian case, this question is interesting even when we
further stipulate that R is integrally closed.
Question 3. Let R be an integral domain and X an indeterminate over R.
What can one say about the perinormality of R in relation to the perinormality of R[X]? Does one imply the other?
Question 4. Let R be a Noetherian local domain whose completion R̂ is
also a domain. If R is perinormal, is R̂ perinormal as well? What about the
converse?
Question 5. Is every completely integrally closed domain perinormal?
Acknowledgments
We wish to thank David Dobbs, Tiberiu Dumitrescu, Alan Loper, Karl
Schwede, and Dana Weston for interesting and useful conversations at various stages of the project. We also wish to thank the referee for many useful
comments and improvements, especially for Propositions 2.4 and 6.2, which
are due to the referee.
References
[AB69] Aldo Andreotti and Enrico Bombieri, Sugli omeomorfismi delle varietà algebriche, Ann. Scuola Norm. Sup. Pisa (3) 23 (1969), 431–450.
[CS46] Irvin S. Cohen and Abraham Seidenberg, Prime ideals and integral dependence, Bull. Amer. Math. Soc. 52 (1946), 252–261.
[Dav64] Edward D. Davis, Overrings of commutative rings. II. Integrally closed overrings,
Trans. Amer. Math. Soc. 110 (1964), 196–212.
[Dob73] David E. Dobbs, On going down for simple overrings, Proc. Amer. Math. Soc.
39 (1973), no. 3, 515–519.
[Dob74] David E. Dobbs, On going down for simple overrings. II, Comm. Algebra 1 (1974), 439–458.
[DP76] David E. Dobbs and Ira J. Papick, On going down for simple overrings. III, Proc.
Amer. Math. Soc. 54 (1976), 35–38.
[Fer03] Daniel Ferrand, Conducteur, descente et pincement, Bull. Soc. Math. France 131
(2003), no. 4, 553–585.
[FO70] Daniel Ferrand and Jean-Pierre Olivier, Homomorphisms minimaux d’anneaux,
J. Algebra 16 (1970), 461–471.
[Fon80] Marco Fontana, Topologically defined classes of commutative rings, Ann. Mat.
Pura Appl. (4) 123 (1980), 331–355.
[Fos73] Robert M. Fossum, The divisor class group of a Krull domain, Ergebnisse
der Mathematik und ihrer Grenzgebiete, vol. 74, Springer-Verlag, New York-Heidelberg, 1973.
[FT93] Albrecht Fröhlich and Martin J. Taylor, Algebraic number theory, Cambridge
Studies in Advanced Mathematics, vol. 27, Cambridge Univ. Press, Cambridge,
1993.
[Gil72] Robert Gilmer, Multiplicative ideal theory, Pure and Applied Mathematics,
no. 12, Dekker, New York, 1972.
[GO64] Robert Gilmer and Jack Ohm, Integral domains with quotient overrings, Math.
Ann. 153 (1964), 97–103.
[Gol64] Oscar Goldman, On a special class of Dedekind domains, Topology 3 (1964),
no. suppl. 1, 113–118.
[GT80] Silvio Greco and Carlo Traverso, On seminormal schemes, Comp. Math. 40
(1980), no. 3, 325–365.
[HR04] William Heinzer and Moshe Roitman, Well-centered overrings of an integral domain, J. Algebra 272 (2004), no. 2, 435–455.
[Mat86] Hideyuki Matsumura, Commutative ring theory, Cambridge Studies in Advanced
Mathematics, no. 8, Cambridge Univ. Press, Cambridge, 1986, Translated from
the Japanese by M. Reid.
[Nag62] Masayoshi Nagata, Local rings, Interscience Tracts in Pure and Appl. Math.,
no. 13, Interscience, 1962.
[PPL06] Gabriel Picavet and Martine Picavet-L’Hermitte, About minimal morphisms,
Multiplicative ideal theory in commutative algebra, Springer, New York, 2006,
pp. 369–386.
[Rat80] Louis J. Ratliff, Jr., Notes on three integral dependence theorems, J. Algebra 66
(1980), no. 2, 600–619.
[Ric65] Fred Richman, Generalized quotient rings, Proc. Amer. Math. Soc. 16 (1965),
794–799.
[Sil86] Joseph H. Silverman, The arithmetic of elliptic curves, Graduate Texts in Mathematics, vol. 106, Springer-Verlag, New York, 1986.
[Swa80] Richard G. Swan, On seminormality, J. Algebra 67 (1980), 210–229.
[Vit11] Marie A. Vitulli, Weak normality and seminormality, Commutative algebra:
Noetherian and non-Noetherian perspectives, Springer-Verlag, 2011, pp. 441–
480.
[Yan83] Hiroshi Yanagihara, Some results on weakly normal ring extensions, J. Math.
Soc. Japan 35 (1983), no. 4, 649–661.
Department of Mathematical Sciences, George Mason University, Fairfax,
VA 22030
E-mail address: nepstei2@gmu.edu
Department of Mathematical Sciences, George Mason University, Fairfax,
VA 22030
E-mail address: jshapiro@gmu.edu
Submitted to the Annals of Statistics
arXiv: arXiv:1610.03944
RANK VERIFICATION FOR EXPONENTIAL FAMILIES
By Kenneth Hung and William Fithian
arXiv:1610.03944v2 [stat.ME] 3 Jul 2017
University of California, Berkeley
Many statistical experiments involve comparing multiple population groups. For example, a public opinion poll may ask which of
several political candidates commands the most support; a social scientific survey may report the most common of several responses to
a question; or, a clinical trial may compare binary patient outcomes
under several treatment conditions to determine the most effective
treatment. Having observed the “winner” (largest observed response)
in a noisy experiment, it is natural to ask whether that candidate,
survey response, or treatment is actually the “best” (stochastically
largest response). This article concerns the problem of rank verification — post hoc significance tests of whether the orderings discovered
in the data reflect the population ranks. For exponential family models, we show under mild conditions that an unadjusted two-tailed
pairwise test comparing the first two order statistics (i.e., comparing
the “winner” to the “runner-up”) is a valid test of whether the winner
is truly the best. We extend our analysis to provide equally simple
procedures to obtain lower confidence bounds on the gap between the
winning population and the others, and to verify ranks beyond the
first.
1. Introduction.
1.1. Motivating Example: Iowa Republican Caucus Poll. Table 1 shows the result of a Quinnipiac University poll asking 890 Iowa Republicans their preferred candidate for the Republican
presidential nomination (Quinnipiac University Poll Institute, 2016). Donald Trump led with
31% of the vote, Ted Cruz came second with 24%, Marco Rubio third with 17%, and ten other
candidates including “Don’t know” trailed behind.
Rank   Candidate   Result   Votes
1*     Trump       31%      276
2*     Cruz        24%      214
3*     Rubio       17%      151
4*     Carson      8%       71
5      Paul        4%       36
6      Bush        4%       36
7      Huckabee    3%       27
...    ...         ...      ...
Table 1
Results from a February 1, 2016 Quinnipiac University poll of 890 Iowa Republicans. To compute the last
column (Votes), we make the simplifying assumption that the reported percentages in the third column (Result)
are raw vote shares among survey respondents. The asterisks indicate that the rank is verified at level 0.05 by a
stepwise procedure.
MSC 2010 subject classifications: Primary 62F07; secondary 62F03
Keywords and phrases: ranking, selective inference, exponential family
Seeing that Trump leads this poll, several salient questions may occur to us: Is Trump really
winning, and if so by how much? Furthermore, is Cruz really in second, is Rubio really in third,
and so on? Note that there is implicitly a problem of multiple comparisons here, because if
Cruz had led the poll instead, we would be asking a different set of questions (“Is Cruz really
winning,” etc.). Indeed, the selection issue appears especially pernicious due to the so-called
“winner’s curse”: given that Trump leads the poll, it more likely than not overestimates his
support.
Nevertheless, if we blithely ignore the selection issue, we might carry out the following
analyses to answer the questions we posed before at significance level α = 0.05. We assume
for simplicity that the poll represents a simple random sample of Iowa Republicans; i.e., that
the data are a multinomial sample of size 890 and underlying probabilities (πTrump , πCruz , . . .).
(The reality is a bit more complicated: before releasing the data, Quinnipiac has post-processed
it to make the reported result more representative of likely caucus-goers. The raw data is
proprietary.)
1. Is Trump really winning? If Trump and Cruz were in fact tied, then Trump’s share of their
combined 490 votes would be distributed as Binomial (490, 0.5). Because the (two-tailed)
p-value for this pairwise test is p = 0.006, we reject the null and conclude that Trump is
really winning.
2. By how much? Using an exact 95% interval for the same binomial model, we conclude
Trump has at least 7.5% more support than Cruz (i.e., πTrump ≥ 1.075 πCruz ) and also
leads the other candidates by at least as much.
3. Is Cruz in second, Rubio in third, etc.? We can next compare Cruz to Rubio just as we
compared Trump to Cruz (again rejecting because 214 is significantly more than half of
365), then Rubio to Carson, and so on, continuing until we fail to reject. The first four
comparisons are all significant at level 0.05, but Paul and Bush are tied so we stop.
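As a quick cross-check (ours, not part of the original analysis), the three computations above can be reproduced in a few lines of Python. The sketch below assumes the Table 1 counts are raw vote totals and uses SciPy's exact binomial test and Clopper-Pearson interval (requires scipy >= 1.7):

    # Sketch (ours): Procedures 1-3 on the Iowa poll counts from Table 1.
    from scipy import stats

    votes = {"Trump": 276, "Cruz": 214, "Rubio": 151, "Carson": 71,
             "Paul": 36, "Bush": 36, "Huckabee": 27}
    names = list(votes)

    # Procedure 1: two-tailed binomial test of the winner vs. the runner-up.
    res = stats.binomtest(votes["Trump"], votes["Trump"] + votes["Cruz"], 0.5)
    print(res.pvalue)                      # approx. 0.006

    # Procedure 2: exact 95% interval for Trump's share of the Trump+Cruz votes,
    # converted into a lower bound on pi_Trump / pi_Cruz.
    lo = res.proportion_ci(confidence_level=0.95).low
    print(lo / (1 - lo))                   # approx. 1.075

    # Procedure 3: step down the observed ranking until a comparison fails.
    for a, b in zip(names, names[1:]):
        p = stats.binomtest(votes[a], votes[a] + votes[b], 0.5).pvalue
        print(a, "vs", b, "p =", round(p, 4))
        if p > 0.05:
            break                          # Paul vs. Bush is tied, so we stop here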
Perhaps surprisingly, all of the three procedures described above are statistically valid despite
their ostensibly ignoring the implicit multiple-comparisons issue. In other words, Procedures 1
and 2 control the Type I error rate at level α and Procedure 3 controls the familywise error
rate (FWER) at level α. The remainder of this article is devoted to justifying these procedures
for the multinomial family, and extending to analogous procedures in other exponential family
settings. While methods analogous to Procedures 1 and 2 have been justified previously for
balanced independent samples from log-concave location families (Gutmann and Maymin, 1987;
Stefansson, Kim and Hsu, 1988), they have not been justified in exponential families before
now.
1.2. Generic Problem Setting and Main Result. Generically, we will consider data drawn
from an exponential family model with density
(1)    X ∼ exp{ θ′x − ψ(θ) } g(x) ,
with respect to either the Lebesgue measure on Rn or counting measure on Zn . We assume further that g (x) is symmetric with respect to permutation, and Schur concave, a mild technical
condition defined in Section 2. In addition to the multinomial family, model (1) also encompasses settings such as comparing independent binomial treatment outcomes in a clinical trial,
competing sports teams under a Bradley–Terry model, entries of a Dirichlet distribution, and
many more; see Section 2 for these and other examples.
We will generically use the term population to refer to the treatment group, sports team,
political candidate, etc. represented by a given random variable Xj . As we will see, θj ≥ θk if
and only if Xj is stochastically larger than Xk ; thus, there is a well-defined stochastic ordering
of the populations that matches the ordering of the entries of θ. We will refer to the population
with maximal θj as the best, the population with second largest θj as the second best, the one
with maximal Xj as the winner, and the one with the second-largest Xj as the runner-up,
where ties between observations are broken randomly to obtain a full ordering. Following the
convention in the ranking and selection literature, we assume that if there are multiple largest
θj , then one is arbitrarily marked as the best. Note that in cases where it is more interesting to
ask which is the smallest population (for example, if Xj is the number of patients on treatment j
who suffer a heart attack during a trial) we can change the variables to −X and the parameters
to −θ; this does not affect the Schur concavity assumption.
Write the order statistics of X as
X[1] ≥ X[2] ≥ · · · ≥ X[n] ,
where [j] will denote the random index for the j-th order statistic. Thus, θ[j] is the entry of θ
corresponding to the j-th order statistic of X (so θ[1] might not equal maxj θj , for example).
In each of the above examples, there is a natural exact test we could apply to test θj = θk
for any two fixed populations j and k. In the multinomial case, we would apply the conditional
binomial test based on the combined total Xj + Xk as discussed in the previous section. For
the case of independent binomials we would apply Fisher’s exact test, again conditioning on
Xj + Xk . These are both examples of a generic UMPU pairwise test in which we condition on
the other n − 2 indices (notated X\{j,k} ) and Xj + Xk , and reject the null if Xj is outside the
α/2 and 1 − α/2 quantiles of the conditional law Lθj =θk (Xj | Xj + Xk , X\{j,k} ). Crucially, this
null distribution does not depend on the value of θ provided that θj = θk . We call this test
the (two-tailed) unadjusted pairwise test since it makes no explicit adjustment for selection.
Similarly, inverting this test for other values of θj − θk yields an unadjusted pairwise confidence
interval. (To avoid trivialities in the discrete case, we assume these procedures are appropriately
randomized at the rejection thresholds to give exact level-α control.)
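For concreteness (our addition, using only standard exponential family calculations), the conditional null laws behind the unadjusted pairwise test in the two running examples are

    % multinomial counts: conditioning makes the pair total a fair coin
    \mathcal{L}_{\theta_j = \theta_k}\bigl(X_j \mid X_j + X_k = s,\ X_{\setminus\{j,k\}}\bigr)
        = \mathrm{Binomial}\bigl(s, \tfrac{1}{2}\bigr);
    % independent Binomial(m, p_j) populations: Fisher's exact test
    \Pr_{\theta_j = \theta_k}\bigl(X_j = x \mid X_j + X_k = s\bigr)
        = \binom{m}{x}\binom{m}{s - x} \Big/ \binom{2m}{s}.

In both cases the conditional law does not depend on the common parameter value, which is what makes the pairwise test exact.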
Generalizing the procedures described in Section 1.1 we obtain the following:
1. Is the winner really the best? To test the hypothesis H : θ[1] ≤ max_{j≠[1]} θj : Carry out
the unadjusted pairwise test comparing the winner to the runner-up. If the test rejects
at level α, reject H and declare that the winner is really the best.
2. By how much? To construct a lower confidence bound for θ[1] − max_{j≠[1]} θj : Construct
the unadjusted pairwise confidence interval comparing the winner to the runner-up, and
report the lower confidence bound obtained for θ[1] − θ[2] if it is nonnegative, report −∞
otherwise.
3. Is the runner-up really the second best, etc.? Continue by comparing the runner-up to
the second runner-up, again using the unadjusted pairwise test, and so on down the list
comparing adjacent values. Stop the first time the test does not reject; if there are j
rejections, declare that
θ[1] > θ[2] > · · · > θ[j] > max_{k>j} θ[k] .
Procedures 2 and 3 are conservative stand-ins for exact, but slightly more involved, conditional
inference procedures. In particular, as we will see, reporting −∞ in Procedure 2 is typically
much more conservative than is necessary.
We now state our main theorem: under a mild technical assumption, Procedures 1–3 described above are statistically valid, even accounting for the selection.
Theorem 1. Assume the model (1) holds and g(x) is a Schur-concave function. Then:
1. Procedure 1 has exact level α conditional on H being true (conditional on the best population not winning), and marginally has level α · P(H is true) ≤ α(1 − 1/n).
2. Procedure 2 gives a conservative 1 − α lower confidence bound for θ[1] − max_{j≠[1]} θj .
3. Procedure 3 is a conservative stepwise procedure with FWER no larger than α.
Note that Theorem 1 implies that we could actually replace α with (n/(n − 1)) α to obtain a more
powerful version of Procedure 1 when n is not too large.
We define Schur-concavity and discuss its properties in Section 2. Because any log-concave
and symmetric function is Schur-concave, Theorem 1 applies to all of the cases discussed above.
The proof combines the conditional selective-inference framework of Fithian, Sun and Taylor
(2014) with classical multiple-testing methods, as well as new technical tools involving majorization and Schur-concavity.
Note that these procedures make an implicit adjustment for selection because they use two-tailed, rather than one-tailed, unadjusted tests. If we instead based our tests on an independent
realization X ∗ = (X1∗ , . . . , Xn∗ ) then, for example, Procedure 1 could use a right-tailed version
of the unadjusted pairwise test. In the case n = 2, Procedure 1 amounts to a simple two-tailed
test of the null hypothesis θ1 = θ2 , and it is intuitively clear that a one-tailed test would be too
liberal. More surprising is that, no matter how large n is, Procedures 1–3 require no further
adjustment beyond what is required when n = 2.
1.3. Related work. Rank verification has been studied extensively in the ranking and selection literature. See Gupta and Panchapakesan (1971, 1985) for surveys of the subset selection
literature. The two main formulations of ranking and selection are closely related to procedures for multiple comparisons with the best treatment (Edwards and Hsu, 1983; Hsu, 1984),
but more powerful methods are available in some cases for procedures involving only the first
sample rank, the problem of comparisons with the sample best; see Hsu (1996) for an overview
and discussion of the relationships between these problems.
Comparisons with the sample best have been especially well-studied and the validity of Procedures 1 and 2 has been established in a different setting: balanced independent samples from
log-concave location families. Gutmann and Maymin (1987) prove the validity of Procedure 1
in this setting, and Bofinger (1991); Maymin and Gutmann (1992); Karnnan and Panchapakesan (2009) give similar results for other models including scale and location-scale families.
Stefansson, Kim and Hsu (1988) provide an alternative proof for the validity of Procedure 1
in the same setting, leading to a lower confidence bound analogous to that of Procedure 2;
interestingly, the proof involves a very early application of the partitioning principle, later developed into a fundamental technique in multiple comparisons (Finner and Strassburger, 2002).
These results use very different technical tools than the ones we use here, require independence
between the different groups (ruling out, for example, the multinomial family), and do not
address the exponential family case. Because most exponential families are not location-scale
families (the Gaussian being a notable exception), and because our results involve more general
dependence structures, both our proof techniques and our technical results are complementary
to the techniques and results in the above works.
For the multinomial case, Gupta and Nagel (1967), discussed in Section 3.1, remain the state
of the art in finite-sample tests; Gupta and Wong (1976) discuss related approaches for Poisson
models. Berger (1980) mentions an alternative, simpler rule which performs a binomial test
on each population, but its power does not necessarily increase as the size m of observations
increases in cases like Multinomial(m; 2/3, 1/3, 0, . . . , 0). Nettleton (2009) proves validity for an
asymptotic version of the winner-versus-runner-up test, and Gupta and Liang (1989) consider
an empirical Bayes approach for selecting the best binomial population wherein a parametric
prior distribution is assumed for the success probabilities for the different populations. Ng and
Panchapakesan (2007) discuss an exact test for a modified problem in which the maximum
count is fixed instead of the total count; that is, we sample until the leading candidate has
at least m votes. As Section 3.1 shows, our test can be much more powerful than the one in
Gupta and Nagel (1967), especially if there are many candidates, because of the way our critical
rejection threshold for X[1] − X[2] adapts to the data. Thus, our work closes a significant gap
in the ranking and selection literature, extending the result of Gutmann and Maymin (1987)
and others to new families like the multinomial, independent binomials, and many others.
1.4. Outline. Section 2 defines Schur concavity, and gives several examples satisfying this
condition. Section 3 justifies Procedure 1 and compares its power to that of Gupta and Nagel
(1967). Sections 4 and 5 justify Procedures 2 and 3 respectively, and Section 6 concludes.
2. Majorization and Schur concavity.
2.1. Definitions and basic properties. We start by reviewing the notion of majorization,
defined on both Rn and Zn .
Definition 1. For two vectors a and b in Rn (or Zn ), suppose sorting the two vectors in
descending order gives a(1) ≥ · · · ≥ a(n) and b(1) ≥ · · · ≥ b(n) . We say that a ⪰ b (a majorizes
b) if for 1 ≤ i < n,
a(1) + · · · + a(i) ≥ b(1) + · · · + b(i) ,
and
a(1) + · · · + a(n) = b(1) + · · · + b(n) .
This forms a partial order in Rn (or Zn ).
Intuitively, majorization is a partial order that monitors the evenness of a vector: the more
even a vector is, the “smaller” it is. There are two properties of majorization that we will use
in the proofs.
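A small numerical check (ours, not from the paper) makes the definition concrete: for instance, (3, 1, 0) majorizes (2, 1, 1), while (3, 1, 1, 1) and (2, 2, 2, 0) are incomparable.

    # Sketch (ours): test Definition 1 directly from the sorted partial sums.
    def majorizes(a, b):
        a, b = sorted(a, reverse=True), sorted(b, reverse=True)
        if len(a) != len(b) or sum(a) != sum(b):
            return False
        partial = 0
        for x, y in zip(a, b):
            partial += x - y
            if partial < 0:
                return False
        return True

    print(majorizes((3, 1, 0), (2, 1, 1)))          # True
    print(majorizes((3, 1, 1, 1), (2, 2, 2, 0)))    # False
    print(majorizes((2, 2, 2, 0), (3, 1, 1, 1)))    # False: incomparable pair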
Lemma 2.
1. Suppose (x1 , x2 , x3 , . . .) and (x1 , y2 , y3 , . . .) are two vectors in Rn . Then
(x1 , x2 , x3 , . . .) ⪰ (x1 , y2 , y3 , . . .) if and only if (x2 , x3 , . . .) ⪰ (y2 , y3 , . . .) .
2. (Principle of transfer) If x1 > x2 and t ≥ 0, then
(x1 + t, x2 , x3 , . . .) ⪰ (x1 , x2 + t, x3 , . . .) .
If t ≤ 0, the majorization is reversed.
Proof.
1. The property follows from an equivalent formulation of majorization listed in Marshall,
Olkin and Arnold (2010), where x ⪰ y if and only if
∑_{j=1}^n xj = ∑_{j=1}^n yj    and    ∑_{j=1}^n (xj − a)+ ≥ ∑_{j=1}^n (yj − a)+    for all a ∈ R.
2. Proved in Marshall, Olkin and Arnold (2010).
Definition 2.
A function g is Schur-concave if x ⪰ y implies g(x) ≤ g(y).
A Schur-concave function is symmetric by default since a ⪰ b and b ⪰ a if and only if b is a
permutation of the coordinates of a. Conversely a symmetric and log-concave function is Schur-concave (Marshall, Olkin and Arnold, 2010). Interestingly, Gupta, Huang and Panchapakesan
(1984) also show that, in the context of independent location families, Schur concavity of the
probability density is equivalent to monotone likelihood ratio.
2.2. Examples. Many common exponential family models have Schur-concave carrier densities. Below we give a few examples:
Example 1 (Independent binomial treatment outcomes in a clinical trial). If each of $n$ different treatments is applied to $m$ patients independently, the number of positive outcomes $X_j$ for treatment $j$ is $\mathrm{Binomial}(m, p_j)$. The best treatment would be the treatment with the highest success probability $p_j$. The joint distribution of $X$ is given by
$$p(x) \propto \exp\Bigl(\sum_j x_j \log\frac{p_j}{1-p_j}\Bigr)\, \frac{1}{x_1!\,(m-x_1)! \cdots x_n!\,(m-x_n)!}.$$
The carrier measure above is Schur-concave. The unadjusted pairwise test in this family is
Fisher’s exact test.
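As a concrete illustration (ours, not from the paper), the pairwise comparison of the winner against the runner-up among independent binomial arms can be run with SciPy's Fisher's exact test; the treatment counts below are made up.

```python
from scipy.stats import fisher_exact

def winner_vs_runner_up_pvalue(successes, m):
    """Unadjusted pairwise test of the winner against the runner-up among
    independent Binomial(m, p_j) arms, via two-sided Fisher's exact test."""
    x = sorted(successes, reverse=True)
    x1, x2 = x[0], x[1]
    table = [[x1, m - x1], [x2, m - x2]]
    _, p = fisher_exact(table, alternative="two-sided")
    return p

# Hypothetical trial: 5 treatments, 30 patients each (illustrative counts).
counts = [22, 14, 13, 12, 10]
print(winner_vs_runner_up_pvalue(counts, m=30))
```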
Example 2 (Competitive sports under the Bradley–Terry model). Suppose $n$ players compete in a round robin tournament, where player $j$ has ability $\theta_j$, and the probability of player $j$ winning against player $k$ is
$$\frac{e^{\theta_j - \theta_k}}{1 + e^{\theta_j - \theta_k}} = \frac{e^{(\theta_j - \theta_k)/2}}{e^{(\theta_j - \theta_k)/2} + e^{(\theta_k - \theta_j)/2}}.$$
Let $Y_{jk}$ be an indicator for the match between player $j$ and $k$, where we take $Y_{jk} = 1$ if $j$ beats $k$ and $Y_{jk} = 0$ if $k$ beats $j$. For symmetry, we will also adopt the convention that $Y_{jk} + Y_{kj} = 1$. Thus the joint distribution of $Y = (Y_{jk})_{j \neq k}$ is
$$p(y) \propto \exp\Bigl(\sum_j 2\theta_j \sum_{k \neq j} y_{jk}\Bigr) = \exp\bigl(2\theta^{\top} x\bigr),$$
where $x_j = \sum_{k \neq j} y_{jk}$. In other words, if $X_j$ is the number of wins by player $j$, then $X = (X_1, \ldots, X_n)$ is a sufficient statistic with distribution
$$p(x) = \exp\bigl(2\theta^{\top} x\bigr)\, g(x),$$
where $g(x)$ is a function that counts the number of possible tournament results giving the net win vector $x$. A bijection proof shows that $g$ is indeed Schur-concave. Therefore, we can use Procedures 1–3 to compare player qualities.
After conditioning on U (X) = (X1 + X2 , X3 , . . . , Xn ), and under the assumption θ1 = θ2 ,
every feasible configuration of Y is equally likely. If n is not too large (say, no more than 40
players), we can find the conditional distribution of X1 by enumerating over the configurations;
for larger n, computation might pose a more serious problem, requiring us for example to
compute the p-value using Markov Chain Monte Carlo techniques (Besag and Clifford, 1989).
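For small tournaments the conditional distribution can be obtained by brute-force enumeration, since every feasible configuration is equally likely under the null. The sketch below (ours, not code from the paper) enumerates all outcome configurations for a toy 4-player round robin:

```python
from itertools import combinations, product
import numpy as np

def conditional_winner_pvalue(x_obs):
    """Enumerate all round-robin outcomes for a small number of players, keep
    those matching U(X) = (X1 + X2, X3, ..., Xn), and return a conditional
    two-sided p-value for X1 under theta_1 = theta_2 (x_obs is the win vector)."""
    n = len(x_obs)
    pairs = list(combinations(range(n), 2))
    target = (x_obs[0] + x_obs[1],) + tuple(x_obs[2:])
    cond = []
    for outcome in product([0, 1], repeat=len(pairs)):
        wins = [0] * n
        for (j, k), y in zip(pairs, outcome):
            wins[j if y else k] += 1
        if (wins[0] + wins[1],) + tuple(wins[2:]) == target:
            cond.append(wins[0])
    cond = np.array(cond)
    # Two-sided p-value, using the symmetry of X1 and X2 under the null.
    tail = np.mean(cond >= x_obs[0])
    return min(1.0, 2 * tail)

print(conditional_winner_pvalue([3, 2, 1, 0]))  # toy 4-player tournament
```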
Example 3 (Comparing the variances of different normal populations). Suppose there are $n$ normal populations with laws $N(\mu_j, \sigma_j^2)$ and $m$ independent observations from each of them. The sample variance for population $j$ can be denoted as $R_j$. By Cochran's theorem, $(m - 1) R_j \sim \sigma_j^2 \chi^2_{m-1}$, and thus the joint distribution of $R$ is
$$r \sim \prod_{j=1}^{n} \Bigl(\frac{(m - 1) r_j}{\sigma_j^2}\Bigr)^{(m-3)/2} e^{-(m-1) r_j / 2\sigma_j^2}\, 1_{\{r_j > 0\}} \propto \exp\Bigl(-\frac{m-1}{2\sigma_1^2} r_1 - \cdots - \frac{m-1}{2\sigma_n^2} r_n\Bigr) \prod_{j=1}^{n} r_j^{(m-3)/2}\, 1_{\{r > 0\}}.$$
The carrier measure is $\prod_{j=1}^{n} r_j^{(m-3)/2}\, 1_{\{r > 0\}}$, which is Schur-concave. Thus, we can use Procedures 1–3 to find populations with the smallest or largest variances. In this example, the distribution of $X_1/(X_1 + X_2)$ conditional on $(X_1 + X_2, X_3, \ldots, X_n)$ is distributed as $\mathrm{Beta}(m/2, m/2)$ under the null, or equivalently $X_1/X_2$ is conditionally distributed as $F_{m,m}$; hence a (two-tailed) $F$-test is valid for comparing the top two populations.
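A minimal illustration (ours, not from the paper) of the resulting two-tailed $F$-test for the two largest sample variances:

```python
import numpy as np
from scipy.stats import f as f_dist

def top_two_variance_pvalue(samples):
    """Two-tailed F-test comparing the two largest sample variances among n
    normal populations, each with m observations (illustrating Example 3)."""
    variances = np.sort([np.var(s, ddof=1) for s in samples])[::-1]
    r1, r2 = variances[0], variances[1]
    m = len(samples[0])
    # Degrees of freedom follow the text's F_{m,m}; with sample variances one
    # may instead prefer m - 1 in each slot.
    stat = r1 / r2
    p_one_sided = f_dist.sf(stat, m, m)
    return min(1.0, 2 * p_one_sided)

rng = np.random.default_rng(0)
samples = [rng.normal(0, sigma, size=20) for sigma in (1.0, 1.0, 1.5, 2.0)]
print(top_two_variance_pvalue(samples))
```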
3. Verifying the Winner: Is the Winner Really the Best?. First, we justify the
notion that the population with largest θj is also the largest population in stochastic order:
Theorem 3. For a multivariate exponential family with a symmetric carrier distribution,
X1 ≥ X2 in stochastic order if and only if θ1 ≥ θ2 .
Proof. It suffices to prove the “if” part, as the “only if” part follows from swapping
the role of θ1 and θ2 . For any fixed a, and x1 ≥ a and x2 < a, we have x1 > x2 and
exp (θ1 x1 + θ2 x2 + · · · + θn xn − ψ (θ)) g (x) ≥ exp (θ1 x2 + θ2 x1 + · · · + θn xn − ψ (θ)) g (x) .
Integrating both sides over the region {x : x1 ≥ a, x2 < a} gives
P [X1 ≥ a, X2 < a] ≥ P [X1 < a, X2 ≥ a] .
Now adding P [X1 ≥ a, X2 ≥ a] to both probabilities gives
P [X1 ≥ a] ≥ P [X2 ≥ a] ,
meaning that X1 is greater than X2 in stochastic order.
Before proving our main result for Procedure 1, we give the following lemmas, the first of
which clarifies a key idea in the proof, and the second is needed for a sharper bound in (2).
Lemma 4 (Berger, 1982). If $p_j$ are valid p-values for testing null hypotheses $H_{0j}$, then $p^* = \max_j p_j$ is a valid p-value for the union null (i.e. disjunction null) hypothesis $H_0 = \bigcup_j H_{0j}$.
Proof. Under H0 , one of the H0j is true; without loss of generality assume it is H01 . Then,
P [p∗ ≤ α] ≤ P [p1 ≤ α] ≤ α.
Therefore p∗ is a valid p-value for the union null hypothesis.
Lemma 5. If $\theta_1 \ge \max_{j \neq 1} \theta_j$, then $P[1 \text{ wins}] \ge \frac{1}{n}$.
Proof. We can prove this with a coupling argument: for any sequence $x_1, x_2, \ldots, x_n$, define $\tau(x) = \{\tau(x_j)\}_{j=1,\ldots,n}$, obtained by swapping $x_1$ with the largest value in the sequence $x$. Hence
$$\exp\bigl(\theta_1 \tau(x_1) + \cdots + \theta_n \tau(x_n) - \psi(\theta)\bigr)\, g(x) \ge \exp\bigl(\theta_1 x_1 + \cdots + \theta_n x_n - \psi(\theta)\bigr)\, g(x).$$
If we integrate both sides over $\mathbb{R}^n$ (or $\mathbb{Z}^n$ in the case of counting measure), the right-hand side gives 1. Since $\tau$ is an $n$-to-1 mapping, the left-hand side is $n$ times the integral over $\{x_1 \ge \max_{j>1} x_j\}$. In other words,
$$n\, P[1 \text{ wins}] \ge 1$$
as desired.
In the case of counting measure, the above argument follows if a subscript is attached to
identical observations uniformly to ensure strict ordering.
We are now ready to prove our result for Procedure 1, restated here for reference.
Part 1 of Theorem 1. Assume the model (1) holds and g (x) is a Schur-concave function.
Procedure 1 (the unadjusted pairwise test) has level α conditional on the best population not
winning.
Proof. Let $j^*$ denote the (fixed) index of the best population, so $\theta_{j^*} \ge \max_{j \neq j^*} \theta_j$. The type I error, the probability of incorrectly declaring any other $j$ to be the best, is
$$P\Bigl[\bigcup_{j \neq j^*} \{\text{declare } j \text{ best}\}\Bigr] \le \sum_{j \neq j^*} P[\text{declare } j \text{ best} \mid j \text{ wins}]\, P[j \text{ wins}],$$
recalling that ties are broken randomly, so there is only one winner in any realization. Thus, it is enough to bound $P_\theta[\text{declare } j \text{ best} \mid j \text{ wins}] \le \alpha$, for each $j \neq j^*$, and for all $\theta$ with $j^* \in \arg\max_j \theta_j$. Then we will have
$$P\Bigl[\bigcup_{j \neq j^*} \{\text{declare } j \text{ best}\}\Bigr] \le \sum_{j \neq j^*} \alpha \cdot P[j \text{ wins}] = \alpha P[j^* \text{ does not win}] \le \frac{n-1}{n}\,\alpha, \tag{2}$$
where the last inequality follows from Lemma 5.
We start by assuming that we are working with the Lebesgue measure rather than the
counting measure (eliminating the possibility of ties). The necessary modification of the proof
for the counting measure case is provided at the end of this proof.
To minimize notational clutter, we consider only the case where the winner is 1, i.e. X1 ≥
maxj>1 Xj . Furthermore, we will denote the runner-up with 2. This is not necessarily true,
but we will use it as a shorthand to simplify our notation. For other cases, the following proof
remains valid under relabeling and can thus be applied. In this case, we will test the null
hypothesis H01 : θ1 ≤ maxj>1 θj , which is the union of the null hypotheses H01j : θ1 ≤ θj
for j ≥ 2. For each of these we can construct an exact p-value p1j , which is valid under H01j
conditional on A1 , the event that X1 is the winner. Hence by Lemma 4, a test that rejects when
p1∗ = maxj p1j ≤ α is valid for H01 conditional on A1 . Procedure 1 performs an unadjusted
pairwise test comparing X1 to X2 . Hence it is sufficient to show that p12 = p1∗ and that
rejecting when p12 ≤ α coincides with the unadjusted pairwise test.
Our proof has three main parts: (1) deriving p1j for each j ≥ 2, (2) showing that p12 ≥ p1j
for each j ≥ 2, and (3) showing that p12 is an unadjusted pairwise p-value.
Derivation of p1j. Following the framework in Fithian, Sun and Taylor (2014), we first construct the p-values by conditioning on the selection event where the winner is 1:
$$A_1 = \bigl\{X_1 \ge \max_{j > 1} X_j\bigr\}.$$
For convenience, we let
$$D_{jk} = \frac{X_j - X_k}{2} \quad\text{and}\quad M_{jk} = \frac{X_j + X_k}{2}.$$
We then re-parametrize to replace $X_1$ and $X_j$ with $D_{1j}$ and $M_{1j}$. The distribution is now an exponential family with sufficient statistics $D_{1j}, M_{1j}, X_{\setminus\{1,j\}}$ and corresponding natural parameters $\theta_1 - \theta_j$, $\theta_1 + \theta_j$, $\theta_{\setminus\{1,j\}}$. We now consider
$$\mathcal{L}_{\theta_1 - \theta_j = 0}\bigl(D_{1j} \mid M_{1j}, X_{\setminus\{1,j\}}, A_1\bigr). \tag{3}$$
We can rewrite the selection event in terms of our new parameterization as
$$A_1 = \{X_1 \ge X_j\} \cap \bigl\{X_1 \ge \max_{k \neq 1,j} X_k\bigr\} = \{D_{1j} \ge 0\} \cap \bigl\{D_{1j} \ge \max_{k \neq 1,j} X_k - M_{1j}\bigr\}.$$
The conditional law of $D_{1j}$ in (3), in particular, is a truncated distribution:
$$p\bigl(d_{1j} \mid M_{1j}, X_{\setminus\{1,j\}}, A_1\bigr) \propto \exp\bigl((\theta_1 - \theta_j) d_{1j} + \theta_2 X_2 + \cdots + (\theta_1 + \theta_j) M_{1j} + \cdots + \theta_n X_n\bigr)\, g\bigl(M_{1j} + d_{1j}, X_2, \ldots, M_{1j} - d_{1j}, \ldots, X_n\bigr)\, 1_{A_1}$$
$$\overset{(a)}{\propto} g\bigl(M_{1j} + d_{1j}, X_2, \ldots, M_{1j} - d_{1j}, \ldots, X_n\bigr)\, 1_{A_1},$$
where at step (a), conditioning on $X_{\setminus\{1,j\}}$ and $M_{1j}$ removes dependence on $\theta_{\setminus\{1,j\}}$ and $\theta_1 + \theta_j$ respectively, while $\theta_1 - \theta_j$ is taken to be 0 under our null hypothesis. Note that we consider this as a one-dimensional distribution of $D_{1j}$ on $\mathbb{R}$, where $M_{1j}$ and $X_{\setminus\{1,j\}}$ are treated as fixed. The p-value for $H_{01j}$ is thus
$$p_{1j} = \frac{\int_{D_{1j}}^{\infty} g\bigl(M_{1j} + z, X_2, \ldots, M_{1j} - z, \ldots, X_n\bigr)\, dz}{\int_{\max\{X_2 - M_{1j},\, 0\}}^{\infty} g\bigl(M_{1j} + z, X_2, \ldots, M_{1j} - z, \ldots, X_n\bigr)\, dz}. \tag{4}$$
Finally, by construction, $p_{1j}$ satisfies
$$P_{H_{01j}}\bigl[p_{1j} < \alpha \mid M_{1j}, X_{\setminus\{1,j\}}, A_1\bigr] \le \alpha \quad \text{a.s.}$$
Marginalizing over $M_{1j}, X_{\setminus\{1,j\}}$,
$$P_{H_{01j}}\bigl[p_{1j} < \alpha \mid A_1\bigr] \le \alpha.$$
Therefore these $p_{1j}$ are indeed valid p-values.
Demonstration that p1∗ = p12 . We now proceed to show that p12 , the p-value comparing the
winner to the runner-up, is the largest of all p1j . Without loss of generality, it is sufficient to
show that p12 ≥ p13 .
From the first part of this proof, both p-values are constructed by conditioning on X\{1,2,3} .
Upon conditioning these, (X1 , X2 , X3 ) follows an exponential family distribution, with carrier
distribution
gX4 ,...,Xn (X1 , X2 , X3 ) = g (X1 , . . . , Xn ) ,
here X4 , . . . , Xn are used in the subscript as they are conditioned on and no longer considered
as variables. The first point in Lemma 2 says that the function gX4 ,...,Xn is Schur-concave as
well. We have reduced the problem to the case when n = 3: we can apply the result for n = 3
to gX4 ,...,Xn to yield p12 ≥ p13 for n > 3.
We have reduced to the case when $n = 3$. The p-values thus are
$$p_{12} = \frac{\int_{D_{12}}^{\infty} g(M_{12} + z, M_{12} - z, X_3)\, dz}{\int_{0}^{\infty} g(M_{12} + z, M_{12} - z, X_3)\, dz}, \qquad p_{13} = \frac{\int_{D_{13}}^{\infty} g(M_{13} + z, X_2, M_{13} - z)\, dz}{\int_{\max\{X_2 - M_{13},\, 0\}}^{\infty} g(M_{13} + z, X_2, M_{13} - z)\, dz}.$$
The maximum in the denominator of $p_{13}$ prompts us to consider two separate cases. First, we suppose $X_2 < M_{13}$. Changing variables such that the lower limits of both integrals in the numerators are 0, we can re-parametrize the integrals above to give
$$p_{12} = \frac{\int_{0}^{\infty} g(X_1 + z, X_2 - z, X_3)\, dz}{\int_{0}^{\infty} g(M_{12} + z, M_{12} - z, X_3)\, dz} = \frac{\int_{0}^{\infty} g(X_1 + z, X_2 - z, X_3)\, dz}{\int_{-D_{12}}^{\infty} g(X_1 + z, X_2 - z, X_3)\, dz},$$
$$p_{13} = \frac{\int_{0}^{\infty} g(X_1 + z, X_2, X_3 - z)\, dz}{\int_{0}^{\infty} g(M_{13} + z, X_2, M_{13} - z)\, dz} = \frac{\int_{0}^{\infty} g(X_1 + z, X_2, X_3 - z)\, dz}{\int_{-D_{13}}^{\infty} g(X_1 + z, X_2, X_3 - z)\, dz}.$$
To help see the re-parametrization, each of these integrals can be thought of in terms of integrals along segments and rays. For example $p_{12}$ can be represented in terms of integrals $A$ and $B$ in Figure 1. Specifically,
$$p_{12} = \frac{B}{A + B}.$$
Fig 1. The p-value $p_{12}$ can be written in terms of integral $A$ along the segment and $B$ along the ray. The diagram is drawn on a level set of $x_1 + x_2 + x_3$. The green region represents the selection event $A_1$.
Figure 2 has both p-values shown on the same diagram. Proving $p_{12} \ge p_{13}$ is the same as proving
$$\frac{B}{A + B} \ge \frac{D}{C + D} \iff \frac{B}{A} \ge \frac{D}{C}.$$
We will prove this by extending $A$ to include $\tilde{A}$ on the diagram. We denote the sum $A + \tilde{A}$ as $A'$. Formally,
$$A' = \int_{-D_{13}}^{0} g(X_1 + z, X_2 - z, X_3)\, dz \ge \int_{-D_{12}}^{0} g(X_1 + z, X_2 - z, X_3)\, dz = A. \tag{5}$$
It is thus sufficient to show that $B \ge D$ and $C \ge A'$.
Indeed from the second point in Lemma 2 we have
$$(X_1 + z, X_2 - z, X_3) \succ (X_1 + z, X_2, X_3 - z)$$
for $z \le 0$, and the majorization is reversed for $z \ge 0$. This majorization relation is indicated as the dotted line in Figure 2. So Schur-concavity shows that
$$g(X_1 + z, X_2 - z, X_3) \le g(X_1 + z, X_2, X_3 - z)$$
for $z \le 0$, and the inequality is reversed for $z \ge 0$. Taking integrals on both sides yields the desired inequality.
For the second case, where $X_2 \ge M_{13}$, the segment $C$ will reach the line $x_1 = x_2$ before it reaches $x_1 = x_3$, ending at $(X_2, X_2, X_1 - X_2 + X_3)$ instead. But we can still extend $A$ by $\tilde{A}$ to $(X_2, X_1, X_3)$. The rest of the proof follows. In either case, $p_{12} \ge p_{13}$, or more generally, $p_{12} \ge p_{1j}$ for $j > 1$. In other words, $p_{12} = p_{1*}$.
$p_{12}$ is an unadjusted pairwise p-value. Before conditioning on $A_1$, the distribution in (3) is symmetric around 0 under $\theta_1 = \theta_j$. Since the denominator of $p_{12}$ integrates over half of this symmetric distribution, it is always equal to 1/2. Thus, the one-sided conditional test at level $\alpha$ is equivalent to the one-sided unadjusted test at level $\alpha/2$, or equivalently the two-sided unadjusted pairwise test at level $\alpha$.
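In the multinomial case the conditional carrier is a binomial coefficient, so the unadjusted pairwise test reduces to an exact two-sided binomial test of $X_{[1]}$ out of $X_{[1]} + X_{[2]}$ trials at success probability 1/2. A short sketch (ours; the counts are made up, not the actual poll data):

```python
from scipy.stats import binomtest

def verify_multinomial_winner(counts, alpha=0.05):
    """Procedure 1 for multinomial counts: an unadjusted two-sided exact binomial
    test of the winner against the runner-up, conditional on their sum."""
    x = sorted(counts, reverse=True)
    x1, x2 = x[0], x[1]
    p = binomtest(x1, n=x1 + x2, p=0.5, alternative="two-sided").pvalue
    return p, p <= alpha

# Illustrative poll-style counts for 5 candidates.
print(verify_multinomial_winner([276, 222, 150, 122, 120]))
```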
Fig 2. The p-value $p_{12}$ can be written in terms of integral $A$ along the segment and $B$ along the ray; and $p_{13}$ in terms of $C$ and $D$. $A'$ refers to the sum of $A$ with the dashed-line portion labeled $\tilde{A}$, formally explained in Equation (5). The majorization relation is indicated by the dotted line.
Modification for counting measure. Now suppose the exponential family is defined on the
counting measure instead. If ties are broken independently and randomly, the end points on
the rays can be considered as “half an atom” if the coordinates are integers (or a smaller
fraction of an atom in case of a multi-way tie). The number of atoms on each ray is the same
(after the extension Ã) and the atoms on each ray can be paired up in exactly the same way
as illustrated in Figure 2, with the inequalities above still holding for each pair of the atoms.
Summing these inequalities yields our desired result.
3.1. Power Comparison in the Multinomial Case. As the construction of this test follows Fithian, Sun and Taylor (2014), it uses UMPU selective level-$\alpha$ tests for the pairwise p-values. This section compares the power of our procedure to the best previously known method for verifying multinomial ranks, by Gupta and Nagel (1967). They devise a rule to select a subset that includes the maximum $\pi_j$. In other words, if the selected subset is $J(X)$, it guarantees
$$P\Bigl[\arg\max_j \pi_j \in J(X)\Bigr] \ge 1 - \alpha. \tag{6}$$
This is achieved by finding an integer $d$, as a function of $m$, $n$ and $\alpha$, and selecting the subset
$$J(X) = \Bigl\{j : X_j \ge \max_k X_k - d\Bigr\}.$$
We take $d(m, n, \alpha)$ to be the smallest integer such that (6) holds for any $\pi$; Gupta and Nagel (1967) provide an algorithm for determining $d$.
Subset selection is closely related to testing whether the winner is the best. In particular, we can define a test that declares $j$ the best whenever $J(X) = \{j\}$. If $J(X)$ satisfies (6), this test is valid at level $\alpha$. We next compare the power of the resulting test against the power of
our Procedure 1 in a multinomial example with $\pi \propto (e^{\delta}, 1, \ldots, 1)$, for several combinations of $m$ and $n$.
Figure 3 gives the power curves for $\mathrm{Multinomial}(m, \pi)$ and
$$\pi \propto (e^{\delta}, 1, \ldots, 1),$$
for various combinations of $m$ and $n$. For their method, we use $\alpha = 0.05$; but in light of the extra factor of $\frac{n-1}{n}$ in (2), we will apply the selective procedure with $\frac{n}{n-1}\alpha$ so that the marginal type I error rates of both procedures are controlled at $\alpha$. Their test coincides with our test at $n = 2$; however as $n$ grows, the selective test shows significantly more power than Gupta and Nagel's test.
Fig 3. Power curves as a function of δ. The plots in the first row all have m = 50 and the second row m = 250.
The solid line and the dashed line are the power for the selective test and Gupta and Nagel’s test, respectively.
To interpret, e.g., the upper right panel of Figure 3, suppose that in a poll of $m = 50$ respondents, one candidate enjoys 30% support and the other $n - 1 = 9$ split the remainder ($\delta = \log\frac{0.3}{0.7/9} \approx 1.35$). Then our procedure has power approximately 0.3 to detect the best candidate, while Gupta and Nagel's procedure has power around 0.1.
To understand why our method is more powerful, note that both procedures operate by
comparing X[1] − X[2] to some threshold, but the two methods differ in how that threshold
is determined. The threshold from Gupta and Nagel (1967) is fixed and depends on m and n
alone, whereas in our procedure the threshold depends on X[1] + X[2] , a data-adaptive choice.
The difference between the two methods is amplified when $n$ is large and $\pi_{(1)} \ll 1/2$. In that case, $d$ from Gupta and Nagel is usually computed based on the worst-case scenario $\pi = (\tfrac{1}{2}, \tfrac{1}{2}, 0, \ldots, 0)$; i.e. $d$ is the upper $\alpha$ quantile of
$$X_1 - X_2 \sim m - 2 \cdot \mathrm{Binomial}\Bigl(m, \tfrac{1}{2}\Bigr) \approx \mathrm{Normal}(0, m).$$
Thus $d \approx \sqrt{m}\, z_\alpha$, where $z_\alpha$ is the upper $\alpha$ quantile of a standard Gaussian. On the other hand, our method defines a threshold based on the upper $\frac{n}{n-1} \cdot \frac{\alpha}{2}$ quantile of
$$X_1 - X_2 \mid X_1 + X_2 \sim X_1 + X_2 - 2 \cdot \mathrm{Binomial}\Bigl(X_1 + X_2, \tfrac{1}{2}\Bigr),$$
which is approximately $\sqrt{X_1 + X_2}\, z_{\alpha/2}$. If $\pi_{(1)} \ll 1/2$ then with high probability $X_1 + X_2 \ll m$, making our test much more liberal.
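To make this comparison concrete, the heuristic sketch below (ours, using the normal approximations above and ignoring the small $n/(n-1)$ adjustment) contrasts the two thresholds when the leading cell carries 30% of the mass:

```python
import numpy as np
from scipy.stats import norm

def compare_thresholds(m=250, n=10, pi_top=0.3, alpha=0.05, reps=2000, seed=1):
    """Monte Carlo comparison of the approximate thresholds for X[1] - X[2]:
    the fixed sqrt(m) z_alpha versus the adaptive sqrt(X[1]+X[2]) z_{alpha/2}."""
    rng = np.random.default_rng(seed)
    pi = np.full(n, (1 - pi_top) / (n - 1))
    pi[0] = pi_top
    fixed = np.sqrt(m) * norm.isf(alpha)
    adaptive = []
    for _ in range(reps):
        x = np.sort(rng.multinomial(m, pi))[::-1]
        adaptive.append(np.sqrt(x[0] + x[1]) * norm.isf(alpha / 2))
    return fixed, float(np.mean(adaptive))

print(compare_thresholds())  # the adaptive threshold is markedly smaller here
```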
4. Confidence Bounds on Differences: By How Much?. By generalizing the above, we can construct a lower confidence bound for $\theta_{[1]} - \max_{j \neq [1]} \theta_j$. Here we provide a more powerful Procedure 2' first. We will proceed by inverting a statistical test of the hypothesis $H_{0[1]}^{\delta} : \theta_{[1]} - \max_{j \neq [1]} \theta_j \le \delta$, which can be written as a union of null hypotheses:
$$H_{0[1]}^{\delta} = \bigcup_{j \neq [1]} H_{0[1]j}^{\delta} : \theta_{[1]} - \theta_j \le \delta.$$
By Lemma 4, we can construct selective exact one-tailed p-values $p_{[1]j}^{\delta}$ for each of these by conditioning on $A_{[1]}$, $M_{[1]j}$ and $X_{\setminus\{[1],j\}}$, giving us an exact test for $H_{0[1]}^{\delta}$ by rejecting whenever $\max_{j \neq [1]} p_{[1]j}^{\delta} < \alpha$.
Theorem 6. The p-values constructed above satisfy $p_{[1][2]}^{\delta} \ge p_{[1]j}^{\delta}$ for any $j \neq [1]$.
Proof. Again we start by assuming $X_1 \ge X_2 \ge \max_{j>2} X_j$ for convenience. The p-values in question are derived from the conditional law
$$\mathcal{L}_{\theta_1 - \theta_j = \delta}\bigl(D_{1j} \mid M_{1j}, X_2, \ldots, X_n, A_1\bigr),$$
which is the truncated distribution
$$p(d_{1j}) \propto \exp\bigl((\theta_1 - \theta_j) d_{1j} + \theta_2 X_2 + \cdots + (\theta_1 + \theta_j) M_{1j} + \cdots + \theta_n X_n\bigr)\, g\bigl(M_{1j} + d_{1j}, X_2, \ldots, M_{1j} - d_{1j}, \ldots, X_n\bigr)\, 1_{A_1}$$
$$\propto \exp(\delta d_{1j})\, g\bigl(M_{1j} + d_{1j}, X_2, \ldots, M_{1j} - d_{1j}, \ldots, X_n\bigr)\, 1_{A_1}.$$
The p-values thus are
$$p_{1j}^{\delta} = \frac{\int_{D_{1j}}^{\infty} \exp(\delta z)\, g\bigl(M_{1j} + z, X_2, \ldots, M_{1j} - z, \ldots, X_n\bigr)\, dz}{\int_{\max\{X_2 - M_{1j},\, 0\}}^{\infty} \exp(\delta z)\, g\bigl(M_{1j} + z, X_2, \ldots, M_{1j} - z, \ldots, X_n\bigr)\, dz}.$$
As before in Part 1 of Theorem 1, the conditioning reduces to the case where $n = 3$. Once again it is sufficient to show that $p_{12}^{\delta} \ge p_{13}^{\delta}$. We have the same two cases. If $X_2 < M_{13}$, then
$$p_{12}^{\delta} = \frac{\int_{0}^{\infty} \exp(\delta(z + D_{12}))\, g(X_1 + z, X_2 - z, X_3)\, dz}{\int_{-D_{12}}^{\infty} \exp(\delta(z + D_{12}))\, g(X_1 + z, X_2 - z, X_3)\, dz} = \frac{\int_{0}^{\infty} \exp(\delta z)\, g(X_1 + z, X_2 - z, X_3)\, dz}{\int_{-D_{12}}^{\infty} \exp(\delta z)\, g(X_1 + z, X_2 - z, X_3)\, dz},$$
$$p_{13}^{\delta} = \frac{\int_{0}^{\infty} \exp(\delta(z + D_{13}))\, g(X_1 + z, X_2, X_3 - z)\, dz}{\int_{-D_{13}}^{\infty} \exp(\delta(z + D_{13}))\, g(X_1 + z, X_2, X_3 - z)\, dz} = \frac{\int_{0}^{\infty} \exp(\delta z)\, g(X_1 + z, X_2, X_3 - z)\, dz}{\int_{-D_{13}}^{\infty} \exp(\delta z)\, g(X_1 + z, X_2, X_3 - z)\, dz}.$$
The same argument as in Figure 2 shows that $p_{12}^{\delta} \ge p_{13}^{\delta}$. This is again true for the case where $X_2 \ge M_{13}$ as well.
In other words, Procedure 2' can be summarized as: find the minimum $\delta$ such that $p_{[1][2]}^{\delta} \le \alpha$. And by construction, Procedure 2' gives an exact $1 - \alpha$ confidence bound for $\theta_{[1]} - \max_{j \neq [1]} \theta_j$.
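Numerically, the bound can be obtained by root-finding on $\delta \mapsto p^{\delta}$. The sketch below is our own illustration, not code from the paper: the Gaussian-like carrier, the cutoffs, and the root bracket are placeholders, and the user must supply the reduced carrier $g(z) = g(M_{12}+z, X_2, \ldots, M_{12}-z, \ldots, X_n)$ with the other coordinates held fixed. It returns the boundary value of $\delta$ at which the selective p-value equals $\alpha$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def p_delta(delta, d12, lower, g, upper=np.inf):
    """Selective p-value p^delta: a ratio of exp(delta*z)-weighted integrals
    of the reduced one-dimensional carrier g(z)."""
    num, _ = quad(lambda z: np.exp(delta * z) * g(z), d12, upper)
    den, _ = quad(lambda z: np.exp(delta * z) * g(z), lower, upper)
    return num / den

def confidence_boundary(d12, lower, g, alpha=0.05):
    """Find the delta at which p^delta = alpha (a sketch of inverting the test)."""
    return brentq(lambda d: p_delta(d, d12, lower, g) - alpha, -10.0, 10.0)

# Toy example with a Gaussian-like reduced carrier.
g = lambda z: np.exp(-z**2 / 2)
print(confidence_boundary(d12=2.5, lower=0.0, g=g))
```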
Part 2 of Theorem 1. Assume the model (1) holds and g (x) is a Schur-concave function.
Procedure 2 (the lower bound of unadjusted pairwise confidence interval) gives a conservative
1 − α lower confidence bound for θ[1] − maxj6=[1] θj .
Proof. When Procedure 2 reports −∞ as a confidence lower bound, it is definitely valid
and conservative. It remains to show that when Procedure 2 reports a finite confidence lower
bound, it is smaller than the confidence lower bound reported by Procedure 2’.
If Procedure 2 reports a finite confidence lower bound $\delta^*$, then $\delta^* \ge 0$. Also
$$\frac{\alpha}{2} = \frac{\int_{D_{12}}^{\infty} \exp(\delta^* z)\, g(M_{12} + z, X_2, \ldots, M_{12} - z, \ldots, X_n)\, dz}{\int_{-\infty}^{\infty} \exp(\delta^* z)\, g(M_{12} + z, X_2, \ldots, M_{12} - z, \ldots, X_n)\, dz} \tag{7}$$
as Procedure 2 is constructed from an unadjusted two-tailed pairwise confidence interval. However, as $\delta^* \ge 0$, we have
$$\frac{\int_{-\infty}^{0} \exp(\delta^* z)\, g(M_{12} + z, X_2, \ldots, M_{12} - z, \ldots, X_n)\, dz}{\int_{0}^{\infty} \exp(\delta^* z)\, g(M_{12} + z, X_2, \ldots, M_{12} - z, \ldots, X_n)\, dz} \le 1,$$
$$\frac{\int_{-\infty}^{\infty} \exp(\delta^* z)\, g(M_{12} + z, X_2, \ldots, M_{12} - z, \ldots, X_n)\, dz}{\int_{0}^{\infty} \exp(\delta^* z)\, g(M_{12} + z, X_2, \ldots, M_{12} - z, \ldots, X_n)\, dz} \le 2.$$
Multiplying this by (7), we have
$$\alpha \ge \frac{\int_{D_{12}}^{\infty} \exp(\delta^* z)\, g(M_{12} + z, X_2, \ldots, M_{12} - z, \ldots, X_n)\, dz}{\int_{0}^{\infty} \exp(\delta^* z)\, g(M_{12} + z, X_2, \ldots, M_{12} - z, \ldots, X_n)\, dz},$$
indicating that $\delta^*$ is smaller than the confidence bound that Procedure 2' would report. Hence $\delta^*$ is valid and conservative.
Note that Procedure 2 reporting $-\infty$ in case of $\delta^* \le 0$ is rather extreme. In reality, we can always just adopt Procedure 2' in the case when Procedure 1 rejects. In fact, by Procedure 2', the multinomial example for polling in Section 1.1 can give a stronger lower confidence bound, that $\pi_{\text{Trump}} / \max_{j \neq \text{Trump}} \pi_j \ge 1.108$ (Trump leads the field by at least 10.8%).
5. Verifying Other Ranks: Is the Runner-Up Really the Second Best, etc.?. Often we will be interested in verifying ranks beyond the winner. More generally, we could imagine declaring that the first $j$ populations are all in the correct order, that is
$$\theta_{[1]} > \cdots > \theta_{[j]} > \max_{k > j} \theta_{[k]}. \tag{8}$$
Let j0 denote the largest j for which (8) is true. Note that j0 is both random and unknown,
because it depends on both the data and population ranks. Procedure 3 declares that j0 ≥ j if
the unadjusted pairwise tests between X[k] and X[k+1] , reject at level α for all of k = 1, . . . , j.
In terms of the Iowa polling example of Section 1, we would like to produce a statement of the
form “Trump has the most support, Cruz has the second-most, and Rubio has the third-most.”
Procedure 3 performs unadjusted pairwise tests to ask if Cruz is really the runner-up upon
verifying that Trump is the best, and if Rubio is really the second runner-up upon verifying
that Cruz is the runner-up, etc., until we can no longer infer that a certain population really
holds its rank.
While we aim to declare more populations to be in the correct order, declaring too many
populations, i.e. out-of-place populations, to be in the right order is undesirable. It is possible to
consider false discovery rate (the expected portion of out-of-place populations declared) here,
but we restrict our derivation to FWER (the probability of having any out-of-place populations
declared).
Formally, let $\hat{j}_0$ denote the number of ranks validated by a procedure (the number of rejections). Then the FWER of $\hat{j}_0$ is the probability that too many rejections are made, i.e. $P[\hat{j}_0 > j_0]$. For example, suppose that the top three data ranks and population ranks coincide, but not the fourth ($j_0 = 3$). Then we will have made a Type I error if we declare that the top five ranks are correct ($\hat{j}_0 = 5$), but not if we declare that the top two are correct ($\hat{j}_0 = 2$). In other words, $\hat{j}_0$ is a lower confidence bound for $j_0$.
To show that Procedure 3 is valid, we will prove the validity of a more liberal Procedure 3', described in Algorithm 1. Procedure 3 is equivalent to Procedure 3' for the most part, except that Procedure 3 conditions on a larger event $\{X_{[j]} \ge \max_{k>j} X_{[k]}\}$ in Line 7.
Theorem 7. Procedure 3' is a stepwise procedure that produces an estimate $\hat{j}_0$ of $j_0$ with the FWER controlled at $\alpha$, where $j_0$ is given by
$$j_0 = \max\Bigl\{j : \theta_{[1]} > \cdots > \theta_{[j]} > \max_{k > j} \theta_{[k]}\Bigr\}.$$
Proof. We will first show that Procedure 3' falls into the sequential goodness-of-fit testing framework proposed by Fithian, Taylor and Tibshirani (2015). We thus analyze Procedure 3' as a special case of the BasicStop procedure on random hypotheses, described in the same paper. This enables us to construct valid selective p-values and derive Procedure 3'.
Application of the sequential goodness-of-fit testing framework. Upon observing $X_{[1]} \ge \cdots \ge X_{[n]}$, we can set up a sequence of nested models
$$M_1(X) \subseteq \cdots \subseteq M_n(X), \quad\text{where } M_j(X) = \Bigl\{\theta : \theta_{[1]} > \cdots > \theta_{[j]} > \max_{k > j} \theta_{[k]}\Bigr\}^c.$$
Algorithm 1. Procedure 3', a more liberal version of Procedure 3
input : X1, . . . , Xn
output: ĵ0, an estimate for j0
1:  τj ← [j]            # consider τj as part of the observation and the fixed realization of the random index [j]
2:  Xτ0 ← ∞
3:  j ← 0
4:  rejected ← true
5:  while rejected do
6:      j ← j + 1;  Dτjτj+1 ← Xτj − Xτj+1
7:      set up the distribution of Dτjτj+1, conditioned on (i) the variables Xτ1, . . . , Xτj−1, Xτj+2, . . . , Xτn and (ii) the event Xτj−1 ≥ Xτj ≥ maxk>j Xτk   # the distribution of Dτjτj+1 then depends only on θτj − θτj+1
8:      test H0 : θτj − θτj+1 ≤ 0 against H1 : θτj − θτj+1 > 0 using this conditional distribution
9:      set rejected to the outcome of the test
10: end
11: ĵ0 ← j − 1
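For multinomial data, Procedure 3 itself (the conservative variant, not Procedure 3') is easy to implement, since each unadjusted pairwise test is again an exact binomial test of consecutive sorted counts given their sum. A sketch (ours; the counts are made up):

```python
from scipy.stats import binomtest

def verify_ranks_multinomial(counts, alpha=0.05):
    """Procedure 3 sketch for multinomial counts: step down the sorted counts,
    testing each consecutive pair with an unadjusted two-sided exact binomial
    test, and stop at the first non-rejection. Returns the estimate of j0."""
    x = sorted(counts, reverse=True)
    j0_hat = 0
    for j in range(len(x) - 1):
        p = binomtest(x[j], n=x[j] + x[j + 1], p=0.5).pvalue
        if p > alpha:
            break
        j0_hat += 1
    return j0_hat

# Illustrative counts: the first two ranks separate clearly, the rest do not.
print(verify_ranks_multinomial([400, 300, 150, 80, 70]))
```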
If we define the $j$-th null hypothesis as
$$\tilde{H}_{0j} : \theta_{[j]} \le \max_{k > j} \theta_{[k]},$$
then $\tilde{H}_{01}, \ldots, \tilde{H}_{0j}$ are all false if and only if $\theta \notin M_j(X)$.
In other words, $M_j(X)$ is a family of distributions that does not have all first $j$ ranks correct. As we will see later, each step in Procedure 3' is similar to testing $\tilde{H}_{0j}$, stating that without the first $j$ ranks correct, it is hard to explain the observations. Thus, returning $\hat{j}_0 = j$ amounts to rejecting $\tilde{H}_{01}, \ldots, \tilde{H}_{0j}$, or equivalently determining that the models $M_1(X), \ldots, M_j(X)$ do not fit the data.
While the null hypotheses $\tilde{H}_{0j}$ provided intuition in setting up the nested models, they are rather cumbersome to work with. Inspired by Fithian, Taylor and Tibshirani (2015), we will instead consider another sequence of random hypotheses that are more closely related to the nested models,
$$H_{0j} : \theta \in M_j(X),$$
or equivalently, that $\theta_{[1]}, \ldots, \theta_{[j]}$ are not the best $j$ parameters in order. Adapting this notation, the FWER can be viewed as $P[\text{reject } H_{0(j_0+1)}]$.
Special case of the BasicStop procedure. While impractical, Procedure 3' can be thought of as performing all $n$ tests first, producing a sequence of p-values $p_j$, and returning
$$\hat{j}_0 = \min\{j : p_j > \alpha\} - 1. \tag{9}$$
This is a special case of the BasicStop procedure. Instead of simply checking that Procedure 3' fits all the requirements for FWER control in BasicStop, we will give the construction of Procedure 3', assuming that we are to estimate $j_0$ with BasicStop.
In general, the FWER for BasicStop can be rewritten as $P[p_{j_0+1} \le \alpha]$. This is however difficult to analyze, as $j_0$ itself is random and dependent on $X$; thus we break the FWER down as follows:
$$P[p_{j_0+1} \le \alpha] = \sum_j P[p_{j_0+1} \le \alpha \mid j_0 = j]\, P[j_0 = j] = \sum_j P[p_{j+1} \le \alpha \mid j_0 = j]\, P[j_0 = j] = \sum_j P\bigl[p_{j+1} \le \alpha \mid \theta \in M_{j+1}(X) \setminus M_j(X)\bigr]\, P[j_0 = j].$$
We emphasize here that $\theta$ is not random, but $M_{j+1}$ is. Thus it suffices to construct the p-values such that
$$P\bigl[p_j \le \alpha \mid \theta \in M_j(X) \setminus M_{j-1}(X)\bigr] \le \alpha \tag{10}$$
for all $j$.
Considerations for conditioning. By smoothing, we are free to condition on additional variables in (10). A logical choice that simplifies (10) is conditioning on the models $M_{j-1}(X)$ and $M_j(X)$. Note that the choice of the model $M_j(X)$, once again, is based solely on the random indices $[1], \ldots, [j]$, so conditioning on both $M_{j-1}(X)$ and $M_j(X)$ is equivalent to conditioning on the random indices $[1], \ldots, [j]$, which in turn is equivalent to conditioning on the $\sigma$-field generated by the partition of the observation space
$$\Bigl\{\bigl\{X_{\tau_1} \ge \cdots \ge X_{\tau_j} \ge \max_{k > j} X_{\tau_k}\bigr\} : \tau \text{ is any permutation of } (1, \ldots, n)\Bigr\},$$
or colloquially, the set of all possible choices of $[1], \ldots, [j]$. Within each set in this partition, the event $\{\theta \in M_j(X) \setminus M_{j-1}(X)\}$ is simply $\{\theta_{\tau_1} > \cdots > \theta_{\tau_j} \text{ and } \theta_{\tau_j} \le \max_{k>j} \theta_{\tau_k}\}$, a trivial event.
As a brief summary, we want to construct p-values $p_j$ such that
$$P_{\substack{\theta_{\tau_1} > \cdots > \theta_{\tau_j} \\ \theta_{\tau_j} \le \max_{k>j} \theta_{\tau_k}}}\Bigl[p_j \le \alpha \Bigm| X_{\tau_1} \ge \cdots \ge X_{\tau_j} \ge \max_{k > j} X_{\tau_k}\Bigr] \le \alpha.$$
Construction of the p-values. To avoid the clutter in the subscripts, we will drop the $\tau$ in the subscript. Hence our goal is now
$$P_{\substack{\theta_1 > \cdots > \theta_j \\ \theta_j \le \max_{k>j} \theta_k}}\Bigl[p_j \le \alpha \Bigm| X_1 \ge \cdots \ge X_j \ge \max_{k > j} X_k\Bigr] \le \alpha.$$
Construction of $p_j$ for other permutations $\tau$ can be obtained similarly.
There are many valid options for $p_j$ (such as the constant $\alpha$). We will follow the idea in the proof of Part 1 of Theorem 1 here. $p_j$ is intended to test $H_{0j} : \theta \in M_j(X)$, which is equivalent to the union of the null hypotheses:
1. $\theta_k \le \theta_{k+1}$ for $k = 1, \ldots, j - 1$, and
2. $\theta_j \le \theta_k$ for $k = j + 1, \ldots, n$. (The union of these null hypotheses is $\tilde{H}_{0j}$.)
Since the joint distribution of $X$, restricted to $\{X_1 \ge \cdots \ge X_j \ge \max_{k>j} X_k\}$, remains in the exponential family, we can construct the p-values for each of the hypotheses above by conditioning on the variables corresponding to the nuisance parameters, similar to the proof of Part 1 of Theorem 1. Then we can take $p_j$ as the maximum of such p-values.
For the hypothesis $H_{0jk} : \theta_j \le \theta_k$, we can construct $p_{jk}$ by considering the survival function of the conditional law
$$\mathcal{L}_{\theta_j = \theta_k}\Bigl(D_{jk} \Bigm| X_1 \ge \cdots \ge X_j \ge \max_{\ell > j} X_\ell,\ X_{\setminus\{j,k\}},\ M_{jk}\Bigr) = \mathcal{L}_{\theta_j = \theta_k}\Bigl(D_{jk} \Bigm| X_{j-1} \ge X_j \ge \max_{\ell > j,\, \ell \neq k} X_\ell \text{ and } X_j \ge M_{jk},\ X_{\setminus\{j,k\}},\ M_{jk}\Bigr).$$
Once again, $X_{j+1} = \max_{\ell > j} X_\ell$ is simply shorthand for simplifying our notation. Now the p-values are similar to the ones in Equation (4), for $k > j$:
$$p_{jk} = \frac{\int_{D_{jk}}^{X_{j-1}} g(X_1, \ldots, M_{jk} + z, \ldots, M_{jk} - z, \ldots, X_n)\, dz}{\int_{\max\{X_{j+1} - M_{jk},\, 0\}}^{X_{j-1}} g(X_1, \ldots, M_{jk} + z, \ldots, M_{jk} - z, \ldots, X_n)\, dz}.$$
We can graphically represent pjk in Figure 4, a diagram analogous to Figure 2.
Fig 4. The two p-values constructed correspond to taking integrals of $g$ along these segments, which lie on a level set of $x_j + x_{j+1} + x_k$. The dashed line corresponds to the extension in (5). The dotted line on the far right is the truncation that enforces $X_j < X_{j-1}$.
We have $p_{j(j+1)} \ge \max_{k > j} p_{jk}$ by Section 3: the upper truncation for $X_j$ can be represented by cropping Figure 2 along a vertical line, shown in Figure 4. Considering $p_{j(j+1)}$ is sufficient for rejecting all the $H_{0jk}$. We will take $p_{j*} = p_{j(j+1)}$, noting that this is the p-value that Procedure 3' would produce. In fact, $p_{j*}$ is also the p-value we would have constructed if we were to reject only $\tilde{H}_{0j}$.
Upon constructing $p_j$, one should realize that the p-values for testing $\theta_k \le \theta_{k+1}$ would have been constructed in earlier iterations of BasicStop, as $p_{k*}$. In other words, $p_j = \max_{k \le j} p_{k*}$ is
the sequence of p-values that works with BasicStop. However, from (9),
$$\hat{j}_0 = \min\Bigl\{j : \max_{k \le j} p_{k*} > \alpha\Bigr\} - 1 = \min\{j : p_{j*} > \alpha\} - 1,$$
so it is safe to apply BasicStop to $p_{j*}$ directly, yielding Procedure 3'.
Part 3 of Theorem 1. Assume the model (1) holds and g (x) is a Schur-concave function.
Procedure 3 is a conservative stepwise procedure with FWER no larger than α.
Proof. The p-values pj(j+1) obtained in Procedure 3’ are always smaller than their counterpart in Procedure 3, as the upper truncation at Xj−1 is on the upper tail. Therefore Procedure
3 is conservative and definitely valid.
6. Discussion. Combining ideas from conditional inference and multiple testing, we have
proven the validity of several very simple and seemingly “naive” procedures for significance
testing of sample ranks. In particular, we have shown that an unadjusted pairwise test comparing the winner with the runner-up is a valid significance test for the first rank. Our result
complements and extends pre-existing analogous results for location and location-scale families with independence between observations. Our approach is considerably more powerful than
previously known solutions. We provide similarly straightforward conservative methods for producing a lower confidence bound for the difference between the winner and runner up, and for
verifying ranks beyond the first.
Claims reporting the “winner” are commonly made in the scientific literature, usually with
no significance level reported or an incorrect method applied. For example, Uhls and Greenfield
(2012) asked n = 20 elementary and middle school students which of seven personal values they
most hoped to embody as adults, with “Fame” (8 responses) being the most commonly selected,
with “Benevolence” (5 responses) second. The authors’ main finding — which appeared in the
abstract, the first paragraph of the article, and later a CNN.com headline (Alikhani, 2011) —
was that “Fame” was the most likely response, accompanied by a significance level of 0.006,
which the authors computed by testing whether the probability of selecting “Fame” was larger
than 1/7. The obvious error in the authors’ reasoning could have been avoided if they had
performed an equally straightforward two-tailed binomial test of “Fame” vs. “Benevolence,”
which would have produced a p-value of 0.58.
Reproducibility. A git repository containing the code generating the images in this paper is available at https://github.com/kenhungkk/verifying-winner.
References.
Alikhani, L. (2011). Study: Tween TV today is all about fame.
Berger, R. L. (1980). Minimax subset selection for the multinomial distribution. Journal of Statistical Planning
and Inference 4 391–402.
Berger, R. L. (1982). Multiparameter hypothesis testing and acceptance sampling. Technometrics.
Besag, J. and Clifford, P. (1989). Generalized Monte Carlo significance tests. Biometrika 76 633–642.
Bofinger, E. (1991). Selecting “Demonstrably best” or “Demonstrably worst” exponential population. Australian Journal of Statistics 33 183–190.
Edwards, D. G. and Hsu, J. C. (1983). Multiple comparisons with the best treatment. Journal of the American
Statistical Association 78 965–971.
Finner, H. and Strassburger, K. (2002). The partitioning principle: a powerful tool in multiple decision
theory. The Annals of Statistics 30 1194–1213.
imsart-aos ver. 2014/10/16 file: verifying-winner.tex date: July 4, 2017
RANK VERIFICATION FOR EXP. FAMILIES
21
Fithian, W., Sun, D. L. and Taylor, J. E. (2014). Optimal Inference After Model Selection. arXiv.org.
Fithian, W., Taylor, J. E. and Tibshirani, R. J. (2015). Selective Sequential Model Selection. arXiv.org.
Gupta, S. S., Huang, D.-Y. and Panchapakesan, S. (1984). On some inequalities and monotonicity results
in selection and ranking theory. In Inequalities in statistics and probability (Lincoln, Neb., 1982) 211–227.
Inst. Math. Statist., Hayward, CA, Hayward, CA.
Gupta, S. S. and Liang, T. (1989). Selecting the best binomial population: parametric empirical Bayes approach. Journal of Statistical Planning and Inference 23 21–31.
Gupta, S. S. and Nagel, K. (1967). On selection and ranking procedures and order statistics from the multinomial distribution. Sankhyā: The Indian Journal of Statistics 29.
Gupta, S. S. and Panchapakesan, S. (1971). On Multiple Decision (Subset Selection) Procedures Technical
Report, Purdue University.
Gupta, S. S. and Panchapakesan, S. (1985). Subset Selection Procedures: Review and Assessment. American
Journal of Mathematical and Management Sciences 5 235–311.
Gupta, S. S. and Wong, W.-Y. (1976). On Subset Selection Procedures for Poisson Processes and Some
Applications to the Binomial and Multinomial Problems Technical Report.
Gutmann, S. and Maymin, Z. (1987). Is the selected population the best? The Annals of Statistics 15 456–461.
Hsu, J. C. (1984). Constrained Simultaneous Confidence Intervals for Multiple Comparisons with the Best. The
Annals of Statistics 12 1136–1144.
Hsu, J. (1996). Multiple comparisons: theory and methods. CRC Press.
Quinnipiac University Poll Institute (2016). First-Timers Put Trump Ahead In Iowa GOP Caucus, Quinnipiac University Poll Finds; Sanders Needs First-Timers To Tie Clinton In Dem Caucus.
Karnnan, N. and Panchapakesan, S. (2009). Does the Selected Normal Population Have the Smallest Variance? American Journal of Mathematical and Management Sciences 29 109–123.
Marshall, A. W., Olkin, I. and Arnold, B. (2010). Inequalities: Theory of Majorization and Its Applications.
Springer Series in Statistics. Springer, New York, NY.
Maymin, Z. and Gutmann, S. (1992). Testing retrospective hypotheses. The Canadian Journal of Statistics.
La Revue Canadienne de Statistique 20 335–345.
Nettleton, D. (2009). Testing for the supremacy of a multinomial cell probability. Journal of the American
Statistical Association 104 1052–1059.
Ng, H. K. T. and Panchapakesan, S. (2007). Is the selected multinomial cell the best? Sequential Analysis
26 415–423.
Stefansson, G., Kim, W.-C. and Hsu, J. C. (1988). On confidence sets in multiple comparisons. In Statistical Decision Theory and Related Topics IV 89–104.
Uhls, Y. T. and Greenfield, P. M. (2012). The value of fame: preadolescent perceptions of popular media
and their relationship to future aspirations. Developmental psychology.
Department of Mathematics
951 Evans Hall, Suite 3840
Berkeley, CA 94720-3840
E-mail: kenhung@berkeley.edu
Department of Statistics
301 Evans Hall
Berkeley, CA 94720
E-mail: wfithian@berkeley.edu
arXiv:1704.06398v2 [] 24 Jul 2017
Tail sums of Wishart and GUE eigenvalues beyond the
bulk edge.
Iain M. Johnstone∗
Stanford University and Australian National University
July 26, 2017
Abstract
Consider the classical Gaussian unitary ensemble of size N and the real Wishart
ensemble WN (n, I). In the limits as N → ∞ and N/n → γ > 0, the expected number of eigenvalues that exit the upper bulk edge is less than one, 0.031 and 0.170
respectively, the latter number being independent of γ. These statements are consequences of quantitative bounds on tail sums of eigenvalues outside the bulk which
are established here for applications in high dimensional covariance matrix estimation.
1 Introduction
This paper develops some tail sum bounds on eigenvalues outside the bulk that are needed
for results on estimation of covariance matrices in the spiked model, Donoho et al. [2017].
This application is described briefly in Section 4. It depends on properties of the eigenvalues
of real white Wishart matrices, distributed as WN (n, I), which are the main focus of this
note.
Specifically, suppose that A ∼ WN (n, I), and that λ1 ≥ . . . ≥ λN are eigenvalues
of the sample covariance matrix n−1 A. In the limit N/n → γ > 0, it is well known
that the empirical distribution of {λi } converges to the Marcenko-Pastur law (see e.g.
Pastur and Shcherbina [2011, Corollary 7.2.5]), which is supported on an interval $I_\gamma$ (augmented with 0 if $\gamma > 1$) having upper endpoint $\lambda(\gamma) = (1 + \sqrt{\gamma})^2$. We focus on the eigenvalues $\lambda_i$ that exit this “bulk” interval $I_\gamma$ on the upper side. In statistical application, such exiting eigenvalues might be mistaken for “signal” and so it is useful to have some bounds on what can happen under the null hypothesis of no signal. Section 3 studies the mean value behavior of quantities such as
$$T_N = \sum_{i=1}^{N} [\lambda_i - \lambda(\gamma)]_+^q, \qquad q \ge 0,$$
Thanks to David Donoho and Matan Gavish for discussions and help. Peter Forrester provided an
important reference. Work on this paper was completed during a visit to the Research School of Finance,
Actuarial Studies and Statistics, A. N. U., whose hospitality and support is gratefully acknowledged. Work
supported in part by the U.S. National Science Foundation and the National Institutes of Health.
which for $q = 0$ reduces to the number $T_N^0$ of exiting eigenvalues.
It is well known that the largest eigenvalue $\lambda_1 \overset{a.s.}{\to} \lambda(\gamma)$ [Geman, 1980], and that closed
intervals outside the bulk support contain no eigenvalues for N large with probability
one [Bai and Silverstein, 1998]. However these and even large deviation results for λ1
[Majumdar and Vergassola, 2009] and TN0 [Majumdar and Vivo, 2012] seem not to directly
yield the information on E(TN ) that we need. Marino et al. [2014] looked at the variance
of TN0 using methods related to those of this note. Recently, Chiani [2017] has studied the
probability that all eigenvalues of Gaussian, Wishart and double Wishart random matrices
lie within the bulk, and derived universal limiting values of 0.6921 and 0.9397 in the real
and complex cases respectively.
In summary, the motivation for this note is high-dimensional covariance estimation, but
there are noteworthy byproducts: the asymptotic values of $E(T_N^0)$ are perhaps surprisingly
small, and numerically for the Gaussian Unitary Ensemble (GUE), it is found that the
chance of even two exiting eigenvalues is very small, of order 10−6 .
2 The Gaussian Unitary Ensemble Case (GUE)
We begin with GUE to illustrate the methods in the simplest setting, and to note an error
in the literature. Recall that the Gaussian Unitary ensemble GUE(N) is the Gaussian
probability measure on the space of $N \times N$ Hermitian matrices with density proportional to $\exp\{-\tfrac{1}{2} N \mathrm{tr}\, A^2\}$.
Theorem 1. Let $\lambda_1, \ldots, \lambda_N$ be eigenvalues of an $N$-by-$N$ matrix from the GUE. Denote by $\lambda_+ = 2$ the upper edge of the Wigner semicircle, namely, the asymptotic density of the eigenvalues. For $q \ge 0$, let
$$T_N = \sum_{i=1}^{N} (\lambda_i - \lambda_+)_+^q. \tag{1}$$
Then, with a constant $c_q$ specified at (3) below,
$$E(T_N) = c_q N^{-2q/3} (1 + o(1)).$$
In particular, for $q = 0$ and $T_N = \#\{i : \lambda_i > \lambda_+\}$,
$$E(T_N) \to c_0 = \frac{1}{6\sqrt{3}\pi} \approx 0.030629. \tag{2}$$
Proof. We use the so-called one-point function and bounds due to Tracy and Widom [1994, 1996]. To adapt to their notation, let $(y_i)_1^N$ be the eigenvalues of GUE with joint density proportional to $\exp(-\sum_1^N y_i^2)\,\Delta^2(y)$, where $\Delta(y)$ is the usual Vandermonde. In this scaling the eigenvalue bulk concentrates as the semi-circle on $[-\mu_N, \mu_N]$ with $\mu_N = \sqrt{2N}$. We have $y_i = \sqrt{N/2}\,\lambda_i$ and $\mu_N = \sqrt{N/2}\,\lambda_+$, for $\lambda_+ = 2$, so that
$$T_N = \sum_{1}^{N} (\lambda_i - \lambda_+)_+^q = \Bigl(\frac{2}{N}\Bigr)^{q/2} \sum_{1}^{N} (y_i - \mu_N)_+^q.$$
From the determinantal structure of GUE, the marginal density of a single (unordered) eigenvalue $y_i$ is given by the one-point function
$$N^{-1} S_N(y, y) = N^{-1} \sum_{k=0}^{N-1} \phi_k^2(y),$$
where $\phi_k(y)$ are the (Hermite) functions obtained by orthonormalizing $y^k e^{-y^2/2}$. Thus
$$E(T_N) = \Bigl(\frac{2}{N}\Bigr)^{q/2} \int_{\mu_N}^{\infty} (y - \mu_N)^q S_N(y, y)\, dy.$$
Now introduce the TW scaling
$$y = \mu_N + \tau_N x, \qquad \tau_N = \frac{1}{\sqrt{2}\, N^{1/6}},$$
and let Ai denote the Airy function. Tracy and Widom [1996, p 745-6] show that
$$S_{\tau_N}(x, x) = \tau_N S_N(\mu_N + \tau_N x, \mu_N + \tau_N x) \to K_A(x, x) = \int_0^{\infty} \mathrm{Ai}^2(x + z)\, dz,$$
with the convergence being dominated: $S_{\tau_N}(x, x) \le M^2 e^{-2x}$. Consequently,
$$E(T_N) = \Bigl(\frac{2\tau_N^2}{N}\Bigr)^{q/2} \int_0^{\infty} x^q S_{\tau_N}(x, x)\, dx \sim N^{-2q/3} \int_0^{\infty} x^q K_A(x, x)\, dx.$$
In particular, E(TN ) = O(N −2q/3 ), and if q = 0, then E(TN ) converges to a positive
constant.
Integration by parts and Olver et al. [2010, 9.11.15] yield
$$c_q = \int_0^{\infty} x^q K_A(x, x)\, dx = \int_0^{\infty} x^q \int_x^{\infty} \mathrm{Ai}^2(z)\, dz\, dx = \frac{1}{q+1} \int_0^{\infty} x^{q+1} \mathrm{Ai}^2(x)\, dx = \frac{2\,\Gamma(q + 1)}{\sqrt{\pi}\, 12^{(2q+9)/6}\, \Gamma((2q + 9)/6)}. \tag{3}$$
For $q = 0$ the constant becomes $c_0 = 1/(6\sqrt{3}\pi)$.
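As a quick numerical check (ours, using SciPy's Airy function), the reduced integral in (3) and the closed form can be compared directly:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy, gamma

def c_q_numeric(q):
    """Evaluate c_q = (1/(q+1)) * int_0^inf x^{q+1} Ai(x)^2 dx numerically."""
    integrand = lambda x: x**(q + 1) * airy(x)[0]**2
    val, _ = quad(integrand, 0, np.inf)
    return val / (q + 1)

def c_q_closed_form(q):
    """Closed form (3): 2 Gamma(q+1) / (sqrt(pi) 12^{(2q+9)/6} Gamma((2q+9)/6))."""
    return 2 * gamma(q + 1) / (np.sqrt(np.pi) * 12**((2*q + 9)/6) * gamma((2*q + 9)/6))

for q in (0, 0.5, 1):
    print(q, c_q_numeric(q), c_q_closed_form(q))
# For q = 0 both should return about 0.030629 = 1/(6*sqrt(3)*pi).
```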
Remarks. 1. Ullah [1983] states, in our notation, that the expected number of eigenvalues above the bulk edge is $E(T_N) \sim 0.25 N^{-1/2}$. This claim cannot be correct: a counterexample uses the limiting law $F_2$ for $y_{(1)} = \max_i y_i$ of Tracy and Widom [1994]:
$$E(T_N) \ge \Pr(y_{(1)} > \sqrt{2N}) \to 1 - F_2(0) = 0.030627. \tag{4}$$
We evaluated numerically in Mathematica the formulas (U3), (U6) and (U7) for $p = (2/N) E(T_N)$ given in Ullah [1983]. While numerical results from intermediate formula (U3) are consistent with our (2), neither those from (U6) nor those from the final result (U7) are consistent with (U3), or indeed with each other!

Table 1: For GUE(N), the probabilities $p_N(k)$ of exactly $k$ eigenvalues exceeding the upper bulk edge $\sqrt{2N}$, along with the expected number $E(T_N)$, to be compared with limiting value (2).

  N     pN(1)          pN(2)         pN(3)         E(TN)
  10    2.868·10^{-2}  1.36·10^{-6}  6.9·10^{-14}  0.028681
  25    2.955·10^{-2}  1.70·10^{-6}  1.4·10^{-13}  0.029551
  50    2.994·10^{-2}  1.88·10^{-6}  1.9·10^{-13}  0.029944
  100   3.019·10^{-2}  2.00·10^{-6}  2.3·10^{-13}  0.030195
  250   3.039·10^{-2}  2.09·10^{-6}  2.6·10^{-13}  0.030392
  500   3.048·10^{-2}  2.14·10^{-6}  2.8·10^{-13}  0.030480
2. The striking closeness of the right side of (4) to (2) led us to use the Matlab toolbox of Bornemann [2010] to evaluate numerically
$$p_N(k) = \Pr\bigl(\text{exactly } k \text{ of } \{y_i^{(n)}\} > \sqrt{2N}\bigr) = E_2(k, J)$$
with $J = (\sqrt{2N}, \infty)$, in the notation of Bornemann [2010]. The results, in Table 1, confirm
that the probability of 2 or more eigenvalues exiting the bulk is very small, of order 10−6 ,
for all N. This is also suggested by the plots of the densities of y(1) , y(2) , . . . in the scaling
limit in Figure 4 of Bornemann [2010], which itself extends Figure 2 of Tracy and Widom
[1994].
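Table 1 can also be sanity-checked by direct simulation; the sketch below (ours) samples GUE matrices in the normalization used in the proof and counts exceedances of $\sqrt{2N}$:

```python
import numpy as np

def gue_exceedance_count(N, reps=2000, seed=0):
    """Monte Carlo estimate of E(T_N) for q = 0: the expected number of GUE(N)
    eigenvalues above sqrt(2N), in the normalization where the joint eigenvalue
    density is proportional to exp(-sum y_i^2) times the squared Vandermonde."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(reps):
        A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        H = (A + A.conj().T) / 2           # GUE with density proportional to exp(-tr(H^2)/2)
        y = np.linalg.eigvalsh(H) / np.sqrt(2)  # rescale so the bulk edge sits at sqrt(2N)
        total += np.sum(y > np.sqrt(2 * N))
    return total / reps

print(gue_exceedance_count(100))  # should land in the vicinity of 0.030 (cf. Table 1)
```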
3 The real Wishart case
Suppose $\lambda_i$ are eigenvalues of $n^{-1} X X^{\top}$ for $X$ an $N \times n$ matrix with i.i.d. $N(0, 1)$ entries. Assume that $\gamma_N = N/n \to \gamma \in (0, 1]$. Set $\lambda(\gamma) = (1 + \sqrt{\gamma})^2$.
We recall the scaling for the Tracy-Widom law from the largest eigenvalue $\lambda_1$:
$$\lambda_1 = \lambda(\gamma_N) + N^{-2/3} \tau(\gamma_N) W_N,$$
where $W_N$ converges in distribution to $W \sim TW_1$ and $\tau(\gamma) = \sqrt{\gamma}\,(\sqrt{\gamma} + 1)^{4/3}$.
Theorem 2. (a) Suppose $\eta(\lambda, c) \ge 0$ is jointly continuous in $\lambda$ and $c$, and satisfies
$$\eta(\lambda, c) = 1 \ \text{ for } \lambda \le \lambda(c), \qquad \eta(\lambda, c) \le M\lambda \ \text{ for some } M \text{ and all } \lambda.$$
Suppose also that $c_N - \gamma_N = O(N^{-2/3})$. Then for $q > 0$,
$$E\Bigl(\sum_{i=1}^{N} [\eta(\lambda_i, c_N) - 1]^q\Bigr) \to 0. \tag{5}$$
(b) Suppose $c_N - \gamma_N \sim s\,\sigma(\gamma) N^{-2/3}$, where $\sigma(\gamma) = \tau(\gamma)/\lambda'(\gamma) = \gamma(1 + \sqrt{\gamma})^{1/3}$. Then
$$E\Bigl(\sum_{i=1}^{N} [\lambda_i - \lambda(c_N)]_+^q\Bigr) \sim \tau^q(\gamma)\, N^{-2q/3} \int_s^{\infty} (x - s)_+^q\, K_1(x, x)\, dx, \tag{6}$$
where $K_1$ is defined at (9) below.
(c) In particular, let $N_n = \#\{i : \lambda_i \ge \lambda(c_N)\}$ and suppose that $c_N - \gamma_N = o(N^{-2/3})$. Then
$$E N_n \to c_0 = \int_0^{\infty} K_1(x, x)\, dx \approx 0.17.$$
Remarks. 1. Part (b) represents a sharpening of (5) that is relevant when $\eta(\lambda) = \eta(\lambda, \gamma)$ is Hölder continuous in $\lambda$ near the bulk edge $\lambda(\gamma)$,
$$\eta(\lambda) - \eta(\lambda(\gamma)) \sim (\lambda - \lambda(\gamma))_+^q.$$
The example $q = 1/2$ occurs commonly for optimal shrinkage rules $\eta^*(\lambda)$ in Donoho et al. [2017].
[2017].
2. Section 4 explains why we allow cN to differ from γN .
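Part (c) is easy to check by simulation; the following sketch (ours) counts sample eigenvalues of a white Wishart matrix above the bulk edge:

```python
import numpy as np

def wishart_exceedance_count(N, n, reps=2000, seed=0):
    """Monte Carlo estimate of E N_n in Theorem 2(c): the expected number of
    eigenvalues of n^{-1} X X^T (X an N x n standard Gaussian matrix) that
    exceed the bulk edge lambda(c_N) with c_N = N/n."""
    rng = np.random.default_rng(seed)
    edge = (1 + np.sqrt(N / n))**2
    total = 0
    for _ in range(reps):
        X = rng.normal(size=(N, n))
        lam = np.linalg.eigvalsh(X @ X.T / n)
        total += np.sum(lam > edge)
    return total / reps

print(wishart_exceedance_count(N=100, n=400))  # should be roughly 0.17 in the limit
```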
Proof. Define
$$T_N = \sum_{i=1}^{N} F(\lambda_i, c_N), \qquad F(\lambda, c) = \begin{cases} [\eta(\lambda, c) - 1]^q & \text{(a)} \\ [\lambda - \lambda(c)]_+^q & \text{(b)}. \end{cases}$$
We adapt the discussion here to the notation used in Tracy and Widom [1998] and Johnstone [2001]. Let $(y_i)_1^N = n\lambda_i$ be the eigenvalues of $W_N(n, I)$ with joint density function $P_N(y_1, \ldots, y_N)$ with explicit form given, for example, in [Johnstone, 2001, eq. (4.1)]. We obtain
$$E(T_N) = \int_0^{\infty} F(y/n, c_N)\, R_1(y)\, dy,$$
where $R_1(y_1) = N \int_{(0,\infty)^{N-1}} P_N(y_1, \ldots, y_N)\, dy_2 \cdots dy_N$ is the one-point (correlation) function. It follows from Tracy and Widom [1998, p814–16] that
$$R_1(y) = T_1(y) = \tfrac{1}{2}\, \mathrm{tr}\, K_N(y, y), \tag{7}$$
where $K_N(x, y)$ is the $2 \times 2$ matrix kernel associated with $P_N$, see e.g. [Tracy and Widom, 1998, eq. (3.1)]. It follows from Widom [1999] that
$$\tfrac{1}{2}\, \mathrm{tr}\, K_N(y, y) = S(y, y) + \psi(y)(\epsilon\phi)(y) = S_1(y, y), \tag{8}$$
where the functions $S(y, y')$, $\psi(y)$ and $\phi(y)$ are defined in terms of orthonormalized Laguerre polynomials in Widom [1999] and studied further in Johnstone [2001]. The function $\epsilon(x) = \tfrac{1}{2}\mathrm{sgn}\, x$ and the operator $\epsilon$ denotes convolution with the kernel $\epsilon(x - y)$.
For convergence, introduce the Tracy-Widom scaling
$$y = \mu_N + \sigma_N x,$$
where we set $N_h = N + \tfrac{1}{2}$ and $n_h = n + \tfrac{1}{2}$ and define
$$\mu_N = (\sqrt{N_h} + \sqrt{n_h})^2, \qquad \sigma_N = c(N_h/n_h)\, N_h^{1/3},$$
where $c(\gamma) = (1 + \sqrt{\gamma})^{1/3}(1 + 1/\sqrt{\gamma}) = (1 + \sqrt{\gamma})^{1/3}\lambda'(\gamma)$. We now rescale the scalar-valued function (8):
$$S_{1\tau}(x, x) = \sigma_N S_1(\mu_N + \sigma_N x, \mu_N + \sigma_N x).$$
We can rewrite our target $E(T_N)$ using (7), (8) and this rescaling in the form
$$E(T_N) = \int_{\delta_N}^{\infty} F(\ell_N(x), c_N)\, S_{1\tau}(x, x)\, dx,$$
where $\ell_N(x) = (\mu_N + \sigma_N x)/n$, $\delta_N = (n\lambda(c_N) - \mu_N)/\sigma_N$ and we used the fact that $F(\lambda, c) = 0$ for $\lambda \le \lambda(c)$.
It follows from [Johnstone, 2001, eq. (3.9)] that
$$S_{1\tau}(x, x) = 2\int_0^{\infty} \phi_\tau(x + u)\psi_\tau(x + u)\, du + \psi_\tau(x)\Bigl(c_\phi - \int_x^{\infty} \phi_\tau(u)\, du\Bigr).$$
It is shown in equations (3.7), (3.8) and Sec. 5 of that paper that
$$\phi_\tau(x), \psi_\tau(x) \to \tfrac{1}{\sqrt{2}}\, \mathrm{Ai}(x)$$
and, uniformly in $N$ and in intervals of $x$ that are bounded below, that
$$\phi_\tau(x), \psi_\tau(x) = O(e^{-x}).$$
Along with $c_\phi \to 1/\sqrt{2}$ (cf. App. A7 of the same paper), this shows that
$$S_{1\tau}(x, x) \to K_1(x, x) = \int_0^{\infty} \mathrm{Ai}^2(x + z)\, dz + \tfrac{1}{2}\, \mathrm{Ai}(x)\Bigl(1 - \int_x^{\infty} \mathrm{Ai}(z)\, dz\Bigr) > 0 \tag{9}$$
with the convergence being dominated:
$$S_{1\tau}(x, x) \le M^2 e^{-2x} + M' e^{-x}. \tag{10}$$
Before completing the argument for (a)–(c), we note it is easily checked that
$$n^{-1}\mu_N = \lambda(\gamma_N) + O(N^{-1}),$$
so that
$$\delta_N = \frac{n}{\sigma_N}\bigl[\lambda(c_N) - \lambda(\gamma_N)\bigr] + O(N^{-1/3}).$$
If $c_N - \gamma_N = \theta_N N^{-2/3}$ for $\theta_N = O(1)$ then
$$\delta_N \sim \frac{n}{\sigma_N}\, N^{-2/3}\, \theta_N \lambda'(\gamma) \sim \theta_N/\sigma(\gamma), \tag{11}$$
since we have
$$N^{2/3}\sigma_N/n \sim \sigma(\gamma)\lambda'(\gamma) = \tau(\gamma). \tag{12}$$
In case (a), then, $\delta_N \ge -A$ for some $A$. We then have $\ell_N(x) \to \lambda(\gamma)$ for all $x \ge -A$, and so from joint continuity
$$\eta(\ell_N(x), c_N) \to \eta(\lambda(\gamma), \gamma) = 1,$$
and hence for all $x \ge -A$,
$$F(\ell_N(x), c_N) = [\eta(\ell_N(x), c_N) - 1]^q \to 0. \tag{13}$$
The convergence is dominated since the assumption $\eta(\lambda, c) \le M\lambda$ implies that $|F(\ell_N(x), c_N)| \le C(1 + |x|^q)$. Hence the convergence (13) along with (10) and the dominated convergence theorem implies (5).
For case (b),
$$N^{2q/3} E(T_N) = \int_{\delta_N}^{\infty} \bigl[N^{2/3}(\ell_N(x) - \lambda(c_N))\bigr]_+^q\, S_{1\tau}(x, x)\, dx.$$
Observe that
$$N^{2/3}(\lambda(\gamma_N) - \lambda(c_N)) \sim N^{2/3}\lambda'(\gamma)(\gamma_N - c_N) \sim -s\tau(\gamma),$$
and so from (11) and (12), we have
$$N^{2/3}(\ell_N(x) - \lambda(c_N)) = O(N^{-1/3}) + N^{2/3}(\lambda(\gamma_N) - \lambda(c_N)) + N^{2/3} n^{-1}\sigma_N x \sim \tau(\gamma)(x - s). \tag{14}$$
In addition, from (14), we have
$$N^{2/3}|\ell_N(x) - \lambda(c_N)| \le M(1 + |x|),$$
so that the convergence is dominated and (6) is proven.
For case (c), we have only to evaluate
$$c_0 = \int_0^{\infty} K_1(x, x)\, dx = \int_0^{\infty} K_A(x, x)\, dx + \tfrac{1}{4}\int_0^{\infty} G'(x)\, dx = I_1 + I_2,$$
where $I_1$ was evaluated in the previous section and $G(x) = \bigl[1 - \int_x^{\infty} \mathrm{Ai}(z)\, dz\bigr]^2$. Since $\int_0^{\infty} \mathrm{Ai}(z)\, dz = 1/3$, from Olver et al. [2010, 9.10.11], we obtain
$$4I_2 = G(\infty) - G(0) = 1 - (2/3)^2 = 5/9,$$
with the result
$$c_0 = \frac{1}{6\sqrt{3}\pi} + \frac{5}{36} \approx 0.031 + 0.139 = 0.16952.$$
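A direct numerical evaluation of (9) (ours, using SciPy) reproduces this constant:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import airy

Ai = lambda x: airy(x)[0]

def K1(x):
    """Diagonal of the limiting kernel in (9): int_0^inf Ai^2(x+z) dz
    plus (1/2) Ai(x) (1 - int_x^inf Ai(z) dz)."""
    I_sq, _ = quad(lambda z: Ai(x + z)**2, 0, np.inf)
    I_ai, _ = quad(Ai, x, np.inf)
    return I_sq + 0.5 * Ai(x) * (1 - I_ai)

c0, _ = quad(K1, 0, np.inf)
print(c0, 1 / (6 * np.sqrt(3) * np.pi) + 5 / 36)  # both approximately 0.1695
```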
4 Application to covariance estimation
We indicate how Theorem 2 is applied to covariance estimation in the spiked model studied in Donoho et al. [2017]. Consider a sequence of statistical problems indexed by dimension $p$ and sample size $n$. In the $n$th problem $\check{X} \sim N_p(0, \Sigma)$ where $p = p_n$ satisfies $p_n/n \to \gamma \in (0, 1]$ and the population covariance matrix $\Sigma = \Sigma_p$ has fixed ordered eigenvalues $\ell_1 \ge \ldots \ge \ell_r > 1$ for all $n$, and then $\ell_{r+1} = \ldots = \ell_{p_n} = 1$.
Suppose that the sample covariance matrix $\check{S} = \check{S}_{n,p_n}$ has eigenvalues $\check{\lambda}_1 \ge \ldots \ge \check{\lambda}_p$ and corresponding eigenvectors $v_1, \ldots, v_p$. Consider shrinkage estimators of the form
$$\hat{\Sigma}_\eta = \sum_{j=1}^{p} \eta(\check{\lambda}_j, c_p)\, v_j v_j^{\top}, \tag{15}$$
where $\eta(\lambda, c)$ is a continuous bulk shrinker, that is, satisfies the conditions (a) of Theorem 2. Without loss of generality, as explained in the reference cited, we may also assume that $\lambda \to \eta(\lambda, c)$ is non-decreasing. In the spiked model, the typical choice for $c_p$ in practice would be to set $c_p = p/n$, and we adopt this choice below.
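A minimal sketch (ours) of the estimator (15): eigendecompose the sample covariance, apply $\eta$ to each eigenvalue with $c_p = p/n$, and reassemble. The toy shrinker below merely rescales eigenvalues above the bulk edge; it satisfies condition (a) but is not one of the optimal rules of Donoho et al. [2017].

```python
import numpy as np

def shrink_covariance(S, n, eta):
    """Apply a bulk shrinker eta(lambda, c) to the sample covariance S, as in (15),
    with c_p = p/n. eta must equal 1 below the bulk edge (condition (a))."""
    p = S.shape[0]
    lam, V = np.linalg.eigh(S)
    shrunk = np.array([eta(l, p / n) for l in lam])
    return (V * shrunk) @ V.T  # sum_j eta(lambda_j, c_p) v_j v_j^T

# Toy continuous bulk shrinker: rescale eigenvalues above the edge (1 + sqrt(c))^2.
eta_rescale = lambda lam, c: max(1.0, lam / (1 + np.sqrt(c))**2)
```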
It is useful to analyse an “oracle” or “rank-aware” variant of (15) which takes advantage of the assumed structure of $\Sigma_p$, especially the fixed rank $r$ of $\Sigma_p - I$:
$$\hat{\Sigma}_{\eta,r} = \sum_{j=1}^{r} \eta(\check{\lambda}_j, c_p)\, v_j v_j^{\top} + \sum_{j=r+1}^{p} v_j v_j^{\top}.$$
The error in estimation of Σ using Σ̂ is measured by a loss function Lp (Σ, Σ̂). One seeks
conditions under which the losses Lp (Σ, Σ̂η ) and Lp (Σ, Σ̂η,r ) are asymptotically equivalent.
They consider a large class of loss functions which satisfy a Lipschitz condition which implies that, for some $q$,
$$|L_p(\Sigma, \hat{\Sigma}_\eta) - L_p(\Sigma, \hat{\Sigma}_{\eta,r})| \le C(\ell_1, \eta(\check{\lambda}_1)) \sum_{j=r+1}^{p} [\eta(\check{\lambda}_j, c_p) - 1]^q.$$
Suppose now that $\Pi : \mathbb{R}^p \to \mathbb{R}^{p-r}$ is a projection on the span of the $p - r$ unit eigenvectors of $\Sigma$. Let $X = \Pi\check{X}$ and let $\lambda_1 \ge \cdots \ge \lambda_{p-r}$ denote the eigenvalues of $n^{-1} X X^{\top}$. By the Cauchy interlacing theorem (e.g. [Bhatia, 1997, p. 59]), we have
$$\check{\lambda}_j \le \lambda_{j-r} \quad \text{for } r + 1 \le j \le p, \tag{16}$$
where the $(\lambda_i)_{i=1}^{p-r}$ are the eigenvalues of a white Wishart matrix $W_{p-r}(n, I)$. From the monotonicity of $\eta$,
$$\sum_{j=r+1}^{p} [\eta(\check{\lambda}_j, c_p) - 1]^q \le \sum_{i=1}^{p-r} [\eta(\lambda_i, c_p) - 1]^q. \tag{17}$$
Now apply part (a) of Theorem 2 with the identifications
$$N \leftarrow p - r, \qquad c_N \leftarrow c_p.$$
Clearly $\gamma_N = N/n \to \gamma$ and
$$c_N - \gamma_N = \frac{N + r}{n} - \frac{N}{n} = O(N^{-2/3}),$$
since $r$ is fixed. We conclude that the right side of (17) and hence $|L_p(\Sigma, \hat{\Sigma}_\eta) - L_p(\Sigma, \hat{\Sigma}_{\eta,r})|$ converge to 0 in $L^1$ and in probability.
Part (c) of Theorem 2 helps to give an example where the losses $L_p(\Sigma, \hat{\Sigma}_\eta)$ and $L_p(\Sigma, \hat{\Sigma}_{\eta,r})$ are not asymptotically equivalent. Indeed, let $L_p(\Sigma, \hat{\Sigma}_\eta) = \|\hat{\Sigma}_\eta^{-1} - \Sigma^{-1}\|$, with $\|\cdot\|$ denoting matrix operator norm. Here the optimal shrinkage rule $\eta = \eta^*(\lambda, c)$ is discontinuous at the upper bulk edge $\lambda(c) = (1 + \sqrt{c})^2$:
$$\eta^*(\lambda, c) = 1 \ \text{ for } \lambda \le \lambda(c), \qquad \eta^*(\lambda, c) \to 1 + \sqrt{c} \ \text{ for } \lambda \downarrow \lambda(c).$$
Proposition 3 of Donoho et al. [2017] shows that
$$\|\hat{\Sigma}_\eta^{-1} - \Sigma^{-1}\| - \|\hat{\Sigma}_{\eta,r}^{-1} - \Sigma^{-1}\| \overset{D}{\to} W, \tag{18}$$
where $W$ has a two point distribution $(1 - \pi)\delta_0 + \pi\delta_w$ with non-zero probability $\pi = \Pr(TW_1 > 0)$ at location $w = f(\ell_+) - f(\ell_r)$, where $\ell_+ = 1 + \sqrt{c}$ and the function
$$f(\ell) = \frac{c\,(\ell - 1)^{1/2}}{\ell(\ell - 1 + \gamma)}$$
is strictly decreasing for $\ell \ge \ell_+$.
Part (c) of Theorem 2, along with interlacing inequality (16), is used in the proof to
establish that Nn = #{i ≥ r + 1 : λin > λ+ (cn )}, the number of noise eigenvalues exiting
the bulk, is bounded in probability.
5 Final Remarks
It is apparent that the same methods will show that the value of c0 for the Gaussian
Orthogonal Ensemble will be the same as for the real Wishart (Laguerre Orthogonal Ensemble), and similarly that the value of c0 for the white complex Wishart (Laguerre Unitary
Ensemble) will agree with that for GUE.
Some natural questions are left for further work. First, the evaluation of c0 for values
of β other than 1 and 2, and secondly universality, i.e. that the limiting constants do not
require the assumption of Gaussian matrix entries.
Finally, this article appears in a special issue dedicated to the memory of Peter Hall.
Hall’s many contributions to high dimensional data have been reviewed by Samworth [2016].
However, it seems that Peter did not publish specifically on problems connected with the
application of random matrix theory to statistics — the exception that proves the rule of
his extraordinary breadth and depth of interests. Nevertheless the present author’s work
on this specific topic, as well as on many others, has been notably advanced by Peter’s
support — academic, collegial and financial – in promoting research visits to Australia and
contact with specialists there in random matrix theory, particularly at the University of
Melbourne, Peter’s academic home since 2006.
References
Z. D. Bai and Jack W. Silverstein. No eigenvalues outside the support of the limiting spectral distribution of large-dimensional sample covariance matrices. Annals of Probability,
26(1):316–345, 1998. ISSN 0091-1798.
Rajendra Bhatia. Matrix Analysis, volume 169 of Graduate Texts in Mathematics. SpringerVerlag, New York, 1997. ISBN 0-387-94846-5.
F. Bornemann. On the numerical evaluation of distributions in random matrix theory:
a review. Markov Processes and Related Fields, 16(4):803–866, 2010. ISSN 1024-2953.
arXiv:0904.1581.
M. Chiani. On the probability that all eigenvalues of Gaussian, Wishart, and double
Wishart random matrices lie within an interval. IEEE Transactions on Information
Theory, 63(7):4521–4531, 2017.
DLMF. NIST Digital Library of Mathematical Functions. http://dlmf.nist.gov/, Release
1.0.9 of 2014-08-29, 2014. Online companion to Olver et al. [2010].
David Donoho, Matan Gavish, and Iain M. Johnstone. Optimal shrinkage of eigenvalues
in the spiked covariance model. arxiv:1311.0851v3; in press, Annals of Statistics, 2017.
Stuart Geman. A limit theorem for the norm of random matrices. Annals of Probability,
8:252–261, 1980.
Iain M. Johnstone. On the distribution of the largest eigenvalue in principal components
analysis. Annals of Statistics, 29:295–327, 2001.
Satya N. Majumdar and Massimo Vergassola. Large deviations of the maximum eigenvalue
for Wishart and Gaussian random matrices. Physical Review Letters, 102:060601, Feb
2009.
Satya N. Majumdar and Pierpaolo Vivo. Number of relevant directions in principal component analysis and Wishart random matrices. Physical Review Letters, 108:200601, May
2012.
Ricardo Marino, Satya N. Majumdar, Grégory Schehr, and Pierpaolo Vivo. Phase transitions and edge scaling of number variance in Gaussian random matrices. Physical Review
Letters, 112:254101, Jun 2014.
F. W. J. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark, editors. NIST Handbook
of Mathematical Functions. Cambridge University Press, New York, NY, 2010. Print
companion to DLMF.
Leonid Pastur and Mariya Shcherbina. Eigenvalue Distribution of Large Random Matrices,
volume 171 of Mathematical Surveys and Monographs. American Mathematical Society,
Providence, RI, 2011. ISBN 978-0-8218-5285-9.
Richard J. Samworth. Peter Hall’s work on high-dimensional data and classification. Annals
of Statistics, 44(5):1888–1895, 2016. ISSN 0090-5364.
Craig A. Tracy and Harold Widom. Level-spacing distributions and the Airy kernel. Communications in Mathematical Physics, 159:151–174, 1994.
Craig A. Tracy and Harold Widom. On orthogonal and symplectic matrix ensembles.
Communications in Mathematical Physics, 177:727–754, 1996.
Craig A. Tracy and Harold Widom. Correlation functions, cluster functions, and spacing
distributions for random matrices. Journal of Statistical Physics, 92:809–835, 1998.
N Ullah. Number of energy levels outside Wigner’s semicircle. Journal of Physics A:
Mathematical and General, 16(18):L767, 1983.
H. Widom. On the relation between orthogonal, symplectic and unitary ensembles. Journal
of Statistical Physics, 94:347–363, 1999.
| 10 |
Event excitation for event-driven control and optimization of
multi-agent systems
Yasaman Khazaeni and Christos G. Cassandras
Division of Systems Engineering
and Center for Information and Systems Engineering
Boston University, MA 02446
arXiv:1604.00691v1 [math.OC] 3 Apr 2016
yas@bu.edu,cgc@bu.edu
Abstract— We consider event-driven methods in a general
framework for the control and optimization of multi-agent
systems, viewing them as stochastic hybrid systems. Such
systems often have feasible realizations in which the events
needed to excite an on-line event-driven controller cannot occur,
rendering the use of such controllers ineffective. We show that
this commonly happens in environments which contain discrete
points of interest which the agents must visit. To address this
problem in event-driven gradient-based optimization problems,
we propose a new metric for the objective function which
creates a potential field guaranteeing that gradient values
are non-zero when no events are present and which results
in eventual event excitation. We apply this approach to the
class of cooperative multi-agent data collection problems using the event-driven Infinitesimal Perturbation Analysis (IPA)
methodology and include numerical examples illustrating its
effectiveness.
I. INTRODUCTION
The modeling and analysis of dynamic systems has historically been founded on the time-driven paradigm provided
by a theoretical framework based on differential (or difference) equations: we postulate the existence of an underlying
“clock” and with every “clock tick” a state update is performed which synchronizes all components of the system. As
systems have become increasingly networked, wireless, and
distributed, the universal value of this paradigm has come
to question, since it may not be feasible to guarantee the
synchronization of all components of a distributed system,
nor is it efficient to trigger actions with every time step when
such actions may be unnecessary. The event-driven paradigm
offers an alternative to the modeling, control, communication, and optimization of dynamic systems. The main idea
in event-driven methods is that actions affecting the system
state need not be taken at each clock tick. Instead, one can
identify appropriate events that trigger control actions. This
approach includes the traditional time-driven view if a clocktick is considered a system “event”. Defining the right events
is a crucial modeling step and has to be carried out with a
good understanding of the system dynamics.
The authors' work is supported in part by NSF under grants CNS-1239021, ECCS-1509084, and IIP-1430145, by AFOSR under grant FA9550-15-1-0471, by ONR under grant N00014-09-1-1051, and by the Cyprus Research Promotion Foundation under Grant New Infrastructure Project/Strategic/0308/26.
The importance of event-driven behavior in dynamic systems was recognized in the development of Discrete Event Systems (DES) and later Hybrid Systems (HS) [1]. More
recently there have been significant advances in applying
event-driven methods (also referred to as “event-based” and
“event-triggered”) to classical feedback control systems; e.g.,
see [2], [3], [4], as well as [5] and [6] and references
therein. Event-driven approaches are also attractive in receding horizon control, where it is computationally inefficient
to re-evaluate an optimal control value over small time
increments as opposed to event occurrences defining appropriate planning horizons for the controller (e.g., see [7]).
In distributed networked systems, event-driven mechanisms
have the advantage of significantly reducing communication
among networked components which cooperate to optimize
a given objective. Maintaining such cooperation normally
requires frequent communication among them; it was shown
in [8] that we can limit ourselves to event-driven communication and still achieve optimization objectives while
drastically reducing communication costs (hence, prolonging
the lifetime of a wireless network), even when delays are
present (as long as they are bounded).
Clearly, the premise of these methods is that the events
involved are observable so as to “excite” the underlying
event-driven controller. However, it is not always obvious
that these events actually take place under every feasible
control: it is possible that under some control no such events
are excited, in which case the controller may be useless.
In such cases, one can resort to artificial “timeout events”
so as to eventually take actions, but this is obviously inefficient. Moreover, in event-driven optimization mechanisms
this problem results in very slow convergence to an optimum
or in an algorithm failing to generate any improvement in the
decision variables being updated.
In this work, we address this issue of event excitation in
the context of multi-agent systems. In this case, the events
required are often defined by an agent “visiting” a region
or a single point in a mission space S ⊂ R2 . Clearly, it is
possible that such events never occur for a large number
of feasible agent trajectories. This is a serious problem
in trajectory planning and optimization tasks which are
common in multi-agent systems seeking to optimize different
objectives associated with these tasks, including coverage,
persistent monitoring or formation control [9], [10], [11],
[12], [13], [14], [15], [16]. At the heart of this problem is
the fact that objective functions for such tasks rely on a non-
zero reward (or cost) metric associated with a subset S + ⊂ S
of points, while all other points in S have a reward (or cost)
which is zero since they are not “points of interest” in the
mission space. We propose a novel metric which allows all
points in S to acquire generally non-zero reward (or cost),
thus ensuring that all events are ultimately excited. This leads
to a new method allowing us to apply event-based control
and optimization to a large class of multi-agent problems. We
will illustrate the use of this method by considering a general trajectory optimization problem in which Infinitesimal
Perturbation Analysis (IPA) [1] is used as an event-driven
gradient estimation method to seek optimal trajectories for
a class of multi-agent problems where the agents must
cooperatively visit a set of target points to collect associated
rewards (e.g., to collect data that are buffered at these points.)
This defines a family within the class of Traveling Salesman
Problems (TSPs) [17] for which most solutions are based on
techniques typically seeking a shortest path in the underlying
graph. These methods have several drawbacks: (i) they are
generally combinatorially complex, (ii) they treat agents as
particles (hence, not accounting for limitations in motion
dynamics which should not, for instance, allow an agent to
form a trajectory consisting of straight lines), and (iii) they
become computationally infeasible as on-line methods in the
presence of stochastic effects such as random target rewards
or failing agents. As an alternative we seek solutions in terms
of parameterized agent trajectories which can be adjusted on
line as a result of random effects and which are scalable,
hence computationally efficient, especially in problems with
large numbers of targets and/or agents. This approach was
successfully used in [18], [19].
In section II we present the general framework for multi-agent problems and address the event excitation issue. In
section III we overview the event-driven IPA methodology
and how it is applied to a general hybrid system optimization
problem. In section IV we introduce a data collection problem as an application of the general framework introduced
in section II and will show simulation results of applying the
new methodology to this example in section V.
II. E VENT-D RIVEN O PTIMIZATION IN M ULTI -AGENT
S YSTEMS
Multi-agent systems are commonly modeled as hybrid
systems with time-driven dynamics describing the motion
of the agents or the evolution of physical processes in a
given environment, while event-driven behavior characterizes
events that may occur randomly (e.g., an agent failure) or
in accordance to control policies (e.g., an agent stopping
to sense the environment or to change directions). In some
cases, the solution of a multi-agent dynamic optimization
problem is reduced to a policy that is naturally parametric. As
such, a multi-agent system can be studied with parameterized
controllers aiming to meet certain specifications or to optimize a given performance metric. Moreover, in cases where
such a dynamic optimization problem cannot be shown to be
reduced to a parametric policy, using such a policy is still
near-optimal or at least offers an alternative.
Fig. 1. Multi-agent system in a dynamic setting, blue areas are obstacles
In order to build a general framework for multi-agent
optimization problems, assuming S as the mission space,
we introduce the function R(w) : S → R as a “property” of
point w ∈ S. For instance, R(w) could be a weight that gives
relative importance to one point in S compared to another.
Setting R(w) > 0 for only a finite number of points implies
that we limit ourselves to a finite set of points of interest
while the rest of S has no significant value.
Assuming F to be the set of all feasible agent states, we
define P (w, s) : S × F → R to capture the cost/reward
resulting from how agents with state s ∈ F interact with
w ∈ S. For instance, in coverage problems if an “event”
occurs at w, then P (w, s) is the probability of agents jointly
detecting such events based on the relative distance of each
agent from w.
In general settings, the objective is to find the best state
vector s1 , · · · , sN so that N agents achieve a maximal
reward (minimal cost) from interacting with the mission
space S:
$$\min_{s \in F} J = \int_S P(w, s) R(w)\, dw \qquad (1)$$
This static problem can be extended to a dynamic version
where the agents determine optimal trajectories si (t), t ∈
[0, T ], rather than static states:
$$\min_{u(t) \in U} J = \int_0^T \int_S P(w, s(u(t))) R(w, t)\, dw\, dt \qquad (2)$$
subject to motion dynamics:
$$\dot{s}_j(t) = f_j(s_j, u_j, t), \quad j = 1, \cdots, N \qquad (3)$$
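As a minimal illustration (not taken from the paper) of how an objective of the form (1) is evaluated when R(w) > 0 at only finitely many points of interest, the sketch below reduces the integral to a weighted sum over the target points; the quadratic interaction term P(w, s) is a placeholder choice.

```python
import numpy as np

def objective_discrete_interest(agent_states, targets, weights):
    """Evaluate J in (1) when R(w) > 0 only at finitely many points of interest.

    The integral over S then reduces to a weighted sum over the target points.
    Placeholder interaction: P(w, s) = sum_j ||s_j - w||^2 (quadratic travel cost).
    """
    s = np.asarray(agent_states, dtype=float)      # shape (N, 2)
    J = 0.0
    for w, r in zip(np.asarray(targets, dtype=float), weights):
        P = np.sum(np.linalg.norm(s - w, axis=1) ** 2)
        J += P * r
    return J

if __name__ == "__main__":
    agents = [(2.0, 3.0), (7.0, 8.0)]
    targets = [(1.0, 1.0), (5.0, 5.0), (9.0, 2.0)]
    print(objective_discrete_interest(agents, targets, weights=[1.0, 2.0, 1.0]))
```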
In Fig. 1, such a dynamic multi-agent system is illustrated.
As an example, consensus problems are just a special case
of (1). Suppose that we consider a finite set of points w ∈ S
which coincide with the agents states s1 , ..., sN (which are
not necessarily their locations). Then we can set $P(w, s) = \|s_i - s_j\|^2$ and, therefore, replace the integral in (1) by a sum. In this case, R(w) = Ri is just the weight that an agent carries in the consensus algorithm. An optimum occurs when $\|s_i - s_j\|^2 = 0$ for all i, j, i.e., all agents “agree”
and consensus is reached. This is a special case because
of the simplicity in P (w, s) making the problem convex so
that a global optimum can be achieved, in contrast to most
problems we are interested in.
As for the formulation in (2), consider a trajectory planning problem where N mobile agents are tasked to visit
M stationary targets in the mission space S. Target behavior is described through state variables xi (t) which may
model reward functions, the amount of data present at i, or
other problem-dependent target properties. More formally, let
(Ω, F, P) be an appropriately defined probability space and
ω ∈ Ω a realization of the system where target dynamics are
subject to random effects:
ẋi (t) = gi (xi (t), ω)
(4)
gi(·) is such that xi(t) is monotonically increasing in t and resets to zero each time a target is completely emptied by an agent. In the context of (2), we assume the M targets
are located at points $w_i$, $i = 1, \cdots, M$, and define
$$R(w, t) = \begin{cases} R(x_i(t), w) & \text{if } w \in C(w_i) \\ 0 & \text{otherwise} \end{cases} \qquad (5)$$
to be the value of point w, where C(wi) is a compact 2-manifold in $\mathbb{R}^2$ containing wi which can be considered to be
a region defined by the sensing range of that target relative
to agents (e.g., a disk centered at wi ). Note that R(w, t) is
also a random variable defined on the same probability space
above. Given that only points w ∈ C(wi ) have value for the
agents, there is an infinite number of points $w \notin C(w_i)$ such
that R(w, t) = 0 provided the following condition holds:
Condition 1: If ∃i such that $w \in C(w_i)$ then $w \notin C(w_j)$ holds $\forall j \neq i$.
This condition is to ensure that two targets do not share
any point w in their respective sensing ranges. Also it ensures
that the set $\{C(w_i) \mid i = 1, \cdots, M\}$ does not create a
compact partitioning of the mission space and there exist
points w which do not belong to any of the C(wi ).
Viewed as a stochastic hybrid system, we may define different modes depending on the states of agents or targets and
events that cause transitions between these modes. Relative
to a target i, any agent has at least two modes: being at a
point w ∈ C(wi ), i.e., visiting this target or not visiting it.
Within each mode, agent j’s dynamics, dictated by (3), and
target i’s dynamics in (4) may vary. Accordingly, there are at least two types of events in such a system: (i) $\delta_{ij}^0$ events occur when agent j initiates a visit at target i, and (ii) $\delta_{ij}^+$ events occur when agent j ends a visit at target i. Additional
event types may be included depending on the specifics of
a problem, e.g., mode switches in the target dynamics or
agents encountering obstacles.
An example is shown in Fig. 2, where target sensing ranges
are shown with green circles and agent trajectories are shown
in dashed lines starting at a base shown by a red triangle.
In the blue trajectory, agent 1 moves along the trajectory
that passes through points A → B → C → D. It is easy
to see that when passing through points A and C we have $\delta_{i1}^0$ and $\delta_{i'1}^0$ events, while passing through B and D we have $\delta_{i1}^+$ and $\delta_{i'1}^+$ events. The red trajectory is an example
where none of the events is excited. Suppose we consider
an on-line trajectory adjustment process in which the agent
improves its trajectory based on its performance measured
through (5). In this case, R(w, t) = 0 over all t, as long
as the agent keeps using the red trajectory, i.e., no event
ever occurs. Therefore, if an event-driven approach is used
to control the trajectory adjustment process, no action is ever
triggered and the approach is ineffective. In contrast, in the
blue trajectory the controller can extract useful information
from every observed event; such information (e.g., a gradient
of J with respect to controllable parameters as described in
the next section) can be used to adjust the current trajectory so as to improve the objective function J in (1) or (2).
Fig. 2. Sample trajectories
Therefore, if we are to build an optimization framework
for this class of stochastic hybrid systems to allow the application of event-driven methods by calculating a performance
measure gradient, then a fundamental property required is the
occurrence of at least some events in a sample realization. In
particular, the IPA method [20] is based on a single sample
realization of the system over which events are observed
along with their occurrence times and associated system
states. Suppose that the trajectories can be controlled through
a set of parameters forming a vector θ. Then, IPA provides
an unbiased estimate of the gradient of a performance metric
J(θ) with respect to θ. This gradient is then used to improve
the trajectory and ultimately seek an optimal one when
appropriate conditions hold.
As in the example of Fig. 2, it is possible to encounter
trajectory realizations where no events occur in the system.
In the above example, this can easily happen if the trajectory
does not pass through any target. The existence of such
undesirable trajectories is the direct consequence of Condition 1. This lack of event excitation results in event-based
controllers being unsuitable.
New Metric: In order to overcome this issue we propose
a new definition for R(w, t) in (5) as follows:
$$R(w, t) = \sum_{i=1}^{M} h_i(x_i(t), d_i(w)) \qquad (6)$$
where w ∈ S, hi (·) is a function of the target’s state xi (t)
and di (w) = kwi −wk. Note that, if hi (·) is properly defined,
(6) yields R(w, t) > 0 at all points.
While the exact form of hi (·) depends on the problem, we
impose the condition that hi (·) is monotonically decreasing
in di (w). We can think of hi (·) as a value function associated
with point wi . Using the definition of R(w, t), this value
is spread out over all points w ∈ S rather than being
concentrated at the single point wi . This creates a continuous
potential field for the agents leading to a non-zero gradient of
the performance measure even when the trajectories do not
excite any events. This non-zero gradient will then induce
trajectory adjustments that naturally bring them toward ones
with observable events.
Finally, recalling the definition in (2), we also define:
$$P(w, s) = \sum_{j=1}^{N} \|s_j(t) - w\|^2 \qquad (7)$$
the total quadratic travel cost for agents to visit point w.
In Section IV, we will show how to apply R(w, t) and
P (w, s) defined as above in order to determine optimal agent
trajectories for a class of multi-agent problems of the form
(2). First, however, we review in the next section the event-driven IPA calculus which allows us to estimate performance
gradients with respect to controllable parameters.
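To make the proposed metric concrete, the sketch below (an illustration, not the paper's code) evaluates R(w, t) in (6) with one admissible choice of h_i, namely h_i = x_i(t)/max(||w - w_i||, r_i), which is monotonically decreasing in d_i(w), together with P(w, s) from (7); the product P·R is strictly positive everywhere, which is the point of the construction.

```python
import numpy as np

def R_field(w, target_pos, target_state, r_sense):
    """R(w, t) in (6) with the choice h_i = x_i(t) / max(||w - w_i||, r_i).

    This particular h_i is only one admissible choice (it is monotonically
    decreasing in d_i(w)); the paper introduces its own specific form later.
    """
    w = np.asarray(w, dtype=float)
    total = 0.0
    for wi, xi, ri in zip(target_pos, target_state, r_sense):
        d_plus = max(np.linalg.norm(w - np.asarray(wi, dtype=float)), ri)
        total += xi / d_plus
    return total

def P_travel(w, agent_pos):
    """P(w, s) in (7): total quadratic travel cost of all agents to point w."""
    w = np.asarray(w, dtype=float)
    return sum(np.linalg.norm(np.asarray(s, dtype=float) - w) ** 2 for s in agent_pos)

if __name__ == "__main__":
    targets = [(1.0, 1.0), (5.0, 5.0)]
    states = [2.0, 0.5]          # current target states x_i(t)
    ranges = [0.3, 0.3]
    agents = [(8.0, 8.0)]        # a trajectory point far from every target
    w = (4.0, 7.0)               # an arbitrary query point
    print(R_field(w, targets, states, ranges) * P_travel(w, agents))  # strictly positive
```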
III. EVENT-DRIVEN IPA CALCULUS
Let us fix a particular value of the parameter θ ∈ Θ and
study a resulting sample path of a general SHS. Over such a
sample path, let τk (θ), k = 1, 2, · · · denote the occurrence
times of the discrete events in increasing order, and define
τ0 (θ) = 0 for convenience. We will use the notation τk
instead of τk (θ) when no confusion arises. The continuous
state is also generally a function of θ, as well as of t, and is
thus denoted by x(θ, t). Over an interval [τk (θ), τk+1 (θ)),
the system is at some mode during which the time-driven
state satisfies $\dot{x} = f_k(x, \theta, t)$, in which x is any of the continuous state variables of the system and $\dot{x}$ denotes $\partial x / \partial t$.
Note that we suppress the dependence of fk on the inputs
u ∈ U and d ∈ D and stress instead its dependence on
the parameter θ which may generally affect either u or d
or both. The purpose of perturbation analysis is to study
how changes in θ influence the state x(θ, t) and the event
times τk (θ) and, ultimately, how they influence interesting
performance metrics that are generally expressed in terms of
these variables.
An event occurring at time τk+1 (θ) triggers a change
in the mode of the system, which may also result in new
dynamics represented by fk+1 . The event times τk (θ) play
an important role in defining the interactions between the
time-driven and event-driven dynamics of the system.
Following the framework in [20], consider a general
performance function J of the control parameter θ:
J(θ; x(θ, 0), T ) = E[L(θ; x(θ, 0), T )]
(8)
where L(θ; x(θ, 0), T ) is a sample function of interest evaluated in the interval [0, T ] with initial conditions x(θ, 0).
For simplicity, we write J(θ) and L(θ). Suppose that there
are K events, with occurrence times generally dependent on
θ, during the time interval [0, T] and define $\tau_0 = 0$ and $\tau_{K+1} = T$. Let $L_k : \mathbb{R}^n \times \Theta \times \mathbb{R}^+ \to \mathbb{R}$ be a function and
define L(θ) by
$$L(\theta) = \sum_{k=0}^{K} \int_{\tau_k}^{\tau_{k+1}} L_k(x, \theta, t)\, dt \qquad (9)$$
where we reiterate that x = x(θ, t) is a function of θ and
t. We also point out that the restriction of the definition of
J(θ) to a finite horizon T which is independent of θ is made
merely for the sake of simplicity. Returning to the stochastic
setting, the ultimate goal of the iterative process shown is to
maximize Eω [L(θ, ω)], where we use ω to emphasize dependence on a sample path ω of a SHS (clearly, this is reduced to
L(θ) in the deterministic case). Achieving such optimality is
possible under standard ergodicity conditions imposed on the
underlying stochastic processes, as well as the assumption
that a single global optimum exists; otherwise, the gradient-based approach is simply continuously attempting to improve
the observed performance L(θ, ω). Thus, we are interested
in estimating the gradient
$$\frac{dJ(\theta)}{d\theta} = \frac{dE_\omega[L(\theta, \omega)]}{d\theta} \qquad (10)$$
by evaluating $dL(\theta, \omega)/d\theta$ based on directly observed data. We obtain $\theta^*$ by optimizing J(θ) through an iterative scheme
of the form
θn+1 = θn − ηn Hn (θn ; x(θ, 0), T, ωn ), n = 0, 1, · · · (11)
where $\eta_n$ is a step size sequence and $H_n(\theta_n; x(\theta,0), T, \omega_n)$ is the estimate of $dJ(\theta)/d\theta$ at $\theta = \theta_n$. In using IPA, $H_n(\theta_n; x(\theta,0), T, \omega_n)$ is the sample derivative $dL(\theta,\omega)/d\theta$, which is an unbiased estimate of $dJ(\theta)/d\theta$ if the condition (dropping the symbol ω for simplicity)
$$E\left[\frac{dL(\theta)}{d\theta}\right] = \frac{dE[L(\theta)]}{d\theta} = \frac{dJ(\theta)}{d\theta} \qquad (12)$$
is satisfied, which turns out to be the case under mild
technical conditions. The conditions under which algorithms
of the form (11) converge are well-known (e.g., see [21]).
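A minimal sketch of the iterative scheme (11), with a toy quadratic objective and a noisy gradient oracle standing in for the IPA estimator H_n; it is included only to show the structure of the update and the diminishing step sizes, and is not the implementation used in the paper.

```python
import numpy as np

def optimize(theta0, grad_estimate, steps=500, eta0=0.5, seed=0):
    """Stochastic approximation update theta_{n+1} = theta_n - eta_n * H_n as in (11).

    grad_estimate(theta, rng) plays the role of the sample-path gradient H_n;
    eta_n = eta0 / (n + 1) is one standard diminishing step-size choice.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for n in range(steps):
        eta_n = eta0 / (n + 1)
        theta = theta - eta_n * grad_estimate(theta, rng)
    return theta

if __name__ == "__main__":
    # Toy stand-in for the IPA estimator: unbiased noisy gradient of ||theta - c||^2.
    c = np.array([1.0, -2.0])
    noisy_grad = lambda th, rng: 2.0 * (th - c) + rng.normal(scale=0.1, size=th.shape)
    print(optimize(np.zeros(2), noisy_grad))   # iterates converge toward c
```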
Moreover, in addition to being unbiased, it can be shown that
such gradient estimates are independent of the probability
laws of the stochastic processes involved and require minimal
information from the observed sample path. The process
through which IPA evaluates $dL(\theta)/d\theta$ is based on analyzing
how changes in θ influence the state x(θ, t) and the event
times τk (θ). In turn, this provides information on how L(θ)
is affected, because it is generally expressed in terms of these
variables. Given $\theta = [\theta_1, ..., \theta_l]^T$, we use the Jacobian matrix notation:
$$x'(\theta, t) = \frac{\partial x(\theta, t)}{\partial \theta}, \quad \tau_k' = \frac{\partial \tau_k(\theta)}{\partial \theta}, \quad k = 1, \cdots, K \qquad (13)$$
for all state and event time derivatives. For simplicity of
notation, we omit θ from the arguments of the functions
above unless it is essential to stress this dependence. It is
shown in [20] that $x'(t)$ satisfies:
$$\frac{dx'(t)}{dt} = \frac{\partial f_k(t)}{\partial x} x'(t) + \frac{\partial f_k(t)}{\partial \theta} \qquad (14)$$
for $t \in [\tau_k(\theta), \tau_{k+1}(\theta))$ with boundary condition
$$x'(\tau_k^+) = x'(\tau_k^-) + [f_{k-1}(\tau_k^-) - f_k(\tau_k^+)]\tau_k' \qquad (15)$$
for k = 0, · · · , K. We note that whereas x(t) is often continuous in t, $x'(t)$ may be discontinuous in t at the event times $\tau_k$; hence, the left and right limits above are generally different. If x(t) is not continuous in t at $t = \tau_k(\theta)$, the value of $x(\tau_k^+)$ is determined by the reset function $r(q, q', x, \nu, \delta)$ and
$$x'(\tau_k^+) = \frac{dr(q, q', x, \nu, \delta)}{d\theta} \qquad (16)$$
Furthermore, once the initial condition $x'(\tau_k^+)$ is given, the linearized state trajectory $x'(t)$ can be computed in the interval $t \in [\tau_k(\theta), \tau_{k+1}(\theta))$ by solving (14) to obtain
$$x'(t) = e^{\int_{\tau_k}^{t} \frac{\partial f_k(u)}{\partial x}\, du} \left[ \int_{\tau_k}^{t} e^{-\int_{\tau_k}^{v} \frac{\partial f_k(u)}{\partial x}\, du}\, \frac{\partial f_k(v)}{\partial \theta}\, dv + \xi_k \right] \qquad (17)$$
with the constant $\xi_k$ determined from $x'(\tau_k^+)$. In order to
complete the evaluation of $x'(\tau_k^+)$ we need to also determine $\tau_k'$. If the event at $\tau_k(\theta)$ is exogenous, $\tau_k' = 0$, and if the event at $\tau_k(\theta)$ is endogenous:
$$\tau_k' = -\left[\frac{\partial g_k}{\partial x} f_k(\tau_k^-)\right]^{-1} \left(\frac{\partial g_k}{\partial \theta} + \frac{\partial g_k}{\partial x} x'(\tau_k^-)\right) \qquad (18)$$
where $g_k(x, \theta) = 0$ and it is defined as long as $\frac{\partial g_k}{\partial x} f_k(\tau_k^+) \neq 0$ (details may be found in [20]).
The derivative evaluation process involves using the IPA calculus in order to evaluate the IPA derivative $dL/d\theta$. This is accomplished by taking derivatives in (9) with respect to θ:
$$\frac{dL(\theta)}{d\theta} = \sum_{k=0}^{K} \frac{d}{d\theta} \int_{\tau_k}^{\tau_{k+1}} L_k(x, \theta, t)\, dt \qquad (19)$$
Applying the Leibnitz rule, we obtain, for every $k = 0, \cdots, K$,
$$\frac{d}{d\theta} \int_{\tau_k}^{\tau_{k+1}} L_k(x, \theta, t)\, dt = \int_{\tau_k}^{\tau_{k+1}} \left[\frac{\partial L_k(x, \theta, t)}{\partial x} x'(t) + \frac{\partial L_k(x, \theta, t)}{\partial \theta}\right] dt + L_k(x(\tau_{k+1}), \theta, \tau_{k+1})\tau_{k+1}' - L_k(x(\tau_k), \theta, \tau_k)\tau_k' \qquad (20)$$
In summary the three equations (15), (17) and (18) form
the basis of the IPA calculus and allow us to calculate the
final derivative in (20). In the next section IPA is applied to
a data collection problem in a multi-agent system.
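The sketch below illustrates, for a scalar state and a single endogenous event, the bookkeeping prescribed by (14), (15) and (18): x'(t) is propagated between events with (14), and at an event defined by a guard g_k(x, θ) = 0 the event-time derivative follows from (18) and the jump from (15). The dynamics and guard are hypothetical and chosen only so the result can be checked against a finite difference.

```python
import numpy as np

def ipa_one_event(theta, dt=1e-4, T=2.0):
    """IPA derivative of x(T) w.r.t. theta for a hypothetical one-event system.

    Mode 0: xdot = theta (df/dx = 0, df/dtheta = 1) until the guard
    g(x, theta) = x - 1 = 0 fires; Mode 1: xdot = -x afterwards.
    Between events, x' follows (14); at the event, tau' is given by (18) and
    x'(tau+) by (15).
    """
    x, xp, t, mode = 0.0, 0.0, 0.0, 0
    while t < T:
        if mode == 0:
            f, dfdx, dfdth = theta, 0.0, 1.0
        else:
            f, dfdx, dfdth = -x, -1.0, 0.0
        # Euler step for x (time-driven dynamics) and x' (equation (14))
        x_new = x + dt * f
        xp = xp + dt * (dfdx * xp + dfdth)
        if mode == 0 and x_new >= 1.0:          # endogenous event: g = x - 1 = 0
            dgdx, dgdth = 1.0, 0.0
            tau_p = -(dgdth + dgdx * xp) / (dgdx * f)     # equation (18)
            f_before, f_after = theta, -1.0               # f_{k-1}(tau-), f_k(tau+) with x = 1
            xp = xp + (f_before - f_after) * tau_p        # equation (15)
            mode = 1
        x, t = x_new, t + dt
    return x, xp

if __name__ == "__main__":
    # Compare the IPA derivative with a finite-difference check.
    th = 0.8
    x1, dxdth = ipa_one_event(th)
    x2, _ = ipa_one_event(th + 1e-4)
    print(dxdth, (x2 - x1) / 1e-4)   # the two values should be close
```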
IV. THE DATA COLLECTION PROBLEM
We consider a class of multi-agent problems where the
agents must cooperatively visit a set of target points to collect
associated rewards (e.g., to collect data that are buffered at
these points.). The mission space is S ⊂ R2 . This class of
problems falls within the general formulation introduced in
(2). The state of the system is the position of agent j time
t, sj (t) = [sxj (t), syj (t)] and the state of the target i, xi (t).
The agent’s dynamics (3) follow a single integrator:
ṡxj (t) = uj (t) cos θj (t),
ṡyj (t) = uj (t) sin θj (t) (21)
where uj (t) is the scalar speed of the agent (normalized so
that 0 ≤ uj (t) ≤ 1) and θj (t) is the angle relative to the
positive direction, 0 ≤ θj (t) < 2π. Thus, we assume that
each agent controls its speed and heading.
We assume the state of the target xi (t) represents the
amount of data that is currently available at target i (this can
be modified to different state interpretations). The dynamics
of xi (t) in (4) for this problem are:
0
if xi (t) = 0 and σi (t) ≤ µij p(sj (t), wi )
ẋi (t) =
σi (t) − µij p(sj (t), wi )
otherwise
(22)
i.e., we model the data at the target as satisfying simple flow
dynamics with an exogenous (generally stochastic) inflow
σi (t) and a controllable rate with which an agent empties
the data queue given by µij p(sj (t), wi ). For brevity we set
p(sj (t), wi ) = pij (t) which is the normalized data collection
rate from target i by agent j and µij is a nominal rate
corresponding to target i and agent j.
Assuming M targets are located at wi ∈ S, i = 1, . . . , M,
and have a finite range of ri , then agent j can collect data
from wi only if $d_{ij}(t) = \|w_i - s_j(t)\| \le r_i$. We then assume that: (A1) $p_{ij}(t) \in [0,1]$ is monotonically non-increasing in the value of $d_{ij}(t) = \|w_i - s_j(t)\|$, and (A2)
it satisfies pij (t) = 0 if dij (t) > ri . Thus, pij (t) can
model communication power constraints which depend on
the distance between a data source and an agent equipped
with a receiver (similar to the model used in [22]) or sensing
range constraints if an agent collects data using on-board
sensors. For simplicity, we will also assume that: (A3) pij (t)
is continuous in $d_{ij}(t)$ and (A4) only one agent at a time is connected to a target i even if there are other agents l with $p_{il}(t) > 0$; this is not the only possible model, but we adopt it based on the premise that simultaneous downloading of packets from a common source creates problems of proper data reconstruction. This means that j in (22) is the index of the agent that is connected to target i at time t.
The dynamics of xi(t) in (22) results in two new event types added to what was defined earlier: (i) $\xi_i^0$ events occur when xi(t) reaches zero, and (ii) $\xi_i^+$ events occur when xi(t) leaves zero.
The performance measure is the total content of data left at targets at the end of a finite mission time T. Thus, we define J1(t) to be the following (recalling that {σi(t)} are random processes):
$$J_1(t) = \sum_{i=1}^{M} \alpha_i E[x_i(t)] \qquad (23)$$
where αi is a weight factor for target i. We can now
formulate a stochastic optimization problem P1 where the
control variables are the agent speeds and headings denoted
by the vectors u(t) = [u1 (t), . . . , uN (t)] and θ(t) =
[θ1 (t), . . . , θN (t)] respectively (omitting their dependence on
the full system state at t).
$$\mathbf{P1:} \quad \min_{u(t), \theta(t)} J(T) = \frac{1}{T} \int_0^T J_1(t)\, dt \qquad (24)$$
where 0 ≤ uj (t) ≤ 1, 0 ≤ θj (t) < 2π, and T is a given
finite mission time. This problem can be readily placed into
the general framework (2). In particular, the right hand side
of (24) is:
$$E\left[\frac{1}{T}\int_0^T \sum_i \int_{C(w_i)} \frac{\alpha_i}{\pi r_i^2}\, x_i(t)\, dw\, dt\right] = E\left[\frac{1}{T}\int_0^T \int_S \sum_i \frac{\alpha_i \mathbf{1}\{w \in C(w_i)\}}{\pi r_i^2}\, x_i(t)\, dw\, dt\right] \qquad (25)$$
This is now in the form of the general framework in (2) with
$$R(w, t) = \sum_i \frac{\alpha_i \mathbf{1}\{w \in C(w_i)\}}{\pi r_i^2}\, x_i(t) \qquad (26)$$
and
$$P(s_j(t), w) = 1 \qquad (27)$$
Recalling the definition in (5), only points within the sensing range of each target have non-zero values, while all other points have zero value, which is the case in (26) above. In
addition, (27) simply shows that there is no meaningful
dynamic interaction between an agent and the environment.
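As a rough simulation sketch, the following code integrates the target dynamics (22) along a fixed (not optimized) agent path and accumulates the cost (23)-(24); the form of p_ij(t), the circular path and all numerical values are placeholder assumptions, not the paper's.

```python
import numpy as np

def simulate_J(agent_path, targets, T=50.0, dt=0.01, sigma=0.5, mu=10.0,
               r_sense=0.5, alpha=1.0):
    """Integrate the target-state flow (22) along a prescribed agent path and
    accumulate J(T) = (1/T) * integral of sum_i alpha_i x_i(t) dt, as in (23)-(24).

    agent_path(t) -> (x, y) position of a single agent (deterministic here);
    p(s, w) is taken as the placeholder 1 - d/r inside the sensing range and 0
    outside, which is consistent with assumptions (A1)-(A3).
    """
    targets = np.asarray(targets, dtype=float)
    x = np.zeros(len(targets))          # target states x_i(t), start empty
    J, t = 0.0, 0.0
    while t < T:
        s = np.asarray(agent_path(t), dtype=float)
        for i, wi in enumerate(targets):
            d = np.linalg.norm(s - wi)
            p = max(0.0, 1.0 - d / r_sense)          # placeholder p_ij(t)
            rate = sigma - mu * p
            if x[i] <= 0.0 and rate < 0.0:
                x[i] = 0.0                            # stay at the empty boundary
            else:
                x[i] = max(0.0, x[i] + dt * rate)
        J += dt * alpha * np.sum(x)
        t += dt
    return J / T

if __name__ == "__main__":
    # A fixed circular path; in the paper the path itself is what gets optimized.
    path = lambda t: (5.0 + 3.0 * np.cos(0.2 * t), 5.0 + 3.0 * np.sin(0.2 * t))
    print(simulate_J(path, targets=[(2.0, 5.0), (8.0, 5.0), (5.0, 8.5)]))
```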
Problem P1 is a finite time optimal control problem. In
order to solve this, following previous work in [19] we
proceed with a standard Hamiltonian analysis leading to a
Two Point Boundary Value Problem (TPBVP) [23]. We omit
this, since the details are the same as the analysis in [19]. The
main result of the Hamiltonian analysis is that the optimal
speed is always the maximum value, i.e.,
$$u_j^*(t) = 1 \qquad (28)$$
Hence, we only need to calculate the optimal θj (t). This
TPBVP is computationally expensive and easily becomes
intractable when problem size grows. The ultimate solution
of the TPBVP is a set of agent trajectories that can be
put in a parametric form defined by a parameter vector
θ and then optimized over θ. If the parametric trajectory
family is broad enough, we can recover the true optimal
trajectories; otherwise, we can approximate them within
some acceptable accuracy. Moreover, adopting a parametric
family of trajectories and seeking an optimal one within it has
additional benefits: it allows trajectories to be periodic, often
a desirable property, and it allows one to restrict solutions to
trajectories with desired features that the true optimal may
not have, e.g., smoothness properties to achieve physically
feasible agent motion.
Parameterizing the trajectories and using gradient based
optimization methods, in light of the discussions from the
previous sections, enables us to make use of Infinitesimal
Perturbation Analysis (IPA) [20] to carry out the trajectory
optimization process. We represent each agent’s trajectory
through general parametric equations
$$s_j^x(t) = f_x(\theta_j, \rho_j(t)), \quad s_j^y(t) = f_y(\theta_j, \rho_j(t)) \qquad (29)$$
where the function ρj (t) controls the position of the agent
on its trajectory at time t and θj is a vector of parameters
controlling the shape and location of the trajectory. Let θ =
[θ1 , . . . , θN ]. We now revisit problem P1 in (24):
$$\min_{\theta \in \Theta} J(\theta, T) = \frac{1}{T} \int_0^T J_1(\theta, t)\, dt \qquad (30)$$
and will bring in the equations that were introduced in the previous section in order to calculate an estimate of $dJ(\theta)/d\theta$ as in (10). For this problem, due to the continuity of xi(t),
the last two terms in (20) vanish. From (23) we have:
$$\frac{d}{d\theta} \int_{\tau_k}^{\tau_{k+1}} \sum_{i=1}^{M} \alpha_i x_i(\theta, t)\, dt = \int_{\tau_k}^{\tau_{k+1}} \sum_{i=1}^{M} \alpha_i x_i'(\theta, t)\, dt \qquad (31)$$
In summary, the evaluation of (31) requires the state derivatives $x_i'(t)$ explicitly and $s_j'(t)$ implicitly (dropping the dependence on θ for brevity). The latter are easily obtained for any specific choice of f and g in (29). The former require a rather laborious use of (15), (17), (18), which reduces to a simple set of state derivative dynamics as shown next.
Proposition 1. After an event occurrence at $t = \tau_k$, the state derivatives $x_i'(\tau_k^+)$ with respect to the controllable parameter θ satisfy the following:
$$x_i'(\tau_k^+) = \begin{cases} 0 & \text{if } e(\tau_k) = \xi_i^0 \\ x_i'(\tau_k^-) - \mu_{il}\, p_{il}(\tau_k)\, \tau_k' & \text{if } e(\tau_k) = \delta_{ij}^+ \\ x_i'(\tau_k^-) & \text{otherwise} \end{cases}$$
where $l \neq j$ with $p_{il}(\tau_k) > 0$ if such l exists, and
$$\tau_k' = \frac{\partial d_{ij}(s_j)}{\partial s_j}\, s_j' \left(\frac{\partial d_{ij}(s_j)}{\partial s_j}\, \dot{s}_j(\tau_k)\right)^{-1}.$$
Proof: The proof is omitted due to space limitations,
but it is very similar to the proofs of Propositions 1-3 in
[24].
As is obvious from Proposition 1, the evaluation of $x_i'(t)$ is entirely dependent on the occurrence of events $\xi_i^0$ and $\delta_{ij}^+$ in a sample realization, i.e., $\xi_i^0$ and $\delta_{ij}^+$ cause jumps in this derivative which carry useful information. Otherwise, $x_i'(\tau_k^+) = x_i'(\tau_k^-)$ is in effect and these gradients remain unchanged. However, we can easily have realizations where no events occur in the system (specifically, events of type $\delta_{ij}^0$ and $\delta_{ij}^+$) if the trajectory of agents in the sample realization does not pass through any target. This lack of event excitation causes the algorithm in (11) to stall.
In the next section we overcome the problem of no event
excitation using the definitions in (6) and (7). We accomplish
this by adding a new metric to the objective function that
generates a non-zero sensitivity with respect to θ.
A. Event Excitation
Our goal here is to select a function hi (·) in (6) with the
property of “spreading” the value of xi (t) over all w ∈ S.
We begin by determining the convex hull produced by the
targets, since the trajectories need not go outside this convex
hull. Let T = {w1 , w2 , · · · , wM } be the set of all target
points. Then, the convex hull of these points is
X
M
X
C=
βi w i |
βi = 1, ∀i, βi ≥ 0
(32)
i=1
i
Given that C ⊂ S, we seek some R(w, t) that satisfies the
following property for constants ci > 0:
Z
M
X
R(w, t)dw =
ci xi (t)
(33)
C
i=1
so that R(w, t) can be viewed as a continuous density defined
for all points w ∈ C which results in a total value equivalent
to a weighted sum of the target states xi (t), i = 1, . . . , M . In
order to select an appropriate h(xi (t), di (w)) in (6), we first
define $d_i^+(w) = \max(\|w - w_i\|, r_i)$ where $r_i$ is the target’s sensing range. We then define:
$$R(w, t) = \sum_{i=1}^{M} \frac{\alpha_i x_i(t)}{d_i^+(w)} \qquad (34)$$
Here, we are spreading a target’s reward (numerator) over
all w so as to obtain the “total weighted reward density” at
w. Note that $d_i^+(w) = \max(\|w - w_i\|, r_i) > 0$ to ensure
that the target reward remains positive and fixed for points
w ∈ C(wi ). Moreover, following (7),
$$P(w, s(t)) = \sum_{j=1}^{N} \|s_j(t) - w\|^2 \qquad (35)$$
Using these definitions we introduce a new objective function metric which is added to the objective function in (24):
$$J_2(t) = E\left[\int_C P(w, s(t))\, R(w, t)\, dw\right] \qquad (36)$$
The expectation is a result of P (w, s(t)) and R(w, t) being
random variables defined on the same probability space as
xi (t).
Proposition 2. For R(w, t) in (34), there exist ci > 0,
i = 1, . . . , M , such that:
$$\int_C R(w, t)\, dw = \sum_{i=1}^{M} c_i x_i(t) \qquad (37)$$
Proof: We have
$$\int_C R(w, t)\, dw = \int_C \sum_{i=1}^{M} \frac{\alpha_i x_i(t)}{d_i^+(w)}\, dw = \sum_{i=1}^{M} \alpha_i \int_C \frac{x_i(t)}{d_i^+(w)}\, dw \qquad (38)$$
We now need to find the value of $\int_C \frac{x_i(t)}{d_i^+(w)}\, dw$ for each target i. To do this we first look at the case of one target in a 2D space and for now we assume C is just a disk with radius Λ around the target (black circle with radius Λ in Fig. 3). We can now calculate the above integral for this target using polar coordinates:
$$\int_C \frac{x_i(t)}{d_i^+(w)}\, dw = \int_0^{2\pi}\!\!\int_0^{\Lambda} \frac{x_i(t)}{\max(r_i, r)}\, dr\, d\theta = \int_0^{2\pi}\!\!\int_0^{r_i} \frac{x_i(t)}{r_i}\, dr\, d\theta + \int_0^{2\pi}\!\!\int_{r_i}^{\Lambda} \frac{x_i(t)}{r}\, dr\, d\theta = x_i(t)\, 2\pi\left(1 + \log\frac{\Lambda}{r_i}\right) \qquad (39)$$
In our case C is the convex hull of all targets. We will use the same idea to calculate $\int_C \frac{x_i(t)}{d_i^+(w)}\, dw$ for the actual convex hull. We do this for an interior target, i.e., a target inside the convex hull. Extending the same to targets on the edge is straightforward. Using the same polar coordinates, for each θ we define Λ(θ) to be the distance of the target to the edge of C in the direction of θ (C shown by a red polygon in Fig. 3).
Fig. 3. One Target R(w, t) Calculation
$$\int_C \frac{x_i(t)}{d_i^+(w)}\, dw = \int_0^{2\pi}\!\!\int_0^{\Lambda(\theta)} \frac{x_i(t)}{d_i^+(r, \theta)}\, dr\, d\theta = \int_0^{2\pi}\!\!\int_0^{r_i} \frac{x_i(t)}{r_i}\, dr\, d\theta + \int_0^{2\pi}\!\!\int_{r_i}^{\Lambda(\theta)} \frac{x_i(t)}{r}\, dr\, d\theta = x_i(t)\left(2\pi + \int_0^{2\pi} \log\frac{\Lambda(\theta)}{r_i}\, d\theta\right) \qquad (40)$$
The second part in (40) has to be calculated knowing Λ(θ), but since we assumed the target is inside the convex hull we know Λ(θ) ≥ ri. This means $\log(\Lambda(\theta)/r_i) \ge 0$ and the multiplier of xi(t) is a positive value. We can define ci in (37) as:
$$c_i = \alpha_i\left(2\pi + \int_0^{2\pi} \log\frac{\Lambda(\theta)}{r_i}\, d\theta\right) \qquad (41)$$
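The closed form in (39) (with the factor x_i(t) omitted) is easy to verify numerically; the sketch below compares a Riemann-sum approximation of the double integral with 2π(1 + log(Λ/r_i)) for arbitrary values of r_i and Λ. This check is an illustration, not part of the paper's computation.

```python
import numpy as np

def lhs_numeric(ri, lam, n_r=20000):
    """Riemann-sum value of the integral in (39): int_0^{2pi} int_0^Lambda dr dtheta / max(ri, r)."""
    r = np.linspace(0.0, lam, n_r, endpoint=False) + lam / (2 * n_r)   # midpoints
    inner = np.sum(1.0 / np.maximum(ri, r)) * (lam / n_r)
    return 2 * np.pi * inner          # the integrand does not depend on theta

def rhs_closed_form(ri, lam):
    """Closed form from (39): 2*pi*(1 + log(Lambda / ri))."""
    return 2 * np.pi * (1.0 + np.log(lam / ri))

if __name__ == "__main__":
    ri, lam = 0.2, 3.0
    print(lhs_numeric(ri, lam), rhs_closed_form(ri, lam))   # the two values agree closely
```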
The significance of J2 (t) is that it accounts for the
movement of agents through P (w, s(t)) and captures the
target state values through R(w, t). Introducing this term in
the objective function in the following creates a non-zero
gradient even if the agent trajectories are not passing through
any targets. We now combine the two metrics in (24) and
(36) and define problem P2:
$$\mathbf{P2:} \quad \min_{u(t), \theta(t)} J(T) = \frac{1}{T}\int_0^T \left[J_1(t) + J_2(t)\right] dt \qquad (42)$$
In this problem, the second term is responsible for adjusting
the trajectories towards the targets by creating a potential
field, while the first term is the original performance metric
which is responsible for adjusting the trajectories so as to
maximize the data collected once an agent is within a target’s
sensing range. It can be easily shown that the results in (28)
hold for problem P2 as well, through the same Hamiltonian
analysis presented in [19]. When sj (t) follows the parametric
functions in (29), the new metric simply becomes a function
of the parameter vector θ and we have:
$$\min_{\theta \in \Theta} J(\theta, T) = \frac{1}{T}\int_0^T \left[J_1(\theta, t) + J_2(\theta, t)\right] dt \qquad (43)$$
The new objective function’s derivative follows the same
procedure that was described previously. The first part’s
derivative can be calculated from (31). For the second part
we have:
$$\frac{d}{d\theta}\int_{\tau_k}^{\tau_{k+1}}\!\!\int_C P(w, \theta, t)\, R(w, \theta, t)\, dw\, dt = \int_{\tau_k}^{\tau_{k+1}}\!\!\int_C \left[\frac{dP(w, \theta, t)}{d\theta}\, R(w, \theta, t) + P(w, \theta, t)\, \frac{dR(w, \theta, t)}{d\theta}\right] dw\, dt \qquad (44)$$
In the previous section, we raised the problem of no events
being excited in a sample realization, in which case the total
derivative in (31) is zero and the algorithm in (11) stalls.
Now, looking at (44) we can see that if no events occur, the second part in the integration, which involves $dR(w,\theta,t)/d\theta$, will be zero, since $\sum_{i=1}^{M} x_i'(t) = 0$ at all t. However, the first part in the integral does not depend on the events, but
calculates the sensitivity of P (w, s(t)) in (35) with respect
to the parameter θ. Note that the dependence on θ comes
through the parametric description of s(t) through (29). This
term ensures that the algorithm in (11) does not stall and
adjusts trajectories so as to excite the desired events.
V. SIMULATION RESULTS
We provide some simulation results based on an elliptical parametric description for the trajectories in (29). The
elliptical trajectory formulation is:
$$s_j^x(t) = A_j + a_j \cos\rho_j(t)\cos\phi_j - b_j \sin\rho_j(t)\sin\phi_j, \quad s_j^y(t) = B_j + a_j \cos\rho_j(t)\sin\phi_j + b_j \sin\rho_j(t)\cos\phi_j \qquad (45)$$
Here, θj = [Aj , Bj , aj , bj , φj ] where Aj , Bj are the coordinates of the center, aj and bj are the major and minor axis
respectively while φj ∈ [0, π) is the ellipse orientation which
is defined as the angle between the x axis and the major axis
of the ellipse. The time-dependent parameter ρj (t) is the
eccentric anomaly of the ellipse. Since an agent is moving
with constant speed of 1 on this trajectory, based on (28),
we have $\dot{s}_j^x(t)^2 + \dot{s}_j^y(t)^2 = 1$, which gives
$$\dot{\rho}_j(t) = \left[\left(a_j \sin\rho_j(t)\cos\phi_j + b_j \cos\rho_j(t)\sin\phi_j\right)^2 + \left(a_j \sin\rho_j(t)\sin\phi_j - b_j \cos\rho_j(t)\cos\phi_j\right)^2\right]^{-\frac{1}{2}} \qquad (46)$$
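A short sketch of the parameterization (45)-(46): given θ_j = [A_j, B_j, a_j, b_j, φ_j], it advances the eccentric anomaly ρ_j(t) under the unit-speed condition (46) by forward Euler and returns the trajectory points; the step size and parameter values below are arbitrary.

```python
import numpy as np

def ellipse_position(rho, A, B, a, b, phi):
    """Agent position on the elliptical trajectory (45)."""
    sx = A + a * np.cos(rho) * np.cos(phi) - b * np.sin(rho) * np.sin(phi)
    sy = B + a * np.cos(rho) * np.sin(phi) + b * np.sin(rho) * np.cos(phi)
    return sx, sy

def rho_dot(rho, a, b, phi):
    """Unit-speed condition (46) for the eccentric anomaly."""
    u = a * np.sin(rho) * np.cos(phi) + b * np.cos(rho) * np.sin(phi)
    v = a * np.sin(rho) * np.sin(phi) - b * np.cos(rho) * np.cos(phi)
    return 1.0 / np.sqrt(u * u + v * v)

def trajectory(theta, T=50.0, dt=0.01, rho0=0.0):
    """Forward-Euler rollout of one agent trajectory for theta = [A, B, a, b, phi]."""
    A, B, a, b, phi = theta
    rho, points, t = rho0, [], 0.0
    while t < T:
        points.append(ellipse_position(rho, A, B, a, b, phi))
        rho += dt * rho_dot(rho, a, b, phi)
        t += dt
    return np.asarray(points)

if __name__ == "__main__":
    pts = trajectory([5.0, 5.0, 3.0, 1.5, np.pi / 6])
    speeds = np.linalg.norm(np.diff(pts, axis=0), axis=1) / 0.01
    print(pts.shape, speeds.min(), speeds.max())   # speeds stay close to 1
```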
The first case we consider is a problem with one agent and
seven targets located on a circle, as shown in Fig. 4. We
consider a deterministic case with σi (t) = 0.5 for all i. The
other problem parameters are T = 50, µij = 100, ri = 0.2
and αi = 1. A target’s sensing range is denoted with solid
black circles with the target location at the center. The blue
polygon indicates the convex hull produced by the targets.
The direction of motion on a trajectory is shown with the
small arrow. Starting with an initial trajectory shown in light
blue, the on-line trajectory optimization process converges
to the trajectory passing through all targets in an efficient
manner (shown in dark solid blue).
Fig. 4. One agent and seven target scenario
Fig. 5. Two agent and seven targets scenario
In contrast, starting with this trajectory - which does not pass through any targets - problem P1 does not converge and the initial trajectory
remains unchanged. At the final trajectory, J1∗ = 0.0859
and J ∗ = 0.2128. Using the obvious shortest path solution,
the actual optimal value for J1 is 0.0739 that results from
moving on the edges of the convex hull (which allows for
shorter agent travel times).
In the second case, 7 targets are randomly distributed
and two agents are cooperatively collecting the data. The
problem parameters are σi = 0.5, µij = 10, ri = 0.5, αi =
1, T = 50. The initial trajectories for both agents are shown
in light green and blue respectively. We can see that both
agent trajectories converge so as to cover all targets, shown
in dark green and blue ellipses. At the final trajectories,
J1∗ = 0.1004 and J ∗ = 0.2979. Note that we may use these
trajectories to initialize the corresponding TPBVP, another
potential benefit of this approach. This is a much slower
process which ultimately converges to J1∗ = 0.0991 and
J ∗ = 0.2776.
VI. CONCLUSIONS
We have addressed the issue of event excitation in a
class of multi-agent systems with discrete points of interest.
We proposed a new metric for such systems that spreads
the point-wise values throughout the mission space and
generates a potential field. This metric allows us to use event-driven trajectory optimization for multi-agent systems. The methodology is applied to a class of data collection problems using the event-based IPA calculus to estimate the objective function gradient.
REFERENCES
[1] C. G. Cassandras and S. Lafortune, Introduction to Discrete Event
Systems. Secaucus, NJ, USA: Springer-Verlag New York, Inc., 2006.
[2] W. Heemels, J. Sandee, and P. P. J. van den Bosch, “Analysis of event-driven controllers for linear systems,” International Journal of Control,
vol. 81, no. 4, pp. 571–590, 2008.
[3] A. Anta and P. Tabuada, “To sample or not to sample: Self-triggered
control for nonlinear systems,” Automatic Control, IEEE Trans. on,
vol. 55, pp. 2030–2042, Sept 2010.
[4] S. Trimpe and R. D’Andrea, “Event-based state estimation with
variance-based triggering,” Automatic Control, IEEE Trans. on,
vol. 59, no. 12, pp. 3266–3281, 2014.
[5] M. Miskowicz, Event-Based Control and Signal Processing. CRC
Press, 2015.
[6] C. G. Cassandras, “The event-driven paradigm for control, communication and optimization,” Journal of Control and Decision, vol. 1,
no. 1, pp. 3–17, 2014.
[7] Y. Khazaeni and C. G. Cassandras, “A new event-driven cooperative receding horizon controller for multi-agent systems in uncertain
environments,” In Proceedings of IEEE 53rd Annual Conference on
Decision and Control, pp. 2770–2775, Dec 2014.
[8] M. Zhong and C. G. Cassandras, “Asynchronous distributed optimization with event-driven communication,” Automatic Control, IEEE
Trans. on, vol. 55, no. 12, pp. 2735–2750, 2010.
[9] M. Schwager, D. Rus, and J.-J. Slotine, “Decentralized, adaptive
coverage control for networked robots,” The International Journal of
Robotics Research, vol. 28, no. 3, pp. 357–375, 2009.
[10] C. G. Cassandras, X. Lin, and X. Ding, “An optimal control approach
to the multi-agent persistent monitoring problem,” IEEE Trans. on Aut.
Cont., vol. 58, pp. 947–961, April 2013.
[11] M. Cao, A. Morse, C. Yu, B. Anderson, and S. Dasgupta, “Maintaining
a directed, triangular formation of mobile autonomous agents,” Communications in Information and Systems, vol. 11, no. 1, p. 1, 2011.
[12] K.-K. Oh and H.-S. Ahn, “Formation control and network localization
via orientation alignment,” IEEE Trans. on Automatic Control, vol. 59,
pp. 540–545, Feb 2014.
[13] H. Yamaguchi and T. Arai, “Distributed and autonomous control
method for generating shape of multiple mobile robot group,” in Proc.
of the IEEE International Conf. on Intelligent Robots and Systems,
vol. 2, pp. 800–807 vol.2, Sep 1994.
[14] J. Desai, V. Kumar, and J. Ostrowski, “Control of changes in formation
for a team of mobile robots,” in Proc. of the IEEE International Conf.
on Robotics and Automation, vol. 2, pp. 1556–1561, 1999.
[15] M. Ji and M. B. Egerstedt, “Distributed coordination control of
multi-agent systems while preserving connectedness.,” IEEE Trans.
on Robotics, vol. 23, no. 4, pp. 693–703, 2007.
[16] J. Wang and M. Xin, “Integrated optimal formation control of multiple
unmanned aerial vehicles,” IEEE Trans. on Control Systems Technology, vol. 21, pp. 1731–1744, Sept 2013.
[17] D. L. Applegate, R. E. Bixby, V. Chvatal, and W. J. Cook, The traveling salesman problem: a computational study. Princeton University
Press, 2011.
[18] X. Lin and C. G. Cassandras, “An optimal control approach to the
multi-agent persistent monitoring problem in two-dimensional spaces,”
IEEE Trans. on Automatic Control, vol. 60, pp. 1659–1664, June 2015.
[19] Y. Khazaeni and C. G. Cassandras, “An optimal control approach for
the data harvesting problem,” in 54th IEEE Conf. on Decision and
Cont., pp. 5136–5141, 2015.
[20] C. G. Cassandras, Y. Wardi, C. G. Panayiotou, and C. Yao, “Perturbation analysis and optimization of stochastic hybrid systems,” European
Journal of Cont., vol. 16, no. 6, pp. 642 – 661, 2010.
[21] H. Kushner and G. Yin, Stochastic Approximation and Recursive
Algorithms and Applications. Springer, 2003.
[22] J. Le Ny, M. A. Dahleh, E. Feron, and E. Frazzoli, “Continuous path
planning for a data harvesting mobile server,” Proc. of the IEEE Conf.
on Decision and Cont., pp. 1489–1494, 2008.
[23] A. E. Bryson and Y. C. Ho, Applied optimal control: optimization,
estimation and control. CRC Press, 1975.
[24] Y. Khazaeni and C. G. Cassandras, “An optimal control approach for
the data harvesting problem,” arXiv:1503.06133.
| 3 |
arXiv:1610.03625v2 [] 8 Apr 2017
THE FSZ PROPERTIES OF SPORADIC SIMPLE GROUPS
MARC KEILBERG
Abstract. We investigate a possible connection between the F SZ properties
of a group and its Sylow subgroups. We show that the simple groups G2 (5)
and S6 (5), as well as all sporadic simple groups with order divisible by 56 are
not F SZ, and that neither are their Sylow 5-subgroups. The groups G2 (5)
and HN were previously established as non-F SZ by Peter Schauenburg; we
present alternative proofs. All other sporadic simple groups and their Sylow
subgroups are shown to be F SZ. We conclude by considering all perfect groups
available through GAP with order at most 106 , and show they are non-F SZ
if and only if their Sylow 5-subgroups are non-F SZ.
Introduction
The F SZ properties for groups, as introduced by Iovanov et al. [4], arise from
considerations of certain invariants of the representation categories of semisimple
Hopf algebras known as higher Frobenius-Schur indicators [5, 10, 11]. See [9] for
a detailed discussion of the many important uses and generalizations of these invariants. When applied to Drinfeld doubles of finite groups, these invariants are
described entirely in group theoretical terms, and are in particular invariants of
the group itself. The F SZ property is then concerned with whether or not these
invariants are always integers—which gives the Z in F SZ.
While the F SZ and non-F SZ group properties are well-behaved with respect to
direct products [4, Example 4.5], there is currently little reason to suspect a particularly strong connection to proper subgroups which are not direct factors. Indeed,
by [2, 4] the symmetric groups Sn are F SZ, while there exist non-F SZ groups of
order $5^6$. Therefore, Sn is F SZ but contains non-F SZ subgroups for all sufficiently
large n. On the other hand, non-F SZ groups can have every proper subquotient
be F SZ. Even the known connection to the one element centralizers—see the comment following Definition 1.1—is relatively weak. In this paper we will establish a
few simple improvements to this situation, and then proceed to establish a number
of examples of F SZ and non-F SZ groups that support a potential connection to
Sylow subgroups. We propose this connection as Conjecture 2.7.
We will make extensive use of GAP [3] and the AtlasRep[15] package. Most of
the calculations were designed to be completed with only 2GB of memory or (much)
less available—in particular, using only a 32-bit implementation of GAP—, though
in a few cases a larger workspace was necessary. In all cases the calculations can
2010 Mathematics Subject Classification. Primary: 20D08; Secondary: 20F99, 16T05, 18D10.
Key words and phrases. sporadic groups, simple groups, Monster group, Baby Monster group,
Harada-Norton group, Lyons group, projective symplectic group, higher Frobenius-Schur indicators, FSZ groups, Sylow subgroups.
This work is in part an outgrowth of an extended e-mail discussion between Geoff Mason,
Susan Montgomery, Peter Schauenburg, Miodrag Iovanov, and the author. The author thanks
everyone involved for their contributions, feedback, and encouragement.
be completed in workspaces with no more than 10GB of memory available. The
author ran the code on an Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz machine
with 12GB of memory. All statements about runtime are made with respect to this
computer. Most of the calculations dealing with a particular group were completed
in a matter of minutes or less, though calculations that involve checking large
numbers of groups can take several days or more across multiple processors.
The structure of the paper is as follows. We introduce the relevant notation,
definitions, and background information in Section 1. In Section 2 we present a
few simple results which offer some connections between the F SZ (or non-F SZ)
property of G and certain of its subgroups. This motivates the principle investigation of the rest of the paper: comparing the F SZ properties for certain groups and
their Sylow subgroups. In Section 3 we introduce the core functions we will need to
perform our calculations in GAP. We also show that all groups of order less than
2016 (except possibly those of order 1024) are F SZ. The remainder of the paper
will be dedicated to exhibiting a number of examples that support Conjecture 2.7.
In Section 4 we show that the simple groups G2 (5), HN , Ly, B, and M , as
well as their Sylow 5-subgroups, are all non-F SZ5 . In Section 5 we show that all
other sporadic simple groups (including the Tits group) and their Sylow subgroups
are F SZ. This is summarized in Theorem 5.4. The case of the simple projective
symplectic group S6 (5) is handled in Section 6, which establishes S6 (5) as the second
smallest non-F SZ simple group after G2 (5). It follows from the investigations
of Schauenburg [13] that HN is then the third smallest non-F SZ simple group.
S6 (5) was not susceptible to the methods of Schauenburg [13], and requires further
modifications to our own methods to complete in reasonable time. We finish our
examples in Section 7 by examining those perfect groups available through GAP,
and show that they are F SZ if and only if their Sylow subgroups are F SZ. Indeed,
they are non-F SZ if and only if their Sylow 5-subgroup is non-F SZ5 .
Of necessity, these results also establish that various centralizers and maximal
subgroups in the groups in question are also non-F SZ5 , which can be taken as
additional examples. If the reader is interested in F SZ properties for other simple
groups, we note that Schauenburg [13] has checked all simple groups of order at
most $|HN| = 273{,}030{,}912{,}000{,}000 = 2^{14} \cdot 3^6 \cdot 5^6 \cdot 7 \cdot 11 \cdot 19$, except for S6 (5) (which we
resolve here); and that several families of simple groups were established as F SZ
by Iovanov et al. [4].
We caution the reader that the constant recurrence of the number 5 and Sylow
5-subgroups of order $5^6$ in this paper is currently more of a computationally convenient coincidence than anything else. The reasons for this will be mentioned during
the course of the paper.
1. Background and Notation
Let N be the set of positive integers. The study of F SZ groups is connected to
the following sets.
Definition 1.1. Let G be a group, u, g ∈ G, and m ∈ N. Then we define
$G_m(u, g) = \{a \in G : a^m = (au^{-1})^m = g\}$.
Note that Gm (u, g) = ∅ if u 6∈ CG (g), and that in all cases Gm (u, g) ⊆ CG (g).
In particular, letting H = CG (g), then when u ∈ H we have
Gm (u, g) = Hm (u, g).
The following will then serve as our definition of the F SZm property. It’s equivalence to other definitions follows easily from [4, Corollary 3.2] and applications of
the Chinese remainder theorem.
Definition 1.2. A group G is F SZm if and only if for all g ∈ G, u ∈ CG (g), and
n ∈ N coprime to the order of g, we have
$|G_m(u, g)| = |G_m(u, g^n)|$.
We say a group is F SZ if it is F SZm for all m.
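For intuition, Definition 1.2 can be checked by brute force for groups small enough to enumerate. The sketch below is plain Python (not the GAP code used in this paper), with permutations stored as tuples; it tests the F SZ_m condition for, e.g., the symmetric group S_4.

```python
from itertools import permutations
from math import gcd

def compose(p, q):
    # group product p*q acting as (p*q)(i) = p(q(i)); permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def power(p, m):
    r = tuple(range(len(p)))
    for _ in range(m):
        r = compose(r, p)
    return r

def element_order(p):
    e = tuple(range(len(p)))
    r, k = p, 1
    while r != e:
        r, k = compose(r, p), k + 1
    return k

def G_m_set(G, m, u, g):
    """The set G_m(u, g) = {a in G : a^m = (a u^{-1})^m = g} of Definition 1.1."""
    u_inv = inverse(u)
    return [a for a in G if power(a, m) == g and power(compose(a, u_inv), m) == g]

def is_FSZ_m(G, m):
    """Brute-force test of the FSZ_m condition of Definition 1.2."""
    for g in G:
        og = element_order(g)
        coprime = [n for n in range(2, og + 1) if gcd(n, og) == 1]
        centralizer = [u for u in G if compose(u, g) == compose(g, u)]
        for u in centralizer:
            base = len(G_m_set(G, m, u, g))
            if any(len(G_m_set(G, m, u, power(g, n))) != base for n in coprime):
                return False
    return True

if __name__ == "__main__":
    S4 = list(permutations(range(4)))        # the symmetric group S_4 as a list of tuples
    print([is_FSZ_m(S4, m) for m in range(1, 5)])   # symmetric groups are FSZ, so all True
```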
The following result is useful for reducing the investigation of the F SZ properties
to the level of conjugacy classes or even rational classes.
Lemma 1.3. For any group G and u, g, x ∈ G we have a bijection $G_m(u, g) \to G_m(u^x, g^x)$ given by $a \mapsto a^x$.
If n ∈ N is coprime to |G| and r ∈ N is such that rn ≡ 1 mod |G|, we also have a bijection $G_m(u, g^n) \to G_m(u^r, g)$.
Proof. The first part is [5, Proposition 7.2] in slightly different notation. The second
part is [14, Corollary 5.5].
All expressions of the form Gm (u, g n ) will implicitly assume that n is coprime to
the order of g. We are free to replace n with an equivalent value which is coprime
to |G| whenever necessary. Moreover, when computing cardinalities |Gm (u, g)| it
suffices to compute the cardinalities |Hm (u, g)| for H = CG (g), instead. This latter
fact is very useful when attempting to work with groups of large order, or groups
with centralizers that are easy to compute in, especially when the group is suspected
of being non-F SZ.
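Continuing the brute-force sketch above, the reduction to the centralizer H = C_G(g) is a one-line change: since G_m(u, g) ⊆ C_G(g), the search can be restricted to H, which is typically much smaller than G. This again assumes the helper functions from the previous sketch.

```python
def G_m_via_centralizer(G, m, u, g):
    """Compute G_m(u, g) by searching only H = C_G(g), using G_m(u, g) = H_m(u, g).

    Reuses compose, inverse and power from the previous sketch; G is the full
    group as a list of permutation tuples, but only |C_G(g)| elements are scanned.
    """
    H = [h for h in G if compose(h, g) == compose(g, h)]   # the centralizer C_G(g)
    u_inv = inverse(u)
    return [a for a in H if power(a, m) == g and power(compose(a, u_inv), m) == g]
```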
Remark 1.4. There are stronger conditions called $FSZ_m^+$, the union of which yields the $FSZ^+$ condition, which are also introduced by Iovanov et al. [4]. The $FSZ_m^+$
condition is equivalent to the centralizer of every non-identity element with order
not in {1, 2, 3, 4, 6} being F SZm , which is in turn equivalent to the sets Gm (u, g) and
Gm (u, g n ) being isomorphic permutation modules for the two element centralizer
CG (u, g) [4, Theorem 3.8], with u, g, n satisfying the same constraints as for the
F SZm property. Here the action is by conjugation. We note that while the F SZ
property is concerned with certain invariants being in Z, the F SZ + property is not
concerned with these invariants being non-negative integers. When the invariants
are guaranteed to be non-negative is another area of research, and will also not be
considered here.
Example 1.5. The author has shown that quaternion groups and certain semidirect products defined from cyclic groups are always F SZ [7, 8]. This includes
the dihedral groups, semidihedral groups, and quasidihedral groups, among many
others.
Example 1.6. Iovanov et al. [4] showed that several groups and families of groups
are F SZ, including:
• All regular p-groups.
• $Z_p \wr_r Z_p$, the Sylow p-subgroup of $S_{p^2}$, which is an irregular F SZ p-group.
• P SL2 (q) for a prime power q.
• Any direct product of F SZ groups. Indeed, any direct product of F SZm
groups is also F SZm , as the cardinalities of the sets in Definition 1.1 split
over the direct product in an obvious fashion.
• The Mathieu groups M11 and M12 .
• Symmetric and alternating groups. See also [2].
Because of the first item, Susan Montgomery has proposed that we use the term
F S-regular instead of F SZ, and F S-irregular for non-F SZ. Similarly for F Sm regular and F Sm -irregular. These seem reasonable choices, but for this paper the
author will stick with the existing terminology.
Example 1.7. On the other hand, Iovanov et al. [4] also established that non-F SZ
groups exist by using GAP [3] to show that there are exactly 32 isomorphism classes
of groups of order $5^6$ which are not F SZ5 .
Example 1.8. The author has constructed examples of non-F SZpj p-groups for
all primes p > 3 and j ∈ N in [6]. For j = 1 these groups have order $p^{p+1}$, which is the minimum possible order for any non-F SZ p-group. Combined, [4, 6, 13] show, among other things, that the minimum order of non-F SZ 2-groups is at least $2^{10}$, and the minimum order for non-F SZ 3-groups is at least $3^8$. It is unknown if any
non-F SZ 2-groups or 3-groups exist, however.
Example 1.9. Schauenburg [13] provides several equivalent formulations of the
FSZ_m properties, and uses them to construct GAP [3] functions which are useful
for testing the property. Using these functions, it was shown that the Chevalley
group G_2(5) and the sporadic simple group HN are not FSZ_5. These groups
were attacked directly, using advanced computing resources for HN, often with
an eye on computing the values of the indicators explicitly. We will later present
an alternative way of using GAP to prove that these groups, and their Sylow
5-subgroups, are not FSZ_5. We will not attempt to compute the actual values of the
indicators, however.
One consequence of these examples is that the smallest known order for a
non-FSZ group is 5^6 = 15,625. The groups with order divisible by p^{p+1} for p > 5
that are readily available through GAP are small in number, problematically large,
and frequently do not have convenient representations. Matrix groups have so far
proven too memory intensive for what we need to do, so we need permutation
or polycyclic presentations for accessible calculations. For these reasons, all of
the examples we pursue in the following sections will hone in on the non-FSZ_5
property for groups with order divisible by 5^6, and which admit known or reasonably
computable permutation representations. In most of the examples, 5^6 is the largest
power of 5 dividing the order, with the Monster group, the projective symplectic
group S_6(5), and the perfect groups of order 12 · 5^7 being the exceptions.
2. Obtaining the non-FSZ property from certain subgroups
Our first elementary result offers a starting point for investigating non-FSZ_m
groups of minimal order.
Lemma 2.1. Let G be a group with minimal order in the class of non-FSZ_m
groups. Then |G_m(u, g)| ≠ |G_m(u, g^n)| for some (n, |G|) = 1 implies g ∈ Z(G).
Proof. If not, then C_G(g) is a smaller non-FSZ_m group, a contradiction.
The result applies to non-FSZ_m groups in a class that is suitably closed under
the taking of centralizers. For example, we have the following version for p-groups.
Corollary 2.2. Let P be a p-group with minimal order in the class of non-FSZ_{p^j}
p-groups. Then |P_{p^j}(u, g)| ≠ |P_{p^j}(u, g^n)| for some p ∤ n implies g ∈ Z(P).
Example 2.3. From the examples in the previous section, we know the minimum
possible order for a non-FSZ_p p-group for p > 3 is p^{p+1}. It remains unknown if
the examples of non-FSZ_{p^j} p-groups from [6] for j > 1 have minimal order among
non-FSZ_{p^j} p-groups. We also know that to check if a group of order 2^{10} or 3^8 is
FSZ it suffices to assume that g is central.
Next, we determine a condition for when the non-FSZ property for a normal
subgroup implies the non-FSZ property for the full group.
Lemma 2.4. Let G be a group and suppose H is a non-FSZ_m normal subgroup
with m coprime to [G : H]. Then G is non-FSZ_m.
Proof. Let u, g ∈ H and (n, |g|) = 1 be such that |H_m(u, g)| ≠ |H_m(u, g^n)|. By
the index assumption, for all x ∈ G we have x^m ∈ H ⇔ x ∈ H, so by definitions
G_m(u, g) = H_m(u, g) and G_m(u, g^n) = H_m(u, g^n), which gives the desired result.
Corollary 2.5. Let G be a finite group and suppose P is a normal non-FSZ_{p^j}
Sylow p-subgroup of G for some prime p. Then G is non-FSZ_{p^j}.
Corollary 2.6. Let G be a finite group and P a non-FSZ_{p^j} Sylow p-subgroup of
G. Then the normalizer N_G(P) is non-FSZ_{p^j}.
Sadly, we will find no actual use for Corollary 2.5 in the examples we consider
in this paper. However, this result, [13, Lemma 8.7], and the examples we collect
in the remainder of this paper do suggest the following conjectural relation for the
FSZ property.
Conjecture 2.7. A group is FSZ if and only if all of its Sylow subgroups are FSZ.
Some remarks on why this conjecture may involve some deep results to establish
affirmatively seem in order.
Consider a group G and let u, g ∈ G and n ∈ N with (n, |G|) = 1. Suppose that
g has order a power of p, for some prime p. Then
G_{p^j}(u, g) = ⋃_x ( G_{p^j}(u, g) ∩ P^x ),
where the union runs over all distinct conjugates P^x in C_G(g) of a fixed Sylow
p-subgroup P of C_G(g). Let P^x_{p^j}(u, g) = G_{p^j}(u, g) ∩ P^x. Then |G_{p^j}(u, g)| =
|G_{p^j}(u, g^n)| if and only if there is a bijection ⋃_x P^x_{p^j}(u, g) → ⋃_x P^x_{p^j}(u, g^n). In the
special case u ∈ P, if P was FSZ_{p^j} we would have a bijection P_{p^j}(u, g) → P_{p^j}(u, g^n),
but this does not obviously guarantee a bijection ⋃_x P^x_{p^j}(u, g) → ⋃_x P^x_{p^j}(u, g^n) for all
conjugates. Attempting to get a bijection ⋃_x P^x_{p^j}(u, g) → ⋃_x P^x_{p^j}(u, g^n) amounts, via
the Inclusion-Exclusion Principle, to controlling the intersections of any number of
conjugates and how many elements those intersections contribute to G_{p^j}(u, g) and
G_{p^j}(u, g^n). There is no easy or known way to predict the intersections of a collection
of Sylow p-subgroups for a completely arbitrary G, so any positive affirmation
of the conjecture will impose a certain constraint on these intersections.
Moreover, we have not considered the case of the sets G_m(u, g) where m has
more than one prime divisor, nor those cases where u, g do not have order a power of
a fixed prime, so a positive affirmation of the conjecture is also expected to show
that the FSZ_m properties are all derived from the FSZ_{p^j} properties for all prime
powers dividing m. On the other hand, a counterexample seems likely to involve
constructing a large group which exhibits a complex pattern of intersections in its
Sylow p-subgroups for some prime p, or otherwise exhibits the first example of a
group which is FSZ_{p^j} for all prime powers but is nevertheless not FSZ.
Example 2.8. All currently known non-FSZ groups are either p-groups (for which
the conjecture is trivial), are nilpotent (so are just direct products of their Sylow
subgroups), or come from perfect groups (though the relevant centralizers need not
be perfect). The examples of both FSZ and non-FSZ groups we establish here
will also all come from perfect groups and p-groups. In the process we obtain,
via the centralizers and maximal subgroups considered, an example of a solvable,
non-nilpotent, non-FSZ group; as well as an example of a non-FSZ group which
is neither perfect nor solvable. All of these examples, of course, conform to the
conjecture.
3. GAP functions and groups of small order
The current gold standard for general purpose testing of the FSZ properties in
GAP [3] is the FSZtest function of Schauenburg [13]. In certain specific situations,
the function FSInd from [4] can also be useful for showing a group is non-FSZ.
However, with most of the groups we will consider in this paper both of these
functions are impractical to apply directly. The principal obstruction for FSZtest
is that this function needs to compute both conjugacy classes and character tables
of centralizers, and this can be a memory intensive if not wholly inaccessible task.
For FSInd the primary obstruction, beyond its specialized usage case, is that it
must completely enumerate, store, and sort the entire group (or centralizer). This,
too, can quickly run into issues with memory consumption.
We therefore need alternatives for testing (the failure of) the FSZ properties
which can sidestep such memory consumption issues. For Section 7 we will also
desire functions which can help us detect and eliminate the more "obviously" FSZ
groups. We will further need to make various alterations to FSZtest to incorporate
these things, and to return a more useful value when the group is not FSZ.
The first function we need, FSZtestZ, is identical to FSZtest (and uses several of
the helper functions found in [13]), except that instead of calculating and iterating
over all rational classes of the group it iterates only over those of the center. It
needs only a single input, which is the group to be checked. If it finds that the
group is non-FSZ, rather than return false it returns the data that established
the non-FSZ property. Of particular importance are the values m and z. If the
group is not shown to be non-FSZ by this test, then it returns fail to indicate
that the test is typically inconclusive.
FSZtestZ := function(G)
  local CT, zz, z, cl, div, d, chi, m, b;
  cl := RationalClasses(Center(G));
  cl := Filtered(cl, c -> not Order(Representative(c)) in [1, 2, 3, 4, 6]);
  for zz in cl do
    z := Representative(zz);
    div := Filtered(DivisorsInt(Exponent(G)/Order(z)),
             m -> not Gcd(m, Order(z)) in [1, 2, 3, 4, 6]);
    if Length(div) < 1 then continue; fi;
    CT := OrdinaryCharacterTable(G);
    for chi in Irr(CT) do
      for m in div do
        if not IsRat(beta(CT, z, m, chi)) then
          # the element z is returned in the second position, matching the
          # later uses of the form g := FSZtestZ(P)[2]
          return [m, z, chi, CT];
        fi;
      od;
    od;
  od;
  # the test is inconclusive in general
  return fail;
end;
This function is primarily useful for testing groups with minimal order in a class
closed under centralizers, such as in Lemma 2.1 and Corollary 2.2, or for any group
with non-trivial center that is suspected of failing the FSZ property at a central
value.
We next desire a function which can quickly eliminate certain types of groups as
automatically being FSZ. For this, the following result on groups of small order is
helpful.
Theorem 3.1. Let G be a group with |G| < 2016 and |G| ≠ 1024. Then G is FSZ.
Proof. By Lemma 2.1 it suffices to run FSZtestZ over all groups in the SmallGroups
library of GAP. This library includes all groups with |G| < 2016, except those of
order 2^{10} = 1024. In practice, the author also used the function IMMtests introduced
below, but where the check on the size of the group is constrained initially
to 100 by [4, Corollary 5.5], and can be increased whenever desired to eliminate all
groups of orders already completely tested. This boils down to quickly eliminating
p-groups and groups with relatively small exponent. By using the closure of the
FSZ properties with respect to direct products, one need only consider a certain
subset of the orders in question rather than every single one in turn, so as to avoid
essentially double-checking groups. We note that the groups of order 1536 take the
longest to check. The entire process takes several days over multiple processors,
but is otherwise straightforward.
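For concreteness, a naive version of this sweep (our own sketch, ignoring the IMMtests and direct-product shortcuts described in the proof) could be organized as follows; order 1024 is skipped since those groups are not in the SmallGroups library.
# Sketch only: run the central test over the SmallGroups library.
for n in Filtered([2..2015], n -> n <> 1024) do
  for k in [1..NrSmallGroups(n)] do
    if FSZtestZ(SmallGroup(n, k)) <> fail then
      Print("possible non-FSZ group: ", [n, k], "\n");
    fi;
  od;
od;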
We now define the function IMMtests. This function implements most of the
more easily checked conditions found in [4] that guarantee the FSZ property, and
calls FSZtestZ when it encounters a suitable p-group. The function returns true
if the test conclusively establishes that the group is FSZ; the return value of
FSZtestZ if it conclusively determines the group is non-FSZ; and fail otherwise.
Note that whenever this function calls FSZtestZ that test is conclusive by
Corollary 2.2, so it must adjust a return value of fail to true.
IMMtests := function(G)
  local sz, b, l, p2, p3, po;
  if IsAbelian(G) then return true; fi;
  sz := Size(G);
  if (sz < 2016) and (not sz = 1024) then return true; fi;
  if IsPGroup(G) then
    # Regular p-groups are always FSZ.
    l := Collected(FactorsInt(sz))[1];
    if l[1] >= l[2] or Exponent(G) = l[1] then return true; fi;
    sz := Length(UpperCentralSeries(G));
    if l[1] = 2 then
      if l[2] < 10 or sz < 3 or Exponent(G) < 64 then
        return true;
      elif l[2] = 10 and sz >= 3 then
        b := FSZtestZ(G);
        if IsList(b) then return b; else return true; fi;
      fi;
    elif l[1] = 3 then
      if l[2] < 8 or sz < 4 or Exponent(G) < 27 then
        return true;
      elif l[2] = 8 and sz >= 4 then
        b := FSZtestZ(G);
        if IsList(b) then return b; else return true; fi;
      fi;
    elif sz < l[1] + 1 then
      return true;
    elif sz = l[1] + 1 and sz = l[2] then
      b := FSZtestZ(G);
      if IsList(b) then return b; else return true; fi;
    fi;
  else
    # check the exponent for non-p-groups
    l := FactorsInt(Exponent(G));
    p2 := Length(Positions(l, 2));
    p3 := Length(Positions(l, 3));
    po := Filtered(l, x -> x > 3);
    if ForAll(Collected(po), x -> x[2] < 2) and
        ((p2 < 4 and p3 < 4) or (p2 < 6 and p3 < 2)) then
      return true;
    fi;
  fi;
  # tests were inconclusive
  return fail;
end;
We then incorporate these changes into a modified version of FSZtest, which we
give the same name. Note that this function also uses the function beta and its
corresponding helper functions from [13]. It has the same inputs and outputs as
FSZtestZ, except that the test is definitive, and so returns true when the group is
FSZ.
FSZtest := function(G)
  local C, CT, zz, z, cl, div, d, chi, m, b;
  b := IMMtests(G);;
  if not b = fail then return b; fi;
  cl := RationalClasses(G);
  cl := Filtered(cl, c -> not Order(Representative(c)) in [1, 2, 3, 4, 6]);
  for zz in cl do
    z := Representative(zz);
    C := Centralizer(G, z);
    div := Filtered(DivisorsInt(Exponent(C)/Order(z)),
             m -> not Gcd(m, Order(z)) in [1, 2, 3, 4, 6]);
    if Length(div) < 1 then continue; fi;
    # Check for the easy cases
    b := IMMtests(C);
    if b = true then
      continue;
    elif IsList(b) then
      if RationalClass(C, z) = RationalClass(C, b[2]) then
        return b;
      fi;
    fi;
    CT := OrdinaryCharacterTable(C);
    for chi in Irr(CT) do
      for m in div do
        if not IsRat(beta(CT, z, m, chi)) then
          return [m, z, chi, CT];
        fi;
      od;
    od;
  od;
  return true;
end;
Our typical procedure will be as follows: given a group G, take its Sylow
5-subgroup P and find u, g ∈ P such that |P_5(u, g)| ≠ |P_5(u, g^n)| for 5 ∤ n, and
then show that |G_5(u, g)| ≠ |G_5(u, g^n)|. The second entry in the list returned by
FSZtest gives precisely the g value we need. But it does not provide the u value
directly, nor the n. As it turns out, we can always take n = 2 when o(g) = 5, but
for other orders this need not necessarily hold.
In order to acquire these values we introduce the function FSIndPt below, which
is a variation on FSInd [4]. This function has the same essential limitation that
FSInd does, in that it needs to completely enumerate, store, and sort the elements
of the group. This could in principle be avoided, at the cost of increased run-time.
However our main use for the function is to apply it to Sylow 5-subgroups which
have small enough order that this issue does not pop up.
The inputs are a group G, m ∈ N and g ∈ G. It is best if one in fact passes in
C_G(g) for G, but the function will compute the centralizer regardless. The function
looks for an element u ∈ C_G(g) and an integer j coprime to the order of g such that
|G_m(u, g)| ≠ |G_m(u, g^j)|. The output is the two element list [u,j] if such data
exists, otherwise it returns fail to indicate that the test is normally inconclusive.
Note that by Lemma 1.3 and centrality of g in C = C_G(g) we need only consider
the rational classes in C to find such a u.
FSIndPt := function(G, m, g)
  local GG, C, Cl, gucoeff, elG, Gm, alist,
        aulist, umlist, npos, j, n, u, pr;
  C := Centralizer(G, g);
  GG := EnumeratorSorted(C);;
  elG := Size(C);
  Gm := List(GG, x -> Position(GG, x^m));
  pr := PrimeResidues(Order(g));
  for Cl in RationalClasses(C) do
    u := Representative(Cl);
    npos := [];
    alist := [];
    aulist := [];
    umlist := [];
    gucoeff := [];
    umlist := List(GG, a -> Position(GG, (a*Inverse(u))^m));;
    # The following computes the cardinalities of G_m(u, g^n).
    for n in pr do
      npos := Position(GG, g^n);
      alist := Positions(Gm, npos);
      aulist := Positions(umlist, npos);
      gucoeff[n] := Size(Intersection(alist, aulist));
      # Check if we've found our u
      if not gucoeff[n] = gucoeff[1] then
        return [u, n];
      fi;
    od;
  od;
  # No u was found for this G, m, g
  return fail;
end;
Lastly, we introduce the function FSZSetCards, which is the most naive and
straightforward way of computing both |G_m(u, g)| and |G_m(u, g^n)|. The inputs are
a set C of group elements (normally this would be C_G(g), but it could be a conjugacy
class or some other subset or subgroup); group elements u, g; and integers m, n
such that g ≠ g^n. The output is a two element list, which counts the number of
elements of C in G_m(u, g) in the first entry and the number of elements of C in
G_m(u, g^n) in the second entry. It is left to the user to check that the inputs satisfy
whatever relations are needed, and to then properly interpret the output.
FSZSetCards := function(C, u, g, m, n)
  local contribs, apow, aupow, a;
  contribs := [0, 0];
  for a in C do
    apow := a^m;
    aupow := (a*Inverse(u))^m;
    if (apow = g and aupow = g) then
      contribs[1] := contribs[1] + 1;
    elif (apow = g^n and aupow = g^n) then
      contribs[2] := contribs[2] + 1;
    fi;
  od;
  return contribs;
end;
As long as C admits a reasonable iterator in GAP, this function can compute
these cardinalities with a very minimal consumption of memory. Any polycyclic or
permutation group satisfies this, as well as any conjugacy class therein. However,
for a matrix group GAP will attempt to convert to a permutation representation,
which is usually very costly.
The trade-off, as it often is, is in the speed of execution. For permutation groups
the run-time can be heavily impacted by the degree, such that it is almost always
worthwhile to apply SmallerDegreePermutationRepresentation whenever possible.
If the reader wishes to use this function on some group that hasn't been tested
before, the author would advise adding in code that would give you some ability
to gauge how far along the function is. By default there is nothing in the above
code, even if you interrupt the execution to check the local variables, to tell you if
the calculation is close to completion. Due to a variety of technical matters it is
difficult to precisely benchmark the function, but when checking a large group it
is advisable to acquire at least some sense of whether the calculation may require
substantial amounts of time.
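As one possible way to add such instrumentation, the following sketch (our own variant, not part of the original function set) prints a simple progress report; the reporting interval of one million elements is an arbitrary choice.
# Sketch only: FSZSetCards with a crude progress counter.
FSZSetCardsVerbose := function(C, u, g, m, n)
  local contribs, apow, aupow, a, count;
  contribs := [0, 0];
  count := 0;
  for a in C do
    count := count + 1;
    if count mod 1000000 = 0 then
      Print(count, " elements checked, counts so far: ", contribs, "\n");
    fi;
    apow := a^m;
    aupow := (a*Inverse(u))^m;
    if apow = g and aupow = g then
      contribs[1] := contribs[1] + 1;
    elif apow = g^n and aupow = g^n then
      contribs[2] := contribs[2] + 1;
    fi;
  od;
  return contribs;
end;;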
Remark 3.2. Should the reader opt to run our code to see the results for themselves,
they may occasionally find that the outputs of FSZSetCards occur in the opposite
order we list here. This is due to certain isomorphisms and presentations for groups
calculated in GAP not always being guaranteed to be identical every single time you
run the code. As a result, the values for u or g may sometimes be a coprime power
(often the inverse) of what they are in other executions of the code. Nevertheless,
there are no issues with the function proving the non-FSZ property thanks to
Lemma 1.3, and there is sufficient predictability to make the order of the output
the only variation.
While very naive, FSZSetCards will suffice for most of our purposes, with all
uses of it completing in an hour or less. However, in Section 6 we will find an
example where the expected run-time for this function is measured in weeks, and
for which FSZtest requires immense amounts of memory: Schauenburg [13] reports
that FSZtest for this group consumed 128 GB of memory without completing!
We therefore need a slightly less naive approach to achieve a more palatable
run-time in this case. We leave this to Section 6, but note to the reader that the
method that section uses can also be applied to all of the other groups for which
FSZSetCards suffices. The reason we bother to introduce and use FSZSetCards is
that the method of Section 6 relies on being able to compute conjugacy classes,
which can hit memory consumption issues that FSZSetCards will not encounter.
It is not our goal with these functions to find the most efficient, general-purpose
procedure. Instead we seek to highlight some of the ways in which computationally
problematic groups may be rendered tractable by altering the approach one takes,
and to show that the non-FSZ property of these groups can be demonstrated in a
(perhaps surprisingly) short amount of time and with very little memory consumption.
4. The non-FSZ sporadic simple groups
The goal for this section is to show that the Chevalley group G_2(5), and all
sporadic simple groups with order divisible by 5^6, as well as their Sylow 5-subgroups,
are non-FSZ_5. We begin with a discussion of the general idea for the approach.
Our first point of observation is that the only primes p such that p^{p+1} divides the
order of any of these groups have p ≤ 5. Indeed, a careful analysis of the non-FSZ
groups of order 5^6 found in [4] shows that several of them are non-split extensions
with a normal extra-special group of order 5^5, which can be denoted in AtlasRep
notation as 5^{1+4}.5. Consulting the known maximal subgroups for these groups we
can easily infer that the Sylow 5-subgroups of HN, G_2(5), B, and Ly have this same
form, and that the Monster has such a p-subgroup. Indeed, G_2(5) is a maximal
subgroup of HN, and B and Ly have maximal subgroups containing a copy of
HN, so these Sylow subgroups are all isomorphic. Furthermore, the Monster's
Sylow 5-subgroup has the form 5^{1+6}.5^2, a non-split extension of the elementary
abelian group of order 25 by an extra-special group of order 5^7. Given this, we
suspect that these Sylow 5-subgroups are all non-FSZ_5, and that this will cause
the groups themselves to be non-FSZ_5.
We can then exploit the fact that non-trivial p-groups all have non-trivial centers
to obtain centralizers in the parent group that contain a Sylow 5-subgroup. In the
case of G = HN or G = G_2(5), we can quickly find u, g ∈ P, with P a Sylow
5-subgroup of G, such that |P_5(u, g)| ≠ |P_5(u, g^2)|, and show that for H = C_G(g)
we have |H_5(u, g)| ≠ |H_5(u, g^2)|. Since necessarily |H_5(u, g)| = |G_5(u, g)| and
|H_5(u, g^2)| = |G_5(u, g^2)|, this will show that HN and G_2(5) are non-FSZ_5.
Unfortunately, it turns out that P is not normal in H in either case, so the cardinalities of
these sets in H must be checked directly, rather than simply applying Corollary 2.5.
The remaining groups require a little more work, for various reasons.
In the case of the Monster, there is a unique non-identity conjugacy class yielding
a centralizer with order divisible by 5^9. So we are free to pick any subgroup G
of M that contains a centralizer with this same order. Fortunately, not only is
such a (maximal) subgroup known, but Bray and Wilson [1] have also computed
a permutation representation for it. This is available in GAP via the AtlasRep
package. This makes all necessary calculations for the Monster accessible. The
Sylow 5-subgroup is fairly easily shown to be non-FSZ_5 directly. However, the
centralizer we get in this way has large order, and its Sylow 5-subgroup is not
normal, making it impractical to work with on a personal computer. However,
further consultation of character tables shows that the Monster group has a unique
conjugacy class of an element of order 10 whose centralizer has order divisible by
5^6. So we may again pick any convenient (maximal) subgroup with such a centralizer,
and it turns out the same maximal subgroup works. We construct the appropriate
element of order 10 by using suitable elements from Sylow subgroups of the larger
centralizer, and similarly to get the element u. Again it turns out that the Sylow
5-subgroup of this smaller subgroup is not normal, so we must compute the set
cardinalities over the entire centralizer in question. However, this centralizer is about
1/8000-th the size of the initial one, and we are subsequently able to calculate the
appropriate cardinalities in under an hour.
The Baby Monster can then be handled by using the fact that the Monster
contains the double cover of B as the centralizer of an involution to obtain the centralizer we need in B from a centralizer in M . The author thanks Robert Wilson
for reminding them of this fact. For the Lyons group, the idea is much the same
as for HN and G2 (5), with the additional complication that the AtlasRep package
does not currently contain any permutation representations for Ly. To resolve this,
we obtain a permutation representation for Ly, either computed directly in GAP
or downloaded [12]. This is then used to construct a suitable permutation representation of the maximal subgroup in question. Once this is done the calculations
proceed without difficulties.
These calculations all make extensive use of the functions given in Section 3.
4.1. Chevalley group G_2(5). We now show that G_2(5) and its Sylow 5-subgroups
are not FSZ_5. This was independently verified in [13]. Since G_2(5) is of relatively
small order, it can be attacked quickly and easily.
Theorem 4.1. The simple Chevalley group G_2(5) and its Sylow 5-subgroup are
non-FSZ_5.
Proof. The claims follow from running the following GAP code.
G := AtlasGroup("G2(5)");;
P := SylowSubgroup(G, 5);;
# The following shows P is not FSZ_5
g := FSZtestZ(P)[2];
# Find u
u := FSIndPt(P, 5, g)[1];;
C := Centralizer(G, g);;
# Check the cardinalities
FSZSetCards(C, u, g, 5, 2);
The output is [0,625], so it follows that G and P are both non-FSZ_5 as desired.
We note that P is not normal in C, and indeed C is a perfect group of order
375,000 = 2^3 · 3 · 5^6.
The call to FSZSetCards above runs in approximately 11 seconds, which is
approximately the amount of time necessary to run FSZtest on G_2(5) directly. In
this case, the use of FSZSetCards is not particularly efficient, as the groups in
question are of reasonably small sizes and permutation degree. Nevertheless, this
demonstrates the basic method we will employ for all subsequent groups.
4.2. The Harada-Norton group. For the group HN the idea proceeds similarly
as for G_2(5).
Theorem 4.2. The Harada-Norton simple group HN and its Sylow 5-subgroup are
not FSZ_5.
Proof. To establish the claims it suffices to run the following GAP code.
G := AtlasGroup("HN");;
P := SylowSubgroup(G, 5);;
# G, thus P, has very large degree.
# Polycyclic groups are easier to work with.
isoP := IsomorphismPcGroup(P);;
P := Image(isoP);;
# Find u, g for P
g := FSZtestZ(P)[2];
u := FSIndPt(P, 5, g)[1];
g := Image(InverseGeneralMapping(isoP), g);;
u := Image(InverseGeneralMapping(isoP), u);;
C := Centralizer(G, g);;
isoC := IsomorphismPcGroup(C);;
C := Image(isoC);;
FSZSetCards(C, Image(isoC, u), Image(isoC, g), 5, 2);
This code executes in approximately 42 minutes, with approximately 40 of that
spent finding P. The final output is [3125,0], so we conclude that both P and HN
are non-FSZ_5, as desired.
P is again not a normal subgroup of C, so we again must test the entire centralizer
rather than just P. We note that |C| = 2^5 · 5^6 = 500,000. Indeed, C is itself
non-FSZ_5 of necessity, and the fact that the call to IsomorphismPcGroup did not fail
means that C is solvable, and in particular not perfect and not a p-group.
4.3. The Monster group. We will now consider the Monster group M. The full
Monster group is famously difficult to compute in. But, as detailed in the beginning
of the section, by consulting character tables of M and its known maximal
subgroups, we can find a maximal subgroup which contains a suitable centralizer
(indeed, two suitable centralizers) and also admits a known permutation
representation [1].
Theorem 4.3. The Monster group M and its Sylow 5-subgroup are not FSZ_5.
Proof. The Sylow 5-subgroup of M has order 5^9. Consulting the character table of
M, we see that M has a unique conjugacy class yielding a proper centralizer with
order divisible by 5^9, and a unique conjugacy class of an element of order 10 whose
centralizer has order divisible by 5^6; moreover, the order of the latter centralizer is
precisely 12 million, and in particular is not divisible by 5^7. It suffices to consider
any maximal subgroups containing such centralizers. The maximal subgroup of
shape 5^{1+6}_+ : 2.J_2.4, which is the normalizer associated to a 5B class, is one such
choice.
We first show that the Sylow 5-subgroup of M is not FSZ_5.
G := AtlasGroup("5^(1+6):2.J2.4");;
P := SylowSubgroup(G, 5);;
isoP := IsomorphismPcGroup(P);;
P := Image(isoP);;
ex := FSZtestZ(P);
The proper centralizer with order divisible by 5^9 is still impractical to work with.
So we will use the data for P to construct the element of order 10 mentioned above.
zp := ex[2];;
zp := Image(InverseGeneralMapping(isoP), zp);;
C := Centralizer(G, zp);;
Q := SylowSubgroup(C, 2);;
zq := First(Center(Q), q -> Order(q) > 1 and
        Size(Centralizer(G, zp*q)) = 12000000);;
# This gives us the g and centralizer we want.
g := zp*zq;;
C := Centralizer(G, g);;
# Reducing the permutation degree will
# save a lot of computation time later.
isoC := SmallerDegreePermutationRepresentation(C);;
C := Image(isoC);;
g := Image(isoC, g);;
zp := Image(isoC, zp);;
zq := Image(isoC, zq);;
# Now proceed to construct a choice of u.
P := SylowSubgroup(C, 5);;
isoP := IsomorphismPcGroup(P);;
P := Image(isoP);;
ex := FSIndPt(P, 5, Image(isoP, zp));
up := Image(InverseGeneralMapping(isoP), ex[1]);;
# Define our choice of u.
# In this case, u has order 50.
u := up*zq;;
# Finally, we compute the cardinalities of the relevant sets.
FSZSetCards(C, u, g, 5, 7);
This final function yields [0,15000], which proves that M is not FSZ_5, as desired.
This final function call takes approximately 53 minutes to complete, while all
preceding operations can complete in about 5 minutes combined, though the
conversion of C to a lower degree may take more than this, depending. The lower
degree C has degree 18,125, but requires (slightly) more than 2 GB of memory to
acquire. This conversion can be skipped to keep the memory demands well under
2 GB, but the execution time for FSZSetCards will inflate to approximately a day
and a half.
Remark 4.4. In the first definition of C above, containing the full Sylow 5-subgroup
of M, we have |C| = 9.45 × 10^{10} = 2^8 · 3^3 · 5^9 · 7. For the second definition of C,
corresponding to the centralizer of an element of order 10, we have |C| = 1.2 × 10^7 =
2^8 · 3 · 5^6. The first centralizer is thus 7875 = 3^2 · 5^3 · 7 times larger than the second
one. Either one is many orders of magnitude smaller than |M| ≈ 8.1 × 10^{53}, but
the larger one was still too large to work with for practical purposes.
4.4. The Baby Monster. We can now consider the Baby Monster B.
Theorem 4.5. The Baby Monster B and its Sylow 5-subgroup are both non-FSZ_5.
Proof. The Baby Monster is well known to have a maximal subgroup of the form
HN.2, so it follows that B and HN have isomorphic Sylow 5-subgroups. By
Theorem 4.2 HN has a non-FSZ_5 Sylow 5-subgroup, so this immediately gives the
claim about the Sylow 5-subgroup of B.
From the character table of B we see that there is a unique non-identity conjugacy
class whose centralizer has order divisible by 5^6. This corresponds to an element
of order 5 from the 5B class, and the centralizer has order 6,000,000 = 2^7 · 3 · 5^6.
In the double cover 2.B of B, this centralizer is covered by the centralizer of an
element of order 10. This centralizer necessarily has order 12,000,000. Since M
contains 2.B as a maximal subgroup, and there is a unique centralizer of an element
of order 10 in M with order divisible by 12,000,000, these centralizers in 2.B and M
are isomorphic. We have already computed this centralizer in M in Theorem 4.3.
To obtain the centralizer in B, we need only quotient by an appropriate central
involution. In the notation of the proof of Theorem 4.3, this involution is precisely
zq.
GAP will automatically convert this quotient group D into a lower degree
representation, yielding a permutation representation of degree 3125 for the centralizer.
This will require as much as 8 GB of memory to complete. Moreover, the image
of zp from Theorem 4.3 in this quotient group yields the representative of the 5B
class we desire, denoted here by g. Using the image of up in the quotient for u,
we can then easily run FSZSetCards(C,u,g,5,2) to get a result of [15000,3125],
which shows that B is non-FSZ_5 as desired. This final call completes in about 4
minutes.
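For concreteness, the quotient construction just described might be carried out as in the following sketch (our own reconstruction, reusing C, zp, zq, and up from the proof of Theorem 4.3; the variable names hom and D are ours, not from the original code).
# Sketch only: pass to the quotient of the Theorem 4.3 centralizer by <zq>,
# which is central there since zq = g^5, and recount in the image.
hom := NaturalHomomorphismByNormalSubgroup(C, Subgroup(C, [zq]));;
D := Image(hom);;
g := Image(hom, zp);;  # a representative of the 5B class of B
u := Image(hom, up);;
FSZSetCards(D, u, g, 5, 2);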
Note that in M the final return values summed to 15,000, with one of the values
0, whereas in B they sum to 18,125 and neither is zero. This reflects how there
is no clear relationship between the FSZ properties of a group and its quotients,
even when the quotient is by a (cyclic) central subgroup. In particular, it does not
immediately follow that the quotient centralizer would yield the non-FSZ property
simply because the centralizer in M did, or vice versa.
Moreover, we also observe that the cardinalities computed in Theorem 4.2 imply
that for a Sylow 5-subgroup P of B we have P_5(u, g^2) = ∅, so the 3125 "extra"
elements obtained in B_5(u, g^2) come from non-trivial conjugates of P. This
underscores the expected difficulties in a potential proof (or disproof) of Conjecture 2.7.
4.5. The Lyons group. There is exactly one other sporadic group with order
divisible by 5^6 (or p^{p+1} for p > 3): the Lyons group Ly.
Theorem 4.6. The maximal subgroup of Ly of the form 5^{1+4} : 4.S_6 has a faithful
permutation representation on 3,125 points, given by the action on the cosets of
4.S_6. Moreover, this maximal subgroup, Ly, and their Sylow 5-subgroups are all
non-FSZ_5.
Proof. It is well-known that Ly contains a copy of G_2(5) as a maximal subgroup,
and that the order of Ly is not divisible by 5^7. Therefore Ly and G_2(5) have
isomorphic Sylow 5-subgroups, and by Theorem 4.1 this Sylow subgroup is not
FSZ_5.
Checking the character table for Ly as before, we find there is a unique non-identity
conjugacy class whose corresponding centralizer has order divisible by 5^6.
In particular, the order of this centralizer is 2,250,000 = 2^4 · 3^2 · 5^6, and it comes from
an element of order 5. So any maximal subgroup containing an element of order 5
whose centralizer has this order will suffice. The maximal subgroup 5^{1+4} : 4.S_6 is
the unique such choice.
The new difficulty here is that, by default, there are only matrix group
representations available through the AtlasRep package for Ly and 5^{1+4} : 4.S_6, which
are ill-suited for our purposes. However, faithful permutation representations for
Ly are known, and they can be constructed through GAP with sufficient memory
available provided one uses a well-chosen method. A detailed description of how to
acquire the permutation representation on 8,835,156 points, as well as downloads
for the generators (including MeatAxe versions courtesy of Thomas Breuer) can be
found on the web, courtesy of Pfeiffer [12].
Using this, we can then obtain a permutation representation for the maximal
subgroup 5^{1+4} : 4.S_6 on 8,835,156 points using the programs available on the online
ATLAS [16]. This in turn is fairly easily converted into a permutation representation
on a much smaller number of points, provided one has up to 8 GB of memory
available, via SmallerDegreePermutationRepresentation. The author obtained
a permutation representation on 3125 points, corresponding to the action on the
cosets of 4.S_6. The exact description of the generators is fairly long, so we will not
reproduce them here. The author is happy to provide them upon request. One can
also proceed in a fashion similar to some of the cases handled in [1] to find such a
permutation representation.
Once this smaller degree representation is obtained, it is then easy to apply
the same methods as before to show the desired claims about the FSZ_5 properties.
We can directly compute the Sylow 5-subgroup, then find u, g through
FSZtestZ and FSIndPt respectively, set C to be the centralizer of g, then run
FSZSetCards(C,u,g,5,2). This returns [5000,625], which gives the desired
non-FSZ_5 claims.
Indeed, FSZtest can be applied to (both) the centralizer and the maximal subgroup
once this permutation representation is obtained. This will complete quickly,
thanks to the relatively low orders and degrees involved. We also note that the
centralizer C so obtained will not have a normal Sylow 5-subgroup, and is a perfect
group. The maximal subgroup in question is neither perfect nor solvable, and does
not have a normal Sylow 5-subgroup.
5. The FSZ sporadic simple groups
We can now show that all other sporadic simple groups and their Sylow subgroups
are FSZ.
Example 5.1. Any group which is necessarily FSZ (indeed, FSZ^+) by [4,
Corollary 5.3] necessarily has all of its Sylow subgroups FSZ, and so satisfies the
conjecture. This implies that all of the following sporadic groups, as well as their
Sylow p-subgroups, are FSZ (indeed, FSZ^+).
• The Mathieu groups M_11, M_12, M_22, M_23, M_24.
• The Janko groups J_1, J_2, J_3, J_4.
• The Higman–Sims group HS.
• The McLaughlin group McL.
• The Held group He.
• The Rudvalis group Ru.
• The Suzuki group Suz.
• The O'Nan group O'N.
• The Conway group Co_3.
• The Thompson group Th.
• The Tits group ^2F_4(2)'.
Example 5.2. Continuing the last example, it follows that the following are the
only sporadic simple groups not immediately in compliance with the conjecture
thanks to [4, Corollary 5.3].
• The Conway groups Co_1, Co_2.
• The Fischer groups Fi_22, Fi_23, Fi_24'.
• The Monster M.
• The Baby Monster B.
• The Lyons group Ly.
• The Harada-Norton group HN.
The previous section showed that the last four groups were all non-FSZ_5 and have
non-FSZ_5 Sylow 5-subgroups, and so conform to the conjecture. By exponent
considerations the Sylow subgroups of the Conway and Fischer groups are all FSZ^+.
The function FSZtest can be used to quickly show that Co_1, Co_2, Fi_22, and Fi_23
are FSZ, and so conform to the conjecture.
This leaves just the largest Fischer group Fi_24'.
Theorem 5.3. The sporadic simple group Fi_24' and its Sylow subgroups are all
FSZ.
Proof. The exponent of Fi_24' can be calculated from its character table and shown
to be
24,516,732,240 = 2^4 · 3^3 · 5 · 7 · 11 · 13 · 17 · 23 · 29.
As previously remarked, this automatically implies that the Sylow subgroups are all
FSZ (indeed, FSZ^+). By [4, Corollary 5.3] it suffices to show that every centralizer
of an element with order not in {1, 2, 3, 4, 6} in Fi_24' that contains an element of
order 16 is FSZ. There is a unique conjugacy class in Fi_24' for an element with
order (divisible by) 16. The centralizer of such an element has order 32, and is
isomorphic to Z_16 × Z_2. So it suffices to consider the elements of order 8 in this
centralizer, and show that their centralizers (in Fi_24') are FSZ. Every such element
has a centralizer of order 1536 = 2^9 · 3. So by Theorem 3.1 the result follows.
The following is GAP code verifying these claims.
G := AtlasGroup("Fi24'");;
GT := CharacterTable("Fi24'");;
Positions(OrdersClassRepresentatives(GT) mod 16, 0);
exp := Lcm(OrdersClassRepresentatives(GT));
Collected(FactorsInt(exp));
SetExponent(G, exp);;
P := SylowSubgroup(G, 2);;
# There are many ways to get an element of order 16.
# Here's a very crude, if non-deterministic, one.
x := Random(P);;
while not Order(x) = 16 do x := Random(P); od;
C := Centralizer(G, x);;
cents := Filtered(C, y -> Order(y) = 8);;
cents := List(cents, y -> Centralizer(G, y));;
List(cents, Size);
The following then summarizes our results on sporadic simple groups.
Theorem 5.4. The following are equivalent for a sporadic simple group G.
(1) G is not FSZ.
(2) G is not FSZ_5.
(3) The order of G is divisible by 5^6.
(4) G has a non-FSZ Sylow subgroup.
(5) The Sylow 5-subgroup of G is not FSZ_5.
Proof. Combine the results of this section and the previous one.
6. The symplectic group S_6(5)
In [13] it was mentioned that the symplectic group S_6(5) was likely to be the
second smallest non-FSZ simple group, after G_2(5). Computer calculations there
ran into issues when checking a particular centralizer, as the character table needed
excessive amounts of memory to compute. Our methods so far also place this group
at the extreme end of what's reasonable. In principle the procedure and functions
we've introduced so far can decide that this group is non-FSZ in an estimated two
weeks of uninterrupted computations, and with nominal memory usage. However,
we can achieve a substantial improvement that completes the task in about 8 hours
(on two processes; 16 hours for a single process), while maintaining nominal memory
usage.
The simple yet critical observation comes from [13, Definition 3.3]. In particular,
if a ∈ G_m(u, g), then a^m = g implies that for all b ∈ class_{C_G(g)}(a) we have b^m = g.
So while FSZSetCards acts as naively as possible and iterates over all elements of
C = C_G(g), we in fact need to only iterate over the elements of those conjugacy
classes of C whose m-th power is g (or g^n). GAP can often compute the conjugacy
classes of a finite permutation or polycyclic group quickly and efficiently. So while
it is plausible that finding these conjugacy classes can be too memory intensive
for certain centralizers, there will nevertheless be centralizers for which all other
methods are too impractical for either time or memory reasons, but for which this
reduction to conjugacy classes makes both time and memory consumption a non-issue.
The otherwise problematic centralizer of S_6(5) is precisely such a case, as we
will now see.
Theorem 6.1. The projective symplectic group S_6(5) and its Sylow 5-subgroup are
both non-FSZ_5.
Proof. As usual, our first task is to show that the Sylow 5-subgroup is non-FSZ_5,
and then use the data obtained from that to attack S_6(5).
G := AtlasGroup("S6(5)");;
P := SylowSubgroup(G, 5);;
isoP := IsomorphismPcGroup(P);;
P := Image(isoP);;
# Show P is non-FSZ_5, and get the g we need via FSZtestZ
g := FSZtestZ(P)[2];
# Get the u we need via FSIndPt
u := FSIndPt(P, 5, g)[1];
One can of course store the results of FSZtestZ and FSIndPt directly to see the
complete data returned, and then extract the specific data needed.
We can then show that G = S_6(5) is itself non-FSZ_5 by computing G_5(u, g) and
G_5(u, g^2) with the following code.
# g and u currently live in the pc group P; pull them back to the
# permutation group G first, as in Section 4.2.
g := Image(InverseGeneralMapping(isoP), g);;
u := Image(InverseGeneralMapping(isoP), u);;
G := Centralizer(G, g);;
isoG := SmallerDegreePermutationRepresentation(G);;
G := Image(isoG);;
g := Image(isoG, g);;
u := Image(isoG, u);;
uinv := Inverse(u);;
# Now we compute the conjugacy classes of the centralizer.
cl := ConjugacyClasses(G);;
# We then need only consider those classes with a suitable 5-th power
cand1 := Filtered(cl, x -> Representative(x)^5 = g);;
cand2 := Filtered(cl, x -> Representative(x)^5 = g^2);;
# There is in fact only one conjugacy class in both cases.
Length(cand1);
Length(cand2);
cand1 := cand1[1];;
cand2 := cand2[1];;
# The following computes |G_5(u, g)|
Number(cand1, x -> (x*uinv)^5 = g);
# The following computes |G_5(u, g^2)|
Number(cand2, x -> (x*uinv)^5 = g^2);
This code shows that
|G_5(u, g)| = 1,875,000;
|G_5(u, g^2)| = 375,000.
Therefore S_6(5) is non-FSZ_5, as desired.
The calculation of |G_5(u, g)| takes approximately 8.1 hours, and the calculation
of |G_5(u, g^2)| takes approximately 7.45 hours. The remaining calculations are
done in significantly less combined time. We note that the calculations of these
two cardinalities can be done independently, allowing each one to be calculated
simultaneously on separate GAP processes.
We also note that the centralizer in S_6(5) under consideration in the above
is itself a perfect group; is a permutation group of degree 3125 and order 29.25
billion; and has a non-normal Sylow 5-subgroup. Moreover, it can be shown that
the g we found yields the only rational class of P at which P fails to be FSZ. One
consequence of this, combined with the character table of S_6(5), is that, unlike in
the case of the Monster group, we are unable to switch to any other centralizer
with a smaller Sylow 5-subgroup to demonstrate the non-FSZ_5 property.
Similarly as with the Baby Monster group, it is interesting to note that |P_5(u, g)| =
62,500 and |P_5(u, g^2)| = 0 for P, u, g as in the proof. These cardinalities can be
quickly computed exactly as they were for S_6(5), simply restricted to P, or using
the slower FSZSetCards, with the primary difference being that now there are
multiple conjugacy classes to check and sum over.
Before continuing on to the next section, where we consider small order perfect
groups available in GAP, we wish to note a curious dead-end, of sorts.
Lemma 6.2. Given u, g ∈ G with [u, g] = 1, let C = C_G(g), D = C_C(u), and
m ∈ N. Then a ∈ G_m(u, g) if and only if a^d ∈ G_m(u, g) for some/any d ∈ D.
Proof. This is noted by Iovanov et al. [4] when introducing the concept of an FSZ^+
group. It is an elementary consequence of the fact that D = C_G(u, g) centralizes
both g and u by definition.
So suppose we have calculated those conjugacy classes in C whose m-th power is
g. As in the above code, we can iterate over all elements of these conjugacy classes
in order to compute |G_m(u, g)|. However, the preceding lemma shows that we could
instead partition each such conjugacy class into orbits under the D action. The
practical upshot is that we need only consider a single element of each
orbit in order to compute |G_m(u, g)|.
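In principle, this orbit reduction could be expressed as in the following sketch (our own illustration, with D := Centralizer(G, u) for the centralizer G and the u, g, cand1 of Theorem 6.1); as the discussion below explains, enumerating the class with AsList is exactly what becomes infeasible here.
# Sketch only: count |G_5(u, g)| using one representative per D-orbit of cand1.
# Membership in G_5(u, g) is constant on D-orbits by Lemma 6.2, and every
# element of cand1 already has 5-th power g.
D := Centralizer(G, u);;
orbs := OrbitsDomain(D, AsList(cand1), OnPoints);;  # conjugation action
Sum(Filtered(orbs, o -> (o[1]*Inverse(u))^5 = g), Length);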
In the specific case of the preceding theorem, we can show that the single
conjugacy classes cand1 and cand2 both have precisely 234 million elements, and that
D is a non-abelian group of order 75,000, and is in fact the full centralizer of u in
S_6(5). Moreover the center of C is generated by g, and so has order 5. Thus in the
best-case scenario partitioning these conjugacy classes into D orbits can result in
orbits with |D/Z(C)| = 15,000 elements each. The cardinalities we computed can
also be observed to be multiples of 15,000. That would constitute a reduction of more
than four orders of magnitude on the total number of elements we would need to
check. While this is a best-case scenario, since D also has index 390,000 in C it
seems very plausible that such a partition would produce a substantial reduction in
the number of elements to be checked. So provided that calculating these orbits can
be done reasonably quickly, we would expect a significant reduction in run-time.
There is a practical problem, however: as far as the author can tell, there is no
efficient way for GAP to actually compute this partition. Doing so evidently requires
that GAP fully enumerate and store the conjugacy class in question. In our particular
case, a conjugacy class of 234 million elements in a permutation group of degree
3125 simply requires far too much memory, in excess of 1.5 terabytes. As such, while
the lemma sounds promising, it seems to be lacking in significant practical use for
computer calculations. It seems likely, in the author's mind, that any situation in
which it is useful could have been handled in reasonable time and memory by other
methods. Nevertheless, the author cannot rule out the idea as a useful tool.
7. Perfect groups of order less than 10^6
We now look for examples of additional non-FSZ perfect groups. The library
of perfect groups stored by GAP has most perfect groups of order less than 10^6,
with a few exceptions noted in the documentation. So we can iterate through the
available groups, of which there are 1097 at the time this paper was written. We
can use the function IMMtests from Section 3 to show that most of them are FSZ.
# Get all available sizes
Glist := Filtered(SizesPerfectGroups(),
           n -> NrPerfectLibraryGroups(n) > 0);;
# Get all available perfect groups
Glist := List(Glist,
           n -> List([1..NrPerfectLibraryGroups(n)],
                  k -> PerfectGroup(IsPermGroup, n, k)));;
Glist := Flat(Glist);;
# Remove the obviously FSZ ones
Flist := Filtered(Glist, G -> not IMMtests(G) = true);;
This gives a list of 63 perfect groups which are not immediately dismissed as being
FSZ.
Theorem 7.1. Of the 1097 perfect groups of order less than 10^6 available through
the GAP perfect groups library, exactly 7 of them are not FSZ, all of which are
extensions of A_5. All seven of them are non-FSZ_5. Four of them have order
375,000 = 2^3 · 3 · 5^6, and three of them have order 937,500 = 2^2 · 3 · 5^7. Their perfect
group ids in the library are:
[375000, 2], [375000, 8], [375000, 9], [375000, 11],
[937500, 3], [937500, 4], [937500, 5].
Proof. Continuing the preceding discussion, we can apply FSZtest to the 63 groups
in Flist to obtain the desired result. This calculation takes approximately two days
of total calculation time on the author's computer, but can be easily split across
multiple GAP instances. Most of the time is spent on the FSZ groups of orders
375,000 and 937,500.
On the other hand, we can also consider the Sylow subgroups of all 1097 available
perfect groups, and test them for the FSZ property.
Theorem 7.2. If G is one of the 1097 perfect groups of order less than 10^6 available
through the GAP perfect groups library, then the following are equivalent.
(1) G is not FSZ.
(2) G has a non-FSZ Sylow subgroup.
(3) G has a non-FSZ_5 Sylow 5-subgroup.
(4) G is not FSZ_5.
Proof. Most of the GAP calculations we need to perform now are quick, and the
problem is easily broken up into pieces, should it prove difficult to compute
everything at once. The most memory intensive case requires about 1.7 GB to test.
With significantly more memory available than this, the cases can simply be tested
by FSZtest en masse, which will establish the result relatively quickly, a matter of
hours. We sketch the details here and leave it to the interested reader to construct
the relevant code. Recall that it is generally worthwhile to convert p-groups into
polycyclic groups in GAP via IsomorphismPcGroup.
Let Glist be constructed in GAP as before. Running over each perfect group,
we can easily construct their Sylow subgroups. We can then use IMMtests from
Section 3 to eliminate most cases. There are 256 Sylow subgroups, each from a
distinct perfect group, for which IMMtests is inconclusive; and there are exactly 4
cases where IMMtests definitively shows the non-FSZ property, which are precisely
the Sylow 5-subgroups of each of the non-FSZ perfect groups of order 375,000.
These 4 Sylow subgroups are all non-FSZ_5. We can also apply FSZtestZ to the
Sylow 5-subgroups of the non-FSZ perfect groups of order 937,500 to conclude
that they are all non-FSZ_5. All other Sylow subgroups remaining that come from
a perfect group of order less than 937,500 can be shown to be FSZ by applying
FSZtest without difficulty. Of the three remaining Sylow subgroups, one has a
direct factor of Z_5, and the other factor is easily tested and shown to be FSZ,
whence this Sylow subgroup is FSZ. This leaves two other cases, which are the
Sylow 5-subgroups of the perfect groups with ids [937500,7] and [937500,8]. The
second of these is easily shown to be FSZ by FSZtest. The first can also be
tested by FSZtest, but this is the case that requires the most memory and time,
approximately 15 minutes and the indicated 1.7 GB. In this case as well the Sylow
subgroups are FSZ. This completes the proof.
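The sweep described in the proof might be organized as in the following sketch (our own outline, not code from the paper); the most expensive single case still takes roughly 15 minutes and 1.7 GB, as noted above.
# Sketch only: test the Sylow subgroups of every group in Glist,
# using IMMtests to dismiss the easy cases before calling FSZtest.
for G in Glist do
  for p in PrimeDivisors(Size(G)) do
    P := Image(IsomorphismPcGroup(SylowSubgroup(G, p)));
    b := IMMtests(P);
    if b = fail then
      Print([Size(G), p, FSZtest(P)], "\n");  # inconclusive: run the full test
    elif IsList(b) then
      Print([Size(G), p, "non-FSZ"], "\n");   # IMMtests found a witness
    fi;
  od;
od;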
References
[1] John N. Bray and Robert A. Wilson. Explicit representations of maximal subgroups of the Monster. Journal of Algebra, 300(2):834–857, 2006. doi:10.1016/j.jalgebra.2005.12.017.
[2] Pavel Etingof. On some properties of quantum doubles of finite groups. Journal of Algebra, 394:1–6, 2013. doi:10.1016/j.jalgebra.2013.07.004.
[3] GAP. GAP – Groups, Algorithms, and Programming, Version 4.8.4. http://www.gap-system.org, Jun 2016.
[4] M. Iovanov, G. Mason, and S. Montgomery. FSZ-groups and Frobenius-Schur indicators of quantum doubles. Math. Res. Lett., 21(4):1–23, 2014.
[5] Yevgenia Kashina, Yorck Sommerhäuser, and Yongchang Zhu. On higher Frobenius-Schur indicators. Mem. Amer. Math. Soc., 181(855):viii+65, 2006. doi:10.1090/memo/0855.
[6] M. Keilberg. Examples of non-FSZ p-groups for primes greater than three. ArXiv e-prints, September 2016. Under review.
[7] Marc Keilberg. Higher indicators for some groups and their doubles. J. Algebra Appl., 11(2):1250030, 38, 2012. doi:10.1142/S0219498811005543.
[8] Marc Keilberg. Higher indicators for the doubles of some totally orthogonal groups. Comm. Algebra, 42(7):2969–2998, 2014. doi:10.1080/00927872.2013.775651.
[9] C. Negron and S.-H. Ng. Gauge invariants from the powers of antipodes. ArXiv e-prints, September 2016.
[10] Siu-Hung Ng and Peter Schauenburg. Higher Frobenius-Schur indicators for pivotal categories. In Hopf algebras and generalizations, volume 441 of Contemp. Math., pages 63–90. Amer. Math. Soc., Providence, RI, 2007. doi:10.1090/conm/441/08500.
[11] Siu-Hung Ng and Peter Schauenburg. Central invariants and higher indicators for semisimple quasi-Hopf algebras. Trans. Amer. Math. Soc., 360(4):1839–1860, 2008. doi:10.1090/S0002-9947-07-04276-6.
[12] Markus J. Pfeiffer. Computing a (faithful) permutation representation of Lyons' sporadic simple group, 2016. URL https://www.morphism.de/~markusp/posts/2016-06-20-computing-permutation-representation-ly
[13] P. Schauenburg. Higher Frobenius-Schur indicators for Drinfeld doubles of finite groups through characters of centralizers. ArXiv e-prints, April 2016. In preparation.
[14] Peter Schauenburg. Some quasitensor autoequivalences of Drinfeld doubles of finite groups. Journal of Noncommutative Geometry, 11:51–70, 2017. doi:10.4171/JNCG/11-1-2.
[15] R. A. Wilson, R. A. Parker, S. Nickerson, J. N. Bray, and T. Breuer. AtlasRep, a GAP interface to the ATLAS of Group Representations, Version 1.5.1. http://www.math.rwth-aachen.de/~Thomas.Breuer/atlasrep, Mar 2016. Refereed GAP package.
[16] Robert Wilson, Peter Walsh, Jonathan Tripp, Ibrahim Suleiman, Richard Parker, Simon Norton, Simon Nickerson, Steve Linton, John Bray, and Rachel Abbott. ATLAS of Finite Group Representations - Version 3. http://brauer.maths.qmul.ac.uk/Atlas/v3/. Accessed: October 3, 2016.
E-mail address: keilberg@usc.edu
| 4 |
arXiv:1611.06721v2 [] 23 Sep 2017
Multiple Right-Hand Side Techniques in Semi-Explicit Time Integration Methods for Transient Eddy Current Problems
Jennifer Dutiné∗, Markus Clemens∗, and Sebastian Schöps†
∗ Chair of Electromagnetic Theory, University of Wuppertal, 42119 Wuppertal, Germany
† Graduate School CE, Technische Universität Darmstadt, 64293 Darmstadt, Germany
Corresponding author: J. Dutiné (email: dutine@uni-wuppertal.de)
Abstract—The spatially discretized magnetic vector potential formulation of magnetoquasistatic field problems is transformed from an infinitely stiff differential algebraic equation system into a finitely stiff ordinary differential equation (ODE) system by application of a generalized Schur complement for nonconducting parts. The ODE can be integrated in time using explicit time integration schemes, e.g. the explicit Euler method. This requires the repeated evaluation of a pseudo-inverse of the discrete curl-curl matrix in nonconducting material by the preconditioned conjugate gradient (PCG) method, which forms a multiple right-hand side problem. The subspace projection extrapolation method and proper orthogonal decomposition are compared for the computation of suitable start vectors in each time step for the PCG method which reduce the number of iterations and the overall computational costs.
Fig. 1. Computational domain Ω split into three regions: conductive and nonlinearly permeable (Ωc), nonconductive with constant permeability (Ωn), and nonconductive with excitation (Ωs).
Index Terms—Differential equations, eddy currents.
I. INTRODUCTION
In the design process of transformers, electric machines,
etc., simulations of magnetoquasistatic field problems are
an important tool. In particular in multi-query scenarios,
as needed e.g. in the case of uncertainty quantification or
optimization, using efficient and fast algorithms is important.
The spatial discretization of the magnetic vector potential
formulation of eddy current problems yields an infinitely stiff
differential-algebraic equation system of index 1 (DAE). It
can only be integrated in time using implicit time integration
schemes, as e.g. the implicit Euler method, or singly diagonal
implicit Runge-Kutta schemes [1], [2]. Due to the nonlinear
B-H-characteristic of ferromagnetic materials, large nonlinear equation systems have to be linearized, e.g. by the Newton-Raphson method, and resolved in every implicit time step. At least one Newton-Raphson iteration is required per time step.
The Jacobian and the stiffness matrix have to be updated in
every iteration.
A linearization within each time step is avoided if explicit
time integration methods are used. First approaches for this
were published in [3] and [4], where different methods are
used in the conductive and nonconductive regions respectively.
In [3], the Finite Difference Time Domain (FDTD) method
is applied in the conductive regions, while the solution in
the nonconductive regions is computed using the Boundary
Element Method (BEM) [3]. In [4] an explicit time integration
method and the discontinuous Galerkin finite element method
(DG-FEM) are applied in conductive materials, while the finite
element method based on continuous shape functions and an
implicit time integration scheme are used in nonconductive
domains [4]. In another recent approach, a similar DG-FEM
explicit time stepping approach is used for an H − Φ formulation of the magnetoquasistatic field problem [5].
This work is based on an approach originally presented
in [6], where the magnetoquasistatic DAE based on an $\vec{A}^*$-field formulation is transformed into a finitely stiff ordinary differential equation (ODE) system by applying a
generalized Schur complement.
The structure of this paper is as follows: Section II introduces the mathematical formulation of the eddy current
problem and the transformation to an ordinary differential
equation. In Section III the time stepping and the resulting
multiple right-hand side problem are discussed. Here, also the
use of the subspace projection extrapolation method and of
the proper orthogonal decomposition method as multiple right-hand side techniques is described. In Section IV the simulation
results for validating the presented approach and the effect of
subspace projection extrapolation method and of the proper
orthogonal decomposition method on a nonlinear test problem
are presented. The main results of this paper are summarized
in Section V.
II. MATHEMATICAL FORMULATION
The eddy current problem in the $\vec{A}^*$-formulation is given by the partial differential equation
$$\kappa \frac{\partial \vec{A}}{\partial t} + \nabla \times \left( \nu\!\left(\nabla \times \vec{A}\right) \nabla \times \vec{A} \right) = \vec{J}_S, \qquad (1)$$
where κ is the electrical conductivity, $\vec{A}$ is the time-dependent magnetic vector potential, ν is the reluctivity that can be nonlinear in ferromagnetic materials, and $\vec{J}_S = \vec{X}_S\, i_S(t)$, where $i_S(t)$ is the time-dependent source current and $\vec{X}_S$ distributes the current density spatially. Furthermore, initial values and boundary conditions are needed. The weak formulation of (1) leads to the variational problem: find $\vec{A}$
$$\int_{\Omega} \vec{w} \cdot \kappa \frac{\partial \vec{A}}{\partial t}\, \mathrm{d}\Omega + \int_{\Omega} \nabla \times \vec{w} \cdot \nu\!\left(\nabla \times \vec{A}\right) \nabla \times \vec{A}\, \mathrm{d}\Omega = \int_{\Omega} \vec{w} \cdot \vec{J}_S\, \mathrm{d}\Omega$$
for all $\vec{w} \in H_0(\mathrm{curl}, \Omega)$, where we denote the spatial domain with Ω and assume Dirichlet conditions at the boundary ∂Ω, see Fig. 1. Discretization and choosing test and ansatz functions from the same space according to Ritz-Galerkin, i.e.,
$$\vec{A}(\vec{r}, t) \approx \sum_{i=0}^{N_{\mathrm{dof}}} \vec{w}_i(\vec{r})\, a_i(t), \qquad (2)$$
leads to a spatially discretized symmetric equation system in time domain. Separation of the degrees of freedom (dofs) into two vectors $\mathbf{a}_c$ storing the dofs allocated in conducting regions (if $\vec{r} \in \Omega_c$) and $\mathbf{a}_n$ holding the dofs allocated in nonconducting regions (if $\vec{r} \in \Omega_n \cup \Omega_s$) yields the DAE system
$$\begin{bmatrix} \mathbf{M}_c & 0 \\ 0 & 0 \end{bmatrix} \frac{\mathrm{d}}{\mathrm{d}t} \begin{bmatrix} \mathbf{a}_c \\ \mathbf{a}_n \end{bmatrix} + \begin{bmatrix} \mathbf{K}_c(\mathbf{a}_c) & \mathbf{K}_{cn} \\ \mathbf{K}_{cn}^{\top} & \mathbf{K}_n \end{bmatrix} \begin{bmatrix} \mathbf{a}_c \\ \mathbf{a}_n \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{j}_{Sn} \end{bmatrix}, \qquad (3)$$
where $\mathbf{M}_c$ is the conductivity matrix, $\mathbf{K}_c$ is the nonlinear curl-curl reluctivity matrix in conducting regions, $\mathbf{K}_n$ is the typically constant curl-curl matrix in nonconducting regions, $\mathbf{K}_{cn}$ is a coupling matrix, and $\mathbf{j}_{Sn}$ is the source current typically defined in the nonconducting domain only. The conductivity matrix in (3) is not invertible and therefore the problem consists of differential-algebraic equations (DAEs). The numerical solution of these systems is more difficult than in the case of ordinary differential equations (ODEs). The level of difficulty is measured by the DAE index, which can be roughly interpreted as the number of differentiations needed to obtain an ODE from the DAE [1]. System (3) is essentially an index-1 DAE with the speciality that the algebraic constraint, i.e., the second equation in (3), is formally not uniquely solvable for $\mathbf{a}_n$ without defining a gauge condition due to the nullspace of the discrete curl-curl operator $\mathbf{K}_n$. However, it is well known that many iterative solvers have a weak gauging property, e.g. [7], such that a formal regularization can be avoided.
Relying on this weak gauging property, the generalized Schur complement
$$\mathbf{K}_S(\mathbf{a}_c) := \mathbf{K}_c(\mathbf{a}_c) - \mathbf{K}_{cn} \mathbf{K}_n^{+} \mathbf{K}_{cn}^{\top}, \qquad (4)$$
where $\mathbf{K}_n^{+}$ represents a pseudo-inverse of $\mathbf{K}_n$ in matrix form, is applied to (3) and transforms the DAE into
$$\mathbf{M}_c \frac{\mathrm{d}}{\mathrm{d}t} \mathbf{a}_c + \mathbf{K}_S(\mathbf{a}_c)\, \mathbf{a}_c = -\mathbf{K}_{cn} \mathbf{K}_n^{+} \mathbf{j}_{s,n}, \qquad (5a)$$
$$\mathbf{a}_n = \mathbf{K}_n^{+} \mathbf{j}_{s,n} - \mathbf{K}_n^{+} \mathbf{K}_{cn}^{\top} \mathbf{a}_c. \qquad (5b)$$
A regularization of $\mathbf{K}_n$ by a grad-div or tree/cotree gauging can be used alternatively [6], [8]. Here, the pseudo-inverse is evaluated using the preconditioned conjugate gradient method (PCG) [9]. The finitely stiff ODE (5a) can be integrated explicitly in time, e.g. by using the explicit Euler method. Using this time integration method, the expressions
$$\mathbf{a}_c^{m} = \mathbf{a}_c^{m-1} + \Delta t\, \mathbf{M}_c^{-1} \left( -\mathbf{K}_{cn} \mathbf{K}_n^{+} \mathbf{j}_{s,n}^{m} - \mathbf{K}_S(\mathbf{a}_c^{m-1})\, \mathbf{a}_c^{m-1} \right), \qquad (6)$$
$$\mathbf{a}_n^{m} = \mathbf{K}_n^{+} \mathbf{j}_{s,n}^{m} - \mathbf{K}_n^{+} \mathbf{K}_{cn}^{\top} \mathbf{a}_c^{m} \qquad (7)$$
are computed in the m-th time step, where Δt is the time step size. The Courant-Friedrichs-Lewy (CFL) criterion determines the maximum stable time step size of explicit time integration methods [1]. For the explicit Euler method
$$\Delta t \le \frac{2}{\lambda_{\max}\!\left(\mathbf{M}_c^{-1} \mathbf{K}_S(\mathbf{a}_c)\right)} \qquad (8)$$
is an estimation for the maximum stable time step size, where $\lambda_{\max}$ is the maximum eigenvalue [8]. The maximum eigenvalue can be estimated using the power method [10].
III. MULTIPLE RIGHT-HAND SIDE PROBLEM
As the matrix $\mathbf{K}_n$ remains constant within each explicit time step, the repeated evaluation of a pseudo-inverse $\mathbf{K}_n^{+}$ in (6), (7) forms a multiple right-hand side (mrhs) problem of the form
$$\mathbf{K}_n \mathbf{a}_p = \mathbf{j}_p \;\Leftrightarrow\; \mathbf{a}_p = \mathbf{K}_n^{+} \mathbf{j}_p. \qquad (9)$$
Here, $\mathbf{j}_p$ represents one of the right-hand side vectors $\mathbf{j}_{s,n}^{m}$, $\mathbf{K}_{cn}^{\top} \mathbf{a}_c^{m-1}$, and $\mathbf{K}_{cn}^{\top} \mathbf{a}_c^{m}$. Instead of computing the matrix $\mathbf{K}_n^{+}$ explicitly, a vector $\mathbf{a}_p$ is computed according to (9) using the preconditioned conjugate gradient (PCG) method [9].
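To make the time-stepping loop concrete, the following is a minimal NumPy sketch of one semi-explicit Euler step in the spirit of (6)-(7). It is an illustration under simplifying assumptions only (dense matrices, a plain conjugate gradient standing in for the preconditioned solver, the nonlinear matrix Kc already evaluated, and the source evaluated once per step); the helper names `cg` and `semi_explicit_euler_step` are ours, not from the authors' code.

```python
import numpy as np

def cg(A, b, x0=None, tol=1e-10, maxiter=500):
    # Plain conjugate gradient; stands in for the PCG solver that realizes Kn^+.
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def semi_explicit_euler_step(ac, Mc_inv, Kc, Kcn, Kn, js_n, dt):
    # One step of (6)-(7); every occurrence of the pseudo-inverse Kn^+ becomes a CG solve.
    w_src = cg(Kn, js_n)                       # Kn^+ js,n
    w_cpl = cg(Kn, Kcn.T @ ac)                 # Kn^+ Kcn^T ac
    KS_ac = Kc @ ac - Kcn @ w_cpl              # K_S(a_c) a_c, cf. (4)
    ac_new = ac + dt * (Mc_inv @ (-Kcn @ w_src - KS_ac))   # (6)
    an_new = w_src - cg(Kn, Kcn.T @ ac_new)                # (7)
    return ac_new, an_new
```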
Improved start vectors for the PCG method can be obtained
by the subspace projection extrapolation (SPE) method or
the proper orthogonal decomposition (POD) method. In the
SPE method, the linearly independent column vectors of a
matrix USPE are formed by a linear combination of an
orthonormalized basis of the subspace spanned by solutions
ap from previous time steps. The modified Gram-Schmidt
method is used for this orthonormalization procedure [11]. The
improved start vector x0,SPE is then computed by [12]
$$\mathbf{x}_{0,\mathrm{SPE}} := \mathbf{U}_{\mathrm{SPE}} \left( \mathbf{U}_{\mathrm{SPE}}^{\top} \mathbf{K}_n \mathbf{U}_{\mathrm{SPE}} \right)^{-1} \mathbf{U}_{\mathrm{SPE}}^{\top} \mathbf{K}_{cn}^{\top} \mathbf{j}_p. \qquad (10)$$
As only the last column vector in the matrix USPE changes
in every time step, all other matrix-column-vector products in
computing Kn USPE in (10) are reused from previous time
steps in a modification of the procedure in [12] referred to as
the ”Cascaded SPE” (CSPE) [9].
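As an illustrative sketch (not the authors' implementation), the SPE/CSPE start vector amounts to a Galerkin projection onto the subspace spanned by orthonormalized previous solutions; it is written here for a generic right-hand side `rhs`, whereas (10) spells out the specific right-hand side of the coupling problem. The function names are ours.

```python
import numpy as np

def spe_start_vector(U, K, rhs):
    # x0 = U (U^T K U)^{-1} U^T rhs: Galerkin solution in the subspace span(U), cf. (10).
    KU = K @ U                  # in CSPE, all but the newest column of K @ U are reused
    coeffs = np.linalg.solve(U.T @ KU, U.T @ rhs)
    return U @ coeffs

def extend_basis(U, new_solution, tol=1e-12):
    # Modified Gram-Schmidt step appending an orthonormalized previous solution.
    v = new_solution.astype(float).copy()
    for j in range(U.shape[1]):
        v -= (U[:, j] @ v) * U[:, j]
    norm = np.linalg.norm(v)
    return np.column_stack([U, v / norm]) if norm > tol else U
```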
When using the POD method for the PCG start vector
generation, NPOD solution vectors from previous time steps
form the column vectors of a snapshot matrix X which is
decomposed into
$$\mathbf{X} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^{\top} \qquad (11)$$
using the singular value decomposition (SVD) [13], [14],
[15]. Here, U and V are orthonormal matrices and Σ is a
diagonal matrix of the singular values ordered by magnitude
($\sigma_i \ge \sigma_j$ for $i < j$). The index k is chosen such that the information of the largest singular values is kept,
$$\frac{\sigma_k}{\sigma_1} \le \varepsilon_{\mathrm{POD}}. \qquad (12)$$
The threshold value $\varepsilon_{\mathrm{POD}}$ is here chosen as $\varepsilon_{\mathrm{POD}} := 10^{-4}$. A measure of how much information is kept can be computed by the relative information criterion [16]
$$\frac{\sum_{i=1}^{k} \sigma_i}{\sum_{i=1}^{N_{\mathrm{POD}}} \sigma_i} \approx 1. \qquad (13)$$
Defining UPOD = [U:,1 , ... , U:,k ] as the first k columns of U
allows computing an improved start vector $\mathbf{x}_{0,\mathrm{POD}}$ by
$$\mathbf{x}_{0,\mathrm{POD}} := \mathbf{U}_{\mathrm{POD}} \left( \mathbf{U}_{\mathrm{POD}}^{\top} \mathbf{K}_n \mathbf{U}_{\mathrm{POD}} \right)^{-1} \mathbf{U}_{\mathrm{POD}}^{\top} \mathbf{K}_{cn}^{\top} \mathbf{j}_p. \qquad (14)$$
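For comparison, a corresponding NumPy sketch of the POD start vector: the snapshot matrix is truncated according to (12) and the same kind of projection as in (14) is applied. The function name and the returned information measure (13) are our additions for illustration.

```python
import numpy as np

def pod_start_vector(snapshots, K, rhs, eps_pod=1e-4):
    # snapshots: previous solution vectors as columns of the matrix X.
    U, sigma, _ = np.linalg.svd(snapshots, full_matrices=False)  # X = U Sigma V^T, cf. (11)
    k = max(1, int(np.sum(sigma / sigma[0] > eps_pod)))          # truncation index, cf. (12)
    info = sigma[:k].sum() / sigma.sum()                         # relative information, cf. (13)
    U_pod = U[:, :k]                                             # first k columns of U
    coeffs = np.linalg.solve(U_pod.T @ (K @ U_pod), U_pod.T @ rhs)
    return U_pod @ coeffs, info
```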
The repeated evaluation of $\mathbf{M}_c^{-1}$ in (6) also forms a mrhs problem, and both the POD and the CSPE method can be used for computing improved start vectors for the PCG method. In the case of small matrix dimensions of the regular matrix $\mathbf{M}_c$, the inverse can also be computed directly using GPU acceleration.
IV. NUMERICAL VALIDATION
The ferromagnetic TEAM 10 benchmark problem is used
for numerical validation of the presented explicit time integration scheme for magnetoquasistatic fields [17]. The domain
consists of two square-bracket-shaped steel plates opposite
of each other and a rectangular steel plate between them,
resulting in two 0.5 mm wide air gaps. The model geometry is shown in Fig. 2. The position where the magnetic
field is evaluated is marked as S1. The excitation current
iS = (1 − exp(−t/τ )), where τ = 0.5 s, is applied for a
time interval of 120 ms starting at t = 0 s [17]. The resulting
magnetic flux density is computed for this time interval.
The finite element method (FEM) using 1st order edge
elements is used for the spatial discretization [18]. All simulations are computed on a workstation with an Intel Xeon E5
processor and an NVIDIA TESLA K80 GPU. The conjugate
gradient method is preconditioned by an algebraic multigrid
method [19]. The matrix Mc is inverted using the Magma library and GPU-acceleration [20].
A fine mesh resulting in about 700,000 dofs and the implicit
Euler method are used to validate the simulation code. A good
agreement between the measured results published in [17] and
the simulation of this fine spatial discretization is shown in Fig.
2. The required simulation time of this simulation using the
implicit Euler method is 5.38 days using an in-house implicit
time integration magnetoquasistatic code.
For benchmarking the proposed mrhs techniques for the
(semi-)explicit time integration scheme, a model with a coarse
spatial discretization yielding about 30,000 dofs and the explicit Euler method is used. For this spatial discretization, the
resulting maximum stable time step size according to (8) is
∆tCFL = 1.2 µs. Both meshes are presented in Fig. 3. The
results for the average magnetic flux density are compared with
the results obtained using the same discretization in space and
the implicit Euler method for time integration and show good
agreement, depicted in Fig. 2. The resulting field plots for both
spatial discretizations are shown in Fig. 4. The simulation time
for the implicit time integration method is still 2.58 h.
Fig. 2. Comparison of results for the average magnetic flux density evaluated
at position S1 and model geometry as inset.
TABLE I
SIMULATION TIME AND AVERAGE NUMBER OF PCG ITERATIONS USING DIFFERENT START VECTORS x0

Start vector        | Avg. Number of PCG Iterations | Simulation Time
x0 := a_p^{m-1}     | 3.16                          | 2.35 h
x0 := x_{0,POD}     | 2.18                          | 17.35 h
x0 := x_{0,CSPE}    | 1.02                          | 1.62 h
The effect of computing improved start vectors using POD
or CSPE on the average number of PCG iterations and on
the simulation time is compared to using the solution from the previous time step $\mathbf{a}_p^{m-1}$ as start vector for the PCG
method. An overview is presented in Table I and shows that
both the CSPE and the POD start vector generation methods
significantly reduce the number of PCG iterations. When using
CSPE the number of column vectors in the operator USPE in
(10) is increased during the simulation to improve the spectral
information content of USPE . This number remains below 20.
Thus, only small systems have to be solved for the inversion
in (10) and the effort to perform all computations of the
CSPE method is low. This is also confirmed by the simulation
time which is shortest when using CSPE. The simulation time
resulting from using explicit time integration and CSPE for
start vector generation is 63 % of the simulation time of the
implicit reference simulation. A bar plot showing the reduced
simulation time by using the explicit Euler scheme and CSPE
compared to using the standard formulation and the implicit
Euler method for time integration is depicted in Fig. 5.
In case of the POD, the amount of information kept
according to (13) is > 0.99 during the entire simulation.
However, the computational effort for performing the SVD and
the computations in (14) is higher than the effort for CSPE.
Although the number of PCG iterations is further decreased,
the simulation time resulting from using POD for start vector
generation is higher than when using $\mathbf{a}_p^{m-1}$ as start vector for
the PCG method due to the costs of the repeated SVD.
Fig. 3. Meshes resulting in about 700,000 dofs (left) and in about 30,000 dofs (right).
Fig. 4. Field plots of the magnetic flux density $\vec{B}$ for the spatial discretization with about 700,000 dofs (left) and with about 30,000 dofs (right).
Fig. 5. Comparison of simulation times.

V. CONCLUSION
The magnetic vector potential formulation of eddy current problems was transformed into an ODE system of finite stiffness using a generalized Schur complement. The resulting ODE system was integrated in time using the explicit Euler method. A pseudo-inverse of the singular curl-curl matrix in nonconducting material was evaluated using the PCG method. Improved start vectors for the PCG method were calculated using the POD and the CSPE method. Although both reduce the number of PCG iterations needed, the computational effort of the CSPE is significantly lower than for the POD method. Reducing the computational effort of the POD, e.g. by accelerating the computation of the SVD, is subject to further investigations. Using the CSPE method, the overall simulation time was reduced by 37 % compared to the simulation time of the implicit reference simulation.

ACKNOWLEDGMENT
This work was supported by the Deutsche Forschungsgemeinschaft (DFG) under grant numbers CL143/11-1 and SCHO1562/1-1. The third author is supported by the Excellence Initiative of the German Federal and State Governments and the Graduate School of CE at TU Darmstadt.

REFERENCES
[1] E. Hairer and G. Wanner, Solving ordinary differential equations II: stiff
and differential-algebraic problems, 2nd rev. ed., Springer, Berlin, 1996.
[2] A. Nicolet, ”Implicit Runge-Kutta methods for transient magnetic field
computations,” IEEE Trans. Magn., vol. 32, pp. 1405–1408, May 1996
[3] T. V. Yioutlsis, K. S. Charitou, C. S. Antonopoulos, and T. D. Tsiboukis,
”A finite difference time domain scheme for transient eddy current
problems,” IEEE Trans. Mag., vol. 37, pp. 3145–3149, Sep. 2001.
[4] S. Außerhofer, O. Bı́ro, and K. Preis, ”Discontinuous Galerkin finite
elements in time domain eddy-current problems,” IEEE Trans. Magn.,
vol. 45, pp. 1300–1303, Feb. 2009.
[5] J. Smajic, ”DG-FEM for time domain H − Φ eddy-current analysis,”
presented at the 17th biennial Conference on Electromagnetic Field
Computation (CEFC) 2016, Nov. 2016.
[6] M. Clemens, S. Schöps, H. De Gersem, and A. Bartel, ”Decomposition
and regularization of nonlinear anisotropic curl-curl DAEs,” Compel, vol.
30, pp. 1701–1714, 2011.
[7] M. Clemens and T. Weiland, ”Transient eddy-current calculation with the
FI-method,” IEEE Trans. Magn., vol. 35, pp. 1163-1166, May 1999.
[8] S. Schöps, A. Bartel, and M. Clemens, ”Higher order half-explicit time
integration of eddy current problems,” IEEE Trans. Magn., vol. 48, pp.
623-626, Feb. 2012.
[9] J. Dutiné, M. Clemens, S. Schöps, and G. Wimmer, ”Explicit time
integration of transient eddy current problems,” presented at the 10th
International Symposium on Electric and Magnetic Fields (EMF) 2016.
Full paper to appear in J. Num. Mod.: ENDF.
[10] G. H. Golub and C. F. Van Loan, Matrix Computations, 4th ed., The
Johns Hopkins University Press, Baltimore, 2013.
[11] L. N. Trefethen and D. Bau, Numerical linear algebra, Society for
Industrial and Applied Mathematics, Philadelphia, 1997.
[12] M. Clemens, M. Wilke, R. Schuhmann, and T. Weiland, ”Subspace
projection extrapolation scheme for transient field simulations,” IEEE
Trans. Magn., vol. 40, pp. 934–937, Apr. 2004.
[13] T. Henneron and S. Clénet, ”Model order reduction of non-linear
magnetostatic problems based on POD and DEI methods,” IEEE Trans.
Magn., vol. 50, pp. 33–36, Feb. 2014.
[14] T. Henneron and S. Clénet, ”Model-order reduction of multiple-input
non-linear systems based on POD and DEI methods,” IEEE Trans. Magn.,
vol. 51, pp. 1–4, Mar. 2015.
[15] Y. Sato, M. Clemens, and H. Igarashi, ”Adaptive subdomain model
order reduction with discrete empirical interpolation method for nonlinear
magneto-quasi-static problems,” IEEE Trans. Magn., vol. 52, pp. 1–4,
Mar. 2016.
[16] D. Schmidthäusler, S. Schöps, and M. Clemens, ”Linear subspace reduction for quasistatic field simulations to accelerate repeated computations,”
IEEE Trans. Magn., vol. 50, pp. 421–424, Feb. 2014.
[17] T. Nakata and K. Fujiwara, ”Results for benchmark problem 10 (steel
plates around a coil),” Compel, vol. 9, pp. 181–192, 1990.
[18] A. Kameari, ”Calculation of transient 3D eddy current using edge elements,” IEEE Trans. Magn., vol. 26, pp. 466–469, Mar. 1990.
[19] K. Stüben, Algebraic multigrid (AMG): an introduction with applications, 1999.
[20] W. Bosma, J. Cannon, and C. Playoust, ”The Magma algebra system
I: the user language,” J. Symbolic Comput., vol. 24, pp. 235–265, Sep.
1997.
Deep Local Binary Patterns
arXiv:1711.06597v1 [] 17 Nov 2017
Kelwin Fernandes and Jaime S. Cardoso
Abstract—Local Binary Pattern (LBP) is a traditional descriptor for texture analysis that gained attention in the last decade. Being robust to several properties such as invariance to illumination translation and scaling, LBPs achieved state-of-the-art results in several applications. However, LBPs are not able to capture high-level features from the image, merely encoding features with low abstraction levels. In this work, we propose Deep LBP, which borrows ideas from the deep learning community to improve LBP expressiveness. By using parametrized data-driven LBP, we enable successive applications of the LBP operators with increasing abstraction levels. We validate the relevance of the proposed idea in several datasets from a wide range of applications. Deep LBP improved the performance of traditional and multiscale LBP in all cases.
(a) r=1, n=8  (b) r=2, n=16  (c) r=2, n=8
Figure 1: LBP neighborhoods with radius (r) and angular resolution (n). The first two cases use Euclidean distance to define the neighborhood; the last case uses Manhattan distance.
Index Terms—Local Binary Patterns, Deep Learning, Image
Processing.
I. INTRODUCTION
In recent years, the computer vision community has moved towards the usage of deep learning strategies to solve a wide
variety of traditional problems, from image enhancement [1]
to scene recognition [2]. Deep learning concepts emerged from
traditional shallow concepts from the early years of computer
vision (e.g. filters, convolution, pooling, thresholding, etc.).
Although these techniques have achieved state-of-the-art
performance in several tasks, the deep learning hype has
overshadowed research on other fundamental ideas. Narrowing
the spectrum of methods to a single class will eventually
saturate, creating a monotonous environment where the same
basic idea is being replicated over and over, and missing the
opportunity to develop other paradigms with the potential to
lead to complementary solutions.
As deep strategies have benefited from traditional (shallow) methods in the past, some classical methods started to take
advantage of key deep learning concepts. That is the case of
deep Kernels [3], which explores the successive application of
nonlinear mappings within the kernel umbrella. In this work,
we incorporate deep concepts into Local Binary Patterns [4],
[5], a traditional descriptor for texture analysis. Local Binary
Patterns (LBP) is a robust descriptor that briefly summarizes
texture information, being invariant to illumination translation
and scaling. LBP has been successfully used in a wide
variety of applications, including texture classification [6]–[9],
face/gender recognition [10]–[15], among others [16]–[18].
Kelwin Fernandes (kafc@inesctec.pt) and Jaime S. Cardoso (jaime.s.cardoso@inesctec.pt) are with Faculdade de Engenharia da Universidade do Porto and with INESC TEC, Porto, Portugal.
LBP has two main ingredients:
• The neighborhood (N), usually defined by an angular resolution (typically 8 sampling angles) and radius r of the neighborhoods. Fig. 1 illustrates several possible neighborhoods.
• The binarization function $b(x_{ref}, x_i) \in \{0, 1\}$, which allows the comparison between the reference point (central pixel) and each one of the points $x_i$ in the neighborhood. Classical LBP is applicable when $x_{ref}$ (and $x_i$) are in an ordered set (e.g., R and Z), with $b(x_{ref}, x_i)$ defined as
$$b(x_{ref}, x_i) = (x_{ref} \prec x_i), \qquad (1)$$
where ≺ is the order relation on the set (interpolation is
used to compute xi when a neighbor location does not
coincide with the center of a pixel).
The output of the LBP at each position ref is the code
resulting from the comparison (binarization function) of the
value xref with each of the xi in the neighborhood, with i ∈
N (ref ), see Figure 2. The LBP codes can be represented by
their numerical value as formally defined in (2).
$$\mathrm{LBP}(x_{ref}) = \sum_{i \in N(ref)} 2^{i} \cdot b(x_{ref}, x_i) \qquad (2)$$
LBP codes can take 2|N | different values. In predictive
tasks, for typical choices of angular resolution, LBP codes are
compactly summarized into a histogram with 2|N | bins, being
this the feature vector representing the object/region/image
(see Fig. 3). Also, it is typical to compute the histograms
in sub-regions and to build a predictive model by using as
features the concatenation of the region histograms, being nonoverlapping and overlapping [19] blocks traditional choices
(see Figure 4).
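As an illustration of this pipeline, a small NumPy sketch that computes the code map and the global histogram for the axis-aligned and diagonal neighbours of Fig. 1a (radius 1, no interpolation, border pixels skipped); the function names are ours.

```python
import numpy as np

def lbp_codes(image, radius=1):
    # 8-neighbour LBP without interpolation; borders are skipped for brevity.
    offsets = [(-radius, -radius), (-radius, 0), (-radius, radius), (0, radius),
               (radius, radius), (radius, 0), (radius, -radius), (0, -radius)]
    h, w = image.shape
    center = image[radius:h - radius, radius:w - radius]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[radius + dy:h - radius + dy, radius + dx:w - radius + dx]
        codes |= (center < neighbour).astype(np.uint8) << bit   # b(x_ref, x_i) = (x_ref < x_i)
    return codes

def lbp_histogram(codes, n_bits=8):
    # Normalized histogram with 2^|N| bins used as the feature vector.
    hist = np.bincount(codes.ravel(), minlength=2 ** n_bits).astype(float)
    return hist / hist.sum()
```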
In the last decade, several variations of the LBP have been
proposed to attain different properties. The two best-known
variations were proposed by Ojala et. al, the original authors
of the LBP methodology: rotation invariant and uniform LBP
[5].
Rotation invariance can be achieved by assigning a unique
identifier to all the patterns that can be obtained by applying
circular shifts. The new encoding is defined in (3), where
Figure 2: Cylinder and linear representation of the codes at some pixel positions. Encodings are built in a clockwise manner from the starting point indicated in the middle section of both figures.
Figure 3: Traditional pipeline for image classification using LBP.
Figure 4: Multi-block LBP with 2 × 2 non-overlapping blocks.
ROR(e, i) applies a circular-wrapped bit-wise shift of i positions to the encoding e.
$$\mathrm{LBP}(x_{ref}) = \min\{\mathrm{ROR}(\mathrm{LBP}(x_{ref}), i) \mid i \in [0, \ldots, |N|)\} \qquad (3)$$
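A short Python sketch of this relabelling, assuming integer codes of `n_bits` bits; `ror` mirrors the circular shift used in (3) and the names are ours.

```python
def ror(code, shift, n_bits=8):
    # Circular-wrapped bit-wise shift of an n-bit code.
    shift %= n_bits
    mask = (1 << n_bits) - 1
    return ((code >> shift) | (code << (n_bits - shift))) & mask

def rotation_invariant(code, n_bits=8):
    # Representative of the rotation class: minimum over all circular shifts, cf. (3).
    return min(ror(code, i, n_bits) for i in range(n_bits))
```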
In the same work, Ojala et al. [5] identified that uniform patterns (i.e. patterns with two or fewer circular transitions) are responsible for the vast majority of the histogram frequencies, leaving low discriminative power to the remaining ones. Since the relatively small proportion of non-uniform patterns limits the reliability of their probabilities, all the non-uniform patterns are assigned to a single bin in the histogram construction, while uniform patterns are assigned to individual bins [5].
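The bin assignment just described can be sketched in a few lines of Python (integer codes; helper names are ours):

```python
def circular_transitions(code, n_bits=8):
    # Number of 0/1 changes when the code is read circularly.
    bits = [(code >> i) & 1 for i in range(n_bits)]
    return sum(bits[i] != bits[(i + 1) % n_bits] for i in range(n_bits))

def uniform_bin(code, n_bits=8):
    # Uniform codes (at most two circular transitions) keep an individual bin;
    # every non-uniform code is mapped to one shared extra bin.
    return code if circular_transitions(code, n_bits) <= 2 else 2 ** n_bits
```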
Heikkila et al. proposed Center-Symmetric LBP [20], which
increases robustness on flat image areas and improves computational efficiency. This is achieved by comparing pairs of
neighbors located in centered symmetric directions instead of
comparing each neighbor with the reference pixel. Thus, an
encoding with four bits is generated from a neighborhood
of 8 pixels. Also, the binarization function incorporates an
activation tolerance given by b(xi , xj ) = xi > xj + T . Further
extensions of this idea can be found in [21]–[23].
Local ternary patterns (LTP) are an extension of LBP that
use a 3-valued mapping instead of a binarization function.
The function used by LTP is formalized at (4). LTP are less
sensitive to noise in uniform regions at the cost of losing
invariance to illumination scaling. Moreover, LTP induce an
additional complexity in the number of encodings, producing
histograms with up to $3^{|N|}$ bins.
$$b_T(x_{ref}, x_i) = \begin{cases} -1 & x_i < x_{ref} - T \\ 0 & x_{ref} - T \le x_i \le x_{ref} + T \\ +1 & x_{ref} + T < x_i \end{cases} \qquad (4)$$
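A minimal Python sketch of the ternary mapping (4), with `t` playing the role of the tolerance T:

```python
def ternary(x_ref, x_i, t):
    # Three-valued comparison used by local ternary patterns, cf. (4).
    if x_i < x_ref - t:
        return -1
    if x_i > x_ref + t:
        return +1
    return 0
```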
So far, we have mainly described methods that rely on
redefining the construction of the LBP encodings. A different
line of research focuses on improving LBP by modifying
the texture summarization when building the frequency histograms. Two examples of this idea were presented in this
work: uniform LBP [5] and multi-block LBP (see Fig. 4).
Since different scales may bring complementary information, one can concatenate the histograms of LBP values
at different scales. Berkan et al. proposed this idea in the
Over-Complete LBP (OCLBP) [19]. Besides computing the
encoding at multiple scales, OCLBP computes the histograms
on several spatially overlapped blocks. An alternative to this
way of modeling multiscale patterns is to, at each point,
compute the LBP code at different scales, concatenate the
codes and summarize (i.e., compute the histogram) of the
concatenated feature vector. This latter option has difficulties
concerning the dimensionality of the data (potentially tackled
with a bag of words approach) and the nature of the codes
(making unsuitable standard k-means to find the bins-centers
for a bag of words approach).
Multi-channel data (e.g. RGB) has been handled in a similar
way, by 1) computing and summarizing the LBP codes in each
channel independently and then concatenating the histograms
[24] and by 2) computing a joint code for the three channels
[25].
As LBP have been successfully used to describe spatial
relations between pixels, some works explored embedding
temporal information on LBP for object detection and background removal [10], [26]–[30].
Figure 5: Recursive application of Local Binary Patterns.
Finally, Local Binary Pattern Network (LBPNet) was introduced by Xi et al. [31] as a preliminary attempt to embed
Deep Learning concepts in LBP. Their proposal consists
on using a pyramidal approach on the neighborhood scales
and histogram sampling. Then, Principal Component Analysis
(PCA) is used on the frequency histograms to reduce the
dimensionality of the feature space. Xi et al. analogise the
pyramidal construction of LBP neighborhoods and histogram
sampling as a convolutional layer, where multiple filters operate at different resolutions, and the dimensionality reduction
as a Pooling layer. However, LBPNets aren’t capable of
aggregating information from a single resolution into higher
levels of abstraction which is the main advantage of deep
neural networks.
In the next sections, we will bring some ideas from the
Deep Learning community to traditional LBP. In this sense,
we intend to build LBP blocks that can be applied recursively
in order to build features with higher-level of abstraction.
II. D EEP L OCAL B INARY PATTERNS
The ability to build features with increasing abstraction level
using a recursive combination of relatively simple processing
modules is one of the reasons that made Convolutional Neural
Networks – and in general Neural Networks – so successful. In
this work, we propose to represent “higher order” information
about texture by applying LBP recursively, i.e., cascading LBP
computing blocks one after the other (see Fig 5). In this sense,
while an LBP encoding describes the local texture, a second
order LBP encoding describes the texture of textures.
However, while it is trivial to recursively apply convolutions
– and many other filters – in a cascade fashion, traditional LBP
are not able to digest their own output.
Traditional LBP rely on receiving as input an image with
domain in an ordered set (e.g. grayscale intensities). However,
LPB codes are not in an ordered set, dismissing the direct
recursive application of standard LBP. As such we will first
generalize the main operations supporting LBP and discuss
next how to assemble together a deep/recursive LBP feature
extractor. We start the discussion with the binarization function
b(xref , xi ).
It is instructive to think, in the conventional LBP, if nontrivial alternative functions exist to the adopted one, Eq. (1).
What is(are) the main property(ies) required by the binarization function? Does it need to make use of a (potentially implicit) order relationship? A main property of the binarization
function is to be independent of scaling and translation of the
input data, that is,
$$b(k_1 x_{ref} + k_2,\, k_1 x_i + k_2) = b(x_{ref}, x_i), \quad k_1 > 0. \qquad (5)$$
It is trivial to prove that the only options for the binarization function that hold Eq. (5) are the constant functions (always zeros or always ones), the one given by Eq. (1) and its reciprocal.
Proof. Assume $x_i, x_j > x_{ref}$ and $b(x_{ref}, x_i) \neq b(x_{ref}, x_j)$.
Under the independence to translation and scaling (Eq. (5)),
b(xref , xi ) = b(xref , xj ) as shown below, which is a contradiction.
$$\begin{aligned}
& b(x_{ref}, x_i) \\
&= \langle \text{Identity of multiplication} \rangle \\
& b\!\left(x_{ref},\; \tfrac{x_j - x_{ref}}{x_j - x_{ref}}\, x_i\right) \\
&= \langle \text{Identity of addition} \rangle \\
& b\!\left(x_{ref},\; \tfrac{x_j - x_{ref}}{x_j - x_{ref}}\, x_i + \tfrac{x_{ref} x_j - x_{ref} x_j + (x_i - x_i)\, x_{ref}}{x_j - x_{ref}}\right) \\
&= \langle \text{Arithmetic} \rangle \\
& b\!\left(\tfrac{x_i - x_{ref}}{x_j - x_{ref}}\, x_{ref} + x_{ref}\, \tfrac{x_j - x_i}{x_j - x_{ref}},\; \tfrac{x_i - x_{ref}}{x_j - x_{ref}}\, x_j + x_{ref}\, \tfrac{x_j - x_i}{x_j - x_{ref}}\right) \\
&= \left\langle \text{Eq. (5), where } k_1 = \tfrac{x_i - x_{ref}}{x_j - x_{ref}},\; k_2 = x_{ref}\, \tfrac{x_j - x_i}{x_j - x_{ref}} \right\rangle \\
& b(x_{ref}, x_j)
\end{aligned}$$
Therefore, $b(x_{ref}, x_i)$ must be equal for all $x_i$ above $x_{ref}$. Similarly, $b(x_{ref}, x_i)$ must be equal for all $x_i$ below $x_{ref}$.
Among our options, the constant binarization function is
not a viable option, since the information (in information
theory perspective) in the output is zero. Since the recursive
application of functions can be understood as a composition,
invariance to scaling and translation is trivially ensured by
using a traditional LBP in the first transformation.
Given that transitivity is a relevant property held by the
natural ordering of real numbers, we argue that such property
should be guaranteed by our binarization function. In this
sense, we will focus on strict partial orders of encodings.
Following, we show how to build such binarization functions
for the i-th application of the LBP operator, where i > 1. We
will consider both predefined/expert driven solutions and data
driven solutions (and therefore, application specific).
Hereafter, we will refer to the binarization function as the
partial ordering of LBP encodings. Although other types of functions may be of general interest, narrowing the search space to those that can be represented as a partial ordering induces efficient learning mechanisms.
A. Preliminaries
Let us formalize the deep binarization function as the order
relation b+ ∈ P(EN ×EN ), where EN is the set of encodings
induced by the neighborhood N .
Let Φ be an oracle Φ :: P(EN ×EN ) → R that assesses the
performance of a binarization function. For example, among
other options, the oracle can be defined as the performance
of the traditional LBP pipeline (see Fig. 3) on a dataset for a
given predictive task.
Table I: Lower bound of the number of combinations for deciding the best LBP binarization function as partial orders.

# Neighbors | Rotational Inv. | Uniform     | Traditional
2           | 2 · 10^1        | 2 · 10^4    | 5 · 10^2
3           | 5 · 10^2        | 1 · 10^15   | 7 · 10^11
4           | 3 · 10^6        | 2 · 10^41   | 8 · 10^46
5           | 7 · 10^11       | 6 · 10^94   | 2 · 10^179
6           | 1 · 10^36       | 2 · 10^190  | 1 · 10^685
7           | 2 · 10^72       | 3 · 10^346  | 3 · 10^2640
8           | 1 · 10^225      | 1 · 10^585  | 3 · 10^10288
B. Deep Binarization Function
From the entire space of binarization functions, we restrict
our analysis to those induced by strict partial orders. Within
this context, it is easy to see that learning the best binarization
function by exhaustive exploration is intractable since the
number of combinations equals the number of directed acyclic
graph (DAG) with 2|N | = |EN | nodes. The DAG counting
problem was studied by Robinson [32] and is given by the
recurrence relation Eqs. (6)-(7).
$$a_0 = 1 \qquad (6)$$
$$a_{n \ge 1} = \sum_{k=1}^{n} (-1)^{k-1} \binom{n}{k}\, 2^{k(n-k)}\, a_{n-k} \qquad (7)$$
Table I illustrates the size of the search space for several
numbers of neighbors. For instance, for the traditional setting
with 8 neighbors, the number of combinations has more than
10,000 digits. Thereby, a heuristic approximation must be
carried out.
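For reference, Robinson's recurrence (6)-(7) is easy to evaluate directly; the following Python sketch does so with memoization. Evaluating it for four nodes (the 2^|N| = 4 encodings of two neighbours) gives 543, consistent with the 5 · 10^2 entry reported in Table I for the traditional setting.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def count_dags(n):
    # Robinson's recurrence for the number of labeled DAGs on n nodes, cf. (6)-(7).
    if n == 0:
        return 1
    return sum((-1) ** (k - 1) * comb(n, k) * 2 ** (k * (n - k)) * count_dags(n - k)
               for k in range(1, n + 1))

# count_dags(4) == 543: the candidate partial orders already exceed 5 * 10^2
# for the smallest neighbourhood considered in Table I.
```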
1) Learning b+ from a User-defined dissimilarity function:
The definition of a dissimilarity function between two codes
seems reasonably accessible. For instance, an immediate option is to adopt the hamming distance between codes, dH .
With rotation invariance in mind, one can adopt the minimum hamming distance between all circularly shifted versions of $x_{ref}$ and $x_i$, $d_H^{ri}$. The circular invariant hamming distance between $x_{ref}$ and $x_i$ can be computed as
$$d_H^{ri} = \min_{s \in 0, \cdots, N-1} d_H(\mathrm{ROR}(x_{ref}, s), x_i) \qquad (8)$$
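A small Python sketch of $d_H$ and of its circular-invariant variant (8), assuming integer codes; the circular shift is repeated here so that the block is self-contained, and the names are ours.

```python
def ror(code, shift, n_bits=8):
    # Circular-wrapped bit-wise shift (same operation as in (3)).
    shift %= n_bits
    mask = (1 << n_bits) - 1
    return ((code >> shift) | (code << (n_bits - shift))) & mask

def hamming(a, b):
    # Plain Hamming distance d_H between two codes.
    return bin(a ^ b).count("1")

def hamming_ri(a, b, n_bits=8):
    # Circular-invariant Hamming distance d_H^{ri}, cf. (8).
    return min(hamming(ror(a, s, n_bits), b) for s in range(n_bits))
```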
(8)
Having defined such a dissimilarity function between pairs
of codes, one can know proceed with the definition of the
binarization function.
Given the dissimilarity function, we can learn a mapping
of the codes to an ordered set. Resorting to Spectral Embedding [33], one can obtain such a mapping. The conventional
binarization function, Eq. (1), can then be applied. Other
alternatives for building the mappings can be found in the
manifold learning literature: Isomaps [34], Multidimensional
scaling (MDS) [35], among others. In this case, the oracle
function can be defined as the intrinsic loss functions used in
the optimization process of such algorithms.
Preserving a desired property P such as rotational invariance and sign invariance (i.e. interchangeability between
ones and zeros) can be achieved by considering P -aware
dissimilarities.
2) Learning b+ from a High-dimensional Space: A second option is to map the code space to a new (higher-dimensional) space that characterizes LBP encodings. Then, an ordering or preference relationship can be learned in the extended space, for instance resorting to preference learning algorithms [36]–[38].
Some examples of properties that characterize LBP encodings are:
• Number of transitions of size 1 (e.g. 101, 010).
• Number of groups/transitions.
• Size of the smallest/largest group.
• Diversity on the group sizes.
• Number of ones.
Techniques to learn the final ordering based on the new high-dimensional space include:
• Dimensionality reduction techniques, including Spectral embeddings, PCA, MDS and other manifold techniques.
• Preference learning strategies for learning rankings [36]–[38].
A case of special interest that will be used in the experimental section of this work are Lexicographic Rankers (LR)
[36], [38]. In this work, we will focus on the simplest type
of LR, linear LR. Let us assume that features in the new
high-dimensional space are scoring rankers (e.g. monotonic
functions) on the texture complexity of the codes. Thus, for
each code ei and feature sj , the complexity associated to ei by
sj is denoted as sj (ei ). We assume sj (ei ) to lie in a discrete
domain with a well-known order relation.
Thus, each feature is grouping the codes into equivalence
classes. For example, the codes with 0 transitions (i.e. flat
textures), 2 transitions (i.e. uniform textures) and so on.
If we concatenate the output of the scoring rankers in a linear manner (s0 (ei ), s1 (ei ), · · · , sn (ei )), a natural arrangement
is their lexicographic order (see Eq. (9)), where each sj (ei )
is subordering the equivalence class obtained by the previous
prefix of rankers (s0 (ei ), · · · , sj−1 (ei )).
$$\mathrm{LexRank}(a, b) = \begin{cases} a = b & , |a| = 0 \vee |b| = 0 \\ a \prec b & , a_0 \prec b_0 \\ a \succ b & , a_0 \succ b_0 \\ \mathrm{LexRank}(t(a), t(b)) & , a_0 = b_0 \end{cases} \qquad (9)$$
where t(a) returns the tail of the sequence. Namely, the order
between two encodings is decided by the first scoring ranker
in the hierarchy that assigns different values to the encodings.
Therefore, the learning process is reduced to find the best
feature arrangement. A heuristic approximation to this problem
can be achieved by iteratively appending to the sequence of
features the one that maximizes the performance of the oracle
Φ.
Similarly to property-aware dissimilarity functions, if the
features in the new feature vector V(x) are invariant to
P , the P -invariance of the learned binarization function is
automatically guaranteed.
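The linear lexicographic ranker can be sketched as follows: each scorer maps a code to a discrete complexity value, tuples of scores are compared componentwise from the first scorer onwards (which realizes LexRank in (9)), and the scorer order is chosen greedily against the oracle Φ. The scorers and function names below are illustrative placeholders, not the exact features used in the experiments.

```python
def lex_key(code, scorers):
    # Comparing these tuples componentwise is exactly the lexicographic order of (9).
    return tuple(score(code) for score in scorers)

def greedy_scorer_order(candidates, oracle):
    # Heuristic from the text: iteratively append the scorer that maximizes the oracle.
    chosen, remaining = [], list(candidates)
    while remaining:
        best = max(remaining, key=lambda s: oracle(chosen + [s]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Illustrative scorers: number of ones and a crude circular-transition count (8-bit codes).
example_scorers = [
    lambda c: bin(c).count("1"),
    lambda c: sum(((c >> i) & 1) != ((c >> ((i + 1) % 8)) & 1) for i in range(8)),
]
codes_in_lex_order = sorted(range(256), key=lambda c: lex_key(c, example_scorers))
```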
Figure 6: Deep LBP.
Figure 7 panels: (a) Input image, (b) LBP, (c) DeepLBP(1), (d) DeepLBP(2), (e) DeepLBP(3), (f) DeepLBP(4).
III. DEEP ARCHITECTURES
Given the closed form of the LBP with deep binarization
functions, their recursive combination seems feasible. In this
section, several alternatives for the aggregation of deep LBP
operators are proposed.
A. Deep LBP (DLBP)
The simplest way of aggregating Deep LBP operators
is by applying them recursively and computing the final
encoding histograms. Figure 6 shows this architecture. The
first transformation is done by a traditional shallow LBP
while the remaining transformations are performed using deep
binarization functions.
Figure 7 illustrates the patterns detected by several deep
levels on a cracker image from the Brodatz database. In this
case, the ordering between LBP encodings was learned by
using a lexicographic ordering of encodings on the number
of groups, the size of the largest group and imbalance ratio
between 0’s and 1’s. We can observe that the initial layers of
the architecture extract information from local textures while
the later layers have higher levels of abstraction.
B. Multi-Deep LBP (MDLBP)
Although it may be a good idea to extract higher-order
information from images, for the usual applications of LBP,
it is important to be able to detect features at different levels
of abstraction. For instance, if the image has textures with
several complexity levels, it may be relevant to keep the
patterns extracted at different abstraction levels. Resorting to
the techniques employed in the analysis of multimodal data
[40], we can aggregate this information in two ways: feature
and decision-level fusion.
1) Feature-level fusion: one histogram is computed at each
layer and the model is built using the concatenation of all the
histograms as features.
2) Decision-level fusion: one histogram and decision model
is computed at each layer. The final model uses the probabilities estimated by each individual model to produce the final
decision.
Figures 8a and 8b show Multi-Deep LBP architectures with
feature-level and decision-level fusion respectively. In our
experimental setting, feature-level fusion was used.
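A schematic sketch of the feature-level variant, assuming `lbp_layers` is the list of operators (the shallow LBP followed by the deep LBP-b+ operators) and `histogram` is the summarization step of Fig. 3; both arguments and the function name are placeholders for illustration.

```python
import numpy as np

def multi_deep_lbp_features(image, lbp_layers, histogram):
    # Apply the operator stack, keep every intermediate code map,
    # and concatenate the per-layer histograms into a single feature vector.
    blocks, codes = [], image
    for layer in lbp_layers:
        codes = layer(codes)
        blocks.append(histogram(codes))
    return np.concatenate(blocks)
```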
Figure 7: Visualization of LBP encodings from a Brodatz
database [39] image. The results obtained by applying n
layers of Deep LBP operators are denoted as DeepLBP(n).
A neighborhood of size 8, radius 10 and Euclidean distance
was used. The grayscale intensity is defined by the order of
the equivalence classes.
(a) Multi-Deep LBP with feature-level fusion.
(b) Multi-Deep LBP with decision-level fusion.
Figure 8: Deep LBP architectures.
C. Multiscale Deep LBP (Multiscale DLBP)
In the last few years, deep learning approaches have benefited from multi-scale architectures that are able to aggregate
information from different image scales [41]–[43]. Despite
being able to induce higher abstraction levels, deep networks
Figure 9: Multi-scale Deep LBP.
Table II: Summary of the datasets used in the experiments

Dataset     | Reference | Task    | Images | Classes
KTH TIPS    | [44]      | Texture | 810    | 10
FMD         | [45]      | Texture | 1000   | 10
Virus       | [46]      | Texture | 1500   | 15
Brodatz*    | [39]      | Texture | 1776   | 111
Kylberg     | [47]      | Texture | 4480   | 28
102 Flowers | [48]      | Object  | 8189   | 102
Caltech 101 | [49]      | Object  | 9144   | 102
are restrained to the size of the individual operators. Thereby,
aggregating multi-scale information in deep architectures may
exploit their capability to detect traits that appear at different
scales in the images in addition to turning the decision process
scale invariant.
In this work, we consider the stacking of independent deep
architectures at several scales. The final decision is done by
concatenating the individual information produced at each
scale factor (cf. Figure 9). Depending on the fusion approach,
the final model operates in different spaces (i.e. feature or
decision level). In an LBP context, we can define the scale
operator of an image by resizing the image or by increasing
the neighborhood radius.
IV. EXPERIMENTS
In this section, we compare the performance of the proposed
deep LBP architectures against shallow LBP versions. Several
datasets were chosen from the LBP literature covering a wide
range of applications, from texture categorization to object
recognition. Table II summarizes the datasets used in this
work. Also, Fig. 10 shows an image per dataset in order to
understand the task diversity.
We used a 10-fold stratified cross-validation strategy and the
average performance of each method was measured in terms
of:
• Accuracy.
• Class rank: Position (%) of the ground truth label in the ranking of classes ordered by confidence. The ranking was induced using probabilistic classifiers.
Figure 10: Sample images from each dataset: (a) KTH TIPS, (b) FMD, (c) Virus, (d) Brodatz, (e) Kylberg, (f) 102 Flowers, (g) Caltech 101.
While high values are preferred when using accuracy, low
values are preferred for class rank. All the images were resized
to have a maximum size of 100 × 100 and the neighborhood
used for all the LBP operators was formed by 8 neighbors on
a radius of size 3, which proved to be a good configuration
for the baseline LBP. The final features were built using a
global histogram, without resorting to image blocks. Further
improvements in each application can be achieved by finetuning the LBP neighborhoods and by using other spatial
sampling techniques on the histogram construction. Since the
objective of this work was to objectively compare the performance of each strategy, we decided to fix these parameters.
The final decision model is a Random Forest with 1000 trees.
In the last two datasets, which contain more than 100 classes,
the maximum depth of the decision trees was bounded to 20
in order to limit the required memory.
In all our experiments, training data was augmented by
including vertical and horizontal flips.
Table III: Class rank (%) of the ground-truth label and accuracy with single-scale strategies
Dataset
KTH TIPS
FMD
Virus
Brodatz
Kylberg
102 Flowers
Caltech 101
Strategy
Similarity
High Dim
LBP
Similarity
High Dim
LBP
Similarity
High Dim
LBP
Similarity
High Dim
LBP
Similarity
High Dim
LBP
Similarity
High Dim
LBP
Similarity
High Dim
LBP
Similarity
High Dim
1
1.55
26.96
8.08
0.25
0.23
13.46
13.05
-
Class Rank
Layers
2
3
4
19.18
18.87 18.26
18.64
18.78
18.79
1.60
1.68
1.82
0.94
0.91
1.09
25.77
25.79
25.61
23.20
23.36
23.30
6.61
6.78
6.72
6.65
6.50
6.53
0.22
0.22
0.23
0.21
0.21
0.22
0.18
0.16
0.14
0.07
0.07
0.07
13.34
13.56
13.99
13.10 12.99
13.15
12.37
12.23
12.32
11.98
12.19
12.16
A. Single-scale
First, we validated the performance of the proposed deep
architectures on single scale settings with increasing number
of deep layers. Information from each layer was merged at
a feature level by concatenating the layerwise histogram (c.f.
Section III-B). Table III summarizes the results of this setting.
In all the datasets, the proposed models surpassed the results
achieved by traditional LBP.
Furthermore, even when the accuracy gains are small, the
large gains in terms of class rank suggest that the deep
architectures induce more stable models, which assign a high
probability on the ground-truth label, even on misclassified cases. For instance, in the Kylberg dataset, although only a small relative accuracy gain of 3.23% was achieved by the High Dimensional rule, the relative gain on the class rank was 69.56%.
With a few exceptions (e.g. JAFFE dataset), the data-driven
deep operator based on a high dimensional projection achieved
the best performance. Despite the possibility to induce encoding orderings using user-defined similarity functions, the final
orderings are static and domain independent. In this sense,
more flexible data-driven approaches as the one suggested in
Section II-B2 are able to take advantage of the dataset-specific
properties.
Despite the capability of the proposed deep architectures to
achieve large gain margins, the deep LBP operators saturate
rapidly. For instance, most of the best results were found on
architectures with up to three deep layers. Further research on
aggregation techniques to achieve higher levels of abstraction
should be conducted. For instance, it would be interesting
to explore efficient non-parametric approaches for building
encoding orderings that allow more flexible data-driven optimization.
5
18.80
19.10
1.86
1.11
25.59
23.50
6.73
6.55
0.25
0.23
0.14
0.07
14.29
13.32
12.38
12.34
1
89.22
29.20
56.80
89.23
95.29
23.18
39.71
-
2
53.28
53.28
88.96
92.96
28.90
33.40
61.00
61.53
89.73
90.72
96.14
98.37
25.59
24.56
40.35
41.45
Accuracy
Layers
3
53.28
53.28
88.57
93.58
30.00
32.60
61.33
61.27
90.23
90.72
96.52
98.35
24.46
23.76
40.07
40.78
4
53.28
53.28
86.99
92.72
30.90
33.30
60.93
61.47
90.50
90.09
96.72
98.26
24.92
22.81
39.74
40.56
5
53.28
53.28
87.97
92.36
30.80
33.00
61.93
62.27
90.36
89.59
96.81
98.24
24.58
22.36
39.81
40.43
B. Multi-Scale
A relevant question in this context is if the observed gains
are due to the higher abstraction levels of the deep LBP
encodings or to the aggregation of information from larger
neighborhoods. Namely, when applying a second order operator, the neighbors of the reference pixel include information
from their own neighborhood which was initially out of the
scope of the local LBP operator. Thereby, we compare the
performance of the Deep LBP and multiscale LBP.
In order to simplify the model assessment, we fixed the
number of layers to 3 in the deep architectures. A scaling factor of 0.5 was used on each layer of the pyramidal multiscale
operator. Guided by the results achieved in the single-scale
experiments, the deep operator based on the lexicographic
sorting of the high-dimensional feature space was used in all
cases.
Table IV summarizes the results on the multiscale settings.
In most cases, all the deep LBP architectures surpassed
the performance of the best multiscale shallow architecture.
Thereby, the aggregation level achieved by deep LBP operators
goes beyond a multiscale analysis, being able to address
meta-texture information. Furthermore, when combined with
a multiscale approach, deep LBP achieved the best results in
all the cases.
C. LBPNet
Finally, we compare the performance of our deep LBP
architecture against the state of the art LBPNet [31]. As
referred in the introduction, LBPNet uses LBP encodings at
different neighborhood radius and histogram sampling in order
to simulate the process of learning a bag of convolutional
filters in deep networks. Then, the dimensionality of the
descriptors are reduced by means of PCA, resorting to the
idea of pooling layers from Convolutional Neural Networks
8
Table IV: Class rank (%) of the ground-truth label and
accuracy with multi-scale strategies
Dataset
KTH TIPS
FMD
Virus
Brodatz
Kylberg
102 Flowers
Caltech 101
Strategy
Shallow
Deep
Shallow
Deep
Shallow
Deep
Shallow
Deep
Shallow
Deep
Shallow
Deep
Shallow
Deep
Class Rank
Scales
1
2
3
1.55
1.17
1.22
0.91
0.79
0.62
26.96 26.31 26.32
23.36 23.54 23.77
8.08
7.51
7.97
6.50
5.92
6.04
0.25
0.20
0.23
0.21
0.13
0.13
0.23
0.13
0.12
0.07
0.05
0.04
13.46 13.10 12.79
12.99 12.68 12.71
12.92 12.46 12.28
12.21 11.74 11.60
Accuracy
Scales
1
2
3
89.22 90.94 90.93
93.58 94.21 94.96
29.20 29.60 29.80
32.60 33.20 33.00
56.80 60.60 58.60
61.27 66.13 64.87
89.23 90.77 90.00
90.72 92.97 93.11
95.29 97.34 97.57
98.35 98.84 98.95
23.18 25.10 26.40
23.76 26.02 26.87
40.07 40.84 41.03
40.68 41.67 42.01
Table V: Class rank (%) of the ground-truth label and accuracy
Dataset
KTH TIPS
FMD
Virus
Brodatz
Kylberg
(CNN). However, the output of a LBPNet cannot be used by
itself in successive calls of the same function. Thereby, it is
incapable of building features with higher expressiveness than
the individual operators.
In our experiments, we considered the best LBPNet with
up to three scales and histogram computations with nonoverlapping histograms that divide the image in 1 × 1, 2 × 2
and 3 × 3 blocks. The number of components kept in the
PCA transformation was chosen in order to retain 95% of the
variance for most datasets with the exception of 102 Flowers
and Caltech, where a value of 99% was chosen due to poor
performance of the previous value. A global histogram was
used in our deep LBP architecture.
Table V summarizes the results obtained by multiscale LBP
(shallow), LBPNet and our proposed deep LBP. In order to
understand if the gains achieved by the LBPNet are due
to the overcomplete sampling or to the PCA transformation
preceding the final classifier, we validated the performance
of our deep architecture with a PCA transformation on the
global descriptor before applying the Random Forest classifier.
Despite being able to surpass the performance of our deep LBP
without dimensionality reduction, LBPNet did not improve
the results obtained by our deep architecture with PCA in
most cases. In this sense, even without resorting to local
descriptors on the histogram sampling, our model was able
to achieve the best results within the family of LBP methods.
The only exception was observed in the 102 Flowers dataset
(see Fig. 10f), where the spatial information can be relevant.
It is important to note that our model can also benefit from
using spatial sampling of the LBP activations. Moreover,
deep learning concepts such as dropout and pooling layers
can be introduced within the Deep LBP architectures in a
straightforward manner.
V. CONCLUSIONS
Local Binary Patterns have achieved competitive performance in several computer vision tasks, being a robust and
easy to compute descriptor with high discriminative power
on a wide spectrum of tasks. In this work, we proposed Deep
Local Binary Patterns, an extension of the traditional LBP that
102 Flowers
Caltech 101
Strategy
Shallow
LBPNet
Deep LBP
Deep LBP (PCA)
Shallow
LBPNet
Deep LBP
Deep LBP (PCA)
Shallow
LBPNet
Deep LBP
Deep LBP (PCA)
Shallow
LBPNet
Deep LBP
Deep LBP (PCA)
Shallow
LBPNet
Deep LBP
Deep LBP (PCA)
Shallow
LBPNet
Deep LBP
Deep LBP (PCA)
Shallow
LBPNet
Deep LBP
Deep LBP (PCA)
Class Rank
1.17
0.43
0.62
0.16
26.31
25.71
23.36
23.20
7.51
7.18
5.92
5.91
0.20
0.20
0.13
0.12
0.12
0.19
0.04
0.02
12.79
9.61
12.68
22.30
12.46
12.11
11.60
10.87
Accuracy
90.94
96.29
94.96
98.39
29.80
30.00
33.20
32.30
60.60
60.73
66.13
65.60
90.77
91.49
93.11
94.46
97.57
95.80
98.95
99.55
26.40
35.56
26.87
8.80
41.03
42.69
42.01
45.14
allow successive applications of the operator. By applying LBP
in a recursive way, features with higher level of abstraction are
computed that improve the descriptor discriminability.
The key aspect of our proposal is the introduction of
flexible binarization rules that define an order relation between
LBP encodings. This was achieved with two main learning
paradigms. First, learning the ordering based on a userdefined encoding similarity metric. Second, allowing the user
to describe LBP encodings on a high-dimensional space and
learning the ordering on the extended space directly. Both
ideas improved the performance of traditional LBP in a diverse
set of datasets, covering various applications such as face analysis, texture categorization and object detection. As expected,
the paradigm based on a projection to a high-dimensional
space achieved the best performance, given its capability of
using application specific knowledge in an efficient way. The
proposed deep LBP are able to aggregate information from
local neighborhoods into higher abstraction levels, being able
to surpass the performance obtained by multiscale LBP as
well.
While the advantages of the proposed approach were
demonstrated in the experimental section, further research
can be conducted on several areas. For instance, it would be
interesting to find the minimal properties of interest that should
be guaranteed by the binarization function. In this work, since
we are dealing with intensity-based image, we restricted our
analysis to partial orderings. However, under the presence of
other types of data such as directional (i.e. periodic, angular)
data, cycling or local orderings could be more suitable. In the
most extreme case, the binarization function may be arbitrarily
complex without being restricted to strict orders.
On the other hand, constraining the shape of the binarization
9
function allows more efficient ways to find suitable candidates.
In this sense, it is relevant to explore ways to improve the
performance of the similarity-based deep LBP. Two possible
options would be to refine the final embedding by using
training data and allowing the user to specify incomplete
similarity information.
In this work, each layer was learned in a local fashion,
without space for further refinement. While this idea was
commonly used in the deep learning community when training
stacked networks, later improvements take advantage of refining locally trained architectures [50]. Therefore, we plan to
explore global optimization techniques to refine the layerwise
binarization functions.
Deep learning imposed a new era in computer vision
and machine learning, achieving outstanding results on applications where previous state-of-the-art methods performed
poorly. While the foundations of deep learning rely on very
simple image processing operators, relevant properties held
by traditional methods, such as illumination and rotational
invariance, are not guaranteed. Moreover, the amount of data
required to learn competitive deep models from scratch is
usually prohibitive. Thereby, it is relevant to explore the path
into a unification of traditional and deep learning concepts.
In this work, we explored this idea within the context of
Local Binary Patterns. The extension of deep concepts to other
traditional methods is of great interest in order to rekindle the
most fundamental concepts of computer vision to the research
community.
ACKNOWLEDGMENT
This work was funded by the Project “NanoSTIMA: Macro-to-Nano Human Sensing: Towards Integrated Multimodal Health Monitoring and Analytics / NORTE-01-0145-FEDER-000016” financed by the North Portugal Regional Operational
Programme (NORTE 2020), under the PORTUGAL 2020
Partnership Agreement, and through the European Regional
Development Fund (ERDF), and also by Fundação para a
Ciência e a Tecnologia (FCT) within PhD grant number
SFRH/BD/93012/2013.
REFERENCES
[1] J. Xie, L. Xu, and E. Chen, “Image denoising and inpainting with
deep neural networks,” in Advances in Neural Information Processing
Systems, 2012, pp. 341–349.
[2] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva, “Learning
deep features for scene recognition using places database,” in Advances
in neural information processing systems, 2014, pp. 487–495.
[3] Y. Cho and L. K. Saul, “Kernel methods for deep learning,” in Advances
in neural information processing systems, 2009, pp. 342–350.
[4] T. Ojala, M. Pietikäinen, and T. Mäenpää, “Gray scale and rotation
invariant texture classification with local binary patterns,” in European
Conference on Computer Vision. Springer, 2000, pp. 404–420.
[5] T. Ojala, M. Pietikainen, and T. Maenpaa, “Multiresolution gray-scale
and rotation invariant texture classification with local binary patterns,”
IEEE Transactions on pattern analysis and machine intelligence, vol. 24,
no. 7, pp. 971–987, 2002.
[6] L. Liu, L. Zhao, Y. Long, G. Kuang, and P. Fieguth, “Extended local
binary patterns for texture classification,” Image and Vision Computing,
vol. 30, no. 2, pp. 86–99, 2012.
[7] Y. Zhao, W. Jia, R.-X. Hu, and H. Min, “Completed robust local binary
pattern for texture classification,” Neurocomputing, vol. 106, pp. 68–76,
2013.
[8] Z. Guo, L. Zhang, and D. Zhang, “Rotation invariant texture classification using lbp variance (lbpv) with global matching,” Pattern
recognition, vol. 43, no. 3, pp. 706–719, 2010.
[9] ——, “A completed modeling of local binary pattern operator for texture
classification,” IEEE Transactions on Image Processing, vol. 19, no. 6,
pp. 1657–1663, 2010.
[10] G. Zhao and M. Pietikainen, “Dynamic texture recognition using local
binary patterns with an application to facial expressions,” IEEE transactions on pattern analysis and machine intelligence, vol. 29, no. 6,
2007.
[11] T. Ahonen, A. Hadid, and M. Pietikäinen, “Face recognition with local
binary patterns,” in European conference on computer vision. Springer,
2004, pp. 469–481.
[12] B. Zhang, Y. Gao, S. Zhao, and J. Liu, “Local derivative pattern versus
local binary pattern: face recognition with high-order local pattern
descriptor,” IEEE transactions on image processing, vol. 19, no. 2, pp.
533–544, 2010.
[13] J. Ren, X. Jiang, and J. Yuan, “Noise-resistant local binary pattern with
an embedded error-correction mechanism,” IEEE Transactions on Image
Processing, vol. 22, no. 10, pp. 4049–4060, 2013.
[14] C. Shan, “Learning local binary patterns for gender classification on
real-world face images,” Pattern Recognition Letters, vol. 33, no. 4, pp.
431–437, 2012.
[15] D. Huang, C. Shan, M. Ardabilian, Y. Wang, and L. Chen, “Local binary
patterns and its application to facial image analysis: a survey,” IEEE
Transactions on Systems, Man, and Cybernetics, Part C (Applications
and Reviews), vol. 41, no. 6, pp. 765–781, 2011.
[16] L. Yeffet and L. Wolf, “Local trinary patterns for human action recognition,” in Computer Vision, 2009 IEEE 12th International Conference
on. IEEE, 2009, pp. 492–497.
[17] L. Nanni, A. Lumini, and S. Brahnam, “Local binary patterns variants
as texture descriptors for medical image analysis,” Artificial intelligence
in medicine, vol. 49, no. 2, pp. 117–125, 2010.
[18] T. Xu, E. Kim, and X. Huang, “Adjustable adaboost classifier and pyramid features for image-based cervical cancer diagnosis,” in Biomedical
Imaging (ISBI), 2015 IEEE 12th International Symposium on. IEEE,
2015, pp. 281–285.
[19] O. Barkan, J. Weill, L. Wolf, and H. Aronowitz, “Fast high dimensional
vector multiplication face recognition,” in Proceedings of the IEEE
International Conference on Computer Vision, 2013, pp. 1960–1967.
[20] M. Heikkilä, M. Pietikäinen, and C. Schmid, “Description of interest
regions with center-symmetric local binary patterns,” in Computer
vision, graphics and image processing. Springer, 2006, pp. 58–69.
[21] J. Trefnỳ and J. Matas, “Extended set of local binary patterns for rapid
object detection,” in Computer Vision Winter Workshop, 2010, pp. 1–7.
[22] G. Xue, L. Song, J. Sun, and M. Wu, “Hybrid center-symmetric local
pattern for dynamic background subtraction,” in Multimedia and Expo
(ICME), 2011 IEEE International Conference on. IEEE, 2011, pp. 1–6.
[23] C. Silva, T. Bouwmans, and C. Frélicot, “An extended center-symmetric
local binary pattern for background modeling and subtraction in videos,”
in International Joint Conference on Computer Vision, Imaging and
Computer Graphics Theory and Applications, VISAPP 2015, 2015.
[24] J. Y. Choi, K. N. Plataniotis, and Y. M. Ro, “Using colour local binary
pattern features for face recognition,” in Image Processing (ICIP), 2010
17th IEEE International Conference on. IEEE, 2010, pp. 4541–4544.
[25] C. Zhu, C.-E. Bichot, and L. Chen, “Multi-scale color local binary
patterns for visual object classes recognition,” in Pattern Recognition
(ICPR), 2010 20th International Conference on. IEEE, 2010, pp. 3065–
3068.
[26] S. Zhang, H. Yao, and S. Liu, “Dynamic background modeling and
subtraction using spatio-temporal local binary patterns,” in Image Processing, 2008. ICIP 2008. 15th IEEE International Conference on.
IEEE, 2008, pp. 1556–1559.
[27] G. Xue, J. Sun, and L. Song, “Dynamic background subtraction based on
spatial extended center-symmetric local binary pattern,” in Multimedia
and Expo (ICME), 2010 IEEE International Conference on. IEEE,
2010, pp. 1050–1054.
[28] J. Yang, S. Wang, Z. Lei, Y. Zhao, and S. Z. Li, “Spatio-temporal
lbp based moving object segmentation in compressed domain,” in
Advanced Video and Signal-Based Surveillance (AVSS), 2012 IEEE
Ninth International Conference on. IEEE, 2012, pp. 252–257.
[29] H. Yin, H. Yang, H. Su, and C. Zhang, “Dynamic background subtraction
based on appearance and motion pattern,” in Multimedia and Expo
Workshops (ICMEW), 2013 IEEE International Conference on. IEEE,
2013, pp. 1–6.
[30] S. H. Davarpanah, F. Khalid, L. N. Abdullah, and M. Golchin, “A texture
descriptor: Background local binary pattern (bglbp),” Multimedia Tools
and Applications, vol. 75, no. 11, pp. 6549–6568, 2016.
[31] M. Xi, L. Chen, D. Polajnar, and W. Tong, “Local binary pattern
network: a deep learning approach for face recognition,” in Image
Processing (ICIP), 2016 IEEE International Conference on. IEEE,
2016, pp. 3224–3228.
[32] R. W. Robinson, “Counting unlabeled acyclic digraphs,” in Combinatorial mathematics V. Springer, 1977, pp. 28–43.
[33] A. Y. Ng, M. I. Jordan, Y. Weiss et al., “On spectral clustering: Analysis
and an algorithm,” in NIPS, vol. 14, no. 2, 2001, pp. 849–856.
[34] J. B. Tenenbaum, V. De Silva, and J. C. Langford, “A global geometric
framework for nonlinear dimensionality reduction,” science, vol. 290,
no. 5500, pp. 2319–2323, 2000.
[35] J. B. Kruskal, “Nonmetric multidimensional scaling: a numerical
method,” Psychometrika, vol. 29, no. 2, pp. 115–129, 1964.
[36] K. Fernandes, J. S. Cardoso, and H. Palacios, “Learning and ensembling lexicographic preference trees with multiple kernels,” in Neural
Networks (IJCNN), 2016 International Joint Conference on. IEEE,
2016, pp. 2140–2147.
[37] T. Joachims, “Optimizing search engines using clickthrough data,” in
Proceedings of the eighth ACM SIGKDD international conference on
Knowledge discovery and data mining. ACM, 2002, pp. 133–142.
[38] P. Flach and E. T. Matsubara, “A simple lexicographic ranker and
probability estimator,” in European Conference on Machine Learning.
Springer, 2007, pp. 575–582.
[39] P. Brodatz, Textures: a photographic album for artists and designers.
Dover Pubns, 1966.
[40] A. Kapoor and R. W. Picard, “Multimodal affect recognition in learning
environments,” in Proceedings of the 13th annual ACM international
conference on Multimedia. ACM, 2005, pp. 677–682.
[41] D. Eigen, C. Puhrsch, and R. Fergus, “Depth map prediction from a
single image using a multi-scale deep network,” in Advances in neural
information processing systems, 2014, pp. 2366–2374.
[42] N. Neverova, C. Wolf, G. W. Taylor, and F. Nebout, “Multi-scale deep
learning for gesture detection and localization,” in Workshop at the
European Conference on Computer Vision. Springer, 2014, pp. 474–
490.
[43] J. Wang, Y. Song, T. Leung, C. Rosenberg, J. Wang, J. Philbin, B. Chen,
and Y. Wu, “Learning fine-grained image similarity with deep ranking,”
in Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2014, pp. 1386–1393.
[44] E. Hayman, B. Caputo, M. Fritz, and J.-O. Eklundh, “On the significance of real-world conditions for material classification,” in European
conference on computer vision. Springer, 2004, pp. 253–266.
[45] L. Sharan, R. Rosenholtz, and E. Adelson, “Material perception: What
can you see in a brief glance?” Journal of Vision, vol. 9, no. 8, pp.
784–784, 2009.
[46] C. San Martin and S.-W. Kim, Eds., Virus Texture Analysis Using
Local Binary Patterns and Radial Density Profiles, ser. Lecture Notes
in Computer Science, vol. 7042. Springer Berlin / Heidelberg, 2011.
[47] G. Kylberg, “The kylberg texture dataset v. 1.0,” Centre for Image
Analysis, Swedish University of Agricultural Sciences and Uppsala
University, Uppsala, Sweden, External report (Blue series) 35,
September 2011. [Online]. Available: http://www.cb.uu.se/∼gustaf/
texture/
[48] M.-E. Nilsback and A. Zisserman, “Automated flower classification over
a large number of classes,” in Proceedings of the Indian Conference on
Computer Vision, Graphics and Image Processing, Dec 2008.
[49] L. Fei-Fei, R. Fergus, and P. Perona, “One-shot learning of object categories,” IEEE transactions on pattern analysis and machine intelligence,
vol. 28, no. 4, pp. 594–611, 2006.
[50] M. Norouzi, M. Ranjbar, and G. Mori, “Stacks of convolutional restricted
boltzmann machines for shift-invariant feature learning,” in Computer
Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference
on. IEEE, 2009, pp. 2735–2742.
International Journal on Soft Computing, Artificial Intelligence and Applications (IJSCAI), Vol.2, No.2, April 2013
STUDY OF ε-SMOOTH SUPPORT VECTOR REGRESSION AND COMPARISON WITH ε-SUPPORT VECTOR REGRESSION AND POTENTIAL SUPPORT VECTOR MACHINES FOR PREDICTION OF THE ANTITUBERCULAR ACTIVITY OF OXAZOLINES AND OXAZOLES DERIVATIVES
Doreswamy1 and Chanabasayya M. Vastrad2
1 Department of Computer Science, Mangalore University, Mangalagangotri-574 199, Karnataka, INDIA, Doreswamyh@yahoo.com
2 Department of Computer Science, Mangalore University, Mangalagangotri-574 199, Karnataka, INDIA, channu.vastrad@gmail.com
ABSTRACT
A new smoothing method for solving ε-support vector regression (ε-SVR), tolerating a small error in fitting a given data set nonlinearly, is proposed in this study. It is a smooth unconstrained optimization reformulation of the traditional linear programming problem associated with ε-insensitive support vector regression. We term this redeveloped problem ε-smooth support vector regression (ε-SSVR). The performance and predictive ability of ε-SSVR are investigated and compared with other methods such as LIBSVM (ε-SVR) and P-SVM. In the present study, two Oxazolines and Oxazoles molecular descriptor data sets were evaluated. We demonstrate the merits of our algorithm in a series of experiments. Preliminary experimental results illustrate that our proposed approach improves the regression performance and the learning efficiency. In both studied cases, the predictive ability of the ε-SSVR model is comparable or superior to that obtained by LIBSVM and P-SVM. The results indicate that ε-SSVR can be used as an alternative powerful modeling method for regression studies. The experimental results show that the presented algorithm, ε-SSVR, predicts antitubercular activity more precisely and effectively than LIBSVM and P-SVM.
KEYWORDS
ε-SSVR, Newton-Armijo, LIBSVM, P-SVM
DOI : 10.5121/ijscai.2013.2204

1. INTRODUCTION
The aim of this paper is the supervised learning of real-valued functions. We study a sequence S = (x1, y1), . . . , (xm, ym) of descriptor-target pairs, where the descriptors are vectors xi ∈ R^n and the targets are real-valued scalars, yi ∈ R. Our aim is to learn a function f : R^n → R which gives a good approximation of the target values from their corresponding descriptor vectors. Such a function is usually referred to as a regression function, or a regressor for short. The main aim of
regression problems is to find a function f(x) that can correctly predict the target values y of new input descriptor data points x by learning from the given training data set S. Here, learning from a given training data set means finding a linear surface that tolerates a small error in fitting this training data set. Ignoring the very small errors that fall within some tolerance, say ε, which may lead to an improved generalization ability, is achieved by using an ε-insensitive loss function. Following the usual aim of support vector machines (SVMs) [1-4], the function f(x) is also made as flat as possible in fitting the training data. This problem is called ε-support vector regression (ε-SVR), and a descriptor data point xi ∈ R^n is called a support vector if |f(xi) − yi| ≥ ε. Conventionally, ε-SVR is formulated as a constrained minimization problem [5-6], namely a convex quadratic programming problem or a linear programming problem [7-9]. Such formulations introduce 2m additional nonnegative variables and 2m inequality constraints that enlarge the problem size and can increase the computational complexity of solving the problem. In our approach, we modify the model slightly and apply the smoothing methods that have been widely used for solving important mathematical programming problems [10-14] and the support vector machine for classification [15], in order to deal with the problem directly as an unconstrained minimization problem. We name this reformulated problem ε-smooth support vector regression (ε-SSVR). Because of the infinite differentiability of the objective function of our unconstrained minimization problem, we use a fast Newton-Armijo method to solve this reformulation. It has been shown that the sequence generated by the Newton-Armijo method converges to the unique solution globally and quadratically [15]. Taking advantage of the ε-SSVR formulation, we only need to solve a system of linear equations iteratively instead of solving a convex quadratic program or a linear program, as is the case with conventional ε-SVR. Thus, we do not need any sophisticated optimization package to solve ε-SSVR. We apply this approach to regression on Oxazolines and Oxazoles molecular descriptor data sets.
The proposed ε-SSVR model has strong mathematical properties, such as strong convexity and infinite differentiability. To demonstrate the proposed ε-SSVR's capability in solving regression problems, we employ ε-SSVR to predict the anti-tuberculosis activity of Oxazolines and Oxazoles agents. We also compare our ε-SSVR model with P-SVM [16-17] and LIBSVM [18] in terms of prediction accuracy. The proposed ε-SSVR algorithm is implemented in MATLAB.
A word about our notation and background material is given below. All vectors will be column vectors throughout this paper. For a vector x in the n-dimensional real descriptor space R^n, the plus function x_+ is defined by (x_+)_i = max{0, x_i}, i = 1, ..., n. The scalar (inner) product of two vectors x and y in R^n will be denoted by x'y, and the p-norm of x by ‖x‖_p. For a matrix A ∈ R^{m×n}, A_i is the ith row of A, which is a row vector in R^n. A column vector of ones of arbitrary dimension will be denoted by 1. For A ∈ R^{m×n} and B ∈ R^{n×l}, the kernel K(A, B) maps R^{m×n} × R^{n×l} into R^{m×l}. In particular, if x and y are column vectors in R^n, then K(x', y) is a real number, K(A, x) = K(x', A')' is a column vector in R^m, and K(A, A') is an m × m matrix. If f is a real-valued function defined on the n-dimensional real descriptor space R^n, the gradient of f at x is denoted by ∇f(x), which is a row vector in R^n, and the n × n Hessian matrix of second partial derivatives of f at x is denoted by ∇²f(x). The base of the natural logarithm will be denoted by e.
2. MATERIALS AND ALGORITHMS
2.1 The Data Set
The molecular descriptors of 100 Oxazolines and Oxazoles derivatives [19-20], which are H37Rv inhibitors, were analyzed. These molecular descriptors were generated using the Padel-Descriptor tool [21]. The dataset covers a diverse set of molecular descriptors with a wide range of inhibitory activities against H37Rv. The pIC50 (observed biological activity) values range from -1 to 3. The dataset can be arranged in a data matrix. This data matrix X contains m samples (molecular structures) in rows and n descriptors in columns. The vector y of order m × 1 denotes the measured activity of interest, i.e. pIC50. Before modeling, the dataset is scaled.
2.2 The Smooth ε-Support Vector Regression (ε-SSVR)
We are given a dataset S which consists of m points in the n-dimensional real descriptor space R^n, denoted by the matrix A ∈ R^{m×n}, and m real-valued observations associated with each descriptor vector. That is, S = {(A_i, y_i) | A_i ∈ R^n, y_i ∈ R, for i = 1, ..., m}, and we would like to find a regression function f(x) that tolerates a small error in fitting this given data set. This can be achieved by using the ε-insensitive loss function, which sets an ε-insensitive "tube" around the data within which errors are discarded. Also, following the idea of support vector machines (SVMs) [1-4], the function f(x) is made as flat as possible in fitting the training data set. We start with the regression function f(x) = x'w + b. The problem can be formulated as an unconstrained minimization problem as follows:

    min_{(w,b) ∈ R^{n+1}}  (1/2) w'w + C 1'|ξ|_ε                                   (1)

where |ξ|_ε ∈ R^m, (|ξ|_ε)_i = max{0, |A_i w + b − y_i| − ε}, denotes the fitting errors, and the positive control parameter C weights the tradeoff between the fitting errors and the flatness of the regression function f(x). To handle the ε-insensitive loss function in the objective function of the above minimization problem, it is traditionally reformulated as a constrained minimization problem as follows:

    min_{(w,b,ξ,ξ*)}  (1/2) w'w + C 1'(ξ + ξ*)
    s.t.  Aw + 1b − y ≤ 1ε + ξ,
          −(Aw + 1b − y) ≤ 1ε + ξ*,
          ξ, ξ* ≥ 0.                                                               (2)

Problem (2), which is equivalent to problem (1), is a convex quadratic minimization problem with n + 1 free variables, 2m nonnegative variables, and 2m inequality constraints. However, introducing more variables and constraints enlarges the problem size and can increase the computational complexity of solving the regression problem.
In our smooth approach, we change the model slightly and solve it directly as an unconstrained minimization problem, without adding any new variables or constraints.
Figure 1. (a) |x|²_ε and (b) p²_ε(x, α), with α = 5 and ε = 1.
That is, the square of the 2-norm of the ε-insensitive loss, ‖|Aw + 1b − y|_ε‖²_2, is minimized with weight C/2 in place of the 1-norm of the ε-insensitive loss as in Eqn. (1). In addition, we add the term (1/2) b² to the objective function to induce strong convexity and to guarantee that the problem has a unique global optimal solution. This yields the following unconstrained minimization problem:

    min_{(w,b) ∈ R^{n+1}}  (1/2)(w'w + b²) + (C/2) Σ_{i=1}^m |A_i w + b − y_i|²_ε          (3)
This formulation has been proposed in active set support vector regression [22] and solved in its dual form. Motivated by the smooth support vector machine for classification (SSVM) [15], the square of the ε-insensitive loss function in the above formulation can be accurately approximated by a smooth function which is infinitely differentiable, as described below. Thus, we are able to use a fast Newton-Armijo algorithm to solve the approximate problem. Before we introduce the smooth approximation function, we note some useful observations:

    |x|_ε = max{0, |x| − ε}
          = max{0, x − ε} + max{0, −x − ε}
          = (x − ε)_+ + (−x − ε)_+ .                                               (4)

In addition, (x − ε)_+ · (−x − ε)_+ = 0 for all x ∈ R and ε > 0. Thus, we have

    |x|²_ε = (x − ε)²_+ + (−x − ε)²_+ .                                            (5)

In SSVM [15], the plus function x_+ is approximated by a smooth p-function, p(x, α) = x + (1/α) log(1 + e^{−αx}), α > 0. It is straightforward to replace |x|²_ε by the very accurate smooth approximation given by:

    p²_ε(x, α) = (p(x − ε, α))² + (p(−x − ε, α))² .                                (6)
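To make the shape of this approximation concrete, the following short Python/NumPy sketch (an illustration only, not the authors' MATLAB code) evaluates the plus function, the squared ε-insensitive loss, and its smooth surrogate p²_ε for given ε and α:

import numpy as np

def plus(x):
    """Plus function (x)_+ = max(0, x)."""
    return np.maximum(0.0, x)

def p(x, alpha):
    """Smooth approximation of the plus function used in SSVM/SSVR.
    np.logaddexp(0, -alpha*x) = log(1 + exp(-alpha*x)), computed stably."""
    return x + np.logaddexp(0.0, -alpha * x) / alpha

def eps_insensitive_sq(x, eps):
    """Squared eps-insensitive loss |x|_eps^2 = (x-eps)_+^2 + (-x-eps)_+^2, Eqn. (5)."""
    return plus(x - eps) ** 2 + plus(-x - eps) ** 2

def p_eps_sq(x, eps, alpha):
    """Smooth surrogate p_eps^2(x, alpha) from Eqn. (6)."""
    return p(x - eps, alpha) ** 2 + p(-x - eps, alpha) ** 2

# Example: the surrogate approaches the true loss as alpha grows.
x = np.linspace(-3, 3, 7)
print(eps_insensitive_sq(x, eps=1.0))
print(p_eps_sq(x, eps=1.0, alpha=5.0))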
Figure 1 illustrates the square of the ε-insensitive loss function and its smooth approximation in the case of α = 5 and ε = 1. We call this approximation the p²_ε-function with smoothing parameter α. This p²_ε-function is used here to replace the square of the ε-insensitive loss function of Eqn. (3) to obtain our smooth support vector regression (ε-SSVR):

    min_{(w,b) ∈ R^{n+1}}  Φ_{ε,α}(w, b)
      := min_{(w,b) ∈ R^{n+1}}  (1/2)(w'w + b²) + (C/2) Σ_{i=1}^m p²_ε(A_i w + b − y_i, α)
       = min_{(w,b) ∈ R^{n+1}}  (1/2)(w'w + b²) + (C/2) 1' p²_ε(Aw + 1b − y, α) ,           (7)

where p²_ε(Aw + 1b − y, α) ∈ R^m is defined componentwise by (p²_ε(Aw + 1b − y, α))_i = p²_ε(A_i w + b − y_i, α). This is a strongly convex minimization problem without any constraints. It is not difficult to show that it has a unique solution. Additionally, the objective function in Eqn. (7) is infinitely differentiable, thus we can use a fast Newton-Armijo method to solve the problem.
Before we solve the problem in Eqn. (7), we show that the solution of Eqn. (3) can be obtained by solving Eqn. (7) with α approaching infinity. We begin with a simple lemma that bounds the difference between the square of the ε-insensitive loss function, |x|²_ε, and its smooth approximation p²_ε(x, α).
Lemma 2.2.1. For x ∈ R and |x| < ε + σ:

    p²_ε(x, α) − |x|²_ε ≤ 2 (log 2 / α)² + (2σ/α) log 2,

where p²_ε(x, α) is defined in Eqn. (6).
Proof. We consider three cases. For −ε ≤ x ≤ ε, |x|_ε = 0 and p(x, α)² is a continuous increasing function, so we have

    p²_ε(x, α) − |x|²_ε = p(x − ε, α)² + p(−x − ε, α)² ≤ 2 p(0, α)² = 2 (log 2 / α)²,

since x − ε ≤ 0 and −x − ε ≤ 0.
For ε < x < ε + σ, using the result in SSVM [15] that p(x, α)² − (x_+)² ≤ (log 2 / α)² + (2σ/α) log 2 for |x| < σ, we have

    p²_ε(x, α) − |x|²_ε = (p(x − ε, α))² + (p(−x − ε, α))² − (x − ε)²_+
                        ≤ (p(x − ε, α))² − (x − ε)²_+ + (p(0, α))²
                        ≤ 2 (log 2 / α)² + (2σ/α) log 2.

Likewise, for the case −ε − σ < x < −ε, we have

    p²_ε(x, α) − |x|²_ε ≤ 2 (log 2 / α)² + (2σ/α) log 2.

Hence, p²_ε(x, α) − |x|²_ε ≤ 2 (log 2 / α)² + (2σ/α) log 2 for all |x| < ε + σ.
By Lemma 2.2.1, we have that as the smoothing parameter α approaches infinity, the unique solution of Eqn. (7) approaches the unique solution of Eqn. (3). We state this for a function f_ε(x) given in Eqn. (9) below, which includes the objective function of Eqn. (3), and for a function g_ε(x, α) given in Eqn. (10) below, which includes the SSVR function of Eqn. (7).
Theorem 2.2.2. Let A ∈ R^{m×n} and b ∈ R^{m×1}. Define the real-valued functions f_ε(x) and g_ε(x, α) on the n-dimensional real molecular descriptor space R^n:

    f_ε(x) = (1/2) Σ_{i=1}^m |A_i x − b_i|²_ε + (1/2) ‖x‖²_2                                  (9)

and

    g_ε(x, α) = (1/2) Σ_{i=1}^m p²_ε(A_i x − b_i, α) + (1/2) ‖x‖²_2 ,                         (10)

with ε, α > 0.
1. There exists a unique solution x̄ of min_{x ∈ R^n} f_ε(x) and a unique solution x̄_α of min_{x ∈ R^n} g_ε(x, α).
2. For all α > 0, we have the following inequality:

    ‖x̄_α − x̄‖²_2 ≤ m [ (log 2 / α)² + ξ (log 2 / α) ] ,                                      (11)

where ξ is defined as follows:

    ξ = max_{1 ≤ i ≤ m} |(A x̄ − b)_i| .                                                      (12)

Thus x̄_α converges to x̄ as α goes to infinity, with an upper bound given by Eqn. (11).
The proof can be adapted from the results in SSVM [15] and is thus omitted here. We now describe a Newton-Armijo algorithm for solving the smooth formulation (7).
2.2.1 A NEWTON-ARMIJO ALGORITHM FOR ε-SSVR
By utilizing the results of the preceding section and taking advantage of the twice differentiability of the objective function in Eqn. (7), we describe a globally and quadratically convergent Newton-Armijo algorithm for solving Eqn. (7).
Algorithm 2.3.1 Newton-Armijo Algorithm for ε-SSVR
Start with any choice of initial point (w^0, b^0) ∈ R^{n+1}. Having (w^i, b^i), terminate if the gradient of the objective function of Eqn. (7) is zero, that is, ∇Φ_{ε,α}(w^i, b^i) = 0. Otherwise, compute (w^{i+1}, b^{i+1}) as follows:
1. Newton direction: Determine the direction d^i ∈ R^{n+1} by setting equal to zero the linearization of ∇Φ_{ε,α}(w, b) around (w^i, b^i), which yields n + 1 linear equations in n + 1 variables:

    ∇²Φ_{ε,α}(w^i, b^i) d^i = −∇Φ_{ε,α}(w^i, b^i)' .                                          (13)

2. Armijo step size [1]: Set

    (w^{i+1}, b^{i+1}) = (w^i, b^i) + λ_i d^i ,                                               (14)

where the step size λ_i = max{1, 1/2, 1/4, ...} is chosen such that:

    Φ_{ε,α}(w^i, b^i) − Φ_{ε,α}((w^i, b^i) + λ_i d^i) ≥ −δ λ_i ∇Φ_{ε,α}(w^i, b^i) d^i ,        (15)

where δ ∈ (0, 1/2).
Note that an important difference between our smoothing approach and that of the traditional SVR [7-9] is that here we solve a linear system of equations (13), rather than a quadratic program, as is the case with the conventional SVR.
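For concreteness, a minimal Python/NumPy sketch of this Newton-Armijo iteration for the linear ε-SSVR objective (7) is given below. This is an illustrative reimplementation under the formulas stated above, not the authors' MATLAB code; the values of C, ε, α, δ and the stopping tolerance are arbitrary example choices.

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def p(t, a):
    # smooth plus function: t + log(1 + exp(-a t)) / a
    return t + np.logaddexp(0.0, -a * t) / a

def train_ssvr(A, y, C=1000.0, eps=0.1, alpha=5.0, delta=0.25, tol=1e-6, max_iter=50):
    """Newton-Armijo minimization of the smooth eps-SSVR objective (7). Returns (w, b)."""
    m, n = A.shape
    Abar = np.hstack([A, np.ones((m, 1))])       # [A, 1] so that z = (w, b)
    z = np.zeros(n + 1)

    def residual(z):
        return Abar @ z - y                       # A_i w + b - y_i

    def objective(z):
        r = residual(z)
        loss = p(r - eps, alpha) ** 2 + p(-r - eps, alpha) ** 2
        return 0.5 * z @ z + 0.5 * C * loss.sum()

    def gradient_hessian(z):
        r = residual(z)
        pp, pm = p(r - eps, alpha), p(-r - eps, alpha)
        dp, dm = sigmoid(alpha * (r - eps)), sigmoid(alpha * (-r - eps))
        ddp, ddm = alpha * dp * (1 - dp), alpha * dm * (1 - dm)
        g_r = pp * dp - pm * dm                   # half the derivative of p_eps^2 w.r.t. r
        h_r = dp**2 + pp * ddp + dm**2 + pm * ddm # half the second derivative
        grad = z + C * (Abar.T @ g_r)
        hess = np.eye(n + 1) + C * (Abar.T * h_r) @ Abar
        return grad, hess

    for _ in range(max_iter):
        grad, hess = gradient_hessian(z)
        if np.linalg.norm(grad) < tol:
            break
        d = np.linalg.solve(hess, -grad)          # Newton direction, Eqn. (13)
        lam, f0 = 1.0, objective(z)
        while objective(z + lam * d) > f0 + delta * lam * (grad @ d):   # Armijo rule, Eqn. (15)
            lam *= 0.5
        z = z + lam * d
    return z[:-1], z[-1]

The linear solve inside the loop is exactly the system (13) that replaces the quadratic program of conventional ε-SVR; a kernelized variant would apply the same iteration after replacing A by the kernel matrix K(A, A').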
2.3 LIBSVM
LIBSVM [18] is a library for support vector machines and is currently one of the most widely used SVM software packages. This software contains C-support vector classification (C-SVC), ν-support vector classification (ν-SVC), ε-support vector regression (ε-SVR), and ν-support vector regression (ν-SVR). All SVM formulations supported in LIBSVM are quadratic minimization problems.
2.4 Potential Support Vector Machines (P-SVM)
P-SVM [16-17] is a supervised learning method used for classification and regression. Like standard Support Vector Machines, it is based on kernels. Kernel methods approach the problem by mapping the data into a high-dimensional feature space, where each coordinate corresponds to one feature of the data items, transforming the data into a set of points in a Euclidean space. In that space, a variety of methods can be used to find relations between the data.
2.5 Experimental Evaluation
In order to evaluate how well each method generalized to unseen data, we split the entire data set into two parts, the training set and the testing set. The training data was used to generate the regression function, i.e. learning from the training data; the testing set, which is not involved in the training procedure, was used to evaluate the prediction ability of the resulting regression function. We also used a tabular structure scheme in splitting the entire data set to keep the "similarity" between the training and testing data sets [23]. That is, we tried to make the training set and the testing set have similar observation distributions. A smaller testing error indicates better prediction ability. We performed tenfold cross-validation on each data set [24] and report the average testing error in our numerical results. Table 1 gives the features of the two descriptor datasets.
Table 1: Features of the two descriptor datasets

Data set (Molecular Descriptors of Oxazolines and Oxazoles Derivatives) | Train Size | Test Size | Attributes
Full    | 75 × 254 | 25 × 254 | 254
Reduced | 75 × 71  | 25 × 71  | 71
In all experiments, the 2-norm relative error was chosen to evaluate the discrepancy between the predicted values and the observations. For an observation vector y and a predicted vector ŷ, the 2-norm relative error (SRE) of the two vectors y and ŷ was defined as follows:

    SRE = ‖y − ŷ‖_2 / ‖y‖_2                                                        (16)

In statistics, the mean absolute error is a quantity used to measure how close predictions are to the eventual outcomes. The mean absolute error (MAE) is given by

    MAE = (1/n) Σ_{i=1}^n |ŷ_i − y_i| = (1/n) Σ_{i=1}^n |e_i|                      (17)

As the name suggests, the mean absolute error is an average of the absolute errors e_i = ŷ_i − y_i, where ŷ_i is the prediction and y_i the observed value.
In statistics, the coefficient of determination, denoted R² and pronounced R squared, is used in the context of statistical models whose main purpose is the prediction of future outcomes on the basis of other related information. R² is most often seen as a number between 0 and 1, used to describe how well a regression line fits a set of data. An R² near 1 indicates that the regression line fits the data well, while an R² close to 0 indicates that the regression line does not fit the data very well. It is the proportion of variability in a data set that is accounted for by the statistical model. It provides a measure of how well future outcomes are likely to be predicted by the model.
    R² = 1 − Σ_{i=1}^n (y_i − ŷ_i)² / Σ_{i=1}^n (y_i − ȳ)²                         (18)

The predictive power of the models was further evaluated with the calculated statistical parameters standard error of prediction (SEP) and relative error of prediction (REP%), defined as follows:

    SEP = [ Σ_{i=1}^n (ŷ_i − y_i)² / n ]^0.5                                       (19)

    REP(%) = (100 / ȳ) [ (1/n) Σ_{i=1}^n (ŷ_i − y_i)² ]^0.5                        (20)

The performances of the models were also evaluated in terms of the root mean square error (RMSE), defined as:

    RMSE = [ (1/n) Σ_{i=1}^n (ŷ_i − y_i)² ]^0.5                                    (21)

where ŷ_i, y_i and ȳ are the predicted, observed and mean activity values, respectively.
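The following short Python/NumPy helper (an illustrative sketch, not part of the original MATLAB implementation) computes the evaluation statistics (16)-(21) exactly as defined above:

import numpy as np

def regression_metrics(y, y_pred):
    """Return SRE, MAE, R2, SEP, REP(%) and RMSE for observed y and predicted y_pred."""
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    n = y.size
    resid = y_pred - y
    sre = np.linalg.norm(y - y_pred) / np.linalg.norm(y)                   # Eqn. (16)
    mae = np.abs(resid).mean()                                             # Eqn. (17)
    r2 = 1.0 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)     # Eqn. (18)
    sep = np.sqrt(np.sum(resid ** 2) / n)                                  # Eqn. (19)
    rep = 100.0 / y.mean() * np.sqrt(np.mean(resid ** 2))                  # Eqn. (20)
    rmse = np.sqrt(np.mean(resid ** 2))                                    # Eqn. (21)
    return {"SRE": sre, "MAE": mae, "R2": r2, "SEP": sep, "REP%": rep, "RMSE": rmse}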
3. RESULTS AND DISCUSSION
In this section, we demonstrate the effectiveness of our proposed approach ε-SSVR by comparing it to LIBSVM (ε-SVR) and P-SVM. In the following experiments, training is done with the Gaussian kernel function k(x_i, x_j) = exp(−ϒ ‖x_i − x_j‖²), where ϒ is the width of the Gaussian kernel, i, j = 1, ..., l. We perform tenfold cross-validation on each dataset and record the average testing error in our numerical results. The performance of ε-SSVR for regression depends on the combination of several parameters: the capacity parameter C, the ε of the ε-insensitive loss function, and the kernel parameter ϒ. C is a regularization parameter that controls the tradeoff between maximizing the margin and minimizing the training error. In practice, the parameter C is varied through a wide range of values and the optimal performance is assessed using a separate test set. The effect of the regularization parameter C on the RMSE is shown in Figure 1a for the full descriptor dataset and Figure 1b for the reduced descriptor dataset.
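A minimal sketch of this kind of parameter scan (with hypothetical parameter grids; the actual experiments used the MATLAB implementation and the specific values reported below) could look as follows in Python:

import numpy as np
from itertools import product

def cv_rmse(train_fn, X, y, C, eps, gamma, n_folds=10):
    """Average RMSE over n_folds cross-validation splits for one (C, eps, gamma) setting.
    train_fn(X_tr, y_tr, C, eps, gamma) must return a predict(X) callable."""
    folds = np.array_split(np.arange(len(y)), n_folds)
    errs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.hstack([folds[j] for j in range(n_folds) if j != k])
        model = train_fn(X[train], y[train], C, eps, gamma)
        errs.append(np.sqrt(np.mean((model(X[test]) - y[test]) ** 2)))
    return float(np.mean(errs))

def grid_search(train_fn, X, y):
    # hypothetical grids, in the spirit of the scans shown in Figures 1-3
    Cs = [1e2, 1e3, 8.35e3, 1e6]
    epsilons = [0.05, 0.1, 0.2]
    gammas = np.arange(0.01, 0.10, 0.01)
    return min(((cv_rmse(train_fn, X, y, C, e, g), (C, e, g))
                for C, e, g in product(Cs, epsilons, gammas)), key=lambda t: t[0])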
Figure 1a. The selection of the optimal capacity factor (8350) for ε-SSVR (ε=0.1, ϒ=0.0217)
For the full descriptor dataset, the RMSE value of 0.3563 for the ε-SSVR model at the selected optimal parameter C is smaller than the RMSE values of the other two models, i.e. 0.3665 for LIBSVM (ε-SVR) and 0.5237 for P-SVM. Similarly, for the reduced descriptor dataset, the RMSE value of 0.3339 for the ε-SSVR model at the selected optimal parameter C is smaller than the RMSE values of the other two models, i.e. 0.3791 for LIBSVM (ε-SVR) and 0.5237 for P-SVM. The optimal value for ε depends on the type of noise present in the data, which is usually unknown. Even if enough knowledge of the noise is available to select an optimal value for ε, there is the practical consideration of the number of resulting support vectors. The ε-insensitivity prevents the entire training set from meeting the boundary conditions and so allows for the possibility of sparsity in the solution of the dual formulation.
Figure 1b. The selection of the optimal capacity factor (1000000) for ε-SSVR (ε=0.1, ϒ=0.02)
Thus, choosing an appropriate value of ε is critical in theory. To find an optimal ε, the root mean square error (RMSE) under LOO cross-validation was calculated for different values of ε. The curves of RMSE versus epsilon (ε) are shown in Figure 2a and Figure 2b.
Figure 2a. The selection of the optimal epsilon (0.1) for ε-SSVR (C = 1000, ϒ=0.02)
For the full descriptor dataset, the RMSE value of 0.3605 for the ε-SSVR model at the selected optimal epsilon (ε) is small; the RMSE value for the LIBSVM (ε-SVR) model, 0.3665, is close and comparable to the proposed model, while the RMSE value for the P-SVM model is larger at 0.5237.
Figure 2b. The selection of the optimal epsilon (0.1) for ε-SSVR (C = 10000000, ϒ=0.01)
Similarly, for the reduced descriptor dataset, the RMSE value of 0.3216 for the ε-SSVR model at the selected optimal epsilon (ε) is smaller than the RMSE values of the other two models, i.e. 0.3386 for LIBSVM (ε-SVR) and 0.4579 for P-SVM.
Figure 3a. The selection of the optimal ϒ (0.02) for ε-SSVR (C = 1000, ε=0.1)
Parameter tuning was conducted in ε-SSVR, where the ϒ parameter of the Gaussian kernel function was varied from 0.01 to 0.09 in steps of 0.01 to select the optimal parameter. The value of ϒ is selected by minimizing the LOO tuning error rather than by directly minimizing the training error. The curves of RMSE versus gamma (ϒ) are shown in Figure 3a and Figure 3b.
Figure 3b. The selection of the optimal ϒ (0.01) for ε-SSVR (C = 1000000, ε=0.1)
For the full descriptor dataset, the RMSE value of 0.3607 for the ε-SSVR model at the selected optimal parameter ϒ is smaller than the RMSE values of the other two models, i.e. 0.3675 for LIBSVM (ε-SVR) and 0.5224 for P-SVM. Similarly, for the reduced descriptor dataset, the RMSE value of 0.3161 for the ε-SSVR model at the selected optimal parameter ϒ is smaller than the RMSE values of the other two models, i.e. 0.3386 for LIBSVM (ε-SVR) and 0.4579 for P-SVM.
The statistical parameters calculated for the ε-SSVR, LIBSVM (ε-SVR) and P-SVM models are presented in Table 2 and Table 3.
Table 2. Performance comparison between ε-SSVR, ε-SVR and P-SVM for the full descriptor dataset

(ε, C, ϒ)           | Algorithm | Train Error (R²) | Test Error (R²) | MAE    | SRE    | SEP    | REP(%)
(0.1, 1000, 0.0217) | ε-SSVR    | 0.9790           | 0.8183          | 0.0994 | 0.1071 | 0.3679 | 53.7758
                    | ε-SVR     | 0.9825           | 0.8122          | 0.0918 | 0.0979 | 0.3741 | 54.6693
                    | P-SVM     | 0.8248           | 0.6166          | 0.2510 | 0.3093 | 0.5345 | 78.1207
(0.1, 8350, 0.0217) | ε-SSVR    | 0.9839           | 0.8226          | 0.0900 | 0.0939 | 0.3636 | 53.1465
                    | ε-SVR     | 0.9825           | 0.8122          | 0.0918 | 0.0979 | 0.3741 | 54.6693
                    | P-SVM     | 0.8248           | 0.6166          | 0.2510 | 0.3093 | 0.5345 | 78.1207
(0.1, 1000, 0.02)   | ε-SSVR    | 0.9778           | 0.8181          | 0.1019 | 0.1100 | 0.3681 | 53.8052
                    | ε-SVR     | 0.9823           | 0.8113          | 0.0922 | 0.0984 | 0.3750 | 54.8121
                    | P-SVM     | 0.8248           | 0.6186          | 0.2506 | 0.3093 | 0.5332 | 77.9205
In these tables, the statistical parameters R-square (R²), mean absolute error (MAE), 2-norm relative error (SRE), standard error of prediction (SEP) and relative error of prediction (REP%) obtained by applying the ε-SSVR, ε-SVR and P-SVM methods to the test set indicate a good external predictability of the models.
Table 3. Performance comparison between ε-SSVR, ε-SVR and P-SVM for the reduced descriptor dataset

(ε, C, ϒ)              | Algorithm | Train Error (R²) | Test Error (R²) | MAE    | SRE    | SEP    | REP(%)
(0.1, 1000000, 0.02)   | ε-SSVR    | 0.9841           | 0.8441          | 0.0881 | 0.0931 | 0.3408 | 49.8084
                       | ε-SVR     | 0.9847           | 0.7991          | 0.0827 | 0.0914 | 0.3870 | 56.5533
                       | P-SVM     | 0.8001           | 0.7053          | 0.2612 | 0.3304 | 0.4687 | 68.4937
(0.1, 10000000, 0.01)  | ε-SSVR    | 0.9849           | 0.8555          | 0.0851 | 0.0908 | 0.3282 | 47.9642
                       | ε-SVR     | 0.9829           | 0.8397          | 0.0892 | 0.0967 | 0.3456 | 50.5103
                       | P-SVM     | 0.8002           | 0.7069          | 0.2611 | 0.3303 | 0.4673 | 68.3036
(0.1, 1000000, 0.01)   | ε-SSVR    | 0.9796           | 0.8603          | 0.0964 | 0.1056 | 0.3226 | 47.1515
                       | ε-SVR     | 0.9829           | 0.8397          | 0.0892 | 0.0967 | 0.3456 | 50.5103
                       | P-SVM     | 0.8002           | 0.7069          | 0.2611 | 0.3303 | 0.4673 | 68.3036
The experimental results show that the experiments carried out on the reduced descriptor dataset give better results than those on the full descriptor dataset. As can be seen from Table 3, the results of the ε-SSVR models are better than those obtained by the ε-SVR and P-SVM models for the reduced descriptor data set.
Figure 4. Correlation between observed and predicted values for the training set and test set generated by ε-SSVR
Figures 4, 5 and 6 are the scatter plots of the three models, which show the correlation between the observed values and the predicted anti-tuberculosis activity in the training and test sets.
Figure 5. Correlation between observed and predicted values for the training set and test set generated by ε-SVR
Figure 6. Correlation between observed and predicted values for the training set and test set generated by the P-SVM algorithm
Our numerical results have demonstrated that ε-SSVR is a powerful tool for solving regression problems and can handle massive data sets without sacrificing any prediction accuracy. In the tuning process of these experiments, we found that LIBSVM and P-SVM become very slow when the control parameter C becomes large, while ε-SSVR is quite robust to the control parameter C, since we solve the ε-insensitive regression problem as an unconstrained minimization problem.
4. CONCLUSION
In the present work, we studied ε-SSVR, which is a smooth unconstrained optimization reformulation of the traditional quadratic program associated with ε-insensitive support vector regression. We have compared the performance of the ε-SSVR, LIBSVM and P-SVM models on two datasets. The obtained results show that ε-SSVR can be used to derive statistical models with better quality and better generalization capabilities than linear regression methods. The ε-SSVR algorithm exhibits a better overall performance and a better predictive ability than the LIBSVM and P-SVM models. The experimental results indicate that ε-SSVR has high precision and good generalization ability.
ACKNOWLEDGEMENTS
We gratefully thank the Department of Computer Science, Mangalore University, Mangalore, India for the technical support of this research.
REFERENCES
[1] Jan Luts, Fabian Ojeda, Raf Van de Plas, Bart De Moor, Sabine Van Huffel, Johan A.K. Suykens, “A tutorial on support vector machine-based methods for classification problems in chemometrics”, Analytica Chimica Acta 665 (2010) 129–145
[2] Hongdong Li, Yizeng Liang, Qingsong Xu, “Support vector machines and its applications in chemistry”, Chemometrics and Intelligent Laboratory Systems 95 (2009) 188–198
[3] Jason Weston, “Support Vector Machines and Statistical Learning Theory”, NEC Labs America, 4 Independence Way, Princeton, USA. http://www.cs.columbia.edu/~kathy/cs4701/documents/jason_svm_tutorial.pdf
[4] Aly Farag and Refaat M. Mohamed, “Regression Using Support Vector Machines: Basic Foundations”, http://www.cvip.uofl.edu/wwwcvip/research/publications/TechReport/SVMRegressionTR.pdf
[5] Chih-Jen Lin, “Optimization, Support Vector Machines, and Machine Learning”, http://www.csie.ntu.edu.tw/~cjlin/talks/rome.pdf
[6] Max Welling, “Support Vector Regression”, http://www.ics.uci.edu/~welling/teaching/KernelsICS273B/SVregression.pdf
[7] Alex J. Smola and Bernhard Schölkopf, “A tutorial on support vector regression”, Statistics and Computing 14: 199–222, 2004
[8] Qiang Wu and Ding-Xuan Zhou, “SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming”, www6.cityu.edu.hk/ma/doc/people/zhoudx/LPSVMfinal.pdf
[9] Laurent El Ghaoui, “Convex Optimization in Classification Problems”, www.stanford.edu/class/ee392o/mit022702.pdf
[10] Donghui Li and Masao Fukushima, “Smoothing Newton and Quasi-Newton methods for mixed complementarity problems”, Computational Optimization and Applications, 17, 203-230, 2000
[11] C. Chen and O.L. Mangasarian ,“Smoothing Methods for Convex Inequalities and Linear
Complementarity problems”, Math. Programming, vol. 71, no. 1, pp. 51-69, 1995
[12] X. Chen, L. Qi and D. Sun, “Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities”, Mathematics of Computation, Volume 67, Number 222, April 1998, Pages 519-540
[13] X. Chen and Y. Ye, “On Homotopy-Smoothing Methods for Variational Inequalities”, SIAM J. Control and Optimization, vol. 37, pp. 589-616, 1999.
[14] Peter W. Christensen, “A semi-smooth Newton method for elasto-plastic contact problems”, International Journal of Solids and Structures 39 (2002) 2323–2341
[15] Y. J. Lee and O. L. Mangasarian, “SSVM: A smooth support vector machine for classification”, Computational Optimization and Applications, Vol. 20, No. 1, 2001, pp. 5-22.
[16] Sepp Hochreiter and Klaus Obermayer, “Support Vector Machines for Dyadic Data”, Neural Computation, 18, 1472-1510, 2006. http://ni.cs.tu-berlin.de/software/psvm/index.html
[17] Ismael F. Aymerich, Jaume Piera and Aureli Soria-Frisch, “Potential Support Vector Machines and Self-Organizing Maps for phytoplankton discrimination”, in Proceedings of the International Joint Conference on Neural Networks, IJCNN 2010, Barcelona, Spain, 18-23 July, 2010
[18] C.-C. Chang and C.-J. Lin, 2010 , “LIBSVM: A Library for Support Vector Machines”
,http://www.csie.ntu.edu.tw/~cjlin/libsvm
[19] Andrew J. Phillips, Yoshikazu Uto, Peter Wipf, Michael J. Reno, and David R. Williams, “Synthesis of Functionalized Oxazolines and Oxazoles with DAST and Deoxo-Fluor”, Organic Letters, 2000, Vol. 2, No. 8, 1165-1168
[20] Moraski GC, Chang M, Villegas-Estrada A, Franzblau SG, Möllmann U, Miller MJ., “Structure-activity relationship of new anti-tuberculosis agents derived from oxazoline and oxazole benzyl esters”, Eur J Med Chem. 2010 May;45(5):1703-16. doi: 10.1016/j.ejmech.2009.12.074. Epub 2010 Jan 14.
[21] “Padel-Descriptor” http://padel.nus.edu.sg/software/padeldescriptor/
[22] David R. Musicant and Alexander Feinberg ,“Active Set Support Vector Regression” , IEEE
TRANSACTIONS ON NEURAL NETWORKS, VOL. 15, NO.2, MARCH 2004
[23] Ian H. Witten & Eibe Frank, “Data Mining: Practical Machine Learning Tools and Techniques”, Second Edition, Elsevier.
[24] Payam Refaeilzadeh, Lei Tang, Huan Liu, “Cross-Validation”, http://www.cse.iitb.ac.in/~tarung/smt/papers_ppt/ency-cross-validation.pdf
Authors
Doreswamy received the B.Sc degree in Computer Science and the M.Sc degree in Computer Science from the University of Mysore in 1993 and 1995, respectively, and the Ph.D degree in Computer Science from Mangalore University in the year 2007. After completion of his Post-Graduation Degree, he subsequently joined and served as Lecturer in Computer Science at St. Joseph's College, Bangalore, from 1996-1999. He was then elevated to the position of Reader in Computer Science at Mangalore University in the year 2003. He was the Chairman of the Department of Post-Graduate Studies and Research in Computer Science from 2003-2005 and from 2009-2008 and served in various capacities at Mangalore University; at present he is the Chairman of the Board of Studies and Associate Professor in Computer Science at Mangalore University. His areas of research interest include Data Mining and Knowledge Discovery, Artificial Intelligence and Expert Systems, Bioinformatics, Molecular Modelling and Simulation, Computational Intelligence, Nanotechnology, Image Processing and Pattern Recognition. He has been granted a Major Research Project entitled “Scientific Knowledge Discovery Systems (SKDS) for Advanced Engineering Materials Design Applications” from the funding agency University Grants Commission, New Delhi, India. He has published about 30 contributed peer-reviewed papers in national/international journals and conferences. He received the SHIKSHA RATTAN PURASKAR for his outstanding achievements in the year 2009 and the RASTRIYA VIDYA SARASWATHI AWARD for outstanding achievement in his chosen field of activity in the year 2010.
Chanabasayya M. Vastrad received the B.E. degree and the M.Tech. degree in the years 2001 and 2006, respectively. He is currently working towards his Ph.D degree in Computer Science and Technology under the guidance of Dr. Doreswamy in the Department of Post-Graduate Studies and Research in Computer Science, Mangalore University.
Provable and practical approximations for the degree
distribution using sublinear graph samples∗
Talya Eden
School of Computer Science, Tel Aviv
University
Tel Aviv, Israel
talyaa01@gmail.com
Shweta Jain
University of California, Santa Cruz
Santa Cruz, CA, USA
sjain12@ucsc.edu
arXiv:1710.08607v2 [cs.SI] 19 Jan 2018
Dana Ron
School of Computer Science, Tel Aviv
University
Tel Aviv, Israel
danaron@tau.ac.il
Ali Pinar
Sandia National Laboratories
Livermore, CA
apinar@sandia.gov
C. Seshadhri
University of California, Santa Cruz
Santa Cruz, CA
sesh@ucsc.edu
ABSTRACT
The degree distribution is one of the most fundamental properties
used in the analysis of massive graphs. There is a large literature on
graph sampling, where the goal is to estimate properties (especially
the degree distribution) of a large graph through a small, random
sample. The degree distribution estimation poses a significant
challenge, due to its heavy-tailed nature and the large variance
in degrees.
We design a new algorithm, SADDLES, for this problem,
using recent mathematical techniques from the field of sublinear
algorithms. The SADDLES algorithm gives provably accurate
outputs for all values of the degree distribution. For the analysis,
we define two fatness measures of the degree distribution, called
the h-index and the z-index. We prove that SADDLES is sublinear
in the graph size when these indices are large. A corollary of this
result is a provably sublinear algorithm for any degree distribution
bounded below by a power law.
We deploy our new algorithm on a variety of real datasets and
demonstrate its excellent empirical behavior. In all instances, we
get extremely accurate approximations for all values in the degree
distribution by observing at most 1% of the vertices. This is a major
improvement over the state-of-the-art sampling algorithms, which
typically sample more than 10% of the vertices to give comparable
results. We also observe that the h and z-indices of real graphs are
large, validating our theoretical analysis.
In domains as diverse as social sciences, biology, physics,
cybersecurity, graphs are used to represent entities and the
relationships between them. This has led to the explosive growth
of network science as a discipline over the past decade. One of
the hallmarks of network science is the occurrence of specific
graph properties that are common to varying domains, such as
heavy tailed degree distributions, large clustering coefficients,
and small-world behavior. Arguably, the most significant among
these properties is the degree distribution, whose study led to the
foundation of network science [7, 8, 20].
Given an undirected graph G, the degree distribution (or
technically, histogram) is the sequence of numbers n(1), n(2), . . .,
where n(d) is the number of vertices of degree d. In almost all
real-world scenarios, the average degree is small, but the variance
(and higher moments) is large. Even for relatively large d, n(d)
is still non-zero, and n(d) typically has a smooth non-increasing
behavior. In Fig. 1, we see the typical degree distribution behavior.
The average degree in a Google web network is less than 10, but
the maximum degree is more than 5000. There are also numerous
vertices with all intermediate degrees. This is referred to as a “heavy
tailed” distribution. The degree distribution, especially the tail, is
of significant relevance to modeling networks, determining their
resilience, spread of information, and for algorithmics [6, 9, 13, 16,
33–36, 42].
With full access to G, the degree distribution can be computed
in linear time, by simply determining the degree of each vertex.
Yet in many scenarios, we only have partial access to the graph,
provided through some graph samples. A naive extrapolation of
the degree distribution can result in biased results. The seminal
research paper of Faloutsos et al. claimed a power law in the degree
distribution on the Internet [20]. This degree distribution was
deduced by measuring a power law distribution in the graph sample
generated by a collection of traceroute queries on a set of routers.
Unfortunately, it was mathematically and empirically proven that
traceroute responses can have a power law even if the true network
does not [1, 11, 27, 37]. In general, a direct extrapolation of the
degree distribution from a graph subsample is not valid for the
∗ Both Talya Eden and Shweta Jain contributed equally to this work, and are joint first authors of this work.
1 INTRODUCTION
underlying graph. This leads to the primary question behind our
work.
How can we provably and practically estimate the degree
distribution without seeing the entire graph?
There is a rich literature in statistics, data mining, and physics
on estimating graph properties (especially the degree distribution)
using a small subsample [2, 3, 5, 17, 28, 30, 31, 39, 46, 47].
Nonetheless, there is no provable algorithm for the entire degree
distribution, with a formal analysis on when it is sublinear in the
number of vertices. Furthermore, most empirical studies typically
sample 10-30% of the vertices for reasonable estimates.
1.1
Problem description
We focus on the complementary cumulative degree histogram (often
called the cumulative degree distribution) or ccdh of G. This is
the sequence {N(d)}, where N(d) = Σ_{r ≥ d} n(r). The ccdh is
typically used for fitting distributions, since it averages out noise
and is monotonic [12]. Our aim is to get an accurate bicriteria
approximation to the ccdh of G, at all values of d.
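As a concrete illustration (not part of the paper's sampling algorithm), the exact ccdh of a fully known graph can be computed in a few lines of Python; this is the ground truth that a sampling algorithm tries to approximate.

import numpy as np

def ccdh(degrees):
    """Exact complementary cumulative degree histogram.
    Returns a dict d -> N(d), the number of vertices of degree >= d."""
    degrees = np.asarray(degrees)
    max_d = degrees.max()
    n_d = np.bincount(degrees, minlength=max_d + 1)   # n(d): number of vertices of degree d
    N = np.cumsum(n_d[::-1])[::-1]                    # N(d) = sum_{r >= d} n(r)
    return {d: int(N[d]) for d in range(1, max_d + 1) if N[d] > 0}

# Example on a tiny degree sequence
print(ccdh([1, 1, 2, 3, 3, 5]))   # {1: 6, 2: 4, 3: 3, 4: 1, 5: 1}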
1.2
Our contributions
Our main theoretical result is a new sampling algorithm, the
Sublinear Approximations for Degree Distributions Leveraging
Edge Samples, or SADDLES. This algorithm provably provides
(ε, δ )-approximations for the ccdh. We show how to design
SADDLES under both the Standard Model and the Hidden Degrees
Model. We apply SADDLES on a variety of real datasets and
demonstrate its ability to accurately approximate the ccdh with a
tiny sample of the graph.
• Sampling algorithm for estimating ccdh: Our algorithm
combines a number of techniques in random sampling to get
(ε, δ )-estimates for the ccdh. A crucial component is an application
of an edge simulation technique, first devised by Eden et al. in the
context of triangle counting [18, 19]. This (theoretical) technique
shows how to get a collection of weakly correlated uniform random
edges from independent uniform vertices. SADDLES employs a
weighting scheme on top of this method to estimate the ccdh.
• Heavy tails lead to sublinear algorithms: The challenge
in analyzing SADDLES is in finding parameters of the ccdh that
allow for sublinear query complexity. To that end, we discuss two
parameters that measure “heaviness” of the distribution tail: the
classic h-index and a newly defined z-index. We prove that the
running time of SADDLES is sublinear (for both models) whenever
these indices are large. This yields a provably sublinear time
algorithm to get accurate (ε, δ )-estimates for the ccdh.
• Excellent empirical behavior:
We deploy an
implementation of SADDLES on a collection of large real-world
graphs. In all instances, we achieve extremely accurate estimates
for the entire ccdh by sampling at most 1% of the vertices of the
graph. Refer to Fig. 1. Observe how SADDLES tracks various
jumps in the ccdh, for both graphs in Fig. 1.
• Comparison with existing sampling methods: A number
of graph sampling methods have been proposed in practice, such as
vertex sampling, snowball sampling, forest-fire sampling, induced
graph sampling, random walk, edge sampling [5, 17, 28, 30, 38,
39, 47]. A recent work of Zhang et al. explicitly addresses biases
in these sampling methods, and fixes them using optimization
techniques [47]. We run head-to-head comparisons with all
these sampling methods, and demonstrate that SADDLES gives
significantly better practical performance. Fig. 1 shows the output
of all these sampling methods with a total sample size of 1% of the vertices.
Definition 1.1. The sequence {Ñ(d)} is an (ε, δ)-estimate of the ccdh if ∀d, (1 − δ)N((1 + ε)d) ≤ Ñ(d) ≤ (1 + δ)N((1 − ε)d).
Computing an (ε, δ )-estimate is significantly harder than
approximating the ccdh using standard distribution measures.
Statistical measures, such as the KS-distance, χ 2 , `p -norms, etc.
tend to ignore the tail, since (in terms of probability mass) it is
a negligible portion of the distribution. From a network science
standpoint, the few vertices that are of extremely high degree are
essential for graph analysis. An (ε, δ )-estimate is accurate for all d.
The query model: A formal approach requires specifying a
query model for accessing G. We look to the subfields of property
testing and sublinear algorithms within theoretical computer
science for such models [22, 23]. Consider the following three
kinds of queries.
• Vertex queries: acquire a uniform random vertex v ∈ V .
• Neighbor queries: given v ∈ V , acquire a uniform random
neighbor u of v.
• Degree queries: given v ∈ V , acquire the degree dv .
An algorithm is only allowed to make these queries to process the
input. It has to make some number of queries, and finally produce
an output. We discuss two query models, and give results for both.
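To make the query interface concrete, a minimal sketch of such an access oracle (an illustration under the three query types just listed, not code from the paper) could be:

import random

class GraphOracle:
    """Wraps an adjacency-list graph and exposes only the three query types."""
    def __init__(self, adj):
        self.adj = adj                      # adj[v] = list of neighbors of v
        self.vertices = list(adj.keys())
        self.queries = 0                    # query counter

    def random_vertex(self):                # vertex query
        self.queries += 1
        return random.choice(self.vertices)

    def random_neighbor(self, v):           # neighbor query
        self.queries += 1
        return random.choice(self.adj[v])

    def degree(self, v):                    # degree query (not available in the HDM)
        self.queries += 1
        return len(self.adj[v])

An algorithm for the Hidden Degrees Model would simply never call degree(); in either model the cost is measured by the total number of calls made.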
The Standard Model (SM) All queries allowed: This is the
standard model in numerous sublinear algorithms results [18, 19, 22–
24]. Furthermore, most papers on graph sampling implicitly use this
model for generating subsamples. Indeed, any method involving
crawling from a random set of vertices and collecting degrees is
in the Standard Model. This model is the primary setting for our
work, and allows for comparison with the rich body of graph sampling
algorithms. It is worth noting that in the Standard Model, one can
determine the entire degree distribution in O(n log n) queries (the
extra log n factor comes from the coupon collector bound of finding
all the vertices through uniform sampling). Thus, it makes sense to
express the number of queries made by an algorithm as a fraction
of n. Alternately, the number of queries is basically the number of
vertices encountered by the algorithm. Thus, a sublinear algorithm
makes o(n) queries.
The Hidden Degrees Model (HDM) Vertex and neighbor queries allowed, not degree queries: This is a substantially weaker model. In numerous cybersecurity and network monitoring settings, an algorithm cannot query for degrees, and has to infer them indirectly. Observe that this model is significantly harder than the standard model. It takes O((m + n) log n) to determine all the degrees, since one has to at least visit all the edges to find degrees exactly. In this model, we express the number of queries as a fraction of m.
We stress that other query models are possible. Recent work of Dasgupta, Kumar, and Sarlos argues that vertex queries are too powerful, especially in social network contexts [14]. In their query model, a small set of random seeds are provided, but only neighbor and degree queries are subsequently allowed.
(a) amazon0601 copurchase network
(b) web-Google web network
(c) cit-Patents citation network
(d) com-orkut social network
Figure 1: The output of SADDLES on a collection of networks: amazon0601 (403K vertices, 4.9M edges), web-Google (870K vertices,
4.3M edges), cit-Patents (3.8M vertices, 16M edges), com-orkut social network (3M vertices, 117M edges). SADDLES samples 1%
of the vertices and gives accurate results for the entire (cumulative) degree distribution. For comparison, we show the output
of a number of sampling algorithms from past work, each run with the same number of samples. (Because of the size of
com-Orkut, methods involving optimization [47] fail to produce an estimate in reasonable time.)
the vertices. Observe how across the board, the methods make
erroneous estimates for most of the degree distribution. The errors
are also very large, for all the methods. This is consistent with
previous work, where methods sample more than 10% of the number
of vertices.
1.3
Theoretical results in detail
Our main result is more nuanced, and holds for all degree
distributions. If the ccdh has a heavy tail, we expect N (d) to
be reasonably large even for large values of d. We describe two
formalisms of this notion, through fatness indices.
Definition 1.4. The h-index of the degree distribution is the
largest d such that there are at least d vertices of degree at least d.
This is the exact analogue of the bibliometric h-index [26]. As we show in §2.1, h can be approximated by min_d (d + N(d))/2. A more stringent index is obtained by replacing the arithmetic mean by the (smaller) geometric mean.
Our main theoretical result is a new sampling algorithm, the
Sublinear Approximations for Degree Distributions Leveraging
Edge Samples, or SADDLES.
We first demonstrate our results for power law degree
distributions [7, 8, 20]. Statistical fitting procedures suggest they
occur to some extent in the real-world, albeit with much noise [12].
The classic power law degree distribution sets n(d) ∝ 1/d^γ, where
γ is typically in [2, 3]. We build on this to define a power law lower
bound.
Definition 1.5. The z-index of the degree distribution is z = min_{d: N(d)>0} √(d · N(d)).
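Both fatness indices are straightforward to compute when the true ccdh is available; the following is a small illustrative sketch (the function names are ours), assuming N is a callable ccdh and d_max the maximum degree.

import math

def h_index(N, d_max):
    # Definition 1.4: largest d such that there are at least d vertices of degree at least d.
    return max((d for d in range(1, d_max + 1) if N(d) >= d), default=0)

def z_index(N, d_max):
    # Definition 1.5: min over d with N(d) > 0 of sqrt(d * N(d)).
    vals = [math.sqrt(d * N(d)) for d in range(1, d_max + 1) if N(d) > 0]
    return min(vals) if vals else 0.0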
Our main theorem asserts that large h and z indices lead to a
sublinear algorithm for degree distribution estimation. Theorem 1.3
is a direct corollary obtained by plugging in values of the indices
for power laws.
Definition 1.2. Fix γ > 2. A degree distribution is bounded below by a power law with exponent γ, if the ccdh satisfies the following property. There exists a constant τ > 0 such that for all d, N(d) ≥ ⌊τn/d^{γ−1}⌋.
Theorem 1.6. For any ε > 0, the SADDLES algorithm outputs (with high probability) an (ε, ε)-approximation to the ccdh, and makes the following number of queries.
• SM: Õ(n/h + m/z²)
• HDM: Õ(m/z)
The following is a corollary of our main result. For convenience, we will suppress query complexity dependencies on ε and log n factors, using Õ(·).
Theorem 1.3. Suppose the degree distribution of G is bounded below by a power law with exponent γ. Let the average degree be denoted by d̄. For any ε > 0, the SADDLES algorithm outputs (with high probability) an (ε, ε)-approximation to the ccdh and makes the following number of queries.
• SM: Õ(n^{1−1/γ} + n^{1−1/(γ−1)} d̄)
1.4
Challenges
The heavy-tailed behavior of the real degree distribution poses
the primary challenge to computing (ε, δ )-estimates to the ccdh.
As d increases, there are fewer and fewer vertices of that degree.
Sampling uniform random vertices is inefficient when N (d) is small.
A natural idea to find high degree vertices is to pick a random neighbor of a random vertex. Such a sample is more likely to be a high degree
vertex. This is the idea behind methods like snowball sampling,
forest fire sampling, random walk sampling, graph sample-and-hold,
etc. [5, 17, 28, 30, 38, 39, 47]. But these lead to biased samples,
since vertices with the same degree may be picked with differing
probabilities.
• HDM: Õ(n^{1−1/(2(γ−1))} d̄)
In most real-world instances, the average degree d̄ is typically constant. Thus, the complexities above are strongly sublinear. For example, when γ = 2, we get Õ(n^{1/2}) for both models. When γ = 3, we get Õ(n^{2/3}) and Õ(n^{3/4}).
A direct extrapolation/scaling of the degrees in the observed
graph does not provide an accurate estimate. Our experiments show
that existing methods always miss the head or the tail. A more
principled approach was proposed recently by Zhang et al. [47],
by casting the estimation of the unseen portion of the distribution
as an optimization problem. From a mathematical standpoint, the
vast majority of existing results tend to analyze the KS-statistic, or
some `p -norm. As we mentioned earlier, this does not work well
for measuring the quality of the estimate at all scales. As shown by
our experiments, none of these methods give accurate estimates for the entire ccdh with less than 5% of the vertices.
The main innovation in SADDLES comes through the use of
a recent theoretical technique to simulate edge samples through
vertex samples [18, 19]. The sampling of edges occurs through
two stages. In the first stage, the algorithm samples a set of r
vertices and sets up a distribution over the sampled vertices such
that any edge adjacent to a sampled vertex may be sampled with
uniform probability. In the second stage, it samples q edges from
this distribution. While a single edge is uniform random, the set of
edges are correlated.
For a given d, we define a weight function on the edges, such
that the total weight is exactly N (d). SADDLES estimates the total
weight by scaling up the average weight on a random sample of
edges, generated as discussed above. The difficulty in the analysis
is the correlation between the edges. Our main insight is that if the
degree distribution has a fat tail, this correlation can be contained
even for sublinear r and q. Formally, this is achieved by relating
the concentration behavior of the average weight of the sample to
the h and z-indices. The final algorithm combines this idea with vertex sampling to get accurate estimates for all d.
The hidden degrees model is dealt with using birthday paradox techniques formalized by Ron and Tsur [41]. It is possible to estimate the degree dv using O(√dv) neighbor queries. But this adds overhead to the algorithm, especially for estimating the ccdh at the tail. As discussed earlier, we need methods that bias towards higher degrees, but this significantly adds to the query cost of actually estimating the degrees.
1.5
Related Work
Zhang et al. observe that the degree distribution of numerous
sampling methods is a random linear projection of the true
distribution [47]. They attempt to invert this (ill-conditioned) linear
problem, to correct the biases. This leads to improvement in the
estimate, but the empirical studies typically sample more than 10%
of the vertices for good estimates.
A recent line of work by Soundarajan et al. on active probing
also has flavors of graph sampling [44, 45]. In this setting, we start
with a small, arbitrary subgraph and try to grow this subgraph
to achieve some coverage objective (like discover the maximum
new vertices, find new edges, etc.). The probing schemes devised
in these papers outperform uniform random sampling methods for
coverage objectives.
All these results aim to capture numerous properties of the graph,
using a single graph sample. Nonetheless, the degree distribution is
typically considered as the most important, and empirical analyses
always focus on estimating it accurately. Ribeiro and Towsley [39] and Stumpf and Wiuf [46] specifically study degree distributions. Ribeiro and Towsley [39] do detailed analysis on degree distribution
estimates (they also look at the ccdh) for a variety of these sampling
methods. Their empirical results show significant errors either at
the head or the tail. We note that almost all these results end up
sampling up to 20% of the graph to estimate the degree distribution.
Some methods try to match the shape/family of the distribution,
rather than estimate it as a whole [46]. Thus, statistical methods
can be used to estimate parameters of the distribution. But it is
reasonably well-established that real-world degree distributions
are rarely pure power laws in most instances [12]. Indeed, fitting
a power law is rather challenging and naive regression fits on
log-log plots are erroneous, as results of Clauset-Shalizi-Newman
showed [12].
The subfield of property testing and sublinear algorithms for sparse
graphs within theoretical computer science can be thought of as
a formalization of graph sampling to estimate properties. Indeed,
our description of the main problem follows this language. There
is a very rich body of mathematical work in this area (refer to
Ron’s survey [40]). Practical applications of graph property testing
are quite rare, and we are only aware of one previous work on
applications for finding dense cores in router networks [25]. The
specific problem of estimating the average degree (or the total
number of edges) was studied by Feige [21] and Goldreich-Ron [23].
Gonen et al. and Eden et al. focus on the problem of estimating
higher moments of the degree distribution [19, 24]. One of the main
techniques we use of simulating edge queries was developed in
sublinear algorithms results of Eden et al. [18, 19] in the context of
triangle counting and degree moment estimation. We stress that
all these results are purely theoretical, and their practicality is by
no means obvious.
On the practical side, Dasgupta, Kumar, and Sarlos study
average degree estimation in real graphs, and develop alternate
algorithms [14]. They require the graph to have low mixing time and
demonstrate that the algorithm has excellent behavior in practice
(compared to implementations of Feige’s and the Goldreich-Ron
algorithm [21, 23]). Dasgupta et al. note that sampling uniform
random vertices is not possible in many settings, and thus they
consider a significantly weaker setting than Model 1. Chierichetti
There is a rich body of literature on generating a graph sample
that reveals graph properties of the larger “true” graph. We
do not attempt to fully survey this literature, and only refer to
results directly related to our work. The works of Leskovec &
Faloutsos [30], Maiya & Berger-Wolf [31], and Ahmed, Neville, &
Kompella [2, 5] provide excellent surveys of multiple sampling
methods.
There are a number of sampling methods based on random
crawls: forest-fire [30], snowball sampling [31], and expansion
sampling [30]. As has been detailed in previous work, these
methods tend to bias certain parts of the network, which can be
exploited for more accurate estimates of various properties [30, 31,
39]. A series of papers by Ahmed, Neville, and Kompella [2–5] have
proposed alternate sampling methods that combine random vertices
and edges to get better representative samples. Notably, this yields
one of the best streaming algorithms for triangle counting [4].
Proof. Let s = mind max(d, N (d)) and let the minimum be
attained at d ∗ . If there are multiple minima, let d ∗ be the largest
among them. We consider two cases. (Note that N (d) is a
monotonically non-increasing sequence.)
Case 1: N (d ∗ ) ≥ d ∗ . So s = N (d ∗ ). Since d ∗ is the largest
minimum, for any d > d ∗ , d > N (d ∗ ). (If not, then the minimum is
also attained at d > d ∗ .) Thus, d > N (d ∗ ) ≥ N (d). For any d < d ∗ ,
N (d) ≥ N (d ∗ ) ≥ d ∗ > d. We conclude that d ∗ is largest d such that
N (d) ≥ d. Thus, h = d ∗ .
If s ≠ h, then d∗ < N(d∗). Then, N(d∗ + 1) < N(d∗), otherwise
the minimum would be attained at d ∗ + 1. Furthermore, max(d ∗ +
1, N (d ∗ + 1)) > N (d ∗ ), implying d ∗ + 1 > N (d ∗ ). This proves that
h + 1 > s.
Case 2: d∗ > N(d∗). So s = d∗. For d > d∗, N(d) ≤ N(d∗) < d∗ < d. For d < d∗, N(d) ≥ d∗ > d (if N(d) < d∗, then d∗ would not
be the minimizer). Thus, d ∗ − 1 is the largest d such that N (d) ≥ d,
and h = d ∗ − 1 = s − 1.
et al. focus on sampling uniform random vertices, using only a
small set of seed vertices and neighbor queries [10].
We note that there is a large body of work on sampling graphs
from a stream [32]. This is quite different from our setting, since
a streaming algorithm observes every edge at least once. The
specific problem of estimating the degree distribution at all scales
was considered by Simpson et al. [43]. They observe many of the
challenges we mentioned earlier: the difficulty of estimating the
tail accurately, finding vertices at all degree scales, and combining
estimates from the head and the tail.
2
PRELIMINARIES
We set some notation. The input graph G has n vertices and m edges. For any vertex v, let Γ(v) be the neighborhood of v, and dv be the degree. As mentioned earlier, n(d) is the number of vertices of degree d and N(d) = Σ_{r≥d} n(r) is the ccdh at d. We use “u.a.r.” as a shorthand for “uniform at random”.
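For reference, n(d) and the ccdh N(d) can be tabulated from a degree sequence with one pass of suffix sums; this helper (degree_histograms, a name we introduce) is only a sketch of the notation, not part of the algorithm.

from collections import Counter

def degree_histograms(degrees):
    # n[d] = number of vertices of degree d; N[d] = sum over r >= d of n[r] (the ccdh).
    n = Counter(degrees)
    d_max = max(n) if n else 0
    N = [0] * (d_max + 2)
    for d in range(d_max, 0, -1):   # suffix sums give the ccdh
        N[d] = N[d + 1] + n.get(d, 0)
    return n, N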
We stress that all mention of probability and error is with
respect to the randomness of the sampling algorithm. There is no
stochastic assumption on the input graph G.
We use the shorthand A ∈ (1 ± α)B for A ∈ [(1 − α)B, (1 + α)B].
We will apply the following (rescaled) Chernoff bound.
The h-index does not measure d vs. N(d) at different scales, and a large h-index only ensures that there are “enough” high-degree vertices. For instance, the h-index does not distinguish between N₁ and N₂ when N₁(100) = 100 and N₁(d) = 0 for d > 100, and N₂(100,000) = 100 and N₂(d) = 0 for all other values of d ≥ 100.
The h and z-indices are related to each other.
Claim 2.4. √h ≤ z ≤ h.
Theorem 2.1. [Theorem 1 in [15]] Let X₁, X₂, . . . , X_k be a sequence of iid random variables with expectation µ. Furthermore, X_i ∈ [0, B].
• For ε < 1, Pr[|Σ_{i=1}^k X_i − µk| ≥ εµk] ≤ 2 exp(−ε²µk/3B).
• For t ≥ 2eµ, Pr[Σ_{i=1}^k X_i ≥ tk] ≤ 2^{−tk/B}.
Proof. Since N(d) is integral, if N(d) > 0, then N(d) ≥ 1. Thus, for all d with N(d) > 0, √(max(d, N(d))) ≤ √(d · N(d)) ≤ max(d, N(d)). We take the minimum over all d to complete the proof.
We will require the following “boosting through medians” lemma,
which is a routine application of the Chernoff bound.
To give some intuition about these indices, we compute the h and z-index for power laws. The classic power law degree distribution sets n(d) ∝ 1/d^γ, where γ is typically in [2, 3].
Theorem 2.2. Consider two quantities A < B. Suppose there exists a randomized algorithm (that does not know A or B) of expected running time T that outputs a value in [A, B] with probability at least 2/3. Then, for any 0 < δ < 1/3, there exists a randomized algorithm with expected running time O(T log(1/δ)) that outputs a value in [A, B] with probability at least 1 − δ.
Claim 2.5. If a degree distribution is bounded below by a power law with exponent γ, then h = Ω(n^{1/γ}) and z = Ω(n^{1/(2(γ−1))}).
Proof. Consider d ≤ (τn)^{1/γ}, where τ is defined according to Definition 1.2. Then, N(d) ≥ ⌊τn/d^{γ−1}⌋ = Ω(n^{1/γ}). This proves the h-index bound.
Proof. We simply run k = 200 log(1/δ) independent invocations of the algorithm and output the median output. Let X_i be the indicator random variable for the i-th invocation outputting a number in [A, B]. Observe that µ = E[X_i] ≥ 2/3. By Theorem 2.1, Pr[|Σ_{i=1}^k X_i − µk| ≥ µk/5] ≤ 2 exp(−µk/75) ≤ 2 exp(−(2/3)(200/75) log(1/δ)) ≤ δ. Thus, with probability at least 1 − δ, at least (4/5)(2/3)k > k/2 outputs lie in [A, B]. This implies that the median also lies in [A, B].
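The boosting step is simple enough to sketch in a few lines; this is our own illustration of the median trick (the constant 200 mirrors the proof, and run_once stands for any 2/3-correct randomized estimator).

import math
import statistics

def boost_by_medians(run_once, delta, k_const=200):
    # Run the base estimator k = O(log(1/delta)) times and return the median of the outputs.
    k = max(1, int(k_const * math.log(1.0 / delta)))
    return statistics.median(run_once() for _ in range(k))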
Set d∗ = (τn)^{1/(γ−1)}. For d ≤ d∗, N(d) ≥ 1 and d · N(d) ≥ (τ/2)n/d^{γ−2} = Ω(n^{1/(γ−1)}). If there exists no d > d∗ such that N(d) > 0, then z = Ω(n^{1/(2(γ−1))}). If there does exist some such d, then z = Ω(√d∗), which yields the same value.
Plugging in values, for γ = 2, both h and z are Ω(√n). For γ = 3, h = Θ(n^{1/3}) and z = Θ(n^{1/4}).
It will be convenient to fix the approximation parameter ε > 0 at
the very outset. So we will not pass ε as a parameter to our various
subroutines.
2.2
Simulating degree queries for HDM
The Hidden Degrees Model does not allow for querying the degree dv of a vertex v. Nonetheless, it is possible to get accurate estimates of dv by sampling u.a.r. neighbors (with replacement) of v. This can be done by using the birthday paradox argument, as formalized by Ron and Tsur [41]. Roughly speaking, one repeatedly samples neighbors until the same vertex is seen twice. If this happens after t samples, t² is a constant factor approximation for dv. This argument can be refined to get accurate approximations for dv using O(√dv) random edge queries.
2.1
More on Fatness indices
The following characterization of the h-index will be useful for analysis. Since (d + N(d))/2 ≤ max(d, N(d)) ≤ d + N(d), this proves that min_d (d + N(d))/2 is a 2-factor approximation to the h-index.
Lemma 2.3. min_d max(d, N(d)) ∈ {h, h + 1}.
• HDM: O((m/z)(ε^{−3} log(n/εδ))).
Theorem 2.6. [Theorem 3.1 of [41], restated] Fix any α > 0. There is an algorithm that outputs a value in (1 ± α)dv with probability > 2/3, and makes an expected O(√dv/α²) u.a.r. neighbor samples.
Observe how a larger h and z-index lead to smaller running times. Ignoring constant factors and assuming m = O(n), asymptotically increasing h and z-indices lead to sublinear algorithms. Suppose the degree distribution was a power law with exponent γ > 1. The average degree is constant, so m = O(n). Using the index calculations in §2.1, for SM, the running time is O(n^{1−1/γ}). For HDM, it is O(n^{1−1/(2(γ−1))}). For γ = 2, the running times are O(√n) for both models. For γ = 3, the running times are O(n^{2/3}) and O(n^{3/4}) respectively.
We now describe the algorithm itself. The main innovation in
SADDLES comes through the use of a recent theoretical technique
to simulate edge samples through vertex samples [18, 19]. The
sampling of edges occurs through two stages. In the first stage,
the algorithm samples a set of r vertices and sets up a distribution
over the sampled vertices such that any edge adjacent to a sampled
vertex may be sampled with uniform probability. In the second
stage, it samples q edges from this distribution.
For each edge, we compute a weight based on the degrees of its
vertices and generate our final ccdh estimate by averaging these
weights. Additionally, we use vertex sampling to estimate the head
of the distribution. Straightforward Chernoff bound arguments can
be used to determine when to use the vertex sampling over the
edge sampling method.
In the following description, we use c to denote a sufficiently
large constant. The same algorithmic structure is used for the
Standard Model and the Hidden Degrees Model. The only difference
is the use of the algorithm of Corollary 2.7 to estimate degrees in
the HDM, while the degrees are directly available in the Standard
Model.
We abuse notation somewhat, and use SADDLES to denote the
core sampling procedure. As described, this works for a single
choice of d to estimate N (d). The final algorithm simply invokes
this procedure for various degrees.
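To make the two-stage procedure concrete, here is a rough Python sketch of the core routine (our own rendering, not the authors' C++ implementation); the threshold c/ε² and the degree estimator deg are left as parameters, and corner cases such as isolated vertices are glossed over.

import random

def saddles(adj, d, r, q, threshold, deg=None):
    # adj: dict vertex -> neighbor list; deg: degree estimator (exact in SM, DEG-style in HDM).
    n = len(adj)
    deg = deg or (lambda v: len(adj[v]))
    vertices = list(adj)
    R = [random.choice(vertices) for _ in range(r)]      # stage 1: vertex samples
    d_hat = {v: deg(v) for v in set(R)}
    X = sum(1 for v in R if d_hat[v] >= d)
    if X >= threshold:                                   # head of the distribution: plain vertex sampling
        return n * X / r
    dR = sum(d_hat[v] for v in R)
    if dR == 0:
        return 0.0
    weights = [d_hat[v] for v in R]
    Y = 0.0
    for _ in range(q):                                   # stage 2: simulated edge samples
        v = random.choices(R, weights=weights)[0]        # pick v proportional to its estimated degree
        if not adj[v]:
            continue
        u = random.choice(adj[v])
        du_hat = deg(u)
        if du_hat >= d:
            Y += 1.0 / du_hat
    return (n / r) * (dR / q) * Y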
For the sake of the theoretical analysis, we will simply assume
this theorem. In the actual implementation of SADDLES, we will
discuss the specific parameters used. It will be helpful to abstract
out the estimation of degrees through the following corollary. The
procedure DEG(v) will be repeatedly invoked by SADDLES. This is
a direct consequence of setting α = ε/10 and applying Theorem 2.2 with δ = 1/n³.
Corollary 2.7. There is an algorithm DEG that takes as input a vertex v, and has the following properties:
• For all v: with probability > 1 − 1/n³, the output DEG(v) is in (1 ± ε/10)dv.
• The expected running time of DEG(v) is O(ε^{−2}√dv log n).
We will assume that invocations to DEG with the same
arguments use the same sequence of random bits. Alternately,
imagine that a call to DEG(v, ε) stores the output, so subsequent
calls output the same value.
Definition 2.8. The output DEG(v) is denoted by d̂v. The random bits used in all calls to DEG are collectively denoted Λ. (Thus, Λ
completely specifies all the values {dˆv }.) We say Λ is good if ∀v ∈ V ,
dˆv ∈ (1 ± ε/10)dv .
The following is a consequence of conditional probabilities.
Claim 2.9. Consider any event A, such that for any good Λ, Pr[A|Λ] ≥ p. Then Pr[A] ≥ p − 1/n².
Proof. The probability that Λ is not good is at most the probability that for some v, DEG(v) ∉ (1 ± ε/10)dv. By the union bound and Corollary 2.7, the probability is at most 1/n². Note that Pr[A] ≥ Σ_{Λ good} Pr[Λ] Pr[A|Λ] ≥ p Pr[Λ is good]. Since Λ is good with probability at least 1 − 1/n², Pr[A] ≥ (1 − 1/n²)p ≥ p − 1/n².
For any fixed Λ, we set N̂_Λ(d) to be |{v | d̂v ≥ d}|. We will perform the analysis of SADDLES with respect to the N̂_Λ-values.
Claim 2.10. Suppose Λ is good. For all d, N̂_Λ(d) ∈ [N((1 + ε/9)d), N((1 − ε/9)d)].
Claim 4.1. The following holds with probability > 9/10. If SADDLES(d, r, q) outputs an estimate in Step 6, then Ñ(d) ∈ (1 ± ε/10) N̂_Λ(d). If it does not output in Step 6, then N̂_Λ(d) < (2c/ε²)(n/r).
Proof. Since Λ is good, ∀u, d̂u ∈ (1 ± ε/10)du. Furthermore, if du ≥ (1 + ε/9)d, then d̂u ≥ (1 − ε/10)(1 + ε/9)d ≥ d. Analogously, if du ≤ (1 − ε/9)d, then d̂u ≤ (1 + ε/10)(1 − ε/9)d ≤ d. Thus, {u | du ≥ d(1 + ε/9)} ⊆ {u | d̂u ≥ d} ⊆ {u | du ≥ d(1 − ε/9)}.
4
ANALYSIS OF SADDLES
The estimate of Step 6 can be analyzed with a direct Chernoff bound.
Proof. Each Xi is an iid Bernoulli random variable, with success probability precisely N̂_Λ(d)/n. We split into two cases.
Case 1: N̂_Λ(d) ≥ (c/(10ε²))(n/r). By the Chernoff bound of Theorem 2.1, Pr[|Σ_{i≤r} Xi − r N̂_Λ(d)/n| ≥ (ε/10)(r N̂_Λ(d)/n)] ≤ 2 exp(−(ε²/100)(r N̂_Λ(d)/n)) ≤ 1/100.
Case 2: N̂_Λ(d) ≤ (c/(10ε²))(n/r). Note that E[Σ_{i≤r} Xi] ≤ c/(10ε²) ≤ (c/ε²)/(2e). By the upper tail bound of Theorem 2.1, Pr[Σ_{i≤r} Xi ≥ c/ε²] < 1/100.
Thus, with probability at least 99/100, if an estimate is output in Step 6, N̂_Λ(d) > (c/(10ε²))(n/r). By the first case, with probability at least 99/100, Ñ(d) is a (1 + ε/10)-estimate for N̂_Λ(d). A union bound completes the first part.
3
THE MAIN RESULT AND SADDLES
We begin by stating the main result, and explaining how heavy
tails lead to sublinear algorithms.
Theorem 3.1. There exists an algorithm SADDLES with the
following properties. For any ε > 0, β > 0, it outputs an
(ε, ε)-approximation of the ccdh with probability > 1 − β. The total
representation size is O((log n)/ε). The expected running time depends
on the model.
• SM: O((n/h + m/z²)(ε^{−3} log(n/εδ))).
Lemma 4.4. Fix any good Λ and d. Suppose r ≥ cε^{−2}n/d. With probability at least 9/10, Σ_{v∈R} wt_{Λ,d}(v) ∈ (1 ± ε/8)(r/n) N̂_Λ(d).
Algorithm 1: SADDLES(d, r, q)
Inputs:
d: degree for which N(d) is to be computed
r: budget for vertex samples
q: budget for edge samples
Output:
Ñ(d): estimated N(d)
1 Repeat for i = 1, . . . , r ;
2 Select u.a.r. vertex v and add it to multiset R ;
3 In HDM, call DEG(v) to get estimate d̂v. In SM, set d̂v to dv ;
4 If d̂v ≥ d, set Xi = 1. Else, Xi = 0 ;
5 If Σ_{i≤r} Xi ≥ c/ε² ;
6 Return Ñ(d) = (n/r) Σ_{i≤r} Xi ;
7 Let d̂R = Σ_{v∈R} d̂v and D denote the distribution over R where v ∈ R is selected with probability d̂v/d̂R ;
8 Repeat for i = 1, . . . , q ;
9 Sample v ∼ D ;
10 Pick u.a.r. neighbor u of v ;
11 Call DEG(u) to get d̂u ;
12 If d̂u ≥ d, set Yi = 1/d̂u. Else, set Yi = 0 ;
13 Return Ñ(d) = (n/r) · (d̂R/q) Σ_{i≤q} Yi.
Proof. Let wt(R) denote Σ_{v∈R} wt_{Λ,d}(v). By linearity of expectation, E[wt(R)] = (r/n) · Σ_{v∈V} wt_{Λ,d}(v) ≥ (r/2n) N̂_Λ(d). To apply the Chernoff bound, we need to bound the maximum weight of a vertex. For good Λ, the weight wt_{Λ,d} of any ordered pair is at most 1/((1 − ε/10)d) ≤ 2/d. The number of neighbors of v such that d̂u ≥ d is at most N̂_Λ(d). Thus, wt_{Λ,d}(v) ≤ 2N̂_Λ(d)/d.
By the Chernoff bound of Theorem 2.1 and setting r ≥ cε^{−2}n/d,
Pr[|wt(R) − E[wt(R)]| > (ε/20)E[wt(R)]] < 2 exp(−(ε² · (cε^{−2}n/d) · (N̂_Λ(d)/2n)) / (3 · 20² · 2N̂_Λ(d)/d)) ≤ 1/10.
With probability at least 9/10, wt(R) ∈ (1 ± ε/20)E[wt(R)]. By the arguments given above, E[wt(R)] ∈ (1 ± ε/9)(r/n) N̂_Λ(d). We combine to complete the proof.
Now, we determine the number of edge samples required to estimate the weight wt_{Λ,d}(R).
Lemma 4.5. Let Ñ(d) be as defined in Step 13 of SADDLES. Assume Λ is good, r ≥ cε^{−2}n/d, and q ≥ cε^{−2}m/(d N̂_Λ(d)). Then, with probability > 7/8, Ñ(d) ∈ (1 ± ε/4) N̂_Λ(d).
Furthermore, if N̂_Λ(d) ≥ (2c/ε²)(n/r), then with probability at least 99/100, Σ_{i≤r} Xi ≥ (1 − ε/10) r N̂_Λ(d)/n ≥ c/ε². A union bound proves (the contrapositive of) the second part.
Proof. We define the random set R selected in Step 2 to be sound if the following hold: (1) wt(R) = Σ_{v∈R} wt_{Λ,d}(v) ∈ (1 ± ε/8)(r/n) N̂_Λ(d), and (2) Σ_{v∈R} dv ≤ 100r(2m/n). By Lemma 4.4, the first holds with probability > 9/10. Observe that E[Σ_{v∈R} dv] = r(2m/n), since 2m/n is the average degree. By the Markov bound, the second holds with probability > 99/100. By the union bound, R is sound with probability at least 1 − (1/10 + 1/100) > 8/9.
Fix a sound R. Recall Yi from Step 12. The expectation of Yi|R is Σ_{v∈R} Pr[v is selected] · Σ_{u∈Γ(v)} Pr[u is selected] wt_{Λ,d}(⟨v, u⟩). We plug in the probability values, and observe that for good Λ, for all v, d̂v/dv ∈ (1 ± ε/10).
E[Yi|R] = Σ_{v∈R} (d̂v/d̂R) Σ_{u∈Γ(v)} (1/dv) wt_{Λ,d}(⟨v, u⟩) = (1/d̂R) Σ_{v∈R} (d̂v/dv) Σ_{u∈Γ(v)} wt_{Λ,d}(⟨v, u⟩) ∈ (1 ± ε/10)(1/d̂R) Σ_{v∈R} Σ_{u∈Γ(v)} wt_{Λ,d}(⟨v, u⟩) = (1 ± ε/10)(wt(R)/d̂R).   (2)
Note that Ñ(d) = (n/r)(d̂R/q) Σ_{i≤q} Yi and (n/r)(d̂R/q) E[Σ_{i≤q} Yi | R] ∈ (1 ± ε/10)(n/r) wt(R). Since R is sound, the latter is in (1 ± ε/4) N̂_Λ(d). Also, note that
E[Yi|R] = E[X1|R] ≥ wt(R)/(2d̂R) ≥ (r/n) N̂_Λ(d) / (4(100r(2m/n))) = N̂_Λ(d)/(800m).   (3)
By linearity of expectation, E[Σ_{i≤q} Yi | R] = qE[X1|R]. Observe that Yi ≤ 1/d. We can apply the Chernoff bound of Theorem 2.1 to
We define weights of ordered edges. The weight only depends on the second member in the pair, but allows for a more convenient analysis. The weight of ⟨v, u⟩ is the random variable Yi of Step 12.
Definition 4.2. The d-weight of an ordered edge ⟨v, u⟩ for a given Λ (the randomness of DEG) is defined as follows. We set wt_{Λ,d}(⟨v, u⟩) to be 1/d̂u if d̂u ≥ d, and zero otherwise. For vertex v, wt_{Λ,d}(v) = Σ_{u∈Γ(v)} wt_{Λ,d}(⟨v, u⟩).
The utility of the weight definition is captured by the following claim. The total weight is an approximation of Ñ(d), and thus, we can analyze how well SADDLES approximates the total weight.
Claim 4.3. If Λ is good, Σ_{v∈V} wt_{Λ,d}(v) ∈ (1 ± ε/9) N̂_Λ(d).
Proof. Σ_{v∈V} wt_{Λ,d}(v) = Σ_{v∈V} Σ_{u∈Γ(v)} wt_{Λ,d}(⟨v, u⟩) = Σ_{u: d̂u ≥ d} Σ_{v∈Γ(u)} 1/d̂u = Σ_{u: d̂u ≥ d} du/d̂u.   (1)
Since Λ is good, ∀u, d̂u ∈ (1 ± ε/10)du, and du/d̂u ∈ (1 ± ε/9). Applying this in (1), Σ_{v∈V} wt_{Λ,d}(v) ∈ (1 ± ε/9) N̂_Λ(d).
We come to an important lemma, which shows that the weight of the random subset R (chosen in Step 2) is well-concentrated. This is proven using a Chernoff bound, but we need to bound the maximum possible weight to get a good bound on r = |R|.
Proof. (of Theorem 3.1) The overall algorithm is the same for both models, involving multiple invocations to SADDLES. The only difference is in DEG, which is trivial when degree queries are allowed. We first argue about correctness.
Consider the set D = {⌊(1 + ε/10)^i⌋ | 0 ≤ i ≤ 10ε^{−1} log n}. We will run a boosted version of SADDLES for each degree in D. The output for an arbitrary d will be Ñ(d′), where d′ is the largest power of (1 + ε/10) smaller than d (rounding down). The boosting is done through Theorem 2.2, which ensures we can get the desired estimate for each d with probability > 1 − εβ/100n. A union bound over all d ∈ D yields a total error probability of at most β.
Observe that the query complexity and running time of SADDLES are within constant factors of each other. Hence, we only focus on the number of queries made. For the Standard Model, the bound for a single invocation of SADDLES is simply O(r + q) = O(ε^{−2}(n/h + m/z²)).
For the Hidden Degrees Model, we have to account for the overhead of using the Ron–Tsur birthday paradox algorithm of Corollary 2.7 for each degree estimated. The number of queries for a single call to DEG(v) is O(ε^{−2} √dv log n). The total overhead of all calls in Step 3 is E[Σ_{v∈R} √dv (ε^{−2} log n)]. By linearity of expectation, this is O((ε^{−2} log n) r E[√dv]), where the expectation is over a uniform random vertex. We can bound r E[√dv] ≤ r E[dv] = O(ε^{−2}(n/h)(m/n)) = O(ε^{−2} m/h).
The total overhead of all calls in Step 11 requires more care. Note that when DEG(v) is called multiple times for a fixed v, the subsequent calls require no further queries. (This is because the output of the first call can be stored.) We partition the vertices into two sets S0 = {v | dv ≤ z²} and S1 = {v | dv > z²}. The total query cost of queries to S0 is at most O((ε^{−2} log n) m/z). For the total cost to S1, we directly bound it by (ignoring the ε^{−2} log n factor) Σ_{v∈S1} √dv = Σ_{v∈S1} dv/√dv ≤ z^{−1} Σ_v dv = O(m/z). All in all, the total query complexity is O((ε^{−4} log² n)(n/h + m/z)). Since m ≥ n and z ≤ h, we can simplify to O((ε^{−4} log² n)(m/z)).
the iid random variables (Yi | R).
Pr[|Σ_{i≤q} Yi − E[Σ_{i≤q} Yi]| > (ε/100) E[Σ_{i≤q} Yi] | R] ≤ 2 exp(−(ε²/(3 · 100²)) · d · q E[X1|R])   (4)
We use (3) to bound the (positive) term in the exponent: it is at least (ε²/(3 · 100²)) · (cε^{−2}m/N̂_Λ(d)) · (N̂_Λ(d)/(800m)) ≥ 10.
Thus, if R is sound, the following bound holds with probability at least 0.99. We also apply (2).
Ñ(d) = (n/r)(d̂R/q) Σ_{i=1}^q Yi ∈ (1 ± ε/100)(n/r)(d̂R/q) q E[Yi|R] ∈ (1 ± ε/100)(1 ± ε/10)(n/r) wt(R) ∈ (1 ± ε/4) N̂_Λ(d)
The probability that R is sound is at least 8/9. A union bound completes the proof.
The bounds on r and q in Lemma 4.5 depend on the degree d. We now bring in the h and z-indices to derive bounds that hold for all d. We also remove the conditioning over a good Λ.
Theorem 4.6. Let c be a sufficiently large constant. Suppose r ≥ cε^{−2}n/h and q ≥ cε^{−2}m/z². Then, for all d, with probability ≥ 5/6, Ñ(d) ∈ [(1 − ε/2)N((1 + ε/2)d), (1 + ε/2)N((1 − ε/2)d)].
Proof. We will first assume that Λ is good. By Claim 2.10, N̂_Λ(d) ∈ [N((1 + ε/9)d), N((1 − ε/9)d)].
Suppose N̂_Λ(d) = 0, so there are no vertices with d̂v ≥ d. By the bound above, N((1 + ε/9)d) = 0, implying that N((1 + ε/2)d) = 0. Furthermore Ñ(d) = 0, since the random variables Xi and Yi in SADDLES can never be non-zero. Thus, Ñ(d) = N((1 + ε/2)d), completing the proof.
We now assume that N̂_Λ(d) > 0. We split into two cases, depending on whether Step 6 outputs or not. By Claim 4.1, with probability > 9/10, if Step 6 outputs, then Ñ(d) ∈ (1 ± ε/9)N̂_Λ(d). By combining these bounds, the desired bound on Ñ(d) holds with probability > 9/10, conditioned on a good Λ.
Henceforth, we focus on the case that Step 6 does not output. By Claim 4.1, N̂_Λ(d) < 2cε^{−2}(n/r). By the choice of r and Claim 2.10, N̂_Λ((1 + ε/9)d) < h. By the characterization of h of Lemma 2.3,
cΛ ((1 + ε/9)d), (1 + ε/9)d) = (1 + ε/9)d.
z 2 ≤ max(N
cε −2n/d.
5
EXPERIMENTAL RESULTS
We implemented our algorithm in C++ and performed our
experiments on a MacBook Pro laptop with 2.7 GHz Intel Core
i5 with 8 GB RAM. We performed our experiments on a collection
of graphs from SNAP [29], including social networks, web networks,
and infrastructure networks. The graphs typically have millions of
edges, with the largest with more than 100M edges. Basic properties
of these graphs are presented in Table 1. We ignore direction and
treat all edges as undirected edges.
(5)
z2
This implies that r ≥
By the definition of z,
≤
N (min(dmax , (1 +ε/9)d)) · min(dmax , (1 +ε/9)d). By the Claim 2.10
cΛ (d) ≥ N ((1+ε/9)d). Since N
cΛ (d) >
bound in the first paragraph, N
2
c
c
c
0, N Λ (d) ≥ N Λ (dmax ). Plugging into (5), z ≤ N Λ (d) · (1 + ε/9)d.
cΛ (d)). The parameters satisfy the conditions
Thus, m ≤ cε −2m/(d N
e(d) ∈ (1 ± ε/4)N
cΛ (d), and
in Lemma 4.5. With probability > 7/8, N
e(d) has the desired accuracy.
by Claim 2.10, N
e(d)
All in all, assuming Λ is good, with probability at least 7/8, N
has the desired accuracy. The conditioning on a good Λ is removed
by Claim 2.9 to complete the proof.
5.1
Implementation Details
For the Hidden Degrees Model, we explicitly describe the procedure
DEG, which estimates the degree of a given vertex. In the algorithm
Algorithm 2: DEG(v)
1 Initialize S = ∅ ;
2 Repeatedly add a u.a.r. neighbor of v to S, until the number of pair-wise collisions is at least k = 50 ;
3 Output |S|²/k as estimate d̂v
We finally prove Theorem 3.1.
size of n/h + m/z² (as given by Theorem 3.1, ignoring constants) is significantly sublinear. This is consistent with our choice of r + q = n/100 leading to accurate estimates for the ccdh.
DEG, a “pair-wise collision” refers to a pair of neighbor samples that yield the same vertex. If S has size t, the expected number of pair-wise collisions is t²/dv. We simply reverse engineer that inequality to get the estimate d̂v. Ron and Tsur essentially prove that with high probability, |S| = Θ(√dv) and furthermore, this suffices to bound the variance of the estimate [41].
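A minimal Python sketch of this estimator (ours, with the collision threshold k = 50 from the implementation) looks as follows; it assumes v has at least one neighbor.

import random

def deg_estimate(adj, v, k=50):
    # Sample u.a.r. neighbors of v until k pair-wise collisions are seen; return |S|^2 / k.
    counts = {}
    samples = 0
    collisions = 0
    while collisions < k:
        u = random.choice(adj[v])
        collisions += counts.get(u, 0)   # each earlier copy of u contributes one new collision pair
        counts[u] = counts.get(u, 0) + 1
        samples += 1
    return samples * samples / k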
Our implementation of SADDLES is identical to the pseudo-code given in Alg. 1. The only constant to be set is c/ε² in Step 5, which our implementation fixes at 25. There are two parameters r and q that are chosen to be typically around 0.005n. To get the entire degree distribution, we run SADDLES on all degrees d = ⌊1.1^i⌋.
5.3
Comparison with previous work
There are several graph sampling algorithms that have been
discussed in [2, 17, 28, 30, 38, 39, 47]. We describe these methods
below in more detail, and discuss our implementation of the method.
• Vertex Sampling (VS, also called egocentric sampling) [5, 17,
28, 30, 38, 39]: In this algorithm, we sample vertices u.a.r. and scale
the ccdh obtained appropriately, to get an estimate for the ccdh of
the entire graph.
• Edge Sampling (ES) [5, 17, 28, 30, 38, 39]: This algorithm
samples edges u.a.r. and includes one or both end points in the
sampled network. Note that this does not fall into the standard
model. In our implementation we pick a random end point.
• Random walk with jump (RWJ) [5, 17, 30, 38, 39]: We start
a random walk at a vertex selected u.a.r. and collect all vertices
encountered on the path in our sampled network. At any point,
with a constant probability (0.15 in our implementation, based on
previous results) we jump to another u.a.r. vertex.
• One Wave Snowball (OWS) [5, 17, 28]: Snowball sampling
starts with some vertices selected u.a.r. and crawls the network until
a network of the desired size is sampled. In our implementation,
we typically stop at the one level since that accumulates enough
vertices.
• Forest fire (FF) [5, 17, 30]: This method generates random
sub-crawls of the network, and is related to snowball sampling. A
vertex is picked u.a.r. and randomly selects a subset of its neighbors.
In previous work, this is done by choosing x such neighbors, where
x is a geometric random variable with mean 0.2. The process is
repeated from every selected vertex until it ends. It is then repeated
from another u.a.r. vertex.
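As a point of reference for the comparisons below, the simplest of these baselines (vertex sampling) amounts to a few lines; this sketch is ours and is only meant to show why the tail estimate degenerates when N(d) is small.

import random

def vertex_sampling_ccdh(adj, d, budget):
    # Estimate N(d) by the rescaled count of sampled vertices with degree >= d.
    vertices = list(adj)
    hits = sum(1 for _ in range(budget) if len(adj[random.choice(vertices)]) >= d)
    return len(vertices) * hits / budget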
We run all these algorithms on the amazon0601, web-Google,
cit-Patents, and com-orkut networks. To make fair comparisons,
we run each method until it selects 1% of the vertices. The
comparisons are shown in Fig. 1. Observe how none of the methods
come close to accurately measuring the ccdh. (This is consistent
with previous work, where typically 10-20% of the vertices are
sampled for results.) Naive vertex sampling is accurate at the head
of the distribution, but completely misses the tail. Except for vertex
sampling, all other algorithms are biased towards the tail. Crawls
find high degree vertices with disproportionately higher probability,
and overestimate the tail.
5.2
Evaluation of SADDLES
The sample size of SADDLES in the Standard Model is exactly r + q.
We will typically fix this to be 1% of the number of vertices in our
runs, unless otherwise stated.
Accuracy over all graphs. We show results of running
SADDLES with the parameters discussed above for a variety of
graphs. Fig. 1, and Fig. 2 show the results for the Standard Model on
all graphs in Tab. 1. For all these runs, we set r + q to be 1% of the
number of vertices in the graph. For the Hidden Degrees Model, we
show results in Fig. 3. For space reasons, we only show results on
HDM for the graphs in Fig. 2, though results are consistent over all
our experiments. Again, we set r + q to be 1%, though the number
of edges sampled varies quite a bit. The required number of samples
are provided in Tab. 1. Note that the number of edges sampled is
well within 10% of the total, except for the com-youtube graph.
Visually, we can see that the estimates are accurate for all degrees,
in all graphs, for both models. This is despite there being sufficient
irregular behavior in N (d). For example, the web-BerkStan ccdh
(Fig. 1) is quite “bumpy” between degree 102 and 104 , and the
extreme tail has sudden jumps. Note that the shape of the various
ccdhs are different and none of them form an obvious straight line.
Nonetheless, SADDLES captures the distribution almost perfectly
in all cases by observing 1% of the vertices.
Convergence. To demonstrate convergence, we use the
following setup. In the figures, we fix the graph com-orkut, and
run SADDLES only for the degrees 10, 100, and 1000. For each
choice of degree, we vary the total number of samples r + q. (We
set r = q in all runs.) Finally, for each setting of r + q and each
degree, we perform 100 independent runs of SADDLES.
For each such run, we compute an error parameter α. Suppose
the output of a run is M, for degree d. The value of α is the smallest
value of ϵ, such that M ∈ [(1 − ϵ)N ((1 + ϵ)d), (1 + ϵ)N ((1 − ϵ)d)].
(It is the smallest ϵ such that M is an (ϵ, ϵ)-approximation of N (d).)
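A simple way to compute α is a grid search over candidate values; the following sketch (our own, with an evenly spaced grid as an implementation choice) illustrates the definition.

def error_alpha(N, M, d, steps=1000):
    # Smallest eps on the grid with (1 - eps) N((1 + eps) d) <= M <= (1 + eps) N((1 - eps) d).
    for i in range(steps + 1):
        eps = i / steps
        if (1 - eps) * N((1 + eps) * d) <= M <= (1 + eps) * N((1 - eps) * d):
            return eps
    return float("inf")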
Fig. 4 shows the spread of α, for the 100 runs, for each choice
of r + q. Observe how the spread decreases as r + q goes to 10%.
In all cases, the values of α decay to less than 0.05. We notice that
convergence is much faster for d = 10. This is because N (10) is
quite large, and SADDLES is using vertex sampling to estimate the
value.
Inverse method of Zhang et al [47]. An important result
of estimating degree distributions is that of Zhang et al [47],
that explicitly points out the bias problems in various sampling
methods. They propose a bias correction method by solving an
ill-conditioned linear system. Essentially, given one of the above
sampled networks, it applies a constrained, penalized weighted
least-squares approach to solving the problem of debiasing the
estimated degree distribution. We apply this method for the
sampling methods demonstrated in their paper, namely vertex
sampling (VS), one-wave snowball (OWS), and induced sampling
(IN) (sample vertices u.a.r. and only retain edges between sampled
vertices). We show results in Fig. 1, again with a sample size of 1% of
Large value of h and z-index on real graphs. The h and
z-index of all graphs is given in Tab. 1. Observe how they are
typically in the hundreds. Note that the average degree is typically
an order of magnitude smaller than these indices. Thus, a sample
Table 1: Graph properties: #vertices (n), #edges (m), maximum degree, h-index and z-index. The last column indicates the median number of samples over 100 runs (as a percentage of m) required by SADDLES under HDM to estimate the ccdh for r + q = 0.01n. For all graphs except one, the number of samples required is < 0.1m.

graph             #vertices  #edges    max. degree  avg. degree  H-index  Z-index  Perc. edge samples (HDM)
loc-gowalla       1.97E+05   9.50E+05  14730        4.8          275      101      7.0
web-Stanford      2.82E+05   1.99E+06  38625        7.0          427      148      6.4
com-youtube       1.13E+06   2.99E+06  28754        2.6          547      121      11.7
web-Google        8.76E+05   4.32E+06  6332         4.9          419      73       6.2
web-BerkStan      6.85E+05   6.65E+06  84230        9.7          707      220      5.5
wiki-Talk         2.39E+06   9.32E+06  100029       3.9          1055     180      8.5
as-skitter        1.70E+06   1.11E+07  35455        6.5          982      184      6.7
cit-Patents       3.77E+06   1.65E+07  793          4.3          237      28       5.6
com-lj            4.00E+06   3.47E+07  14815        8.6          810      114      4.7
soc-LiveJournal1  4.85E+06   8.57E+07  20333        17.7         989      124      2.4
com-orkut         3.07E+06   1.17E+08  33313        38.1         1638     172      2.0
(a) as-skitter
(b) loc-gowalla
(c) web-Google
(d) wiki-Talk
(e) soc-LiveJournal
(f) com-lj
(g) web-BerkStan
(h) com-youtube
Figure 2: The result of runs of SADDLES on a variety of graphs, for the Standard Model. We set r + q to be 1% of the number of
vertices, for all graphs. Observe the close match at all degrees between the true degree distribution and output of SADDLES.
the vertices. Observe that no method get even close to estimating
the ccdh accurately, even after debiasing. Fundamentally, these
methods require significantly more samples to generate accurate
estimates.
The running time and memory requirements of this method
grow superlinearly with the maximum degree in the graph. The
maximum degree is not known in advance, but the algorithm needs
to know this value, so it uses an upper bound. The largest graph
processed by [47] has a few hundred thousand edges, which is on
the smaller side of graphs in Tab. 1. SADDLES processes a graph
with more than 100M edges in less than a minute, while our attempts
to run the [47] algorithm on this graph did not terminate in hours.
6
ACKNOWLEDGEMENTS
Ali Pinar’s work is supported by the Laboratory Directed Research
and Development program at Sandia National Laboratories. Sandia
(a) as-skitter
(b) loc-gowalla
(c) web-Google
(d) wiki-Talk
Figure 3: The result of runs of SADDLES on a variety of graphs, for the Hidden Degrees Model. We set r + q to be 1% of the
number of vertices, for all graphs. The actual number of edges sampled varies, and is given in Tab. 1.
(a) d = 10
(b) d = 100
(c) d = 1000
(d) d = 10000
Figure 4: Convergence of SADDLES: We plot the values of the error parameter α (as defined in §5.2) for 100 runs at increasing
values of r + q. We have a different plot for d = 10, 100, 1000, 10000 to show the convergence at varying portions of the ccdh.
National Laboratories is a multimission laboratory managed and
operated by National Technology and Engineering Solutions
of Sandia, LLC., a wholly owned subsidiary of Honeywell
International, Inc., for the U.S. Department of Energy’s National
Nuclear Security Administration under contract DE-NA-0003525.
Both Shweta Jain and C. Seshadhri are grateful to the support
of the Sandia National Laboratories LDRD program for funding
this research. C. Seshadhri also acknowledges the support of NSF
TRIPODS grant.
This research was partially supported by the Israel Science
Foundation grant No. 671/13 and by a grant from the Blavatnik
fund. Talya Eden is grateful to the Azrieli Foundation for the award
of an Azrieli Fellowship.
Both Talya Eden and C. Seshadhri are grateful to the support
of the Simons Institute, where this work was initiated during the
Algorithms and Uncertainty Semester.
[5] Nesreen K Ahmed, Jennifer Neville, and Ramana Kompella. 2014. Network
sampling: From static to streaming graphs. TKDD 8, 2 (2014), 7.
[6] Sinan G. Aksoy, Tamara G. Kolda, and Ali Pinar. 2017. Measuring and modeling
bipartite graphs with community structure. Journal of Complex Networks (2017).
to appear.
[7] Albert-László Barabási and Réka Albert. 1999. Emergence of Scaling in Random
Networks. Science 286 (Oct. 1999), 509–512.
[8] A. Broder, R. Kumar, F. Maghoul, P. Raghavan, S. Rajagopalan, R. Stata, A.
Tomkins, and J. Wiener. 2000. Graph structure in the web. Computer Networks
33 (2000), 309–320.
[9] Deepayan Chakrabarti and Christos Faloutsos. 2006. Graph Mining: Laws,
Generators, and Algorithms. Comput. Surveys 38, 1 (2006). DOI:http://dx.doi.
org/10.1145/1132952.1132954
[10] F. Chierichetti, A. Dasgupta, R. Kumar, S. Lattanzi, and T. Sarlos. 2016. On
Sampling Nodes in a Network. In Conference on the World Wide Web (WWW).
[11] A. Clauset and C. Moore. 2005. Accuracy and scaling phenomena in internet
mapping. Phys. Rev. Lett. 94 (2005), 018701.
[12] A. Clauset, C. R. Shalizi, and M. E. J. Newman. 2009. Power-Law Distributions in
Empirical Data. SIAM Rev. 51, 4 (2009), 661–703. DOI:http://dx.doi.org/10.1137/
070710111
[13] R. Cohen, K. Erez, D. ben Avraham, and S. Havlin. 2000. Resilience of the Internet
to Random Breakdowns. Phys. Rev. Lett. 85, 4626–4628 (2000).
[14] A. Dasgupta, R. Kumar, and T. Sarlos. 2014. On estimating the average degree.
In Conference on the World Wide Web (WWW). 795–806.
[15] D. Dubhashi and A. Panconesi. 2012. Concentration of Measure for the Analysis
of Randomised Algorithms. Cambridge University Press.
[16] N. Durak, T.G. Kolda, A. Pinar, and C. Seshadhri. 2013. A scalable null model
for directed graphs matching all degree distributions: In, out, and reciprocal. In
Network Science Workshop (NSW), 2013 IEEE 2nd. 23–30. DOI:http://dx.doi.org/
10.1109/NSW.2013.6609190
[17] Peter Ebbes, Zan Huang, Arvind Rangaswamy, Hari P Thadakamalla, and ORGB
Unit. 2008. Sampling large-scale social networks: Insights from simulated
networks. In 18th Annual Workshop on Information Technologies and Systems,
Paris, France.
REFERENCES
[1] D. Achlioptas, A. Clauset, D. Kempe, and C. Moore. 2009. On the bias of traceroute
sampling: Or, power-law degree distributions in regular graphs. J. ACM 56, 4
(2009).
[2] N.K. Ahmed, J. Neville, and R. Kompella. 2010. Reconsidering the Foundations
of Network Sampling. In WIN 10.
[3] N. Ahmed, J. Neville, and R. Kompella. 2012. Space-Efficient Sampling from
Social Activity Streams. In SIGKDD BigMine. 1–8.
[4] Nesreen K Ahmed, Nick Duffield, Jennifer Neville, and Ramana Kompella. 2014.
Graph sample and hold: A framework for big-graph analytics. In SIGKDD. ACM,
ACM, 1446–1455.
[18] T. Eden, A. Levi, D. Ron, and C. Seshadhri. 2015. Approximately Counting
Triangles in Sublinear Time. In Foundations of Computer Science (FOCS), GRS11
(Ed.). 614–633.
[19] T. Eden, D. Ron, and C. Seshadhri. 2017. Sublinear Time Estimation of Degree
Distribution Moments: The Degeneracy Connection. In International Colloquium
on Automata, Languages, and Programming (ICALP), GRS11 (Ed.). 614–633.
[20] M. Faloutsos, P. Faloutsos, and C. Faloutsos. 1999. On power-law relationships
of the internet topology. In SIGCOMM. 251–262.
[21] U. Feige. 2006. On sums of independent random variables with unbounded
variance and estimating the average degree in a graph. SIAM J. Comput. 35, 4
(2006), 964–984.
[22] O. Goldreich and D. Ron. 2002. Property Testing in Bounded Degree Graphs.
Algorithmica (2002), 302–343.
[23] O. Goldreich and D. Ron. 2008. Approximating average parameters of graphs.
Random Structures and Algorithms 32, 4 (2008), 473–493.
[24] M. Gonen, D. Ron, and Y. Shavitt. 2011. Counting stars and other small subgraphs
in sublinear-time. SIAM Journal on Discrete Math 25, 3 (2011), 1365–1411.
[25] Mira Gonen, Dana Ron, Udi Weinsberg, and Avishai Wool. 2008. Finding a
dense-core in Jellyfish graphs. Computer Networks 52, 15 (2008), 2831–2841. DOI:
http://dx.doi.org/10.1016/j.comnet.2008.06.005
[26] J. E. Hirsch. 2005. An index to quantify an individual’s scientific research output.
Proceedings of the National Academy of Sciences 102, 46 (2005), 16569–16572.
[27] A. Lakhina, J. Byers, M. Crovella, and P. Xie. 2003. Sampling biases in IP topology
measurements. In Proceedings of INFOCOMM, Vol. 1. 332–341.
[28] Sang Hoon Lee, Pan-Jun Kim, and Hawoong Jeong. 2006. Statistical properties
of sampled networks. Physical Review E 73, 1 (2006), 016102.
[29] Jure Leskovec. 2015. SNAP Stanford Network Analysis Project. http://snap.stanford.edu. (2015).
[30] Jure Leskovec and Christos Faloutsos. 2006. Sampling from large graphs. In
Knowledge Data and Discovery (KDD). ACM, 631–636.
[31] A. S. Maiya and T. Y. Berger-Wolf. 2011. Benefits of Bias: Towards Better
Characterization of Network Sampling, In Knowledge Data and Discovery (KDD).
ArXiv e-prints (2011), 105–113.
[32] Andrew McGregor. 2014. Graph stream algorithms: A survey. SIGMOD 43, 1
(2014), 9–20.
[33] M. Mitzenmacher. 2003. A Brief History of Generative Models for Power Law
and Lognormal Distributions. Internet Mathematics 1, 2 (2003), 226–251.
[34] M. E. J. Newman. 2003. The Structure and Function of Complex Networks. SIAM
Rev. 45, 2 (2003), 167–256. DOI:http://dx.doi.org/10.1137/S003614450342480
[35] M. E. J. Newman, S. Strogatz, and D. Watts. 2001. Random graphs with arbitrary
degree distributions and their applications. Physical Review E 64 (2001), 026118.
[36] D. Pennock, G. Flake, S. Lawrence, E. Glover, and C. L. Giles. 2002. Winners
don’t take all: Characterizing the competition for links on the web. Proceedings
of the National Academy of Sciences 99, 8 (2002), 5207–5211. DOI:http://dx.doi.
org/10.1073/pnas.032085699
[37] T. Petermann and P. Rios. 2004. Exploration of scale-free networks. European
Physical Journal B 38 (2004), 201–204.
[38] Ali Pinar, Sucheta Soundarajan, Tina Eliassi-Rad, and Brian Gallagher. 2015.
MaxOutProbe: An Algorithm for Increasing the Size of Partially Observed Networks.
Technical Report. Sandia National Laboratories (SNL-CA), Livermore, CA (United
States).
[39] Bruno Ribeiro and Don Towsley. 2012. On the estimation accuracy of degree
distributions from graph sampling. In Annual Conference on Decision and Control
(CDC). IEEE, 5240–5247.
[40] Dana Ron. 2010. Algorithmic and Analysis Techniques in Property Testing.
Foundations and Trends in Theoretical Computer Science 5, 2 (2010), 73–205.
[41] Dana Ron and Gilad Tsur. 2016. The Power of an Example: Hidden Set
Size Approximation Using Group Queries and Conditional Sampling. ACM
Transactions on Computation Theory 8, 4 (2016), 15:1–15:19.
[42] C. Seshadhri, Tamara G. Kolda, and Ali Pinar. 2012. Community structure and
scale-free collections of Erdös-Rényi graphs. Physical Review E 85, 5 (May 2012),
056109. DOI:http://dx.doi.org/10.1103/PhysRevE.85.056109
[43] Olivia Simpson, C Seshadhri, and Andrew McGregor. 2015. Catching the head,
tail, and everything in between: a streaming algorithm for the degree distribution.
In International Conference on Data Mining (ICDM). IEEE, 979–984.
[44] Sucheta Soundarajan, Tina Eliassi-Rad, Brian Gallagher, and Ali Pinar. 2016.
MaxReach: Reducing network incompleteness through node probes. 152–157.
DOI:http://dx.doi.org/10.1109/ASONAM.2016.7752227
[45] Sucheta Soundarajan, Tina Eliassi-Rad, Brian Gallagher, and Ali Pinar. 2017. ϵ
- WGX: Adaptive Edge Probing for Enhancing Incomplete Networks. In Web
Science Conference. 161–170.
[46] Michael PH Stumpf and Carsten Wiuf. 2005. Sampling properties of random
graphs: the degree distribution. Physical Review E 72, 3 (2005), 036118.
[47] Yaonan Zhang, Eric D Kolaczyk, and Bruce D Spencer. 2015. Estimating network
degree distributions under sampling: An inverse problem, with applications to
monitoring social media networks. The Annals of Applied Statistics 9, 1 (2015),
166–199.
arXiv:1704.00699v2 [math.DS] 14 Jan 2018
FØLNER TILINGS FOR ACTIONS OF AMENABLE GROUPS
CLINTON T. CONLEY, STEVE C. JACKSON, DAVID KERR, ANDREW S. MARKS,
BRANDON SEWARD, AND ROBIN D. TUCKER-DROB
Abstract. We show that every probability-measure-preserving action of a countable
amenable group G can be tiled, modulo a null set, using finitely many finite subsets
of G (“shapes”) with prescribed approximate invariance so that the collection of tiling
centers for each shape is Borel. This is a dynamical version of the Downarowicz–Huczek–
Zhang tiling theorem for countable amenable groups and strengthens the Ornstein–Weiss
Rokhlin lemma. As an application we prove that, for every countably infinite amenable
group G, the crossed product of a generic free minimal action of G on the Cantor set is
Z-stable.
1. Introduction
A discrete group G is said to be amenable if it admits a finitely additive probability
measure which is invariant under the action of G on itself by left translation, or equivalently
if there exists a unital positive linear functional ℓ∞ (G) → C which is invariant under
the action of G on ℓ∞ (G) induced by left translation (such a functional is called a left
invariant mean). This definition was introduced by von Neumann in connection with the
Banach–Tarski paradox and shown by Tarski to be equivalent to the absence of paradoxical
decompositions of the group. Amenability has come to be most usefully leveraged through
its combinatorial expression as the Følner property, which asks that for every finite set
K ⊆ G and δ > 0 there exists a nonempty finite set F ⊆ G which is (K, δ)-invariant in
the sense that |KF ∆F | < δ|F |.
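To illustrate the Følner condition in a concrete case (this example is ours and is not part of the argument), one can check (K, δ)-invariance numerically for the group Z² under addition, where large boxes are almost invariant because |KF ∆ F| grows only like the boundary of F.

def is_K_delta_invariant(F, K, delta):
    # |KF symmetric-difference F| < delta * |F| for finite subsets of Z^2 (group operation: addition).
    F, K = set(F), set(K)
    KF = {(k0 + f0, k1 + f1) for (k0, k1) in K for (f0, f1) in F}
    return len(KF.symmetric_difference(F)) < delta * len(F)

box = {(i, j) for i in range(100) for j in range(100)}   # a 100 x 100 Folner set
K = {(0, 0), (1, 0), (0, 1)}
print(is_K_delta_invariant(box, K, 0.05))                # True: the symmetric difference has ~200 points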
The concept of amenability appears as a common thread throughout much of ergodic
theory as well as the related subject of operator algebras, where it is known via a number
of avatars like injectivity, hyperfiniteness, and nuclearity. It forms the cornerstone of
the theory of orbit equivalence, and also underpins both Kolmogorov–Sinai entropy and
the classical ergodic theorems, whether explicitly in their most general formulations or
implicitly in the original setting of single transformations (see Chapters 4 and 9 of [11]).
A key tool in applying amenability to dynamics is the Rokhlin lemma of Ornstein and
Weiss, which in one of its simpler forms says that for every free probability-measure-preserving action G ↷ (X, µ) of a countably infinite amenable group and every finite set
K ⊆ G and δ > 0 there exist (K, δ)-invariant finite sets T1 , . . . , Tn ⊆ G and measurable
sets A1 , . . . , An ⊆ X such that the sets sAi for i = 1, . . . , n and s ∈ Ti are pairwise disjoint
and have union of measure at least 1 − δ [17].
The proportionality in terms of which approximate invariance is expressed in the Følner
condition makes it clear that amenability is a measure-theoretic property, and it is not
Date: November 9, 2017.
surprising that the most influential and definitive applications of these ideas in dynamics (e.g., the Connes–Feldman–Weiss theorem) occur in the presence of an invariant or
quasi-invariant measure. Nevertheless, amenability also has significant ramifications for
topological dynamics, for instance in guaranteeing the existence of invariant probability
measures when the space is compact and in providing the basis for the theory of topological
entropy. In the realm of operator algebras, similar comments can be made concerning the
relative significance of amenability for von Neumann algebras (measure) and C∗ -algebras
(topology).
While the subjects of von Neumann algebras and C∗ -algebras have long enjoyed a symbiotic relationship sustained in large part through the lens of analogy, and a similar relationship has historically bound together ergodic theory and topological dynamics, the last
few years have witnessed the emergence of a new and structurally more direct kind of rapport between topology and measure in these domains, beginning on the operator algebra
side with the groundbreaking work of Matui and Sato on strict comparison, Z-stability,
and decomposition rank [14, 15]. On the side of groups and dynamics, Downarowicz,
Huczek, and Zhang recently showed that if G is a countable amenable group then for
every finite set K ⊆ G and δ > 0 one can partition (or “tile”) G by left translates of
finitely many (K, δ)-invariant finite sets [3]. The consequences that they derive from this
tileability are topological and include the existence, for every such G, of a free minimal
action with zero entropy. One of the aims of the present paper is to provide some insight
into how these advances in operator algebras and dynamics, while seemingly unrelated
at first glance, actually fit together as part of a common circle of ideas that we expect,
among other things, to lead to further progress in the structure and classification theory
of crossed product C∗ -algebras.
Our main theorem is a version of the Downarowicz–Huczek–Zhang tiling result for
free p.m.p. (probability-measure-preserving) actions of countable amenable groups which
strengthens the Ornstein–Weiss Rokhlin lemma in the form recalled above by shrinking
the leftover piece down to a null set (Theorem 3.6). As in the case of groups, one does
not expect the utility of this dynamical tileability to be found in the measure setting,
where the Ornstein–Weiss machinery generally suffices, but rather in the derivation of
topological consequences. Indeed we will apply our tiling result to show that, for every
countably infinite amenable group G, the crossed product C(X) ⋊ G of a generic free
minimal action G ↷ X on the Cantor set possesses the regularity property of Z-stability
(Theorem 5.4). The strategy is to first prove that such an action admits clopen tower
decompositions with arbitrarily good Følner shapes (Theorem 4.2), and then to demonstrate that the existence of such tower decompositions implies that the crossed product is
Z-stable (Theorem 5.3). The significance of Z-stability within the classification program
for simple separable nuclear C∗ -algebras is explained at the beginning of Section 5.
It is a curious irony in the theory of amenability that the Hall–Rado matching theorem
can be used not only to show that the failure of the Følner property for a discrete group
implies the formally stronger Tarski characterization of nonamenability in terms of the
existence of paradoxical decompositions [2] but also to show, in the opposite direction,
that the Følner property itself implies the formally stronger Downarowicz–Huczek–Zhang
characterization of amenability which guarantees the existence of tilings of the group
by translates of finitely many Følner sets [3]. This Janus-like scenario will be reprised
here in the dynamical context through the use of a measurable matching argument of
Lyons and Nazarov that was originally developed to prove that for every simple bipartite
nonamenable Cayley graph of a discrete group G there is a factor of a Bernoulli action of
G which is an a.e. perfect matching of the graph [13]. Accordingly the basic scheme for
proving Theorem 3.6 will be the same as that of Downarowicz, Huczek, and Zhang and
divides into two parts:
(i) using an Ornstein–Weiss-type argument to show that a subset of the space of
lower Banach density close to one can be tiled by dynamical translates of Følner
sets, and
(ii) using a Lyons–Nazarov-type measurable matching to distribute almost all remaining points to existing tiles with only a small proportional increase in the size of
the Følner sets, so that the approximate invariance is preserved.
We begin in Section 2 with the measurable matching result (Lemma 2.6), which is a
variation on the Lyons–Nazarov theorem from [13] and is established along similar lines.
In Section 3 we establish the appropriate variant of the Ornstein–Weiss Rokhlin lemma
(Lemma 3.4) and put everything together in Theorem 3.6. Section 4 contains the genericity
result for free minimal actions on the Cantor set, while Section 5 is devoted to the material
on Z-stability.
Acknowledgements. C.C. was partially supported by NSF grant DMS-1500906. D.K. was
partially supported by NSF grant DMS-1500593. Part of this work was carried out while
he was visiting the Erwin Schrödinger Institute (January–February 2016) and the Mittag–
Leffler Institute (February–March 2016). A.M. was partially supported by NSF grant
DMS-1500974. B.S. was partially supported by ERC grant 306494. R.T.D. was partially
supported by NSF grant DMS-1600904. Part of this work was carried out during the AIM
SQuaRE: Measurable Graph Theory.
2. Measurable matchings
Given sets X and Y and a subset R ⊆ X × Y , with each x ∈ X we associate its vertical section Rx = {y ∈ Y : (x, y) ∈ R} and with each y ∈ Y we associate its horizontal section Ry = {x ∈ X : (x, y) ∈ R}. Analogously, for A ⊆ X we put RA = ⋃_{x∈A} Rx = {y ∈ Y : ∃x ∈ A (x, y) ∈ R}. We say that R is locally finite if for all x ∈ X and y ∈ Y the sets Rx and Ry are finite.
If now X and Y are standard Borel spaces equipped with respective Borel measures
µ and ν, we say that R ⊆ X × Y is (µ, ν)-preserving if whenever f : A → B is a Borel
bijection between subsets A ⊆ X and B ⊆ Y with graph(f ) ⊆ R we have µ(A) = ν(B).
We say that R is expansive if there is some c > 1 such that for all Borel A ⊆ X we have
ν(RA ) ≥ cµ(A).
We use the notation f : X ⇀ Y to denote a partial function from X to Y . We say that
such a partial function f is compatible with R ⊆ X × Y if graph(f ) ⊆ R.
Proposition 2.1 (ess. Lyons–Nazarov [13, Theorem 1.1]). Suppose that X and Y are
standard Borel spaces, that µ is a Borel probability measure on X, and that ν is a Borel
measure on Y . Suppose that R ⊆ X × Y is Borel, locally finite, (µ, ν)-preserving, and
expansive. Then there is a µ-conull X ′ ⊆ X and a Borel injection f : X ′ → Y compatible
with R.
Proof. Fix a constant of expansivity c > 1 for R.
We construct a sequence (fn )n∈N of Borel partial injections from X to Y which are
compatible with R. Moreover, we will guarantee that the set X ′ = {x ∈ X : ∃m ∈ N ∀n ≥
m x ∈ dom(fn ) and fn (x) = fm (x)} is µ-conull, establishing that the limiting function
satisfies the conclusion of the lemma.
Given a Borel partial injection g : X ⇀ Y we say that a sequence (x0 , y0 , . . . , xn , yn ) ∈
X × Y × · · · × X × Y is a g-augmenting path if
• x0 ∈ X is not in the domain of g,
• for all distinct i, j < n, yi ≠ yj ,
• for all i < n, (xi , yi ) ∈ R,
• for all i < n, yi = g(xi+1 ),
• yn ∈ Y is not in the image of g.
We call n the length of such a g-augmenting path and x0 the origin of the path. Note
that the sequence (x0 , y0 , y1 , . . . , yn ) in fact determines the entire g-augmenting path, and
moreover that xi ≠ xj for distinct i, j < n.
In order to proceed we require a few lemmas.
Lemma 2.2. Suppose that n ∈ N and g : X ⇀ Y is a Borel partial injection compatible
with R admitting no augmenting paths of length less than n. Then µ(X \ dom(g)) ≤ c−n .
Proof. Put A0 = X \ dom(g). Define recursively for i < n sets Bi = RAi and Ai+1 =
Ai ∪ g −1 (Bi ). Note that the assumption that there are no augmenting paths of length
less than n implies that each Bi is contained in the image of g. Expansivity of R yields
ν(Bi ) ≥ cµ(Ai ) and (µ, ν)-preservation of R then implies that µ(Ai+1 ) ≥ ν(Bi ) ≥ cµ(Ai ).
Consequently, 1 ≥ µ(An ) ≥ cn µ(A0 ), and hence µ(A0 ) ≤ c−n .
We say that a graph G on a standard Borel space X has a Borel N-coloring if there is
a Borel function c : X → N such that if x and y are G-adjacent then c(x) ≠ c(y).
Lemma 2.3 (Kechris–Solecki–Todorcevic [12, Proposition 4.5]). Every locally finite Borel
graph on a standard Borel space has a Borel N-coloring.
Proof. Fix a countable algebra {Bn : n ∈ N} of Borel sets which separates points (for
example, the algebra generated by the basic open sets of a compatible Polish topology),
and color each vertex x by the least n ∈ N such that Bn contains x and none of its
neighbors.
Analogously, for k ∈ N, we say that a graph on a standard Borel X has a Borel k-coloring if there is a Borel function c : X → {1, . . . , k} giving adjacent points distinct
colors.
Lemma 2.4 (Kechris–Solecki–Todorcevic [12, Proposition 4.6]). If a Borel graph on a
standard Borel X has degree bounded by d ∈ N, then it has a Borel (d + 1)-coloring.
Proof. By Lemma 2.3, the graph has a Borel N-coloring c : X → N. We recursively
build sets An for n ∈ N by A0 = {x ∈ X : c(x) = 0} and An+1 = An ∪ {x ∈ X : c(x) = n + 1 and no neighbor of x is in An }. Then A = ⋃_n An is a Borel set which is
G-independent, and moreover is maximal with this property. So the restriction of G to
X \ A has degree less than d, and the result follows by induction.
Lemma 2.5 (ess. Elek–Lippner [5, Proposition 1.1]). Suppose that g : X ⇀ Y is a Borel
partial injection compatible with R, and let n ≥ 1. Then there is a Borel partial injection
g′ : X ⇀ Y compatible with R such that
• dom(g ′ ) ⊇ dom(g),
• g′ admits no augmenting paths of length less than n,
• µ({x ∈ X : g′ (x) ≠ g(x)}) ≤ nµ(X \ dom(g)).
Proof. Consider the set Z of injective sequences (x0 , y0 , x1 , y1 , . . . , xm , ym ), where m < n,
such that for all i ≤ m we have (xi , yi ) ∈ R and for all i < m we have (xi+1 , yi ) ∈ R. Equip
Z with the standard Borel structure it inherits as a Borel subset of (X × Y )≤n . Consider
also the locally finite Borel graph G on Z rendering adjacent two distinct sequences in Z if they share any entries. By Lemma 2.3 there is a partition Z = ⊔_{k∈N} Zk of Z into
Borel sets such that for all k, no two elements of Zk are G-adjacent. In other words, we
partition potential augmenting paths into countably many colors, where no two paths of
the same color intersect. Thus we may flip paths of the same color simultaneously without
risk of causing conflicts. Towards that end, fix a bookkeeping function s : N → N such
that s−1 (k) is infinite for all k ∈ N in order to consider each color class infinitely often.
Given a g-augmenting path z = (x0 , y0 , . . . , xm , ym ), define the flip along z to be the
Borel partial function gz : X ⇀ Y given by
gz (x) = yi if there exists i ≤ m with x = xi , and gz (x) = g(x) otherwise.
The fact that z is g-augmenting ensures that gz is injective. More generally, for any Borel
G-independent set Z aug ⊆ Z of g-augmenting paths, we may simultaneously flip g along
all paths in Z aug to obtain another Borel partial injection (g)Z aug .
We iterate this construction. Put g0 = g. Recursively assuming that gk : X ⇀ Y has
been defined, let Zkaug be the set of gk -augmenting paths in Zs(k) , and let gk+1 = (gk )Zkaug be the result of flipping gk along all paths in Zkaug . As each x ∈ X is contained in only
finitely many elements of Z, and since each path in Z can be flipped at most once (after the
first flip its origin is always in the domain of the subsequent partial injections), it follows
that the sequence (gk (x))k∈N is eventually constant. Defining g′ (x) to be the limiting
value, it is routine to check that there are no g′ -augmenting paths of length less than n.
Finally, to verify the third item of the lemma, put A = {x ∈ X : g′ (x) ≠ g(x)}.
With each x ∈ A associate the origin of the first augmenting path along which it was
flipped. This is an at most n-to-1 Borel function from A to X \ dom(g), and since R is
(µ, ν)-preserving the bound follows.
We are now in position to follow the strategy outlined at the beginning of the proof.
Let f0 : X ⇀ Y be the empty function. Recursively assuming the Borel partial injection
fn : X ⇀ Y has been defined to have no augmenting paths of length less than n, let fn+1
be the Borel partial injection (fn )′ granted by applying Lemma 2.5 to fn . Thus fn+1 has
no augmenting paths of length less than n + 1 and the recursive construction continues.
Lemma 2.2 ensures that µ(X \ dom(fn )) ≤ c−n , and thus the third item of Lemma 2.5
ensures that µ({x ∈ X : fn+1 (x) ≠ fn (x)}) ≤ (n + 1)c−n . As the sequence (n + 1)c−n is
summable, the Borel–Cantelli lemma implies that X ′ = {x ∈ X : ∃m ∈ N ∀n ≥ m x ∈
dom(fn ) and fn (x) = fm (x)} is µ-conull. Finally, f = limn→∞ fn ↾ X ′ is as desired.
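The proof just given rests on the same augmenting-path mechanism as classical bipartite matching. As a purely finite illustration (not a substitute for the Borel and measure-theoretic bookkeeping above), the following Python sketch repeatedly flips a partial injection along augmenting paths inside a finite relation; the relation, the names, and the example data are all ad hoc choices.

def augment(R_edges, match):
    """Find one augmenting path for the partial injection `match` (dict X -> Y)
    inside the relation R_edges (dict X -> list of neighbours in Y); flip it if found."""
    matched_y = {y: x for x, y in match.items()}
    for x0 in R_edges:
        if x0 in match:
            continue                              # start only from unmatched points of X
        parent, frontier = {x0: None}, [x0]
        while frontier:
            new_frontier = []
            for x in frontier:
                for y in R_edges[x]:
                    if y not in matched_y:        # unmatched y: augmenting path found
                        cur_x, cur_y = x, y       # flip matches back along the path
                        while cur_x is not None:
                            prev_y = match.get(cur_x)
                            match[cur_x] = cur_y
                            cur_x, cur_y = parent[cur_x], prev_y
                        return True
                    nxt = matched_y[y]            # continue along the alternating path
                    if nxt not in parent:
                        parent[nxt] = x
                        new_frontier.append(nxt)
            frontier = new_frontier
    return False

R_edges = {0: [0, 1], 1: [0], 2: [1, 2]}          # a small relation between X and Y
match = {1: 0}                                    # a partial injection compatible with it
while augment(R_edges, match):
    pass
print(match)                                      # e.g. {1: 0, 0: 1, 2: 2}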
Lemma 2.6. Suppose X and Y are standard Borel spaces, that µ is a Borel measure on X, and that ν is a Borel measure on Y . Suppose R ⊆ X × Y is a Borel, locally finite, (µ, ν)-preserving graph. Assume that there exist numbers a, b > 0 such that |Rx | ≥ a for µ-a.e. x ∈ X and |Ry | ≤ b for ν-a.e. y ∈ Y . Then ν(RA ) ≥ (a/b)µ(A) for all Borel subsets A ⊆ X.
Proof. Since R is (µ, ν)-preserving we have ∫_A |Rx | dµ = ∫_{RA} |Ry ∩ A| dν. Hence
aµ(A) = ∫_A a dµ ≤ ∫_A |Rx | dµ = ∫_{RA} |Ry ∩ A| dν ≤ ∫_{RA} b dν = bν(RA ).
3. Følner tilings
Fix a countable group G. For finite sets K, F ⊆ G and δ > 0, we say that F is (K, δ)-invariant if |KF △F | < δ|F |. Note this condition implies |KF | < (1 + δ)|F |. Recall that
G is amenable if for every finite K ⊆ G and δ > 0 there exists a (K, δ)-invariant set F . A
Følner sequence is a sequence of finite sets Fn ⊆ G with the property that for every finite
K ⊆ G and δ > 0 the set Fn is (K, δ)-invariant for all but finitely many n. Below, we
always assume that G is amenable.
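For a concrete (and entirely standard) example, take G = Z written additively: the intervals Fn = {0, . . . , n − 1} form a Følner sequence. The following small Python computation, included only as an illustration, evaluates the invariance ratio |KF △F |/|F | for a fixed finite K and growing intervals; it tends to 0, so the intervals are eventually (K, δ)-invariant for every δ > 0.

def invariance_ratio(F, K):
    F = set(F)
    KF = {k + f for k in K for f in F}            # the set K F, written additively for G = Z
    return len(KF ^ F) / len(F)                   # |KF symmetric difference F| / |F|

K = {-2, -1, 0, 1, 2}
for n in (10, 100, 1000, 10000):
    print(n, invariance_ratio(range(n), K))       # ratio 4/n, which tends to 0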
Fix a free action G y X. For A ⊆ X we define the lower and upper Banach densities
of A to be
D̲(A) = sup_{F ⊆ G finite} inf_{x∈X} |A ∩ F x| / |F |   and   D̄(A) = inf_{F ⊆ G finite} sup_{x∈X} |A ∩ F x| / |F |.
Equivalently [3, Lemma 2.9], if (Fn )n∈N is a Følner sequence then
D̲(A) = lim_{n→∞} inf_{x∈X} |A ∩ Fn x| / |Fn |   and   D̄(A) = lim_{n→∞} sup_{x∈X} |A ∩ Fn x| / |Fn |.
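Continuing the toy example G = Z acting on X = Z by translation (again purely illustrative), the next Python sketch evaluates |A ∩ Fn x|/|Fn | over translates of the Følner intervals Fn for A the set of even integers; by the equivalent formulation above, the reported minima and maxima approximate D̲(A) and D̄(A), and both approach 1/2.

def density_range(indicator, n, window=2000):
    """min and max of |A ∩ (F_n + x)| / |F_n| over a truncated range of base points x."""
    vals = [sum(indicator(x + k) for k in range(n)) / n for x in range(-window, window)]
    return min(vals), max(vals)

is_even = lambda m: m % 2 == 0
for n in (3, 10, 101):
    print(n, density_range(is_even, n))           # both values tend to 0.5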
We now define an analogue of ‘(K, δ)-invariant’ for infinite subsets of X. A set A ⊆ X
(possibly infinite) is (K, δ)∗ -invariant if there is a finite set F ⊆ G such that |(KA△A) ∩
F x| < δ|A ∩ F x| for all x ∈ X. Equivalently, A is (K, δ)∗ -invariant if and only if for every
Følner sequence (Fn )n∈N we have limn supx |(KA△A) ∩ Fn x|/|A ∩ Fn x| < δ.
A collection {Fi : i ∈ I} of finite subsets of X is called ǫ-disjoint if for each i there is
an Fi′ ⊆ Fi such that |Fi′ | > (1 − ǫ)|Fi | and such that the sets {Fi′ : i ∈ I} are pairwise
disjoint.
Lemma 3.1. Let K, W ⊆ G be finite, let ǫ, δ > 0, let C ⊆ X, and for c ∈ C let Fc ⊆ W be (K, δ(1 − ǫ))-invariant. If the collection {Fc c : c ∈ C} is ǫ-disjoint and ⋃_{c∈C} Fc c has positive lower Banach density, then ⋃_{c∈C} Fc c is (K, δ)∗ -invariant.
Proof. Set A = ⋃_{c∈C} Fc c and set T = W W −1 ({1G } ∪ K)−1 . Since W is finite and each Fc ⊆ W , there is 0 < δ0 < δ such that each Fc is (K, δ0 (1 − ǫ))-invariant. Fix a finite set U ⊆ G which is (T, D̲(A)(δ − δ0 )/(2|T |))-invariant and satisfies inf_{x∈X} |A ∩ U x| > (D̲(A)/2)|U |. Now
fix x ∈ X. Let B be the set of b ∈ U x such that T b ⊄ U x. Note that B ⊆ T −1 (T U x△U x) and thus
|B| ≤ |T | · (|T U △U |/|U |) · (|U |/|A ∩ U x|) · |A ∩ U x| < (δ − δ0 )|A ∩ U x|.
Set C ′ = {c ∈ C : Fc c ⊆ U x}. Note that the ǫ-disjoint assumption gives (1 − ǫ) ∑_{c∈C ′} |Fc | ≤ |A ∩ U x|. Also, our definitions of C ′ , T , and B imply that if c ∈ C \ C ′ and ({1G } ∪ K)Fc c ∩ U x ≠ ∅ then (({1G } ∪ K)Fc c) ∩ U x ⊆ B. Therefore (KA△A) ∩ U x ⊆ B ∪ ⋃_{c∈C ′} (KFc c△Fc c). Combining this with the fact that each set Fc is (K, δ0 (1 − ǫ))-invariant, we obtain
|(KA△A) ∩ U x| ≤ |B| + ∑_{c∈C ′} |KFc c△Fc c|
< (δ − δ0 )|A ∩ U x| + ∑_{c∈C ′} δ0 (1 − ǫ)|Fc |
≤ (δ − δ0 )|A ∩ U x| + δ0 |A ∩ U x|
= δ|A ∩ U x|.
Since x was arbitrary, we conclude that A is (K, δ)∗ -invariant.
Lemma 3.2. Let T ⊆ G be finite and let ǫ, δ > 0 with ǫ(1 + δ) < 1. Suppose that A ⊆ X
is (T −1 , δ)∗ -invariant. If B ⊇ A and |B ∩ T x| ≥ ǫ|T | for all x ∈ X, then
D̲(B) ≥ (1 − ǫ(1 + δ)) · D̲(A) + ǫ.
Proof. This is implicitly demonstrated in [3, Proof of Lemma 4.1]. As a convenience to
the reader, we include a proof here. Fix θ > 0. Since A is (T −1 , δ)∗ -invariant, we can pick
a finite set U ⊆ G which is (T, θ)-invariant and satisfies
inf_{x∈X} |A ∩ U x|/|U | > D̲(A) − θ   and   sup_{x∈X} |T −1 A ∩ U x|/|A ∩ U x| < 1 + δ.
Fix x ∈ X, set α = |A ∩ U x|/|U | > D̲(A) − θ, and set U ′ = {u ∈ U : A ∩ T ux = ∅}. Notice that
|U ′ |/|U | = (|U | − |T −1 A ∩ U x|)/|U | = 1 − (|T −1 A ∩ U x|/|A ∩ U x|) · (|A ∩ U x|/|U |) > 1 − (1 + δ)α.
Since A ∩ T U ′ x = ∅ and |B ∩ T y| ≥ ǫ|T | for all y ∈ X, it follows that |(B \ A) ∩ T ux| ≥ ǫ|T | for all u ∈ U ′ . Thus there are ǫ|T ||U ′ | many pairs (t, u) ∈ T × U ′ with tux ∈ B \ A. It follows there is t∗ ∈ T with |(B \ A) ∩ t∗ U ′ x| ≥ ǫ · |U ′ |. Therefore
|B ∩ T U x|/|T U | ≥ |A ∩ U x|/|U | + (|(B \ A) ∩ t∗ U ′ x|/|U ′ |) · (|U ′ |/|U |) · (|U |/|T U |)
> α + ǫ(1 − (1 + δ)α) · (1 + θ)−1
= (1 − ǫ(1 + δ))α + ǫ · (1 + θ)−1
> (1 − ǫ(1 + δ))(D̲(A) − θ) + ǫ · (1 + θ)−1 .
Letting θ tend to 0 completes the proof.
Lemma 3.3. Let X be a standard Borel space and let G y X be a free Borel action. Let
Y ⊆ X be Borel, let T ⊆ G be finite, and let ǫ ∈ (0, 1/2). Then there is a Borel set C ⊆ X
and a Borel function c ∈ C ↦ Tc ⊆ T such that |Tc | > (1 − ǫ)|T |, the sets {Tc c : c ∈ C} are pairwise disjoint and disjoint with Y , Y ∪ ⋃_{c∈C} Tc c = Y ∪ T C, and |(Y ∪ T C) ∩ T x| ≥ ǫ|T |
for all x ∈ X.
Proof. Using Lemma 2.4, fix a Borel partition P = {P1 , . . . , Pm } of X such that T x ∩ T x′ = ∅ for all x ≠ x′ ∈ Pi and all 1 ≤ i ≤ m. We will pick Borel sets Ci ⊆ Pi and set C = ⋃_{1≤i≤m} Ci . Set Y0 = Y . Let 1 ≤ i ≤ m and inductively assume that Yi−1 has been defined. Define Ci = {c ∈ Pi : |Yi−1 ∩ T c| < ǫ|T |}, define Yi = Yi−1 ∪ T Ci , and for c ∈ Ci set Tc = {t ∈ T : tc ∉ Yi−1 }. It is easily seen that C = ⋃_{1≤i≤m} Ci has the desired properties.
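The construction in this proof is in effect a greedy algorithm. As an illustration only, the following Python sketch carries out the analogous finite construction for the translation action of Z/NZ on itself (standing in for the free Borel action, with a naive greedy colouring in place of Lemma 2.4) and then checks the conclusions of the lemma; the particular N, T, ǫ and Y are arbitrary choices.

N = 200
T = list(range(7))                                # the finite shape T (an interval)
eps = 0.3
Y = set(range(0, N, 11))                          # an arbitrary initial set Y

def orbit(S, x):
    return {(s + x) % N for s in S}               # the set S x for the translation action

# Greedy colouring: within each class, the sets T x are pairwise disjoint (cf. Lemma 2.4).
classes = []
for x in range(N):
    for cls in classes:
        if all(orbit(T, x).isdisjoint(orbit(T, c)) for c in cls):
            cls.add(x)
            break
    else:
        classes.append({x})

C, T_c, Ycur = set(), {}, set(Y)
for cls in classes:                               # one pass per colour class, as in the proof
    new = set()
    for c in cls:
        if len(Ycur & orbit(T, c)) < eps * len(T):
            C.add(c)
            T_c[c] = {t for t in T if (t + c) % N not in Ycur}
            new |= orbit(T, c)
    Ycur |= new

cover = set().union(*[orbit(T_c[c], c) for c in C])
assert all(len(T_c[c]) > (1 - eps) * len(T) for c in C)
assert sum(len(T_c[c]) for c in C) == len(cover) and cover.isdisjoint(Y)
assert all(len((Y | cover) & orbit(T, x)) >= eps * len(T) for x in range(N))
print(len(C), "tiles covering", len(cover), "points outside Y")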
The following lemma is mainly due to Ornstein–Weiss [17], who proved it with an
invariant probability measure taking the place of Banach density. Ornstein and Weiss also
established a purely group-theoretic counterpart of this result which was later adapted to
the Banach density setting by Downarowicz–Huczek–Zhang in [3] and will be heavily used
in Section 5, where it is recorded as Theorem 5.2. The only difference between this lemma
and prior versions is that we simultaneously work in the Borel setting and use Banach
density.
Lemma 3.4. [17, II.§2. Theorem 5] [3, Lemma 4.1] Let X be a standard Borel space and
let G y X be a free Borel action. Let K ⊆ G be finite, let ǫ ∈ (0, 1/2), and let n satisfy
(1 − ǫ)n < ǫ. Then there exist (K, ǫ)-invariant sets F1 , . . . , Fn , a Borel set C ⊆ X, and a
Borel function c ∈ C ↦ Fc ⊆ G such that:
(i) for every c ∈ C there is 1 ≤ i ≤ n with Fc ⊆ Fi and |Fc | > (1 − ǫ)|Fi |;
(ii) the sets Fc c, c ∈ C, are pairwise disjoint; and
(iii) D̲(⋃_{c∈C} Fc c) > 1 − ǫ.
Proof. Fix δ > 0 satisfying (1 + δ)−1 (1 − (1 + δ)ǫ)n < ǫ − 1 + (1 + δ)−1 . Fix a sequence
of (K, ǫ)-invariant sets F1 , . . . , Fn such that Fi is (Fj−1 , δ(1 − ǫ))-invariant for all 1 ≤ j <
i ≤ n.
The set C will be the disjoint union of sets Ci , 1 ≤ i ≤ n. The construction will be such that Fc ⊆ Fi and |Fc | > (1 − ǫ)|Fi | for c ∈ Ci . We will define Ai = ⋃_{i≤k≤n} ⋃_{c∈Ck} Fc c and arrange that Ai+1 ∪ Fi Ci = Ai+1 ∪ ⋃_{c∈Ci} Fc c and
(3.1) D̲(Ai ) ≥ (1 + δ)−1 − (1 + δ)−1 (1 − ǫ(1 + δ))n+1−i .
In particular, we will have Ai = ⋃_{i≤k≤n} Fk Ck .
To begin, apply Lemma 3.3 with Y = ∅ and T = Fn to get a Borel set Cn and a Borel
map c ∈ Cn ↦ Fc ⊆ Fn such that |Fc | > (1 − ǫ)|Fn |, the sets {Fc c : c ∈ Cn } are pairwise disjoint, ⋃_{c∈Cn} Fc c = Fn Cn , and |Fn Cn ∩ Fn x| ≥ ǫ|Fn | for all x ∈ X. Applying Lemma 3.2 with A = ∅ and B = Fn Cn we find that the set An = Fn Cn satisfies D̲(An ) ≥ ǫ.
Inductively assume that Cn through Ci+1 have been defined and An through Ai+1 are
defined as above and satisfy (3.1). Using Y = Ai+1 and T = Fi , apply Lemma 3.3 to get
a Borel set Ci and a Borel map c ∈ Ci ↦ Fc ⊆ Fi such that |Fc | > (1 − ǫ)|Fi |, the sets {Fc c : c ∈ Ci } are pairwise disjoint and disjoint with Ai+1 , Ai+1 ∪ ⋃_{c∈Ci} Fc c = Ai+1 ∪ Fi Ci , and |(Ai+1 ∪ Fi Ci ) ∩ Fi x| ≥ ǫ|Fi | for all x ∈ X. The set Ai+1 is the union of an ǫ-disjoint
collection of (Fi−1 , δ(1 − ǫ))-invariant sets and has positive lower Banach density. So by
Lemma 3.1 Ai+1 is (Fi−1 , δ)∗ -invariant. Applying Lemma 3.2 with A = Ai+1 , we find that
Ai = Ai+1 ∪ Fi Ci satisfies
D̲(Ai ) ≥ (1 − ǫ(1 + δ)) · D̲(Ai+1 ) + ǫ
≥ (1 − ǫ(1 + δ))/(1 + δ) − (1 + δ)−1 (1 − ǫ(1 + δ))n+1−i + ǫ(1 + δ)/(1 + δ)
= (1 + δ)−1 − (1 + δ)−1 (1 − ǫ(1 + δ))n+1−i .
This completes the inductive step and completes the definition of C. It is immediate from
the construction that (i) and (ii) are satisfied. Clause (iii) also follows by noting that (3.1)
is greater than 1 − ǫ when i = 1.
We recall the following simple fact.
Lemma 3.5. [3, Lemma 2.3] If F ⊆ G is (K, δ)-invariant and F ′ satisfies |F ′ △F | < ǫ|F |
then F ′ is (K, ((|K| + 1)ǫ + δ)/(1 − ǫ))-invariant.
Now we present the main theorem.
Theorem 3.6. Let G be a countable amenable group, let (X, µ) be a standard probability
space, and let G y (X, µ) be a free p.m.p. action. For every finite K ⊆ G and every δ > 0
there exist a µ-conull G-invariant Borel set X ′ ⊆ X, a collection {Ci : 0 ≤ i ≤ m} of
Borel subsets of X ′ , and a collection {Fi : 0 ≤ i ≤ m} of (K, δ)-invariant sets such that
{Fi c : 0 ≤ i ≤ m, c ∈ Ci } partitions X ′ .
Proof. Fix ǫ ∈ (0, 1/2) satisfying ((|K| + 1)6ǫ + ǫ)/(1 − 6ǫ) < δ. Apply Lemma 3.4 to get (K, ǫ)-invariant sets F1′ , . . . , Fn′ , a Borel set C ⊆ X, and a Borel function c ∈ C ↦ Fc ⊆ G satisfying
(i) for every c ∈ C there is 1 ≤ i ≤ n with Fc ⊆ Fi′ and |Fc | > (1 − ǫ)|Fi′ |;
(ii) the sets Fc c, c ∈ C, are pairwise disjoint; and
(iii) D̲(⋃_{c∈C} Fc c) > 1 − ǫ.
Set Y = X \ ⋃_{c∈C} Fc c. If µ(Y ) = 0 then we are done. So we assume µ(Y ) > 0 and we let ν denote the restriction of µ to Y . Fix a Borel map c ∈ C ↦ Zc ⊆ Fc satisfying 4ǫ|Fc | < |Zc | < 5ǫ|Fc | for all c ∈ C (it’s clear from the proof of Lemma 3.4 that we may choose the sets Fi′ so that ǫ|Fc | > mini ǫ(1 − ǫ)|Fi′ | > 1). Set Z = ⋃_{c∈C} Zc c and let ζ denote the restriction of µ to Z (note that µ(Z) > 0).
Set W = ⋃_{i=1}^{n} Fi′ and W ′ = W W −1 . Fix a finite set U ⊆ G which is (W ′ , (1/2 − ǫ)/|W ′ |)-invariant and satisfies inf_{x∈X} |(X \ Y ) ∩ U x| > (1 − ǫ)|U |. Since every amenable group
admits a Følner sequence consisting of symmetric sets, we may assume that U = U −1 [16,
Corollary 5.3]. Define R ⊆ Y ×Z by declaring (y, z) ∈ R if and only if y ∈ U z (equivalently
z ∈ U y). Then R is Borel, locally finite, and (ν, ζ)-preserving. We now check that R is
expansive. We automatically have |Rz | = |Y ∩ U z| < ǫ|U | for all z ∈ Z. By Lemma 2.6 it
suffices to show that |Ry | = |Z ∩ U y| ≥ 2ǫ|U | for all y ∈ Y . Fix y ∈ Y . Let B be the set
of b ∈ U y such that W ′ b ⊄ U y. Then B ⊆ W ′ (W ′ U y△U y) and thus
|B|/|U | ≤ |W ′ | · |W ′ U △U |/|U | < 1/2 − ǫ.
Let A be the union of those sets Fc c, c ∈ C, which are contained in U y. Notice that
(X \ Y ) ∩ U y ⊆ B ∪ A. Therefore
1/2 − ǫ + |A|/|U | > |(B ∪ A) ∩ U y|/|U | ≥ |(X \ Y ) ∩ U y|/|U | > 1 − ǫ,
hence |A| > |U |/2. By construction |Z ∩ A| > 4ǫ|A|. So |Z ∩ U y| ≥ |Z ∩ A| > 2ǫ|U |. We
conclude that R is expansive.
Apply Proposition 2.1 to obtain a G-invariant µ-conull set X ′ ⊆ X and a Borel injection
ρ : Y ∩X ′ → Z with graph(ρ) ⊆ R. Consider the sets Fc ∪{g ∈ U : gc ∈ Y and ρ(gc) ∈ Fc c}
as c ∈ C varies. These are subsets of W ∪ U and thus there are only finitely many such sets
which we can enumerate as F1 , . . . , Fm . We partition C ∩ X ′ into Borel sets C1 , . . . , Cm
with c ∈ Ci if and only if c ∈ X ′ and Fc ∪ {g ∈ U : gc ∈ Y and ρ(gc) ∈ Fc c} = Fi . Since
ρ is defined on all of Y ∩ X ′ , we see that the sets {Fi c : 1 ≤ i ≤ m, c ∈ Ci } partition X ′ .
Finally, for c ∈ Ci ∩ X ′ , if we let Fj′ be such that |Fc △Fj′ | < ǫ|Fj′ |, then
|Fi △Fj′ | ≤ |Fi △Fc | + |Fc △Fj′ | ≤ |ρ−1 (Fc c)| + ǫ|Fj′ |
≤ |Zc | + ǫ|Fj′ | < 5ǫ|Fc | + ǫ|Fj′ | ≤ 6ǫ|Fj′ |.
Using Lemma 3.5 and our choice of ǫ, this implies that each set Fi is (K, δ)-invariant.
4. Clopen tower decompositions with Følner shapes
Let G y X be an action of a group on a compact space. By a clopen tower we mean
a pair (B, S) where B is a clopen subset of X (the base of the tower) and S is a finite
subset of G (the shape of the tower) such that the sets sB for s ∈ S are pairwise disjoint.
By a clopen tower decomposition of X we mean a finite collection {(Bi , Si )}ni=1 of clopen
towers such that the sets S1 B1 , . . . , Sn Bn form a partition of X. We also similarly speak
of measurable towers and measurable tower decompositions for an action G y (X, µ) on a
measure space, with the bases now being measurable sets instead of clopen sets. In this
terminology, Theorem 3.6 says that if G y (X, µ) is a free p.m.p. action of a countable
amenable group on a standard probability space then for every finite set K ⊆ G and
δ > 0 there exists, modulo a null set, a measurable tower decomposition of X with (K, δ)-invariant shapes.
Lemma 4.1. Let G be a countably infinite amenable group and G y X a free minimal
action on the Cantor set. Then this action has a free minimal extension G y Y on
the Cantor set such that for every finite set F ⊆ G and δ > 0 there is a clopen tower
decomposition of Y with (F, δ)-invariant shapes.
Proof. Let F1 ⊆ F2 ⊆ . . . be an increasing sequence of finite subsets of G whose union is
equal to G. Fix a G-invariant Borel probability measure µ on X (such a measure exists
by amenability). The freeness of the action G y X means that for each n ∈ N we can
apply Theorem 3.6 to produce, modulo a null set, a measurable tower decomposition Un
for the p.m.p. action G y (X, µ) such that each shape is (Fn , 1/n)-invariant. Let A be the
unital G-invariant C∗ -algebra of L∞ (X, µ) generated by C(X) and the indicator functions
of the levels of each of the tower decompositions Un . Since there are countably many
such indicator functions and the group G is countable, the C∗ -algebra A is separable.
Therefore by the Gelfand–Naimark theorem we have A = C(Z) for some zero-dimensional
metrizable space Z and a G-factor map ϕ : Z → X. By a standard fact which can be
established using Zorn’s lemma, there exists a nonempty closed G-invariant set Y ⊆ Z
such that the restriction action G y Y is minimal. Note that Y is necessarily a Cantor
set, since G is infinite. Also, the action G y Y is free, since it is an extension of a free
action. Since the action on X is minimal, the restriction ϕ|Y : Y → X is surjective and
hence a G-factor map. For each n we get from Un a clopen tower decomposition Vn of
Y with (Fn , 1/n)-invariant shapes, and by intersecting the levels of the towers in Vn with
Y we obtain a clopen tower decomposition of Y with (Fn , 1/n)-invariant shapes, showing
that the extension G y Y has the desired property.
Let X be the Cantor set and let G be a countable infinite amenable group. The set
Act(G, X) is a Polish space under the topology which has as a basis the sets
Uα,P,F = {β ∈ Act(G, X) : αs A = βs A for all A ∈ P and s ∈ F }
where α ∈ Act(G, X), P is a clopen partition of X, and F is a finite subset of G.
Write FrMin(G, X) for the set of actions in Act(G, X) which are free and minimal. Then
FrMin(G, X) is a Gδ set. To see this, fix an enumeration s1 , s2 , s3 , . . . of G \ {e} (where
e denotes the identity element of the group) and for every n ∈ N and nonempty clopen set A ⊆ X define the set Wn,A of all α ∈ Act(G, X) such that (i) ⋃_{s∈F} αs A = X for some finite set F ⊆ G, and (ii) there exists a clopen partition {A1 , . . . , Ak } of A such that αsn Ai ∩ Ai = ∅ for all i = 1, . . . , k. Then each Wn,A is open, which means, with A ranging over the countable collection of nonempty clopen subsets of X, that the intersection ⋂_{n∈N} ⋂_A Wn,A , which is equal to FrMin(G, X), is a Gδ set. It follows that FrMin(G, X)
is a Polish space.
Theorem 4.2. Let G be a countably infinite amenable group. Let C be the collection of
actions in FrMin(G, X) with the property that for every finite set F ⊆ G and δ > 0 there
is a clopen tower decomposition of X with (F, δ)-invariant shapes. Then C is a dense Gδ
subset of FrMin(G, X).
Proof. That C is a Gδ set is a simple exercise. Let G y^α X be any action in FrMin(G, X). By Lemma 4.1 this action has a free minimal extension G y^β Y with the property in the theorem statement, where Y is the Cantor set. Let P be a clopen partition of X and F a nonempty finite subset of G. Write A1 , . . . , An for the members of the clopen partition ⋁_{s∈F} s−1 P. Then for each i = 1, . . . , n the set Ai and its inverse image ϕ−1 (Ai ) under the extension map ϕ : Y → X are Cantor sets, and so we can find a homeomorphism ψi : Ai → ϕ−1 (Ai ). Let ψ : X → Y be the homeomorphism which is equal to ψi on Ai for each i. Then the action G y^γ X defined by γs = ψ −1 ◦ βs ◦ ψ for s ∈ G belongs to C as
well as to the basic open neighborhood Uα,P,F of α, establishing the density of C.
5. Applications to Z-stability
A C∗ -algebra A is said to be Z-stable if A ⊗ Z ≅ A where Z is the Jiang–Su algebra [10],
with the C∗ -tensor product being unique in this case because Z is nuclear. Z-stability has
become an important regularity property in the classification program for simple separable
nuclear C∗ -algebras, which has recently witnessed some spectacular advances. Thanks to
recent work of Gong–Lin–Niu [6], Elliott–Gong–Lin–Niu [4], and Tikuisis–White–Winter
[22], it is now known that simple separable unital C∗ -algebras satisfying the universal
coefficient theorem and having finite nuclear dimension are classified by ordered K-theory
paired with tracial states. Although Z-stability does not appear in the hypotheses of this
classification theorem, it does play an important technical role in the proof. Moreover, it
is a conjecture of Toms and Winter that for simple separable infinite-dimensional unital
nuclear C∗ -algebras the following properties are equivalent:
(i) Z-stability,
(ii) finite nuclear dimension,
(iii) strict comparison.
Implications between (i), (ii), and (iii) are known to hold in various degrees of generality.
In particular, the implication (ii)⇒(i) was established in [23] while the converse is known
to hold when the extreme boundary of the convex set of tracial states is compact [1]. It
remains a problem to determine whether any of the crossed products of the actions in Theorem 5.4 falls within the purview of these positive results on the Toms–Winter conjecture,
and in particular whether any of them has finite nuclear dimension (see Question 5.5).
By now there exist highly effective methods for establishing finite nuclear dimension
for crossed products of free actions on compact metrizable spaces of finite covering dimension [20, 21, 7], but their utility is structurally restricted to groups with finite asymptotic
dimension and hence excludes many amenable examples like the Grigorchuk group. One
can show using the technology from [7] that, for a countably infinite amenable group with
finite asymptotic dimension, the crossed product of a generic free minimal action on the
Cantor set has finite nuclear dimension. Our intention here has been to remove the restriction of finite asymptotic dimension by means of a different approach that establishes
instead the conjecturally equivalent property of Z-stability but for arbitrary countably
infinite amenable groups.
To verify Z-stability in the proof of Theorem 5.3 we will use the following result of
Hirshberg and Orovitz [8]. Recall that a linear map ϕ : A → B between C∗ -algebras is
said to be completely positive if its tensor product id ⊗ ϕ : Mn ⊗ A → Mn ⊗ B with the
identity map on the n × n matrix algebra Mn is positive for every n ∈ N. It is of order zero
if ϕ(a)ϕ(b) = 0 for all a, b ∈ A satisfying ab = 0. One can show that ϕ is an order-zero
completely positive map if and only if there is an embedding B ⊆ D of B into a larger
C∗ -algebra, a ∗ -homomorphism π : A → D, and a positive element h ∈ D commuting with
the image of π such that ϕ(a) = hπ(a) for all a ∈ A [24]. Below ≾ denotes the relation of Cuntz subequivalence, so that a ≾ b for positive elements a, b in a C∗ -algebra A means that there is a sequence (vn ) in A such that limn→∞ ‖a − vn bvn∗ ‖ = 0.
Theorem 5.1. Let A be a simple separable unital nuclear C∗ -algebra not isomorphic to
C. Suppose that for every n ∈ N, finite set Ω ⊆ A, ε > 0, and nonzero positive element
a ∈ A there exists an order-zero completely positive contractive linear map ϕ : Mn → A such that
(i) 1 − ϕ(1) ≾ a,
(ii) ‖[b, ϕ(z)]‖ < ε for all b ∈ Ω and norm-one z ∈ Mn .
Then A is Z-stable.
The following is the Ornstein–Weiss quasitiling theorem [17] as formulated in Theorem 3.36 of [11]. For finite sets A, F ⊆ G we write
∂F A = {s ∈ A : F s ∩ A ≠ ∅ and F s ∩ (G \ A) ≠ ∅}.
For λ ≤ 1, a collection C of finite subsets of G is said to λ-cover a finite subset A of G if |A ∩ ⋃C| ≥ λ|A|. For β ≥ 0, a collection C of finite subsets of G is said to be β-disjoint if
for each C ∈ C there is a set C ′ ⊆ C with |C ′ | ≥ (1 − β)|C| so that the sets C ′ for C ∈ C
are pairwise disjoint.
Theorem 5.2. Let 0 < β < 1/2 and let n be a positive integer such that (1 − β/2)n < β. Then whenever e ∈ T1 ⊆ T2 ⊆ · · · ⊆ Tn are finite subsets of a group G such that |∂Ti−1 Ti | ≤ (β/8)|Ti | for i = 2, . . . , n, for every (Tn , β/4)-invariant nonempty finite set E ⊆ G there exist C1 , . . . , Cn ⊆ G such that
(i) ⋃_{i=1}^{n} Ti Ci ⊆ E, and
(ii) the collection of right translates ⋃_{i=1}^{n} {Ti c : c ∈ Ci } is β-disjoint and (1 − β)-covers E.
Theorem 5.3. Let G be a countably infinite amenable group and let G y X be a free
minimal action on the Cantor set such that for every finite set F ⊆ G and δ > 0 there is a
clopen tower decomposition of X with (F, δ)-invariant shapes. Then C(X) ⋊ G is Z-stable.
Proof. Let n ∈ N. Let Υ be a finite subset of the unit ball of C(X), F a symmetric finite
subset of G containing the identity element e, and ε > 0. Let a be a nonzero positive
element of C(X) ⋊ G. We will show the existence of an order-zero completely positive
contractive linear map ϕ : Mn → C(X) ⋊ G satisfying (i) and (ii) in Theorem 5.1 where
the finite set Ω there is taken to be Υ ∪ {us : s ∈ F }. Since C(X) ⋊ G is generated as a
C∗ -algebra by the unit ball of C(X) and the unitaries us for s ∈ G, we will thereafter be
able to conclude by Theorem 5.1 that C(X) ⋊ G is Z-stable.
By Lemma 7.9 in [18] we may assume that a is a function in C(X). Taking a clopen set
A ⊆ X on which a is nonzero, we may furthermore assume that a is equal to the indicator
function 1A . Minimality implies that the clopen sets sA for s ∈ G cover X, and so by
compactness there is a finite set D ⊆ G such that D −1 A = X.
Equip X with a compatible metric d. Choose an integer Q > n2 /ε.
Let γ > 0, to be determined. Take a 0 < β < 1/n which is small enough so that if T is a nonempty finite subset of G which is sufficiently invariant under left translation by F Q and T ′ is a subset of T with |T ′ | ≥ (1 − nβ)|T | then |⋂_{s∈F Q} s−1 T ′ | ≥ (1 − γ)|T |.
Choose an L ∈ N large enough so that (1 − β/2)L < β. By amenability there exist
finite subsets e ∈ T1 ⊆ T2 ⊆ · · · ⊆ TL of G such that |∂Tl−1 Tl | ≤ (β/8)|Tl | for l = 2, . . . , L.
By the previous paragraph, we may also assume that for each l the set Tl is sufficiently
invariant under left translation by F Q so that for all T ⊆ Tl satisfying |T | ≥ (1 − nβ)|Tl |
one has
(5.1) |⋂_{s∈F Q} s−1 T | ≥ (1 − γ)|Tl |.
By uniform continuity there is a η > 0 such that |f (x)−f (y)| < ε/(3n2 ) for all f ∈ Υ∪Υ2
and all x, y ∈ X satisfying d(x, y) < η. Again by uniform continuity there is an η ′ > 0
such that d(tx, ty) < η for all x, y ∈ X satisfying d(x, y) < η ′ and all t ∈ ⋃_{l=1}^{L} Tl . Fix a clopen partition {A1 , . . . , AM } of X whose members all have diameter less than η ′ .
Let E be a finite subset of G containing TL and let δ > 0 be such that δ ≤ β/4. We will
further specify E and δ below. By hypothesis there is a collection {(Vk , Sk )}_{k=1}^{K} of clopen towers such that the shapes S1 , . . . , SK are (E, δ)-invariant and the sets S1 V1 , . . . , SK VK partition X. We may assume that for each k = 1, . . . , K the set Sk is large enough so that
(5.2) M n ∑_{l=1}^{L} |Tl | ≤ β|Sk |.
By a simple procedure we can construct, for each k, a clopen partition Pk of Vk such that
each level of every one of the towers (V, Sk ) for V ∈ Pk is contained in one of the sets
A1 , . . . , AM as well as in one of the sets A and X \ A. By replacing (Vk , Sk ) with these
thinner towers for each k, we may therefore assume that each level in every one of the
towers (V1 , S1 ), . . . , (VK , SK ) is contained in one of the sets A1 , . . . , AM and in one of the
sets A and X \ A.
Let 1 ≤ k ≤ K. Since Sk is (TL , β/4)-invariant, by Theorem 5.2 and our choice of
the sets T1 , . . . , TL we can find Ck,1 , . . . , Ck,L ⊆ Sk such that the collection {Tl c : l =
1, . . . , L, c ∈ Ck,l } is β-disjoint, has union contained in Sk , and (1 − β)-covers Sk . By
β-disjointness, for every l = 1, . . . , L and c ∈ Ck,l we can find a Tk,l,c ⊆ Tl satisfying
|Tk,l,c | ≥ (1 − β)|Tl | so that the collection of sets Tk,l,cc for l = 1, . . . , L and c ∈ Ck,l is
disjoint and has the same union as the sets Tl c for l = 1, . . . , L and c ∈ Ck,l , so that it
(1 − β)-covers Sk .
For each l = 1, . . . , L and m = 1, . . . , M write Ck,l,m for the set of all c ∈ Ck,l such that cVk ⊆ Am , and choose pairwise disjoint subsets C(1)k,l,m , . . . , C(n)k,l,m of Ck,l,m such that each has cardinality ⌊|Ck,l,m |/n⌋. For each i = 2, . . . , n choose a bijection
Λk,i : ⊔_{l,m} C(1)k,l,m → ⊔_{l,m} C(i)k,l,m
which sends C(1)k,l,m to C(i)k,l,m for all l, m. Also, define Λk,1 to be the identity map from ⊔_{l,m} C(1)k,l,m to itself.
Let 1 ≤ l ≤ L and c ∈ ⊔_m C(1)k,l,m . Define the set T ′k,l,c = ⋂_{i=1}^{n} Tk,l,Λk,i (c) , which satisfies
(5.3) |T ′k,l,c | ≥ (1 − nβ)|Tl | ≥ (1 − nβ)|Tk,l,c |
since each Tk,l,Λk,i (c) is a subset of Tl of cardinality at least (1 − β)|Tl |. Set
Bk,l,c,Q = ⋂_{s∈F Q} sT ′k,l,c ,   Bk,l,c,0 = T ′k,l,c \ F Q−1 Bk,l,c,Q ,
and, for q = 1, . . . , Q − 1, using the convention F 0 = {e},
Bk,l,c,q = F Q−q Bk,l,c,Q \ F Q−q−1 Bk,l,c,Q .
Then the sets Bk,l,c,0 , . . . , Bk,l,c,Q partition T ′k,l,c . For s ∈ F we have
(5.4) sBk,l,c,Q ⊆ Bk,l,c,Q−1 ∪ Bk,l,c,Q ,
while for q = 1, . . . , Q − 1 we have
sBk,l,c,q ⊆ Bk,l,c,q−1 ∪ Bk,l,c,q ∪ Bk,l,c,q+1,
(5.5)
for if we are given a t ∈ Bk,l,c,q then st ∈ F Q−q+1 Bk,l,c,Q, while if st ∈ F Q−q−2 Bk,l,c,Q then
t ∈ F Q−q−1 Bk,l,c,Q since F is symmetric, contradicting the membership of t in Bk,l,c,q .
Also, from (5.1) and (5.3) we get
|Bk,l,c,Q| ≥ (1 − γ)|Tl |.
(5.6)
For i = 2, . . . , n, c ∈ ⊔_m C(i)k,l,m , and q = 0, . . . , Q we set Bk,l,c,q = Bk,l,Λ−1k,i (c),q .
Write Λk,i,j for the composition Λk,i ◦ Λ−1k,j . Define a linear map ψ : Mn → C(X) ⋊ G by declaring it on the standard matrix units {eij }ni,j=1 of Mn to be given by
ψ(eij ) = ∑_{k,l,m} ∑_{c∈C(j)k,l,m} ∑_{t∈T ′k,l,c} utΛk,i,j (c)c−1 t−1 1tcVk
and extending linearly. Then ψ(eij )∗ = ψ(eji ) for all i, j and the product ψ(eij )ψ(ei′ j ′ ) is
1 or 0 depending on whether i = i′ , so that ψ is a ∗ -homomorphism.
For all k and l, all 1 ≤ i, j ≤ n, and all c ∈ ⊔_m C(j)k,l,m we set
hk,l,c,i,j = ∑_{q=1}^{Q} ∑_{t∈Bk,l,c,q} (q/Q) utΛk,i,j (c)c−1 t−1 1tcVk
and put
h = ∑_{k,l,m} ∑_{i=1}^{n} ∑_{c∈C(i)k,l,m} hk,l,c,i,i .
Then h is a norm-one function which commutes with the image of ψ, and so we can define an order-zero completely positive contractive linear map ϕ : Mn → C(X) ⋊ G by setting ϕ(z) = hψ(z). Note that ϕ(eij ) = ∑_{k,l,m} ∑_{c∈C(j)k,l,m} hk,l,c,i,j .
We now verify condition (ii) in Theorem 5.1 for the elements of the set {us : s ∈ F }.
Let 1 ≤ i, j ≤ n. For all k and l, all c ∈ ⊔_m C(j)k,l,m , and all s ∈ F we have
us hk,l,c,i,j us−1 − hk,l,c,i,j = ∑_{q=1}^{Q} ∑_{t∈Bk,l,c,q} (q/Q) ustΛk,i,j (c)c−1 (st)−1 1stcVk − ∑_{q=1}^{Q} ∑_{t∈Bk,l,c,q} (q/Q) utΛk,i,j (c)c−1 t−1 1tcVk ,
and so in view of (5.4) and (5.5) we obtain
‖us hk,l,c,i,j us−1 − hk,l,c,i,j ‖ ≤ 1/Q < ε/n2 .
Since each of the elements b = us hk,l,c,i,j us−1 − hk,l,c,i,j is such that b∗ b and bb∗ are dominated by twice the indicator functions of T ′k,l,Λ−1k,j (c) cVk and T ′k,l,Λ−1k,j (c) Λk,i,j (c)Vk , respectively, and the sets T ′k,l,Λ−1k,i′ (c) cVk over all k, l, all i′ = 1, . . . , n, and all c ∈ ⊔_m C(i′)k,l,m are pairwise disjoint, this yields
‖us ϕ(eij )us−1 − ϕ(eij )‖ = max_{k,l,m} max_{c∈C(j)k,l,m} ‖us hk,l,c,i,j us−1 − hk,l,c,i,j ‖ < ε/n2
and hence, for every norm-one element z = (zij ) ∈ Mn ,
‖[us , ϕ(z)]‖ = ‖us ϕ(z)us−1 − ϕ(z)‖ ≤ ∑_{i,j=1}^{n} |zij | ‖us ϕ(eij )us−1 − ϕ(eij )‖ < n2 · ε/n2 = ε.
Next we verify condition (ii) in Theorem 5.1 for the functions in Υ. Let 1 ≤ i, j ≤ n. Let g ∈ Υ ∪ Υ2 . Let 1 ≤ k ≤ K, 1 ≤ l ≤ L, 1 ≤ m ≤ M , and c ∈ C(j)k,l,m . Then
(5.7) h∗k,l,c,i,j ghk,l,c,i,j = ∑_{q=1}^{Q} ∑_{t∈Bk,l,c,q} (q2 /Q2 )(tcΛk,i,j (c)−1 t−1 g)1tcVk
and
(5.8) gh∗k,l,c,i,j hk,l,c,i,j = ∑_{q=1}^{Q} ∑_{t∈Bk,l,c,q} (q2 /Q2 ) g1tcVk .
Now let x ∈ Vk . Since Λk,i,j (c)x and cx both belong to Am , we have d(Λk,i,j (c)x, cx) < η ′ .
It follows that for every t ∈ Tl we have d(tΛk,i,j (c)x, tcx) < η by our choice of η ′ , so that
|g(tΛk,i,j (c)x) − g(tcx)| < ε/(3n2 ) by our choice of η, in which case
‖(tcΛk,i,j (c)−1 t−1 g − g)1tcVk ‖ = ‖c−1 t−1 ((tcΛk,i,j (c)−1 t−1 g − g)1tcVk )‖
= ‖(Λk,i,j (c)−1 t−1 g − c−1 t−1 g)1Vk ‖
= sup_{x∈Vk} |g(tΛk,i,j (c)x) − g(tcx)|
< ε/(3n2 ).
Using (5.7) and (5.8) this gives us
(5.9) ‖h∗k,l,c,i,j ghk,l,c,i,j − gh∗k,l,c,i,j hk,l,c,i,j ‖ = max_{q=1,...,Q} max_{t∈Bk,l,c,q} (q2 /Q2 )‖(tcΛk,i,j (c)−1 t−1 g − g)1tcVk ‖ < ε/(3n2 ).
Set w = ϕ(eij ) for brevity. Let f ∈ Υ. For g ∈ {f, f 2 } the functions h∗k,l,c,i,j ghk,l,c,i,j − gh∗k,l,c,i,j hk,l,c,i,j over all k, l, and m and all c ∈ C(j)k,l,m have pairwise disjoint supports, so that (5.9) yields
‖w∗ gw − gw∗ w‖ < ε/(3n2 ).
It follows that
‖w∗ f 2 w − f w∗ f w‖ ≤ ‖w∗ f 2 w − f 2 w∗ w‖ + ‖f (f w∗ w − w∗ f w)‖ < 2ε/(3n2 )
and so
‖f w − wf ‖2 = ‖(f w − wf )∗ (f w − wf )‖
= ‖w∗ f 2 w − f w∗ f w + f w∗ wf − w∗ f wf ‖
≤ ‖w∗ f 2 w − f w∗ f w‖ + ‖(f w∗ w − w∗ f w)f ‖
< 2ε/(3n2 ) + ε/(3n2 ) = ε/n2 .
Therefore for every norm-one element z = (zij ) ∈ Mn we have
‖[f, ϕ(z)]‖ ≤ ∑_{i,j=1}^{n} |zij | ‖[f, ϕ(eij )]‖ < n2 · ε/n2 = ε.
Finally, we verify that the parameters in the construction of ϕ can be chosen so that
1 − ϕ(1) ≾ 1A . By taking the sets S1 , . . . , SK to be sufficiently left invariant (by enlarging
E and shrinking δ if necessary) we may assume that for every k = 1, . . . , K there is an
Sk′ ⊆ Sk such that the set {s ∈ Sk′ : Ds ⊆ Sk } has cardinality at least |Sk |/2. Let
1 ≤ k ≤ K. Take a maximal set Sk′′ ⊆ Sk′ such that the sets Ds for s ∈ Sk′′ are pairwise
disjoint, and note that |Sk′′ | ≥ |Sk′ |/|D −1 D| ≥ |Sk |/(2|D|2 ). Since D −1 A = X, each of the
sets DsVk for s ∈ Sk′′ intersects A, and so the set Sk♯ of all s ∈ Sk such that sVk ⊆ A has
cardinality at least |Sk |/(2|D|2 ). Define Sk,1 = ⊔_{l,m} ⊔_{i=1}^{n} ⊔_{c∈C(i)k,l,m} Bk,l,c,Q c, which is the set of all s ∈ Sk such that the function ϕ(1) takes the value 1 on sVk . Set Sk,0 = Sk \ Sk,1 .
Since ∑_{i=1}^{n} |C(i)k,l,m | ≥ |Ck,l,m | − n for every l and m, by (5.2) we have
∑_{l,m} ∑_{i=1}^{n} |Tl ||C(i)k,l,m | ≥ ∑_{l,m} |Tl ||Ck,l,m | − M n ∑_l |Tl |
≥ |⋃_l Tl Ck,l | − β|Sk |
≥ (1 − 2β)|Sk |.
Since for all l and i and all c ∈ C(i)k,l,m we have |Bk,l,c,Q | ≥ (1 − γ)|Tl | by (5.6), it follows, putting λ = (1 − γ)(1 − 2β), that
|Sk,1 | ≥ (1 − γ) ∑_{l,m} ∑_{i=1}^{n} |Tl ||C(i)k,l,m | ≥ λ|Sk |.
By taking γ and β small enough we can guarantee that 1 − λ ≤ 1/(2|D|2 ) and hence
|Sk,0 | = |Sk | − |Sk,1 | ≤ (1 − λ)|Sk | ≤ |Sk♯ |,
so that there exists an injection θk : Sk,0 → Sk♯ . Define
z = ∑_{k=1}^{K} ∑_{s∈Sk,0} uθk (s)s−1 1sVk .
A simple computation shows that z ∗ 1A z is the indicator function of ⊔_{k=1}^{K} Sk,0 Vk , which is the support of 1 − ϕ(1), and so putting v = (1 − ϕ(1))1/2 z ∗ we get
v1A v ∗ = (1 − ϕ(1))1/2 z ∗ 1A z(1 − ϕ(1))1/2 = 1 − ϕ(1).
This demonstrates that 1 − ϕ(1) ≾ 1A , as desired.
Combining Theorems 5.3 and 4.2 yields the following.
Theorem 5.4. Let G be a countably infinite amenable group and X the Cantor set. Then
the set of all actions in FrMin(G, X) whose crossed product is Z-stable is comeager, and
in particular nonempty.
Question 5.5. Do any of the crossed products in Theorem 5.4 have tracial state space
with compact extreme boundary (from which we would be able to conclude finite nuclear
dimension by [1] and hence classifiability)? For G = Z a generic action in FrMin(G, X)
is uniquely ergodic, so that the crossed product has a unique tracial state [9]. However,
already for Z2 nothing of this nature seems to be known. On the other hand, it is known
that the crossed products of free minimal actions of finitely generated nilpotent groups on
compact metrizable spaces of finite covering dimension have finite nuclear dimension, and
in particular are Z-stable [21].
References
[1] J. Bosa, N. Brown, Y. Sato, A. Tikuisis, S. White and W. Winter. Covering dimension of C∗ -algebras
and 2-coloured classification. To appear in Mem. Amer. Math. Soc..
[2] T. Ceccherini-Silberstein, P. de la Harpe, and R. I. Grigorchuk. Amenability and paradoxical decompositions for pseudogroups and discrete metric spaces. (Russian) Tr. Mat. Inst. Steklova 224 (1999),
68–111; translation in Proc. Steklov Inst. Math. 224 (1999), 57–97.
[3] T. Downarowicz, D. Huczek, and G. Zhang. Tilings of amenable groups. To appear in J. Reine
Angew. Math.
[4] G. Elliott, G. Gong, H. Lin, and Z. Niu. On the classification of simple amenable C∗ -algebras with
finite decomposition rank, II. arXiv:1507.03437.
[5] G. Elek and G. Lippner. Borel oracles. An analytic approach to constant time algorithms. Proc.
Amer. Math. Soc. 138 (2010), 2939–2947.
[6] G. Gong, H. Lin, and Z. Niu. Classification of finite simple amenable Z-stable C∗ -algebras.
arXiv:1501.00135.
[7] E. Guentner, R. Willett, and G. Yu. Dynamic asymptotic dimension: relation to dynamics, topology,
coarse geometry, and C∗ -algebras. Math. Ann. 367 (2017), 785–829.
[8] I. Hirshberg and J. Orovitz. Tracially Z–absorbing C∗ -algebras. J. Funct. Anal. 265 (2013), 765–785.
[9] M. Hochman. Genericity in topological dynamics. Ergodic Theory Dynam. Systems 28 (2008), 125–
165.
[10] X. Jiang and H. Su. On a simple unital projectionless C∗ -algebra. Amer. J. Math. 121 (1999),
359–413.
[11] D. Kerr and H. Li. Ergodic Theory: Independence and Dichotomies. Springer, Cham, 2016.
[12] A. Kechris, S. Solecki, and S. Todorcevic. Borel chromatic numbers, Adv. in Math. 141 (1999), 1–44.
[13] R. Lyons and F. Nazarov. Perfect matchings as IID factors on non-amenable groups. European J.
Combin. 32 (2011), 1115–1125.
[14] H. Matui and Y. Sato. Strict comparison and Z-absorption of nuclear C∗ -algebras. Acta Math. 209
(2012), 179–196.
[15] H. Matui and Y. Sato. Decomposition rank of UHF-absorbing C∗ -algebras. Duke Math. J. 163 (2014),
2687–2708.
[16] I. Namioka, Følner’s conditions for amenable semi-groups. Math. Scand. 15 (1964), 18–28.
[17] D. S. Ornstein and B. Weiss. Entropy and isomorphism theorems for actions of amenable groups. J.
Analyse Math. 48 (1987), 1–141.
[18] N. C. Phillips. Large subalgebras. arXiv:1408.5546.
[19] M. Rørdam and W. Winter. The Jiang–Su algebra revisited. J. Reine Angew. Math. 642 (2010),
129–155.
[20] G. Szabó. The Rokhlin dimension of topological Zm -actions. Proc. Lond. Math. Soc. (3) 110 (2015),
673–694.
[21] G. Szabo, J. Wu, and J. Zacharias. Rokhlin dimension for actions of residually finite groups.
arXiv:1408.6096.
[22] A. Tikuisis, S. White, and W. Winter. Quasidiagonality of nuclear C∗ -algebras. Ann. of Math. (2)
185 (2017), 229–284.
[23] W. Winter. Nuclear dimension and Z-stability of pure C∗ -algebras. Invent. Math. 187 (2012), 259–
342.
[24] W. Winter and J. Zacharias. Completely positive maps of order zero. Münster J. Math. 2 (2009),
311–324.
Clinton T. Conley, Department of Mathematical Sciences, Carnegie Mellon University,
Pittsburgh, PA 15213, U.S.A.
E-mail address: clintonc@andrew.cmu.edu
Steve Jackson, Department of Mathematics, University of North Texas, Denton, TX 76203-5017, U.S.A.
E-mail address: jackson@unt.edu
David Kerr, Department of Mathematics, Texas A&M University, College Station, TX
77843-3368, U.S.A.
E-mail address: kerr@math.tamu.edu
Andrew Marks, UCLA Department of Mathematics, Los Angeles, CA 90095-1555, U.S.A.
E-mail address: marks@math.ucla.edu
Brandon Seward, Courant Institute of Mathematical Sciences, New York, NY 10012, U.S.A.
E-mail address: bseward@cims.nyu.edu
Robin Tucker-Drob, Department of Mathematics, Texas A&M University, College Station,
TX 77843-3368, U.S.A.
E-mail address: rtuckerd@math.tamu.edu
IEEE TRANSACTIONS ON INFORMATION THEORY, 2018
Capacity Scaling in MIMO Systems with General
Unitarily Invariant Random Matrices
arXiv:1306.2595v4 [] 5 Mar 2018
Burak Çakmak, Ralf R. Müller, Senior Member, IEEE, Bernard H. Fleury, Senior Member, IEEE
Abstract—We investigate the capacity scaling of MIMO systems with the system dimensions. To that end we quantify how
the mutual information varies when the number of antennas (at
either the receiver or transmitter side) is altered. For a system
comprising R receive and T transmit antennas with R > T ,
we find the following: By removing as many receive antennas as
needed to obtain a square system (provided the channel matrices
before and after the removal have full rank) the maximum
resulting loss of mutual information over all signal-to-noise ratios
(SNRs) depends only on R, T and the matrix of left-singular
vectors of the initial channel matrix, but not on its singular
values. In particular, if the latter
is Haar distributed the
P matrix
P
1
ergodic rate loss is given by Tt=1 R
r=T +1 r−t nats. Under the
same assumption, if T, R → ∞ with the ratio φ , T /R fixed, the
rate loss normalized by R converges almost surely to H(φ) bits
with H(·) denoting the binary entropy function. We also quantify
and study how the mutual information as a function of the system
dimensions deviates from the traditionally assumed linear growth
in the minimum of the system dimensions at high SNR.
Index Terms—multiple-input–multiple-output, mutual information, high SNR, multiplexing gain, unitary invariance, binary
entropy function, Haar random matrix, S-transform
I. Introduction
The capacity of a multiple-input–multiple-output (MIMO) system with perfect channel state information at the receiver can be expressed as [1]
min(T, R) log2 SNR + O(1)   (1)
whenever the channel matrix has full rank almost surely.
Here T and R denote the number of transmit and receive
antennas, respectively, and O(1) is a bounded function of the
signal-to-noise ratio (SNR) that does depend on T and R, in
general. The scaling term min(T, R) is often referred to as
the multiplexing gain. The explicit expression for the capacity
scaling when the number of transmit or receive antennas
varies, is difficult to calculate. Closed-form expressions can
be obtained only in few particular cases, e.g. for a channel
matrix of asymptotically large size with independent identically distributed (iid) zero-mean entries [2].
Burak Çakmak and Bernard H. Fleury were supported by the research
project VIRTUOSO funded by Intel Mobile Communications, Keysight,
Telenor, Aalborg University, and the Danish National Advanced Technology
Foundation. Ralf R. Müller was supported by the Alexander von Humboldt
Foundation.
Burak Çakmak is with the Department of Computer Science, Technical
University of Berlin, 10587 Berlin, Germany (e-mail: burak.cakmak@tuberlin.de).
Ralf R. Müller is with the Institute for Digital Communications, FriedrichAlexander Universität Erlangen-Nürnberg, 91058 Erlangen, Germany (e-mail:
mueller@lnt.de).
Bernard H. Fleury is with the Department of Electronic Systems, Aalborg
University, 9220 Aalborg, Denmark (e-mail: fleury@es.aau.dk).
In order to better understand capacity scaling in MIMO
channels with more complicated structures, such as correlation
at transmit and/or receive antennas, related works use either
implicit solutions, e.g. [3], or consider asymptotically high
SNR and express the capacity in terms of the multiplexing
gain, e.g. [4]. However, implicit solutions provide limited
intuitive insight into the capacity scaling and the multiplexing
gain is a crude measure of capacity.
In this article, we consider an affine approximation to the
mutual information at high SNR. In particular, we investigate
how mutual information varies when the numbers of antennas
(at either the receiver or transmitter side) is altered. Our affine
approximation to the mutual information leads to a generalization of the multiplexing gain which we call the multiplexing
rate. Such an approximation was formerly addressed in [1],
which was the baseline of many published works, e.g. [5]–[7].
We study the variation of the multiplexing rate when the
number of antennas either at the transmit or receive side varies.
More specifically, we formulate the reduction of the number of
antennas by means of a convenient linear projection operator.
This formulation allows us to asses the mutual information at
high SNR in insightful and explicit closed form. We consider
unitarily invariant matrix ensembles [8] which model a broad
class of MIMO channels [9]. Specifically, our sole restriction
is that the matrix of left (right) singular vectors of the initial
channel matrix, i.e. before the reduction, is Haar distributed.
Informally speaking, this implies that the channel matrix
involves some symmetry with respect to the antennas. An
individual antenna contributes in a “democratic fashion” to
the mutual information. There is no preferred antenna in the
system. In fact, such an invariance seems a natural property
for the mutual information to depend on T and R only, but
not on the specific antennas in the system.
Since the term O(1) in (1) is a bounded function of SNR,
the expression (1) has more than once led to misinterpretations
in the wireless communications community:
(i) when the number of antennas at either the transmit or
receive side varies, while the minimum of the system
dimensions (i.e. the numbers of transmit and receive
antennas) is kept fixed, the mutual information does not
vary at high SNR;
(ii) the mutual information scales linearly with the minimum
of the system dimensions at high SNR.
It is the goal of this paper to debunk these misinterpretations.
We summarize our main contributions as follows:
1) As regards misinterpretation (i) we find the following: For
a system comprising R receive and T transmit antennas
with R > T (T > R), let some of the receive (transmit)
antennas be removed from the system to obtain a system
with R̃ ≥ T receive (T̃ ≥ R transmit) antennas. Note
that min(T, R̃) = T (min(T̃ , R) = R). Then, the loss
of mutual information in the high SNR limit depends
only on R, T and R̃ (T̃ ) and the matrix of left (right)singular vectors of the initial R × T channel matrix,
but not on its singular values. Assuming the matrix
of left-(right-)singular vectors to bePHaarPdistributed,
T
R
1
the ergodic rate loss is given by
t=1
r=R̃+1 r−t
PR PT
1
( r=1 t=T̃ +1 t−r ) nats.
2) As regards misinterpretation (ii), we quantify how the
mutual information as a function of the number of antennas deviates from the approximate linear growth (versus
the minimum of the system dimensions) in the high SNR
limit. This deviation does depend on the singular values
of the channel matrix. We show that in the large system
limit the deviation is additive for compound unitarily
invariant channels and can be easily expressed in terms
of the S-transform (in free probability) of the limiting
eigenvalue distribution (LED) of the Gramian of the
channel matrix.
3) We show that the aforementioned results on the variation
of mutual information in the high SNR limit provide least
upper bounds on said variation over all SNRs. Thus, these
results have a universal character related to the SNR.
4) We derive novel formulations of the mutual information
and the multiplexing rate in terms of the S-transform
of the empirical eigenvalue distribution of the Gramian
of the channel matrix. These formulations establish a
fundamental relationship between the mutual information
and the multiplexing rate.
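As a quick numerical illustration of contribution 1) and of the large-system statement in the abstract (the sizes below are arbitrary and the snippet is not part of the analysis), the following Python code evaluates the double sum for R̃ = T and compares it with R · H(φ) bits:

import numpy as np

def rate_loss_nats(T, R):
    # sum_{t=1}^{T} sum_{r=T+1}^{R} 1/(r - t): the ergodic rate loss in nats for Rtilde = T
    return sum(1.0 / (r - t) for t in range(1, T + 1) for r in range(T + 1, R + 1))

def binary_entropy_bits(p):
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

T, R = 60, 120                                    # phi = T/R = 0.5
loss_bits = rate_loss_nats(T, R) / np.log(2)
print("rate loss      :", loss_bits, "bits")
print("R * H(T/R)     :", R * binary_entropy_bits(T / R), "bits")
print("gap per antenna:", abs(loss_bits - R * binary_entropy_bits(T / R)) / R)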
A. Related Work
The work presented in paper [5] is related to contribution 1).
Specifically, in [5, Section 3] the authors unveiled misinterpretation (i) for iid Gaussian unitarily invariant channel matrices.
We elucidate misinterpretation (i) by considering arbitrary
unitarily invariant matrices that need neither be Gaussian nor
iid. In particular, our results and/or statements do not require
any assumptions on the singular values of the channel matrix.
They solely depend on the singular vectors of the channel
matrix, e.g. see contribution 1). Our proof technique - which is
based on an algebraic manipulation of the projection operator
that we introduce - is different from any related work we are
aware of.
B. Organization
The paper is organized as follows. In Section II, we introduce the preliminary notations and definitions. In Section III,
we present the system model. In Section IV, we introduce new
formulations of the mutual information and the multiplexing
rate in terms of the S-transform. Sections V and VI are dedicated to lifting misinterpretations (i) and (ii), respectively.
Conclusions are outlined in Section VII. The technical lemmas
and the proofs are located in the Appendix.
II. Notations & Definitions
Notation 1 We denote the binary entropy function as
H(p) ≜ (p − 1) log2 (1 − p) − p log2 p for p ∈ (0, 1), and H(p) ≜ 0 for p ∈ {0, 1}.   (2)
Notation 2 For an N × K matrix X, F^K_X denotes the empirical eigenvalue distribution function of X † X, i.e.
F^K_X (x) = (1/K) |{λi ∈ L : λi ≤ x}|   (3)
with L and | · | denoting the set of eigenvalues of X † X and the cardinality of a set, respectively. Here, (·)† denotes conjugate transposition. Moreover, for N, K → ∞ with φ = K/N fixed, if F^K_X converges weakly and almost surely to a LED function, this limit is denoted by F_X .
Definition 1 A K-dimensional projector P_β with β ≤ 1 is a βK × K matrix with entries (P_β )ij = δij , ∀i, j, where δij denotes the Kronecker delta.
Definition 2 For an N × K matrix X ≠ 0, we define the normalized rank of X † X as
α^K_X ≜ 1 − F^K_X (0)   (4)
and the distribution function of non-zero eigenvalues of X † X as
F̃^K_X (x) ≜ (1/α^K_X ) [(α^K_X − 1) u(x) + F^K_X (x)]   (5)
with u(x) denoting the unit-step function.
The S-transform introduced by Voiculescu in the context of
free probability is defined as follows:
Definition 3 [10] Let F be a probability distribution function with support in [0, ∞). Moreover, let α ≜ 1 − F(0) ≠ 0. Define
Ψ(z) ≜ ∫ zx/(1 − zx) dF(x),   −∞ < z < 0.   (6)
Then, the S-transform of F is defined as
S(z) ≜ ((z + 1)/z) Ψ−1 (z),   −α < z < 0   (7)
where Ψ−1 denotes the composition inverse of Ψ.
Notation 3 For an N × K matrix X ≠ 0, the S-transform of F^K_X is denoted by S^K_X . For N, K → ∞ with φ = K/N fixed, if X † X has a LED function F_X almost surely, the S-transform of F_X is denoted by S_X . Similarly, we define Ψ^K_X and Ψ_X .
All large-system limits are assumed to hold in the almost
sure sense, unless explicitly stated otherwise. Where obvious,
limit operators indicating the large-system limit are omitted
for the sake of compactness and readability.
III. SYSTEM MODEL
Consider the MIMO system
y = Hx + n  (8)
where H ∈ C^{R×T}, x ∈ C^{T×1}, y ∈ C^{R×1}, n ∈ C^{R×1} are respectively the channel matrix, the input vector, the output vector, and the noise vector. The entries of x and n are assumed to be independent (circularly symmetric) complex Gaussian distributed with zero mean and variances σ_x² and σ_n², respectively. The transmit SNR is defined as
γ ≜ σ_x²/σ_n²,  0 < γ < ∞.  (9)
The mutual information per transmit antenna of the communication link (8) is given by [11]
I(γ; F^T_H) ≜ ∫ log2(1 + γx) dF^T_H(x).  (10)
Similarly, I(γ; F^R_{H†}) is the mutual information per receive antenna of (8).
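For a finite channel matrix, (10) is simply an average over the eigenvalues of H†H. A minimal sketch, assuming NumPy (the function name and numbers are ours):

```python
import numpy as np

def mutual_info_per_tx(gamma, H):
    """I(gamma; F^T_H) = (1/T) * sum_i log2(1 + gamma * eig_i(H^dagger H)), cf. (10)."""
    eigs = np.linalg.eigvalsh(H.conj().T @ H)
    return np.mean(np.log2(1.0 + gamma * eigs))

R, T, gamma = 4, 2, 10.0
H = (np.random.randn(R, T) + 1j * np.random.randn(R, T)) / np.sqrt(2)
I_tx = mutual_info_per_tx(gamma, H)      # per transmit antenna
I_rx = T / R * I_tx                      # per receive antenna: T*I(F^T_H) = R*I(F^R_{H^dagger})
print(I_tx, I_rx)
```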
A. Antenna Removal Via Projector
In the sequel, we formulate the variation of mutual information when the number of antennas either at the transmit or
receive side of reference system (8) changes. This variation
is achieved by removing a certain fraction of antennas at the
corresponding side of the system. We formulate this removal
process via a multiplication of the channel matrix with a
rectangular projector matrix.
We distinguish between two cases: the removal of receive
antennas and the removal of transmit antennas. In the first case,
the system model resulting after removing a fraction 1 − β of
receive antennas in (8) reads
y_β = P_β (Hx + n)  (11)
= P_β H x + n_β.  (12)
The βR × R matrix P_β is an R-dimensional projector which removes a fraction 1 − β of receive antennas in reference system (8) and n_β = P_β n. The mutual information of the MIMO system (12) is equal to
T I(γ; F^T_{P_β H}).  (13)
Similarly, removing a fraction 1 − β of transmit antennas in (8) yields the R × βT system
ỹ = HP†_β x_β + n.  (14)
Here, xβ is the vector obtained by removing from x the (1 −
β)T entries fed to the removed transmit antennas, i.e. xβ =
P β x with P β being a T -dimensional projector. The mutual
information of system (14) reads
βT I(γ; F^{βT}_{HP†_β}).  (15)
B. Unitary Invariance
For channel matrices that are unitarily invariant from right,
i.e. H and HU admit the same distribution for any unitary
matrix U independent of H, it does not matter which transmit
antennas are removed. Only their number counts. The same
applies to channel matrices that are unitarily invariant from
left for the removal of receive antennas. For channel matrices
that involve an asymmetry with respect to the antennas, i.e.
some antennas contribute more to the mutual information
than others, it must be specified which antennas are to be
removed and the mutual information will depend (typically in
a complicated manner) on the choice of the removed antennas.
In this paper, we restrict the considerations to cases where only
the number of removed antennas matters, since this leads to
explicit closed-form expressions.
For asymmetric channel matrices, one could obtain antenna-independent scaling laws if all antennas with equal contributions to mutual information are grouped together and all those
groups are decimated proportionally. Doing so would heavily
complicate the formulation of the antenna removal by means
of multiplication with projector matrices. However, we can
utilize the fact that for the channel in (8), mutual information
is invariant to multiplication with unitary matrices, i.e.
I(γ; F^T_{V H U}) = I(γ; F^T_H)  (16)
for all unitary matrices U and V. Since the channel matrix
U HV is bi-unitarily invariant for all random unitary matrices
U and V independent of H, and has the same mutual
information as H, we can assume without loss of generality
that H is unitarily invariant from left for receive and from
right for transmit antenna removal, respectively, and keep the
projector formulation of Section III-A as it is.
The multiplication with a random unitary matrix followed
by a fixed selection of antennas has statistically the same effect
as a random selection of antennas. It provides the symmetry required to make mutual information only depend on the number
of removed antennas and not on which antennas are removed.
Equivalence to the ergodic capacity variation: The ergodic capacity of channel (8) is [12]
C̄(γ, F^T_H) ≜ max_{Q ≥ 0, tr(Q) = T} E[ I(γ; F^T_{H√Q}) ].  (17)
Conceptually, we relax the iid assumption on the entries of x in (8) and assume arbitrary correlation between these entries, described by the covariance matrix σ_x² Q, where Q is non-negative definite with unit trace and σ_x² ≜ (1/T) E[x†x]. It is shown in [12] that for channel matrices that are unitarily invariant from right the ergodic capacity in (17) is attained with Q = I, i.e.
C̄(γ, F^T_H) = E[ I(γ; F^T_H) ].  (18)
In particular, the unitary invariance property of the channel
is not broken by removing some of the transmit or receive
antennas. For example, if H is invariant from right, then
HP †β is invariant from right too. In summary, for bi-unitarily
invariant channel matrices the variation of ergodic mutual
informations that results from removing some number of
transmit or receive antennas does actually coincide with the
corresponding variation of ergodic capacities.
IV. MUTUAL INFORMATION AND MULTIPLEXING RATE
The normalized mutual information in (10) can be decomposed as
I(γ; F^T_H) = α^T_H ∫ log2(γx) dF̃^T_H(x) + α^T_H ∫ log2(1 + 1/(xγ)) dF̃^T_H(x)  (19)
where the first and second terms are denoted I_0(γ; F^T_H) and ∆I(γ; F^T_H), respectively. We refer to the first term I_0(γ; F^T_H) as the multiplexing rate per transmit antenna. The factor α^T_H is the multiplexing gain normalized by the number of transmit antennas. The second term ∆I(γ; F^T_H) is the difference between the mutual information per transmit antenna and the multiplexing rate per transmit antenna. To alleviate the terminology, in the sequel we skip the explicit reference to the normalization by the number of transmit (or receive, see later) antennas when we refer to quantities such as those arising in (19). Whether the quantities considered are absolute or normalized will be clear from the context. We have
lim_{γ→∞} ∆I(γ; F^T_H) = 0.  (20)
If H†H is invertible we have
I_0(γ; F^T_H) = (1/T) log2 det(γH†H)  (21)
∆I(γ; F^T_H) = (1/T) log2 det(I + (γH†H)^{−1})  (22)
with I denoting the identity matrix.
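A quick numerical check of the decomposition (19) through the log-det forms (21)-(22); a minimal sketch assuming NumPy and a full-column-rank H (dimensions and SNR are illustrative only):

```python
import numpy as np

T, R, gamma = 3, 5, 100.0
H = (np.random.randn(R, T) + 1j * np.random.randn(R, T)) / np.sqrt(2)
G = H.conj().T @ H                                   # invertible for R >= T almost surely

I_total = np.mean(np.log2(1.0 + gamma * np.linalg.eigvalsh(G)))                    # (10)
I0 = np.log2(np.linalg.det(gamma * G).real) / T                                    # multiplexing rate, (21)
dI = np.log2(np.linalg.det(np.eye(T) + np.linalg.inv(gamma * G)).real) / T         # (22)

assert abs(I_total - (I0 + dI)) < 1e-9               # I = I0 + Delta I, cf. (19)
```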
The affine approximation of the ergodic mutual information
at high SNR introduced in [1], see also [5, Eq. (9)] for a compact formulation of it, coincides with the ergodic formulation
of our definition of the multiplexing rate.
We next uncover a fundamental link between the mutual
information and the multiplexing rate. This result makes use
of the minimum-mean-square-error (MMSE) achieved by the
optimal receiver for (8) normalized by the number of transmit
antennas
η^T_H(γ) ≜ ∫ dF^T_H(x)/(1 + γx).  (23)
Clearly, η^T_H(γ) is a strictly decreasing function of γ with range (1 − α^T_H, 1) [9].
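For an empirical eigenvalue distribution, (23) is again a simple average; a minimal sketch assuming NumPy (names and values are ours):

```python
import numpy as np

def mmse_per_tx(gamma, H):
    """eta^T_H(gamma) = integral of dF^T_H(x)/(1 + gamma*x), cf. (23)."""
    eigs = np.linalg.eigvalsh(H.conj().T @ H)
    return np.mean(1.0 / (1.0 + gamma * eigs))

H = (np.random.randn(6, 3) + 1j * np.random.randn(6, 3)) / np.sqrt(2)
etas = [mmse_per_tx(g, H) for g in (0.1, 1.0, 10.0, 100.0)]
assert all(a > b for a, b in zip(etas, etas[1:]))   # strictly decreasing in gamma, range (1 - alpha, 1)
```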
THEOREM 1 Define
f_H(x) ≜ H(x) − ∫_0^x log2 S^T_H(−z) dz,  0 ≤ x ≤ α^T_H.  (24)
Then, we have
I(γ; F^T_H) = f_H(1 − η^T_H) + (1 − η^T_H) log2 γ  (25)
I_0(γ; F^T_H) = f_H(α^T_H) + α^T_H log2 γ.  (26)
For short we write η^T_H for η^T_H(γ) in (25).
PROOF 1 See Appendix B.
Note that by definition the function f_H(x) in (24) may involve α^T_H via S^T_H(z). We have the following implications of Theorem 1: i) the mutual information can be directly expressed as a function of the (normalized) MMSE; ii) for any expression of the mutual information as a function of the MMSE η^T_H, the multiplexing rate results immediately by replacing η^T_H with 1 − α^T_H, e.g. see Examples 1 and 2; iii) the converse of ii) is not always true: given an expression of the multiplexing rate as a function of α^T_H, replacing α^T_H with 1 − η^T_H does not always yield the mutual information. An intermediate step is required here to guarantee that the converse holds: the expression needs first to be recast as a function of f_H. Then replacing α^T_H with 1 − η^T_H in the latter function yields the mutual information.
If any probability distribution function with support in [0, ∞), say F, is substituted for F^T_H in (19), the formulas (25) and (26) remain valid provided I(γ; F) is finite and log(x) is absolutely integrable over F̃, respectively¹. The absolute integrability condition holds if, and only if, I(γ; F) and ∆I(γ; F) are finite, see (171)-(173). In the sequel we substitute F_H for F^T_H to calculate I(γ; F_H) and I_0(γ; F_H). In Appendix C, we provide some sufficient conditions that guarantee the almost sure convergence of I(γ; F^T_H) and I_0(γ; F^T_H) to I(γ; F_H) and I_0(γ; F_H), respectively. We conclude that this asymptotic convergence is a reasonable assumption in practice; for the details see Appendix C.
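As a numerical sanity check of (25) for a finite full-rank channel (where α^T_H = 1 and, as discussed above, the empirical distribution may be substituted for F^T_H), the following sketch combines the empirical S-transform computation shown in Section II with numerical quadrature. It assumes NumPy/SciPy; dimensions, SNR and tolerances are illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

def entropy2(p):
    return 0.0 if p <= 0.0 or p >= 1.0 else -(1 - p) * np.log2(1 - p) - p * np.log2(p)

def s_empirical(z, eigs):
    """S-transform of the empirical eigenvalue distribution of a full-rank Gramian, -1 < z < 0."""
    psi = lambda w: np.mean(w * eigs / (1.0 - w * eigs))
    w = brentq(lambda w: psi(w) - z, -1e14, -1e-14)
    return (z + 1.0) / z * w

T, R, gamma = 4, 8, 31.6
H = (np.random.randn(R, T) + 1j * np.random.randn(R, T)) / np.sqrt(2)
eigs = np.linalg.eigvalsh(H.conj().T @ H)

eta = np.mean(1.0 / (1.0 + gamma * eigs))                       # MMSE, cf. (23)
x = 1.0 - eta
integral, _ = quad(lambda z: np.log2(s_empirical(-z, eigs)), 0.0, x)
rhs = entropy2(x) - integral + x * np.log2(gamma)               # f_H(1 - eta) + (1 - eta)*log2(gamma), cf. (24)-(25)
lhs = np.mean(np.log2(1.0 + gamma * eigs))                      # direct evaluation of (10)
assert abs(lhs - rhs) < 1e-5
```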
It is well-known that the S-transform of the LED of the
product of asymptotically free matrices is the product of
the respective S-transforms of the LEDs of these matrices.
Therefore, for MIMO channel matrices that involve a compound structure, Theorem 1 provides a means to analytically
calculate the large-system limits of the mutual information
and multiplexing rate in terms of the large-system limits of
the MMSE and the multiplexing gain. We next address two
relevant random matrix ensembles that share this structure.
EXAMPLE 1 We consider the concatenation of vector-valued fading channels described in [13]. Specifically, we assume that the channel matrix H factorizes according to
H = X_N X_{N−1} · · · X_2 X_1  (27)
where the entries of the K_n × K_{n−1} matrix X_n are iid with zero mean and variance 1/K_n for n ∈ [1, N]. Furthermore, the ratios ρ_n ≜ K_n/K_0, n ∈ [1, N], are fixed as K_n → ∞. Moreover, let η_H denote the large-system limit of the MMSE η^T_H. By invoking Theorem 1 we obtain an analytical expression of the large-system limit of the mutual information in terms of (the large-system limit of) the MMSE² as
I(γ; F_H) = H(η_H) + (1 − η_H)(log2 γ − N log2 e) + (1 − η_H) Σ_{n=1}^N [ (ρ_n/(1 − η_H)) H((1 − η_H)/ρ_n) + log2((1 − η_H)/ρ_n) ].  (28)
¹ Here F̃ is defined by substituting F̃^T_H for F in (5).
² An explicit expression of the MMSE as a function of SNR is difficult to obtain. However, η_H(γ) can be solved numerically from the fixed point equation γ = [η_H(γ)/(1 − η_H(γ))] ∏_{n=1}^N (η_H(γ) + ρ_n − 1)/ρ_n [13, Eq. (21)].
Furthermore, as regards the multiplexing rate, we have
I_0(γ; F_H) = H(α_H) + α_H(log2 γ − N log2 e) + α_H Σ_{n=1}^N [ (ρ_n/α_H) H(α_H/ρ_n) + log2(α_H/ρ_n) ]  (29)
with α_H = min(1, ρ_1, · · · , ρ_N).
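A numerical sketch of Example 1, assuming NumPy/SciPy: it solves the fixed-point equation quoted from [13, Eq. (21)] in footnote 2 for η_H and then evaluates the expression (28) as reconstructed above; the function names and parameter values are ours.

```python
import numpy as np
from scipy.optimize import brentq

def entropy2(p):
    return 0.0 if p <= 0.0 or p >= 1.0 else -(1 - p) * np.log2(1 - p) - p * np.log2(p)

def eta_concatenated(gamma, rhos):
    """Solve gamma = eta/(1-eta) * prod_n (eta + rho_n - 1)/rho_n for eta in (1 - alpha_H, 1)."""
    alpha = min(1.0, *rhos)
    g = lambda eta: eta / (1.0 - eta) * np.prod([(eta + r - 1.0) / r for r in rhos]) - gamma
    return brentq(g, 1.0 - alpha + 1e-12, 1.0 - 1e-12)

def mutual_info_concatenated(gamma, rhos):
    """Large-system mutual information (28) for H = X_N ... X_1."""
    eta = eta_concatenated(gamma, rhos)
    a, N = 1.0 - eta, len(rhos)
    out = entropy2(eta) + a * (np.log2(gamma) - N * np.log2(np.e))
    out += a * sum(r / a * entropy2(a / r) + np.log2(a / r) for r in rhos)
    return out

print(mutual_info_concatenated(gamma=10.0, rhos=[1.0, 0.5]))
```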
PROOF 2 See Appendix D.
EXAMPLE 2 We consider a Jacobi matrix ensemble, see e.g. [14], [15], which finds application in the context of optical MIMO communications [16], [17]. Accordingly, the channel matrix factorizes as
H = P_{β_2} U P†_{β_1}  (30)
where U is an N × N Haar unitary matrix. From Theorem 1 we obtain
I(γ; F_H) = H(η_H) + (1 − η_H) log2 γ − H(β_1(1 − η_H))/β_1 + (β_2/β_1) H((β_1/β_2)(1 − η_H))  (31)
where η_H = η_H(γ) is given by
η_H(γ) = 1 + [ −(1 + κγ) + √((1 + κγ)² − 4β_1β_2γ(1 + γ)) ] / (2β_1(1 + γ))  (32)
with κ ≜ β_1 + β_2. Moreover, we have
I_0(γ; F_H) = H(α_H) + α_H log2 γ − H(β_1α_H)/β_1 + (β_2/β_1) H((β_1/β_2)α_H)  (33)
with α_H = min(1, β_2/β_1).
PROOF 3 See Appendix E.
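A Monte Carlo sketch of Example 2, assuming NumPy: a Haar unitary is drawn via a QR decomposition with a phase correction, the corner block P_{β2} U P†_{β1} is extracted, and the empirical MMSE is compared with the closed form (32); the sample sizes are illustrative.

```python
import numpy as np

def haar_unitary(n, rng):
    """Draw an n x n Haar-distributed unitary via QR of a Gaussian matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))      # fix the phases so that q is Haar

def eta_jacobi(gamma, b1, b2):
    """Closed-form MMSE (32) for the Jacobi channel H = P_{b2} U P_{b1}^dagger."""
    k = b1 + b2
    disc = (1 + k * gamma) ** 2 - 4 * b1 * b2 * gamma * (1 + gamma)
    return 1.0 + (-(1 + k * gamma) + np.sqrt(disc)) / (2 * b1 * (1 + gamma))

rng = np.random.default_rng(0)
N, b1, b2, gamma = 600, 0.5, 0.75, 5.0
U = haar_unitary(N, rng)
H = U[: int(b2 * N), : int(b1 * N)]                   # P_{b2} U P_{b1}^dagger keeps a corner block of U
eigs = np.linalg.eigvalsh(H.conj().T @ H)
eta_mc = np.mean(1.0 / (1.0 + gamma * eigs))          # empirical MMSE, cf. (23)
print(eta_mc, eta_jacobi(gamma, b1, b2))              # should agree closely for large N
```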
V. THE UNIVERSAL RATE LOSS
In Section I we underlined the following misinterpretation of mutual information: when the number of antennas (at either the transmit or receive side) varies, with the minimum of the system dimensions kept fixed, the mutual information does not vary at high SNR. It is the goal of this section to elucidate this misinterpretation. To do so we need to distinguish between two cases as to reference system (8): (i) T ≤ R; (ii) T ≥ R. In the former (latter) case we consider the removal of receive (transmit) antennas. In both cases the reduction of antennas is constrained in a way that keeps the minimum of the numbers of antennas at both sides fixed.
A. Case (i) – Removing receive antennas
We remove a fraction (1 − β) of receive antennas in system (8) to obtain system (12). We constrain the reduction with the condition β ≥ φ ≜ T/R to ensure that min(T, βR) = T. This reduction of the number of receive antennas causes a loss in mutual information given by T I(γ; F^T_H) − T I(γ; F^T_{P_β H}). Normalizing this loss with the number of transmit antennas yields
I(γ; F^T_H) − I(γ; F^T_{P_β H}).  (34)
Assume that H and P_β H have both full rank almost surely. Then, we define the rate loss
χ^T_H(R, βR) ≜ lim_{γ→∞} I(γ; F^T_H) − I(γ; F^T_{P_β H}),  β ≥ φ  (35)
= I_0(γ; F^T_H) − I_0(γ; F^T_{P_β H})  (36)
= (1/T) log2 [ det(H†H) / det(H†P†_β P_β H) ].  (37)
The full-rank assumption implies α^T_H = α^T_{P_β H}, which is essential in the definition (35). Otherwise the difference in (35) diverges as γ → ∞. Next, we present some general important properties of the rate loss χ^T_H(R, βR).
1) Universality related to SNR: Note that both quantities
in (34) increase with the SNR. It is shown in Appendix F that
their difference, i.e. (34), increases with the SNR too. Hence,
the rate loss χTH (R, βR) provides the least upper bound on
the mutual information loss over the entire SNR range.
REMARK 1 Let H and P_β H have both full rank almost surely. Then, we have
χ^T_H(R, βR) = sup_γ { I(γ; F^T_H) − I(γ; F^T_{P_β H}) }.  (38)
PROOF 4 See Appendix F.
2) Equivalence to capacity loss: Let us denote the capacity of channel (12) as
C(γ; F^T_{P_β H}) ≜ max_{Q ≥ 0, tr(Q) = T} I(γ; F^T_{P_β H√Q}).  (39)
It turns out that (35) also holds when the mutual informations in (35) are replaced by the respective capacities.
REMARK 2 Let H and P_β H have both full rank almost surely. Then, we have
χ^T_H(R, βR) = lim_{γ→∞} C(γ; F^T_H) − C(γ; F^T_{P_β H}).  (40)
PROOF 5 See Appendix G.
3) The invariance related to singular values: Though χ^T_H(R, βR) is defined through the distribution functions F^T_H and F^T_{P_β H} in (35), it actually depends solely on the matrix of left singular vectors of H:
THEOREM 2 Let H and P_β H have both full rank almost surely. Consider the spectral decomposition
H = LSR  (41)
where L is a R × R unitary matrix whose columns are the left singular vectors of H, R is a T × T unitary matrix whose columns are the right singular vectors of H and the diagonal entries of S are the singular values of H. Then, we have
χ^T_H(R, βR) = −(1/T) log2 det( P_φ L† P†_β P_β L P†_φ ).  (42)
PROOF 6 See Appendix H.
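Theorem 2 can be checked numerically for a single realization: the log-det difference (36)-(37) must coincide with the expression (42) built from the left singular vectors only. A sketch assuming NumPy; the dimensions are illustrative and chosen so that P_β H keeps full column rank.

```python
import numpy as np

def projector(beta, K):
    return np.eye(int(round(beta * K)), K)

rng = np.random.default_rng(1)
R, T, beta = 6, 3, 0.5
phi = T / R
H = (rng.standard_normal((R, T)) + 1j * rng.standard_normal((R, T))) / np.sqrt(2)
Pb = projector(beta, R)

# Rate loss via the log-det difference, cf. (36)-(37)
chi_logdet = (np.log2(np.linalg.det(H.conj().T @ H).real)
              - np.log2(np.linalg.det(H.conj().T @ Pb.T @ Pb @ H).real)) / T

# Rate loss via the left singular vectors only, cf. (42)
L, _, _ = np.linalg.svd(H)              # L is R x R, columns = left singular vectors
Pphi = projector(phi, R)
M = Pphi @ L.conj().T @ Pb.T @ Pb @ L @ Pphi.T
chi_svd = -np.log2(np.linalg.det(M).real) / T

assert abs(chi_logdet - chi_svd) < 1e-9
```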
4) Statistical properties resulting from unitary invariance: Let H†H have full rank almost surely and be unitarily invariant³. Thereby, the matrix of left singular vectors of H, i.e. L, is Haar, see [9, Lemma 2.6]. Thus, P_φ L† P†_β P_β L P†_φ belongs to the Jacobi matrix ensemble, see Example 2. In other words, the rate loss χ^T_H(R, βR) becomes nothing but minus the log det of the Jacobi matrix ensemble normalized by T. We also refer the reader to [14] for a detailed study of the determinant of the Jacobi matrix ensemble. In particular, from [14, Proposition 2.4], the rate loss admits the explicit statistical characterization
χ^T_H(R, βR) ∼ −(1/T) Σ_{t=1}^T log2 ρ_t  (43)
where {ρ_1, · · · , ρ_T} are independent random variables and ρ_t ∼ Be((βR + 1 − t), (1 − β)R). Here, X ∼ Y indicates that the random variables X and Y are identically distributed. For a > 0 and b > 0, Be(a, b) denotes the Beta distribution with density
Be(x; a, b) = [Γ(a + b)/(Γ(a)Γ(b))] x^{a−1}(1 − x)^{b−1},  x > 0  (44)
where Γ is the gamma function.
COROLLARY 1 (UNIVERSAL RATE LOSS) Let H†H have full rank almost surely and be unitarily invariant. Define⁴
χ^T(R, R') ≜ (1/(T ln 2)) Σ_{t=1}^T Σ_{r=R'+1}^R 1/(r − t),  T ≤ R' ≤ R.  (45)
Then, we have
E[χ^T_H(R, βR)] = χ^T(R, βR).  (46)
Moreover, if R, T → ∞ with φ = T/R fixed, we have almost surely
χ^T_H(R, βR) → H(φ)/φ − (β/φ) H(φ/β).  (47)
PROOF 7 See Appendix I.
The name Universal Rate Loss refers to the fact that the results in Corollary 1 solely refer to the number of transmit and receive antennas before and after the variation. The ergodic rate loss has the additive property
χ^T(R, R') = χ^T(R, T) − χ^T(R', T),  T ≤ R' ≤ R.  (48)
Note that χ^T(R, T) equals the ergodic rate loss when we remove as many antennas as needed to obtain a square system. Furthermore, if R, T → ∞ with the ratio φ = T/R fixed, the first and the second terms of (48) converge to respectively the first and the second terms of (47).
We coin the limit (47) the binary entropy loss as it only involves the binary entropy function evaluated at the aspect ratios φ and β/φ of two channel matrices – the one before and the one after the removal of the antennas. In particular, for β = φ, i.e. we remove as many receive antennas as needed to obtain a square system, the binary entropy loss has the compact expression H(φ)/φ.
5) A symmetry property of the universal rate loss: We show a symmetry property of the universal rate loss in the case when the end system (after completion of the antenna removal) is square, i.e. β = φ. Let us start with an illustrative example. Consider two separate MIMO systems, one of dimensions 3 × 2 and one of dimensions 3 × 1. Let the antenna removal processes be 3 × 2 → 2 × 2 for the former system and 3 × 1 → 1 × 1 for the latter. Thus, in both cases two communication links are removed from the reference systems. Let the channel matrices of the reference systems fulfill the conditions stated in Corollary 1 (i.e. full rank and unitary invariance). Both removal processes lead to the same binary entropy loss, equal to 3H(1/3) = 3H(2/3) ≈ 2.75 bit.
REMARK 3 The function Tχ^T(R, R') (see (45)) satisfies the symmetry property
Tχ^T(R, T) = T'χ^{T'}(R, T'),  T < R  (49)
where T' ≜ R − T.
PROOF 8 See Appendix J.
Note that the expressions Tχ^T(R, T) and T'χ^{T'}(R, T') correspond to the ergodic rate losses for the antenna removal processes R × T → T × T and R × T' → T' × T', respectively. In both cases T × (R − T) communication links are removed from the reference systems. In other words, for R being fixed the ergodic rate loss Tχ^T(R, T) is a symmetric function of T with respect to T = R/2 (see Figure 2).
Since χ^{φR}(R, βR) ≤ χ^{φR}(R, φR), the symmetry property (49) implies that the maximum ergodic rate loss is attained when φ = β = 1/2.
REMARK 4 Let H†H have full rank almost surely and be unitarily invariant. Then, for φ ≤ β ≤ 1 we have
(1/2, 1/2) = arg max_{φ,β} E[χ^{φR}_H(R, βR)].  (50)
³ Provided H†H is unitarily invariant, when H has almost surely full rank, so does P_β H too, see Appendix I.
⁴ The sum over an empty index set is by definition zero.
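A short sketch of Corollary 1 and Remark 3, assuming NumPy: it evaluates the ergodic rate loss (45), the binary entropy approximation implied by (47), and the symmetry (49) for the small systems discussed in the text (numbers are illustrative).

```python
import numpy as np

def entropy2(p):
    """Binary entropy H(p), cf. (2)."""
    return 0.0 if p <= 0.0 or p >= 1.0 else -(1 - p) * np.log2(1 - p) - p * np.log2(p)

def chi_ergodic(T, R, R0):
    """Ergodic rate loss chi^T(R, R') of (45); the empty sum for R' = R is zero."""
    total = sum(1.0 / (r - t) for t in range(1, T + 1) for r in range(R0 + 1, R + 1))
    return total / (T * np.log(2))

# Warm-up example of Section V-D: a 4 x 2 system stripped of two receive antennas.
T, R = 2, 4
print(T * chi_ergodic(T, R, T))          # exact ergodic loss, about 3.37 bit
print(R * entropy2(T / R))               # binary entropy (large-system) approximation, 4 bit

# Symmetry of Remark 3 for R = 3: T*chi^T(R, T) with T = 1 and T' = R - T = 2 coincide.
assert abs(1 * chi_ergodic(1, 3, 1) - 2 * chi_ergodic(2, 3, 2)) < 1e-12
```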
B. Case (ii) – Removing transmit antennas
We remove a fraction (1 − β) of transmit antennas in (8) to obtain system (14). We constrain the reduction of transmit antennas with β ≥ 1/φ (φ = T/R) to ensure min(βT, R) = R. Reducing the number of transmit antennas results in a loss of mutual information equal to T I(γ; F^T_H) − βT I(γ; F^{βT}_{HP†_β}). Normalizing this loss with the number of transmit antennas of the reference system gives
I(γ; F^T_H) − βI(γ; F^{βT}_{HP†_β}).  (51)
Let H and HP†_β have both full rank almost surely. Then, we define the large SNR limit
χ̃^R_H(T, βT) ≜ lim_{γ→∞} I(γ; F^T_H) − βI(γ; F^{βT}_{HP†_β}),  β ≥ 1/φ.  (52)
Again the full rank assumption is important here. Otherwise the difference (52) may diverge as γ → ∞.
COROLLARY 2 Let H and HP†_β have both full rank almost surely. Then, we have
χ̃^R_H(T, βT) = −(1/T) log2 det( P_{1/φ} R P†_β P_β R† P†_{1/φ} )  (53)
where R is a T × T unitary matrix whose columns are the right singular vectors of H, see (41).
Note that the right-hand side in (53) is obtained by formally replacing φ with φ^{−1} in the right-hand side of (47). This follows from the identity
βI(γ; F^{βT}_{HP†_β}) = (1/φ) I(γ; F^R_{P_β H†}).  (54)
This substitution is valid for any result that refers to mutual
information, e.g. as in Corollary 1. However, it does not apply
in general to capacity related results, such as in Remark 2, due
to the placement of the projection operator on the transmitter
side.
C. The rate loss with antenna power profile
In this subsection we address the rate loss χTH for a channel
model that takes into consideration the power imbalance at the
transmitter and receiver sides:
H = ΛR H̃ΛT .
(55)
Here, the matrices ΛR ∈ CR×R and ΛT ∈ CT ×T are
diagonal, full-rank, and deterministic. The matrix ΛR (ΛT )
represents the power imbalance at receive (transmit) side.
We generalize Theorem 2 for the model (55) as (see Appendix H)
χ^T_H(R, βR) = (1/T) log2 [ det(P_φ L̃† Θ_1 L̃ P†_φ) / det(P_φ L̃† Θ_β L̃ P†_φ) ]  (56)
where Θ_β ≜ Λ_R† P†_β P_β Λ_R for β ≤ 1 and L̃ is a R × R unitary matrix whose columns are the left singular vectors of H̃, see (41). Note that the rate loss does not depend on the singular values of H̃. This property allows for obtaining a convenient expression for the ergodic rate loss E[χ^T_H(R, βR)] when H̃†H̃ is unitarily invariant, i.e. L̃ in (56) is Haar distributed.
COROLLARY 3 Let H be defined as in (55). Furthermore, let H̃†H̃ have full rank almost surely and be unitarily invariant. Moreover, let X_β ≜ P_β X where X is a R × T matrix with iid zero-mean complex Gaussian entries. Let D_β be the βR × βR diagonal matrix whose diagonal entries are the non-zero eigenvalues of Θ_β ≜ Λ_R† P†_β P_β Λ_R. Then, we have
E[χ^T_H(R, βR)] = E[ (1/T) log2 ( det(X†_1 D_1 X_1) / det(X†_β D_β X_β) ) ].  (57)
PROOF 9 See Appendix K.
The expectation in (57) can be simply computed by using the following result.
LEMMA 1 [5, Lemma 2] Let X be an n × m matrix with iid zero-mean complex Gaussian entries such that n > m. Let D be an n × n deterministic Hermitian positive-definite matrix whose jth eigenvalue is denoted by λ_j. Moreover, let Ω be the n × n Vandermonde matrix with (Ω)_{ij} = λ_i^{j−1} and Γ be the (n − m) × (n − m) principal submatrix of Ω. Then, we have
E[ln det X†DX] = (det Γ / det Ω) Σ_{i=1}^m det Ψ_i  (58)
where Ψ_i is an m × m matrix whose entries are
(Ψ_i)_{k,l} = ν_{n−m+k} λ_{n−m+k}^{n−m−1+l} − Σ_{d,q=1}^{n−m} ν_q (Γ^{−1})_{d,q} λ_{n−m+k}^{d−1} λ_q^{n−m−1+l}.  (59)
In this expression, ν_q = ψ(l) + ln λ_q if l = i, else ν_q = 1, with ψ(·) denoting the digamma function.
D. Further discussions based on numerical results
As a warm-up example, consider a 4 × 2 MIMO system that is stripped of two of its four receive antennas. For full-rank channel matrices that are unitarily invariant from left, Corollary 1 gives the exact high SNR limit of the ergodic loss, equal to 2χ^2(4, 2) ≈ 3.37 bit. The asymptotic loss (47) is 4H(2/4) = 4 bit. Note also that 2χ^2(4, 2) is the supremum of the mutual information loss over all SNRs. This is depicted in Figure 1 for a Gaussian channel.
We illustrate the universal rate loss and the tightness of the approximation provided by the binary entropy loss, i.e. RH(T/R), already for small system dimensions. To this end we consider three different channel models that are unitarily invariant from the left: (i) the channel matrix H = UΛ where U ∈ C^{R×T} is uniformly distributed over the manifold of complex R × T matrices such that U†U = I and Λ ∈ R^{T×T} is a positive diagonal matrix that represents the power imbalance at the transmitter. This is a typical channel model in the context of massive MIMO, i.e. in the regime T ≪ R. Here we point out that Λ does not affect the rate loss. Therefore for convenience we set Λ = I. (ii) the channel matrix H = X with the entries of X being zero-mean iid complex-valued Gaussian with finite variance; (iii) the channel matrix H = X_2 D X_1. Here X_1 ∈ C^{S×T} and X_2 ∈ C^{R×S} represent the propagation channel from the transmit antennas
to the scatterers and from the scatterers to the receive antennas, respectively, while the diagonal entries of the diagonal matrix D are the individual scattering coefficients of the scatterers. This random matrix ensemble models the channel under the assumption of propagation via one-bounce scattering only [18]. To fulfill the full-rank condition we restrict to the case S ≥ T. From Figure 2, we conclude that the binary entropy loss yields an accurate approximation even for small system dimensions.
Fig. 1: Ergodic mutual information (continuous lines) and ergodic multiplexing rate (dashed lines) versus the SNR of a zero-mean iid complex Gaussian MIMO channel with T = 2 transmit antennas and the number of receive antennas decreased from R = 4 (blue curves) to R = 2 (red curves).
Fig. 2: The maximal ergodic mutual information loss over the SNR range: The entries of X ∈ C^{4×T}, X_1 ∈ C^{S=4×T} and X_2 ∈ C^{4×S=4} are zero-mean iid complex Gaussian. The matrix U ∈ C^{4×T} is uniformly distributed over the manifold of complex 4 × T matrices. The S × S matrix D is positive diagonal. Its diagonal entries are iid and uniformly distributed.
VI. DEVIATION FROM LINEAR GROWTH
In this section we clarify the second misinterpretation underlined in Section I. Specifically, we analyze the variation of the multiplexing rate when either the number of receive or the number of transmit antennas varies while their maximum is kept fixed.
For a channel matrix having orthogonal columns, the linear growth of mutual information when the number of transmit or receive antennas varies is obvious. However, for a channel matrix with e.g. iid entries, a substantial crosstalk arises due to the lack of orthogonality of its columns. The effect of this crosstalk on mutual information is non-linear in the number of antennas.
The mutual information scales approximately linearly in the minimum of the numbers of transmit and receive antennas. For a tall rectangular channel matrix that becomes wider and wider, the mutual information can only grow approximately linearly until the matrix becomes square. The same holds for a wide rectangular channel matrix growing taller and taller. Therefore, we have to distinguish between two cases: (i) the number of receive antennas is smaller than the number of transmit antennas, i.e. a wide channel matrix, and (ii) the converse of (i), i.e. a tall channel matrix. Since case (ii) can be easily treated by replacing the channel matrix with its conjugate transpose, we restrict our investigations to case (i).
The linear growth cannot continue once the channel matrix has grown square. Thus, it makes sense to constrain the matrix of reference system (8) to be square, i.e. we assume that the channel matrix H in (8) is N × N, so that N = R = T.
The exact mutual information of the (rectangular) system (14) of size βN × N, β ≤ 1, is
N I(γ; F^N_{P_β H}).  (60)
The mutual information (60) scales approximately linearly with the number of receive antennas if it is close to
βN I(γ; F^N_H).  (61)
Thus, in the high SNR limit, the deviation from the linear growth normalized to N (the deviation from linear growth for short) is given by
∆L(β; F^N_H) ≜ lim_{γ→∞} I(γ; F^N_{P_β H}) − βI(γ; F^N_H)  (62)
= I_0(γ; F^N_{P_β H}) − βI_0(γ; F^N_H)  (63)
where H is assumed to have full rank almost surely. The full-rank assumption implies α^T_H = α^T_{P_β H}, which is necessary in the definition (62). Otherwise, (62) is divergent.
EXAMPLE 3 Let H be unitary. Then, we have
∆L(β; F^N_H) = 0.  (64)
A. The large-system limit consideration
The deviation from linear growth (63) differs from the quantity χ^T_H defined in (35) only by the factor β scaling the second term. Unlike χ^T_H, ∆L does depend on the singular values of the channel matrix. This makes the analysis somewhat intractable.
On the other hand, it is well-known that asymptotic results when the numbers of antennas grow large provide very good approximations already for systems with a dozen (or even fewer) antennas in practice. Thus, we can resort to the asymptotic regime in the number of antennas to study the deviation from linear growth. To that end, in this section we make use of the following underlying assumption:
ASSUMPTION 1 The channel matrix H has full rank almost surely. Furthermore, HH† is unitarily invariant, has a uniformly bounded spectral norm, and its empirical eigenvalue distribution converges almost surely as N → ∞. Moreover, ∆I(1; F_H) is finite.
We carry out the analysis on the basis of the LED function F_{P_β H}. Specifically, we consider
∆L(β; F_H) = I_0(γ; F_{P_β H}) − βI_0(γ; F_H).  (65)
When we interpret the asymptotic results in the numerical investigations we assume that
lim_{N→∞} E[I_0(γ; F^N_{P_β H})] = I_0(γ; F_{P_β H}),  β ≤ 1.  (66)
It is easy to show that the convergence (66) is a mild assumption for β < 1: as F_H is assumed to have a compact support, F_{P_β H} has a compact support too, see [19, Corollary 1.14]. Note that a compactly supported probability distribution can be uniquely characterized by its moments. This fact allows us to use the machinery provided in Proposition 1 in Appendix C. Specifically, sup_N E[∫ x^{−1} dF̃^N_{P_β H}(x)] < ∞ is sufficient for (66) to hold. Indeed this is a reasonable condition for β < 1 since
∫ (1/x) dF̃_{P_β H}(x),  0 < β < 1  (67)
is strictly increasing with β, see Remark 5.
EXAMPLE 4 Let the entries of H be iid with zero mean and variance σ²/N. Then, we have
∆L(β; F_H) = (β − 1) log2(1 − β)  (68)
where by convention 0 log2 0 = 0.
PROOF 10 See Appendix L.
In other words, at high SNR the normalized mutual information of a MIMO system of sufficiently large dimensions with zero-mean iid channel entries grows approximately linearly with the minimum of the numbers of transmit and receive antennas up to first order, and the deviation from the linear growth is close to (β − 1) log2(1 − β). Figure 3 illustrates this behavior.
Fig. 3: Ergodic multiplexing rate and corresponding linear growth (A) and (ergodic) deviation from linear growth (B) versus number of receive antennas βN. The entries of H ∈ C^{5×5} are iid Gaussian with zero mean and variance 1/5. The SNR is γ = 20 dB.
B. The S-transform formulation
The result in Example 4 can be obtained from previous capacity results, e.g. [9, Eq. (2.63)]. We obtained it as a special case of the following lemma.
LEMMA 2 Let H fulfill Assumption 1. Then, we have
∆L(β; F_H) = −β ∫_0^1 log2 [ S_H(−βz) / S_H(−z) ] dz.  (69)
PROOF 11 See Appendix M.
Alternatively, we may bypass the need for using the S-transform by invoking the following result:
REMARK 5 Let H fulfill Assumption 1. Furthermore, let P_t be an N-dimensional projector with 0 < t < 1. Then, we have
S_H(−t) = ∫ (1/x) dF̃_{P_t H}(x),  0 < t < 1.  (70)
PROOF 12 See Appendix N.
The result in (70) also provides a convenient means to calculate the deviation from linear growth in the large-system limit. The right-hand side of (70) is nothing but the asymptotic inverse spectral mean of the channel matrix P_t H.
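A Monte Carlo sketch of Example 4 (and hence of Lemma 2 for the iid ensemble), assuming NumPy: the deviation from linear growth (63) is evaluated directly from the nonzero eigenvalues of the two Gram matrices (the SNR cancels) and compared with (β − 1) log2(1 − β). As the text cautions, convergence for a square matrix is slow, so the agreement is only approximate; dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N, beta = 500, 0.6
m = int(beta * N)
H = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)  # iid, variance 1/N
Hb = H[:m, :]                                        # P_beta H: keep the first beta*N rows

# Deviation from linear growth (63): gamma cancels, so it can be read off the
# nonzero eigenvalues of the two Gram matrices directly.
eig_full = np.linalg.eigvalsh(H.conj().T @ H)
eig_red = np.linalg.eigvalsh(Hb @ Hb.conj().T)       # nonzero eigenvalues of (P_beta H)^dagger (P_beta H)
deltaL = (np.sum(np.log2(eig_red)) - beta * np.sum(np.log2(eig_full))) / N

print(deltaL, (beta - 1) * np.log2(1 - beta))        # Example 4 predicts (beta-1)*log2(1-beta)
```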
C. The universality related to the SNR range
Note that the difference I(γ; F_{P_β H}) − βI(γ; F_H) converges to ∆L(β; F_H) as the SNR tends to infinity, see (63). In Appendix O we show that this difference actually increases with SNR unless F_H is a Dirac distribution function. Thus, we have the following universal characterization over the whole SNR range.
REMARK 6 Let H fulfill Assumption 1. Then, we have
∆L(β, F_H) = sup_γ { I(γ; F_{P_β H}) − βI(γ; F_H) }.  (71)
PROOF 13 See Appendix O.
D. The additive property
We now draw the attention to another important property of the deviation from linear growth:
THEOREM 3 Let X and Y be independent C^{N×N} random matrices. Moreover, let X and Y fulfill Assumption 1. Then, we have
∆L(β; F_{XY}) = ∆L(β; F_X) + ∆L(β; F_Y).  (72)
PROOF 14 See Appendix P.
EXAMPLE 5 Consider a random matrix defined as
H = ∏_{m=1}^M A_m  (73)
where the N × N matrices A_m, m = 1, . . . , M, are independent and have iid entries with zero mean and variance σ²/N. Then we have almost surely
∆L(β; F_H) = M ∆L(β; F_{A_1})  (74)
= M(β − 1) log2(1 − β).  (75)
As mentioned previously, the crosstalk due to non-orthogonal columns in H affects the mutual information in a way that is non-linear in the number of antennas. Thus, it causes the deviation from linear growth. Let us be more precise here and (inspired by [20, Eq. (1)]) introduce the concept of crosstalk ratio:
CT_H ≜ lim_{N→∞} [ Σ_{i=1}^N Σ_{j<i} |h_i†h_j|² ] / [ Σ_{i=1}^N |h_i†h_i|² ].  (76)
Here h_i denotes the ith column of H. For example, for a unitary matrix H, we have CT_H = 0. As a second example, let the entries of H be iid complex Gaussian with zero mean and variance 1/N. Then, from (196) we get
CT_H = 1/2.  (77)
We next show that the crosstalk ratio has the same additive property as the deviation from linear growth.
REMARK 7 Let X and Y be independent C^{N×N} random matrices. Moreover, let X and Y fulfill Assumption 1. Then we have
CT_XY = CT_X + CT_Y.  (78)
PROOF 15 See Appendix Q.
VII. CONCLUSIONS
A variation of the number of antennas in a MIMO system affects the mutual information at asymptotically large SNR in the following way: If the minimum number of antennas at transmitter and receiver side stays unaltered, the change of mutual information depends only on the system dimensions and the matrix of left (or right) singular vectors of the initial channel matrix but not on its singular values. For channel matrices that are unitarily invariant from left (or right) this change of mutual information in the ergodic sense can be expressed with a simple analytic function of the system dimensions. Moreover, the large system limit of this expression involves only the binary entropy functions of the aspect ratios of the two varying channel matrices – the one before and the one after altering the number of antennas.
Mutual information grows only approximately linearly with the minimum of the system dimensions even at high SNR. This deviation from that linear growth, i.e. the error of the linear approximation, does depend on the singular values of the channel matrix. It can be quantified and has the following remarkable property in the large system limit: For certain factorizable MIMO channel matrices, the deviation is the sum of the deviations of the individual factors.
The results derived in this work for asymptotically large SNR are least upper bounds over the whole SNR range. This gives them a universal character.
Finally, a fundamental relation between mutual information and its affine approximation (the multiplexing rate) was unveiled. This relation can be conveniently described via the S-transform of free probability.
APPENDIX A
PRELIMINARIES
LEMMA 3 Let p ∈ [0, 1]. Then, we have
∫_0^p log2 [ (1 − z)/(p − z) ] dz = H(p).  (79)
PROOF 16 We first recast (79) into the equivalent identity
lim_{x→p} ∫_0^x log2 [ (1 − z)/(x − z) ] dz = H(p).  (80)
To prove (80), we first apply a variable substitution
∫_0^x log2 [ (1 − t)/(x − t) ] dt = x ∫_0^1 log2 [ (x^{−1} − z)/(1 − z) ] dz  (81)
and decompose the right hand side of (81) as
x ∫_0^1 log2(x^{−1} − z) dz − x ∫_0^1 log2(1 − z) dz.  (82)
Define u , log2 (x−1 −z) and v = z. Applying the integration
by part rule, we obtain for the first integral:
Z1
log2 x
−1
− z dz
1
uv|0
=
0
Z1
−
v du
(83)
lim (Ψn )−1 (z) = Ψ−1 (z),
n→∞
0
= x−1 H(x) − log2 e.
Furthermore, Ψn (z) is a strictly increasing homeomorphism
of (−∞, 0) onto (−α, 0) [22]. This implies that (see e.g. [23,
Proposition 0.1])
(84)
−α < z < 0.
(93)
This completes the proof.
Using (84), we compute the second integral:
Z1
Z1
log2 (1 − z) dz = lim
x→1
0
0
L EMMA 6 Consider a random matrix X and a projector P β .
Assume that X † X and P †β P β are asymptotically free. Then,
log2 (x−1 − z) dz = − log2 e. (85) we have
SXP † (z) = SX (βz).
(94)
β
This completes the proof.
L EMMA 4 [21] Let A and A + B be invertible and B have
rank 1. Furthermore let g , tr(BA−1 ) 6= −1. Then, we have
(A + B)
−1
1
A−1 BA−1 .
= A−1 −
g+1
(86)
L EMMA 5 [22, Lemma 2 & Lemma 4] Let F be a probability
distribution function with support in [0, ∞) and S its Stransform. Moreover, let F be not a Dirac distribution function.
Then, S is strictly decreasing on (−α, 0) with α , 1 − F(0).
In particular, we have
Z
−1
lim− S(z) =
x dF(x)
(87)
z→0
Z
1
lim + S(z) =
dF(x)
(88)
x
z→−α
P ROOF 18 The S-transform of P β reads [9, Example 2.32]
SP β (z) =
z+1
.
z+β
(95)
By invoking the identity [9, Theorem 2.32] and the asymptotic
freeness between X † X and P †β P β , we obtain
SXP † (z)
=
β
=
z+1
SP (βz)SX (βz)
z + 1/β β
SX (βz).
(96)
(97)
R EMARK 8 Let H = P β2 U P †β1 with U an N -dimensional
Haar unitary. Then, we have almost surely
SH (z) =
1 + β1 z
.
β2 + β1 z
(98)
where we use the convention 1/0 = ∞ in (88) when F(0) > 0.
T HEOREM 4 [22, Proposition 1] Let F be a probability
distribution function
R with support in (0, ∞) and S its Stransform. Then | log x| dF(x) is finite if, and only if,
R1
| log S(−z)| dz is finite. If either of these integrals is finite,
0
Z1
Z
log(x) dF(x) = −
log S(−z) dz.
(89)
0
T HEOREM 5 For n ∈ N+ , {1, 2, ...} let Fn be probability
distribution functions on [0, ∞). Furthermore let 1 − Fn (0) =
α > 0, ∀n ∈ N+ . Moreover let Sn denote the S-transform of
Fn . Then if Fn converges weakly to a probability distribution
function F as n → ∞, we have
lim Sn (z) = S(z),
n→∞
−α < z < 0
(90)
P ROOF 19 By invoking Lemma 6 to (95) we obtain (98).
A PPENDIX B
P ROOF OF T HEOREM 1
A. Proof of (25)
By definition, I(γ; FTH ) < ∞. Then, from identity [22,
Eq. (5)] we write
Z 1
I(γ; FTH ) = −
log2 (s)∂ΨT√γH (−s) ds
(99)
0
dΨT
H (x)
where ∂ΨTH (ω) , dx
.
x=ω
At this stage we point out two identities:
T
ΨT√γH (−1) + 1 = ΨTH (−γ) + 1 = ηH
(γ)
where S is the S-transform of F.
lim
x→0−
P ROOF 17 Let us consider the function, (see (6))
Z
zx
dFn (x), −∞ < z < 0.
Ψn (z) ,
1 − zx
(91)
zx
For z ∈ (−∞, 0), z → 1−zx
is bounded and continuous.
Hence, the weak convergence of Fn implies that
n
lim Ψ (z) = Ψ(z),
n→∞
−∞ < z < 0.
(92)
ΨT√
γH (x)
+ 1 = 1.
(100)
(101)
Now we apply the variable substitution z , ΨT√γH (−s) + 1
in the integral in (99). Notice that with this substitution the
upper and lower limits of this integral read (100) and (101),
respectively. As a result (99) is recast in the form
I(γ; FTH ) =
Z
1
T
ηH
<−1>
log2 −ΨT√γH (z − 1) dz
(102)
<−1>
with ΨTH
denoting the inverse of ΨTH . Then, by the
definition of the S-transform, see (7), we obtain
T
T
Z ηH
Z ηH
1−z
dz +
log2 ST√γH (z − 1) dz
I(γ; FTH ) =
log2
z
1
1
(103)
T
Z ηH
T
= H(ηH
)+
(104)
log2 ST√γH (z − 1) dz
1
T
= H(ηH
)−
T
1−ηH
Z
log2 ST√γH (−z) dz.
(105)
0
Finally, we obtain (25) by using the scaling property of the
S-transform [24, Lemma 4.2].
B. Proof of (26)
Let S̃TH be the S-transform of F̃TH . By using [9, Theorem
2.32] we write
z+1
T
STH (αH
z), −1 < z < 0.
(106)
S̃TH (z) =
T
z + 1/αH
Note that F̃TH is an empirical distribution function. Thus,
log2 (x) is absolutely integrable over it. We use Theorem 4
and Lemma 3 to complete the proof:
I0 (γ; FTH )
=
T
αH
log2 γ −
T
αH
Z1
log2 S̃TH (−x) dx
(107)
we have almost surely
lim I0 (γ; FTH ) = I0 (γ; FH ).
T →∞
Condition (111) is reasonable in practice. Otherwise the
power amplification per dimension of the MIMO system
explodes as its dimensions grow to infinity. One can show that
for rectangular and unitarily invariant channel matrices, the
condition (113) is reasonable too due to the strict decreasing
property of the function of β in (67). However, it might
not hold when the channel matrix is square. As an example,
consider a channel matrix H whose entries are iid with zero
mean and variance σ 2 /T . Then, condition (113) holds if
φ 6= 1, but is violated if φ = 1. Indeed the latter case turns
out critical for the “log det” convergence of the zero-mean
iid matrix ensemble, e.g. see [9], [25]. Nevertheless, both [26,
Proposition 2.2] and numerical evidence lead us to conjecture
that (114) holds when φ = 1 as well. Thus, we conclude that
the asymptotic convergence of the multiplexing rate, i.e. (114),
is a mild assumption in practice.
Proof of Proposition 1
For the sake of readability of the proof, whenever we use the
limit operator indicating that T tends to infinity, we implicitly
assume that the ratio φ = T /R is fixed.
For convenience we define
Y , I + γH † H.
0
T
T
= αH
log2 γ − αH
Z1
log2
0
1−x
T
ST (−αH
x) dx
T −x H
1/αH
T
= αH
log2 γ −
Z
log2
T
αH
−x T
S (−x) dx
1−x H
I(γ; FTH )
(109)
T
T
T
= αH
log2 γ + H(αH
)−
log2 STH (−x) dx.
(110)
0
A PPENDIX C
O N THE CONVERGENCE OF M UTUAL INFORMATION AND
M ULTIPLEXING R ATE
In this section we provide some sufficient conditions that
guarantee the convergence of the mutual information (10) and
multiplexing rate (see (19)) in the large system limit.
P ROPOSITION 1 As R, T → ∞ with the ratio φ , T /R fixed
let H † H have a LED FH . Furthermore, let
Z
sup x dFTH (x) < ∞ a.s..
(111)
T
Then we have almost surely
lim I(γ; FTH ) = I(γ; FH ).
T →∞
Moreover if in addition
Z
1
dF̃TH (x) < ∞ a.s.
sup
x
T
Z0
=−
log2 ST√Y (z) dz.
(116)
−1
0
ZαH
(115)
By Theorem 4 we have
(108)
αT
H
(114)
(112)
(113)
The function ST√Y is strictly decreasing on (−1, 0) if, and only
if, FTH is not a Dirac distribution function, see Lemma 5. If
FTH is a Dirac distribution function then ST√Y is a constant
function. Without loss of generality, we can assume that FTH is
not a Dirac distribution function. Then, by invoking Lemma 5
again we have
−1
1
1
tr(Y )
< ST√Y (z) < tr(Y −1 ), −1 < z < 0.
T
T
(117)
For convenience we define the random variable
Z
T
M , sup x dFTH (x) s.t. φ = .
(118)
R
T
Since the upper bound in (117) is smaller than one we have
| log2 ST√Y (z)| = − log2 ST√Y (z),
1
< log2 tr(Y )
T
≤ log2 (1 + γM ).
−1 < z < 0
(119)
(120)
(121)
Because of (121), we can apply Lebesgue’s dominated convergence theorem [27, Theorem 10.21]:
lim I(γ; FTH ) = −
Z0
T →∞
−1
lim log2 ST√Y (z) dz.
T →∞
(122)
By invoking Theorem 5 we complete the proof of (111):
lim log2 ST√Y (z) = log2 lim ST√Y (z)
(123)
=
(124)
T →∞
T →∞
log2 S√Y (z).
I0 (γ, FTH )
where the result (131) follows from the identity (84). We
obtain (31) from (25) with (131) inserted in (24). Moreover,
by the definition of the S-transform we have
β1 (1 − z)Ψ2H (z) + (1 − (β1 + β2 )z)ΨH (z) − β2 z = 0. (132)
To prove (114), we use the same arguments for
as for I(γ, FTH ). In particular, by invoking Lemma 5 again
we can write
Z
−1
Z
1
x dF̃TH (x)
< S̃TH (z) <
dF̃TH (x), −1 < z < 0
x
(125)
with S̃TH denoting the S-transform of F̃TH . Unlike (117), the
right-most integral is not bounded in general, so we need the
additional assumption (113). This completes the proof.
Note that 1 + ΨH (−γ) = ηH (γ). Thus, (132) has two
solutions for ηH (γ). Only one fulfills the properties of ηH (γ)
in [9, pp. 41]. Specifically, from the property ηH (γ) → 1 as
γ → 0 we concludeR that (32) is this solution. Finally it is also
1
easy to show that R 0 | log2 S̃H (−z)| dz is finite in this case.
This implies that | log(x)|dF̃H (x) is finite too. Thus, the
multiplexing rate is obtained by replacing the term (1 − ηH )
in (31) with αH , which leads to (33).
A PPENDIX D
P ROOF OF E XAMPLE 1
A PPENDIX F
P ROOF OF R EMARK 1
With a convenient re-parameterization of [13, Eq. (19)] we
write
N
Y
ρn
.
(126)
SH (z) =
z
+
ρn
n=1
We can write the integral terms in (127) as
Z 1−ηH
z
log2 (1 −
) dz =
ρn
0
Z 1
1 − ηH
ρn
log2
+
− z) dz
log2 (
ρn
1 − ηH
0
d{I(γ; FTH ) − I(γ; FTP β H )}
dγ
=
T
T
ηP
(γ) − ηH
(γ)
βH
γ ln 2
(127)
where the equality holds when β = 1. To prove (134) it is
sufficient to consider the removal of a single receive antenna,
i.e. β = (R − 1)/R. It is immediate that
H † H = H † P †β P β H + h†R hR
(128)
for n ∈ [1, N ]. By invoking the result in (84) we obtain (28).
From the linearity
R 1 property of the Lebesgue integral, it is
easy Rto show that 0 | log2 S̃H (−z)dz| is finite, which implies
that | log(x)|dF̃H (x) is finite too, due to Theorem 4. Thus,
the multiplexing rate is obtained by replacing the term (1−ηH )
in (28) with αH (due to Theorem 1). This leads to (29).
Finally, we note that if αH < 1 the S-transform SH (z)
diverges as z → (−αH ), see Lemma 5. Thus, from (126)
the unique solution of αH is αH = min(1, ρ1 , ρ2 , . . . , ρN ).
(135)
with hR ∈ C1×T representing the Rth row of H. Then (134)
follows directly from Lemma 4 in Appendix A.
A PPENDIX G
P ROOF OF R EMARK 2
We decompose the capacity expression in (39) as
C(γ, FTP β H ) = I0 (γ; FTP β H √Q∗ ) + ∆I(γ; FTP β H √Q∗ )
(136)
with Q? denoting the capacity achieving covariance matrix.
We define
C0 (γ, FTP β H ) , max I0 (γ; FTP β H √Q ).
Recall (98):
1 + β1 z
.
β2 + β1 z
(137)
Q≥0
tr(Q)=T
A PPENDIX E
P ROOF OF E XAMPLE 2
SH (z) =
. (133)
Hence, to prove Remark 1 we simply need to show that
n
o
tr (I + γH † P †β P β H)−1 − (I + γH † H)−1 ≥ 0 (134)
From Theorem 1 we have
I(γ; FH ) = H(ηH ) + (1 − ηH ) log2 γ
N Z 1−ηH
X
z
) dz.
+
log2 (1 −
ρ
n
n=1 0
We first point out the relationship [9]
(129)
Moreover, notice that αH = 1 − FH (0) = min(1, β2 /β1 ). For
convenience let a , 1 − ηH (γ) < αH . Then, we have
Z a
Z 1
1 − β1 at
log2
log2 SH (−z) dz = a
dt
(130)
β2 − β1 at
0
0
β1
H(β1 a) β2
=
− H
a
(131)
β1
β1
β2
In particular, by the definitions in (39) and (137) we have
C0 (γ, FTP β H ) ≥ I0 (γ; FTP β H √Q∗ ). Hence, we have
lim C0 (γ; FTP β H ) − C(γ; FTP β H ) ≥ 0.
(138)
γ→∞
Since H † P †β P β H has almost surely full rank, we have
T
= α√
and thereby
Q
T
√
αP
βH Q
T
T
√
I0 (γ; FTP β H √Q ) = α√
Q log2 γ+α Q
Z
T
log2 x dF̃P β H √Q (x).
(139)
For a sufficiently large SNR a full-rank matrix Q maximizes
(139). Therefore, to prove the result we can assume without
loss of generality that Q has full rank. Doing so, we have
I0 (γ; FTP β H √Q )
1
= log2 γ + log2 det H † P †β P β H
T
1
+ log2 det Q.
(140)
T
Due to the constraint tr(Q) = T , the identity operator
maximizes (140). Hence, from (138) we have
lim I0 (γ; FTP β H ) − C(γ; FTP β H ) ≥ 0.
γ→∞
(141)
On the other hand we have
I0 (γ; FTP β H ) < I(γ; FTP β H ) ≤ C(γ; FTP β H ).
(142)
Thus (141) must be zero. This completes the proof.
A PPENDIX I
P ROOF OF C OROLLARY 1
We first show that provided H † H is unitarily invariant,
when H † H has full rank almost surely, so does H † P †β P β H
too for φ ≤ β: From (147) we have
det H † P †β P β H = det Σ2 det P φ L† P †β P β LP †φ
(149)
where Σ is a T ×T diagonal matrix whose diagonal entries are
the positive singular values of H. By the unitary invariance
assumption, P φ L† P †β P β LP †φ is a Jacobi matrix ensemble
with a positive determinant for φ ≤ β [14]. Thereby, (149) is
positive.
Given x ∼ Be(a, b) we have E[ln x] = ψ(a) − ψ(a + b)
where ψ(·) denotes the digamma function. For natural arguments, the digamma function can be expressed as
n−1
X
1
.
l
A PPENDIX H
P ROOF OF T HEOREM 2
ψ(n) = ψ(1) +
We prove (56) which is a generalization of Theorem 2. We
make use of (55) to write
Hence, from (43) we can write the ergodic rate loss as
l=1
(150)
T
†
det H † P †β P β H = det Λ†T ΛT det H̃ Θβ H̃
(143)
E[χTH (R, βR)] = −
where Θβ , ΛR † P †β P β ΛR for β ≤ 1. Hence, from (37) the
rate loss reads as
det H̃ Θ1 H̃
1
log2
.
†
T
det H̃ Θβ H̃
T
1 X
[ψ(R + 1 − t) − ψ(βR + 1 − t)]
T ln 2 t=1
(152)
"
#
βR−t
T
R−t
X 1
1 X X1
−
(153)
=
T ln 2 t=1 r=1 r
r
r=1
(144)
To simplify this expression, we consider the singular value
decomposition of H̃
†
H̃ = L̃[Σ|0] R̃
H̃ =L̃P †φ ΣR̃.
(146)
For notational compactness, let us define Z β
,
†
†
P φ L̃ Θβ L̃P φ and A , ΣR̃. Thereby, we can write
†
H̃ Θβ H̃ = A† Z β A. Note that A† Z β A and Z β AA† have
the same eigenvalues. Thus, we have
†
det H̃ Θβ H̃ = det Σ2 det Z β .
T
=
(145)
where L̃ and R̃ are respectively R × R and T × T unitary
matrices, Σ is a T × T positive diagonal matrix and 0 is a
(R − T ) × T zero matrix. Remark that we can actually write
(145) as
(147)
(151)
=
†
χTH (R, βR) =
1 X
E[ln ρt ]
T ln 2 t=1
1 X
T ln 2 t=1
R−t
X
r=βR−t+1
1
.
r
(154)
This completes the derivation of (46).
As regards to derivation of (47), we first note the almost sure
convergence of the limit [14, Theorem 3.6 and Eq. (4.23)]
1
log2 det P φ L† P †β P β LP †φ =
T →∞ T
Z
lim
log2 (x) dFP β U P † (x).
φ
(155)
Using (33) we express this limit in terms of binary entropy
function:
Z
1
β
φ
log2 (x) dFP β U P † (x) = − H(φ) + H
. (156)
φ
φ
φ
β
This completes the derivation of (47).
We complete the derivation of (56) by plugging (147) in (144):
χTH (R, βR) =
det Z 1
1
log2
.
T
det Z β
(148)
Note also that Z 1 = I for ΛR = I. This completes the proof
of Theorem 2.
A PPENDIX J
P ROOF OF R EMARK 3
It is sufficient to prove the result for (R − T ) <PT . For the
p
sake of notational compactness, we define hp , l=1 1l and
g(R, T ) , ln(2)T χT (R, T ). Then, from (153) we write
g(R, T ) =
=
T
X
hR−t −
t=1
R−T
X
hT −t
(157)
t=1
T
X
hR−t +
t=1
−
T
X
A PPENDIX L
S OLUTION OF E XAMPLE 4
hR−t
t=(R−T )+1
2T
−R
X
T
X
hT −t −
hT −t .
(158)
t=2T −R+1
t=1
Notice that
T
X
hR−t =
hT −t
(159)
A PPENDIX M
P ROOF OF L EMMA 2
t=1
t=(R−T )+1
T
X
2T
−R
X
Note that we do not assume that H has Gaussian entries.
However it is well known that for any distribution of the entries
of H, the distribution function FN
P β H converges weakly and
almost surely to the Marc̆enko-Pastur law. In other words,
we get the same asymtotic results regardless of whether we
restrict the entries of H to Gaussian or not. Thus, without
loss of generality we can assume that the entries of H are
Gaussian, so that HH † is unitarily invariant. Doing so we
have SH (z) = (1 + z)−1 [9]. Then, we immediately obtain
(68) from (33).
hT −t =
t=2T −R+1
R−T
X
h(R−T )−t .
(160)
t=1
We have αP β H = β. Thus
I0 (γ; FP β H ) = βI0 (γ; F̃P β H ) = βI0 (γ; FH † P † ).
Thereby, we get
(168)
β
g(R, T ) =
R−T
X
hR−t −
R−T
X
t=1
Furthermore, with Lemma 6 we have
h(R−T )−t
(161)
SH † P † (z) = SH † (βz) = SH (βz).
t=1
= g(R, R − T ).
(162)
In the sequel we first show that
This completes the proof.
(170)
where S̃P β H is the S-transform of F̃P β H . To do so, it is
R1
sufficient to show that |log2 SH (−z)| dz < ∞. Since FH
0
(163)
has a compact support, I(γ; FH ) is finite. Now we show that
log x is absolutely integrable over FH if, and only if, I(1; FH )
and ∆I(1; FH ) are finite [22]:
where U is a R × R unitary matrix whose columns are the
left singular vectors of the Gaussian random matrix X. Since
X † X is unitarily invariant U is Haar distributed. The matrix
of the left singular vectors of H̃, i.e. L̃, is Haar distributed too
†
as H̃ H̃ is unitarily invariant. Thereby, from (56) and (163)
we have
(164)
Thereby, we have
which completes the proof.
Z1
|log2 (x)| dFH (x) =
0
1
dFH (x)
log2
x
0
Z∞
+
log2 (x) dFH (x).
(171)
Thus, we have
Z1
1
dFH (x) < ∞ ⇐⇒ ∆I(1; FH ) < ∞, (172)
log2
x
0
(165)
where U β is a R × R unitary matrix. Since X ∼ U β X, we
have
X † Θβ X ∼ X † P †β D β P β X.
(166)
#
"
1
det X † D 1 X
T
E[χH (R, βR)] = E log2
T
det X † P †β D β P β X
Z∞
1
Note that the rank of Θβ is βR. Thus, we can consider the
eigenvalue decomposition
Θβ = U †β P †β D β P β U β
0
0
Following the same line of argumentation as used to obtain
(148) we get
1
det X † Θ1 X
log2
.
T
det X † Θβ X
|log2 SH (−βz)| dz < ∞
log2 S̃P β H (−z) dz =
A PPENDIX K
P ROOF OF C OROLLARY 3
χTH (R, βR) ∼
Z1
Z1
det P φ U † Θ1 U P †φ
det X † Θ1 X
=
log
log2
2
det X † Θβ X
det P φ U † Θβ U P †φ
(169)
β
Z∞
log2 (x) dFH (x) < ∞ ⇐⇒ I(1; FH ) < ∞.
1
with ⇐⇒ implying ‘’if, and only if”. Hence (171) is finite.
R1
Due to Theorem 4 this implies that |log2 SH (−z)| dz is
0
(167)
(173)
finite too.
By invoking Theorem 4, (168) and (169) we obtain
Z 1
I0 (γ; FP β H ) = β log2 γ − β
log2 SH (−βz) dz. (174)
0
Due to (170), it follows from the linearity property of the
Lebesgue integral that
∆L(β; FH ) = I0 (γ; FP β H ) − βI0 (γ; FH )
Z 1
SH (−βz)
dz.
= −β
log2
SH (−z)
0
Then, by using (178) and (182) we obtain
ηH † P † = lim S√Y β (z)
(175)
z→−1+
β
(176)
(187)
= lim + S√Y 1 (βz)
(188)
=
(189)
z→−1
S√Y 1 (−β)
,
0<β<1
This completes the proof.
which is strictly increasing with β, see Lemma 5. This
completes the proof.
A PPENDIX N
P ROOF OF R EMARK 5
Invoking Lemma 6 we can write
SH (−t) = SH † (−t) = lim SH † P † (z).
z→−1+
t
A PPENDIX P
P ROOF OF T HEOREM 3
(177)
Since H has almost surely full rank, αHP † = 1, so that
t
F̃P t H = FH † P † . Then from Lemma 5 we have
t
Z
1
lim + SH † P † (z) =
dFH † P † (x).
(178)
t
t
x
z→−1
The matrices XX † , Y Y † and P †β P β are asymptotically
free [28]. Then, from Lemma 2 and the linearity property of
the Lebesgue integral we have
Z1
∆L(β; FXY ) = −β
This completes the proof.
log2
SX (−βz)SY (−βz)
dz (190)
SX (−z)SY (−z)
0
= ∆L(β; FX ) + ∆L(β; FY ).
(191)
A PPENDIX O
P ROOF OF R EMARK 6
A PPENDIX Q
P ROOF OF R EMARK 7
For the sake of notational simplicity we introduce
Y β , I + γP β HH † P †β
(179)
= P β (I + γHH † )P †β
(180)
= P β Y 1 P †β .
(181)
It follows that Y 1 is unitarily invariant since HH † is.
Furthermore, since HH † has a compactly support LED so
does Y 1 . Thus Y 1 is asymptotically free of P †β P β [28]. Then,
with Lemma 6 we have in the limit N → ∞
S√Y β (z) = S√Y 1 (βz).
For an N × N matrix A, we define
φ(A) , lim
N →∞
1
tr(A)
N
whenever the limit exists. Since XX † and Y Y † are asymptotically free, we have (see [29, Eq. (120)])
φ(X † Y † Y X) = φ(X † X)φ(Y † Y )
(182)
φ((X † Y † Y X)2 ) = φ(X † X)2 φ((Y † Y )2 )
Here we note that S√Y β (z) is strictly decreasing on (−1, 0)
if, and only if, F√Y β is not a Dirac distribution function, see
Lemma 5.
We recall the following property of ηH (γ) see (23) [9]:
+ φ(Y † Y )2 φ((X † X)2 )
1 − ηP β H − β(1 − ηH )
d{I(γ; FP β H ) − βI(γ; FH )}
=
,
dγ
γ ln 2
(183)
where for convenience ηH is short for ηH (γ). Hence, in order
to prove the remark it is sufficient to show that
(1 − β) + βηH − ηP β H ≥ 0
− φ(X † X)2 φ(Y † Y )2 .
lim (A† A)ii → φ(A† A), ∀i.
N →∞
(185)
Thus, the right-hand side of (184) is equal to β(ηH −ηH † P † ).
β
Therefore we are left with proving ηH ≥ ηH † P † . Firstly,
β
remark that
Z
1
dF√Y β (x).
(186)
ηH † P † =
β
x
(194)
(195)
Inserting (195) in the definition of the crosstalk ratio in (76),
we get for A ∈ {X, Y , XY } that
(184)
β
(193)
Furthermore, from [30, Theorem 2.1] for A ∈ {X, Y , XY }
we have almost surely
CTA =
where the equality holds when β = 1. Furthermore, by using
[9, Lemma 2.26] we have
ηP β H = (1 − β) + βηH † P † .
(192)
φ((A† A)2 ) 1
− .
2
2φ(A† A)2
(196)
We complete the proof by plugging (193) and (194) in (196)
for A = XY :
CTXY =
φ(X † X)2 φ((Y † Y )2 ) + φ(Y † Y )2 φ((X † X)2 )
−1
2φ(X † X)2 φ(Y † Y )2
(197)
= CTX + CTY .
(198)
REFERENCES
[1] S. Shamai (Shitz) and S. Verdú, “The effect of frequency-flat fading on
the spectral efficiency of CDMA,” IEEE Transactions on Information
Theory, vol. 47, no. 4, pp. 1302–1327, May 2001.
[2] S. Verdú and S. Shamai (Shitz), “Spectral efficiency of CDMA with
random spreading,” IEEE Transactions on Information Theory, vol. 45,
no. 2, pp. 622–640, March 1999.
[3] A. M. Tulino, A. Lozano, and S. Verdú, “Impact of antenna correlation on the capacity of multiantenna channels,” IEEE Transactions on
Information Theory, vol. 51, no. 7, pp. 2491– 2509, July 2005.
[4] D. Tse and P. Viswanath, Fundamentals of Wireless Communication.
Cambridge University Press, 2005.
[5] A. Lozano, A. M. Tulino, and S. Verdú, “High-SNR power offset
in multiantenna communication,” IEEE Transactions on Information
Theory, vol. 51, no. 12, pp. 4134–4151, Dec. 2005.
[6] J. Lee and N. Jindal, “High SNR analysis for MIMO broadcast channels:
Dirty paper coding versus linear precoding,” IEEE Transactions on
Information Theory, vol. 53, no. 12, pp. 4787–4792, Dec 2007.
[7] Y. Chen, A. Goldsmith, and Y. Eldar, “Backing off from infinity:
Performance bounds via concentration of spectral measure for random
MIMO channels,” IEEE Transactions on Information Theory, vol. 61,
no. 1, pp. 366–387, Jan 2015.
[8] P. Deift and D. Gioev, Random Matrix Theory: Invariant Ensembles and
Universality. American Mathematical Society, 2009, vol. 18.
[9] A. M. Tulino and S. Verdú, Random Matrix Theory and Wireless
Communications. Now Publishers Inc., June 2004, vol. 1, no. 1.
[10] H. Bercovici and D. Voiculescu, “Free convolution of measures with
unbounded supports,” Indiana University Mathematics Journal, vol.
42(3), pp. 733–773, 1993.
[11] T. M. Cover and J. A. Thomas, Elements of Information Theory. John
Wiley & Sons, New York, 1991.
[12] E. Telatar, “Capacity of multi-antenna Gaussian channels,” European
transactions on telecommunications, vol. 10, pp. 585–595, 1999.
[13] R. R. Müller, “On the asymptotic eigenvalue distribution of concatenated
vector-valued fading channels,” IEEE Transactions on Information Theory, vol. 48, no. 7, pp. 2086–2091, July 2002.
[14] A. Rouault, “Asymptotic behavior of random determinants in the
Laguerre, Gram and Jacobi ensembles,” Latin American Journal of
Probability and Mathematical Statistics (ALEA), 3, pp. 181–230, 2007.
[15] A. Edelman and B. D. Sutton, “The beta-Jacobi matrix model, the CS
decomposition, and generalized singular value problems,” Foundations
of Computational Mathematics, vol. 8.2, pp. 259–285, July 2008.
[16] R. Dar, M. Feder, and M. Shtaif, “The Jacobi MIMO channel,” IEEE
Transactions on Information Theory, vol. 59, no. 4, pp. 2426–2441,
March 2013.
[17] A. Karadimitrakis, A. Moustakas, and P. Vivo, “Outage capacity for
the optical MIMO channel,” IEEE Transactions on Information Theory,
vol. 60, no. 7, pp. 4370–4382, July 2014.
[18] R. R. Müller, “A random matrix model for communication via antenna
arrays,” IEEE Transactions on Information Theory, vol. 48, no. 9, pp.
2495–2506, Sep. 2002.
[19] A. Nica and R. Speicher, “On the multiplication of free N-tuples of
noncommutative random variables,” American Journal of Mathematics,
pp. 799–837, 1996.
[20] A. Litwin-Kumar, K. D. Harris, R. Axel, H. Sompolinsky, and L. F.
Abbott, “Optimal degrees of synaptic connectivity,” Neuron, vol. 93,
no. 5, pp. 1153–1164, 2017.
[21] K. S. Miller, “On the inverse of the sum of matrices,” Mathematics
Magazine, vol. 54, no. 2, pp. 67–72, 1981.
[22] U. Haagerup and S. Möller, “The law of large numbers for the free
multiplicative convolution,” Operator Algebra and Dynamics. Springer
Proceedings in Mathematics & Statistics, vol. 58, pp. 157–186, 2013.
[23] S. I. Resnick, Extreme Values, Regular Variation, and Point Processes.
Springer, 2007.
[24] R. Couillet and M. Debbah, Random Matrix Methods for Wireless
Communications. Cambridge University Press, 2011.
[25] D. Jonsson, “Some limit theorems for the eigenvalues of a sample
covariance matrix,” Journal of Multivariate Analysis, vol. 12, no. 1, pp.
1–38, 1982.
[26] T. Tao and V. Vu, “Random matrices: universality of ESDs and the
circular law,” The Annals of Probability, vol. 38, pp. 2023– 2065, 2010.
[27] A. Browder, Mathematical Analysis: An Introduction. New York: Springer-Verlag, 1996.
[28] F. Hiai and D. Petz, The Semicircle Law, Free Random Variables and
Entropy (Mathematical Surveys & Monographs). Boston, MA, USA:
American Mathematical Society, 2006.
[29] R. R. Müller, G. Alfano, B. M. Zaidel, and R. de Miguel, “Applications of large random matrices in communications engineering,” arXiv
preprint arXiv:1310.5479, 2013.
[30] B. Cakmak, “Random matrices for information processing–a democratic
vision,” Ph.D. dissertation, Aalborg Universitetsforlag, 2016.
Burak Çakmak was born in Istanbul, Turkey, 1986. He received the B.Eng.
degree from Uludağ University, Turkey in 2009, M.Sc. degree from Norwegian
University of Science and Technology, Norway in 2012 and Ph.D degree from
Aalborg University, Denmark in 2017. Dr. Çakmak is a postdoctoral researcher
at the Department of Computer Science, Technical University of Berlin,
Germany. His research interests include random matrix theory, communication
theory, statistical physics of disordered systems, machine learning and Bayesian
inference.
Ralf R. Müller (S’96–M’03–SM’05) was born in Schwabach, Germany,
1970. He received the Dipl.-Ing. and Dr.-Ing. degree with distinction from
Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg in 1996 and 1999,
respectively. From 2000 to 2004, he directed a research group at The
Telecommunications Research Center Vienna in Austria and taught as an
adjunct professor at TU Wien. In 2005, he was appointed full professor at
the Department of Electronics and Telecommunications at the Norwegian
University of Science and Technology in Trondheim, Norway. In 2013, he
joined the Institute for Digital Communications at FAU Erlangen-Nürnberg
in Erlangen, Germany. He held visiting appointments at Princeton University,
US, Institut Eurécom, France, University of Melbourne, Australia, University
of Oulu, Finland, National University of Singapore, Babes-Bolyai University,
Cluj-Napoca, Romania, Kyoto University, Japan, FAU Erlangen-Nürnberg,
Germany, and TU München, Germany. Prof. Müller received the Leonard G.
Abraham Prize (jointly with Sergio Verdú) for the paper “Design and analysis
of low-complexity interference mitigation on vector channels” from the IEEE
Communications Society. He was presented awards for his dissertation “Power
and bandwidth efficiency of multiuser systems with random spreading”
by the Vodafone Foundation for Mobile Communications and the German
Information Technology Society (ITG). Moreover, he received the ITG award
for the paper “A random matrix model for communication via antenna
arrays” as well as the Philipp-Reis Award (jointly with Robert Fischer). Prof.
Müller served as an associate editor for the IEEE TRANSACTIONS ON
INFORMATION THEORY from 2003 to 2006 and as an executive editor
for the IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS from
2014 to 2016.
Bernard H. Fleury (M’97–SM’99) received the Diplomas in Electrical
Engineering and in Mathematics in 1978 and 1990 respectively and the
Ph.D. Degree in Electrical Engineering in 1990 from the Swiss Federal
Institute of Technology Zurich (ETHZ), Switzerland. Since 1997, he has been
with the Department of Electronic Systems, Aalborg University, Denmark,
as a Professor of Communication Theory. From 2000 till 2014 he was
Head of Section, first of the Digital Signal Processing Section and later of
the Navigation and Communications Section. From 2006 to 2009, he was
partly affiliated as a Key Researcher with the Telecommunications Research
Center Vienna (ftw.), Austria. During 1978–1985 and 1992–1996, he was a
Teaching Assistant and a Senior Research Associate, respectively, with the
Communication Technology Laboratory, ETHZ. Between 1988 and 1992, he
was a Research Assistant with the Statistical Seminar at ETHZ. Prof. Fleury's
research interests cover numerous aspects within communication theory, signal
processing, and machine learning, mainly for wireless communication systems
and networks. His current scientific activities include stochastic modeling
and estimation of the radio channel, especially for large systems (operating
in large bandwidths, equipped with large antenna arrays, etc.) deployed in
harsh conditions (e.g. in highly time-varying environments); iterative message-passing processing (with focus on the design of efficient feasible architectures
for wireless receivers); localization techniques in wireless terrestrial systems;
and radar signal processing. Prof. Fleury has authored and coauthored more
than 150 publications and is co-inventor of 6 filed or published patents in
these areas. He has developed, with his staff, a high-resolution method for the
estimation of radio channel parameters that has found a wide application and
has inspired similar estimation techniques both in academia and in industry.
One-pass Person Re-identification by
Sketch Online Discriminant Analysis
arXiv:1711.03368v1 [] 9 Nov 2017
Wei-Hong Li, Zhuowei Zhong, and Wei-Shi Zheng∗
Abstract—Person re-identification (re-id) is to match people
across disjoint camera views in a multi-camera system, and re-id
has been an important technology applied in smart city in recent
years. However, the majority of existing person re-id methods
are not designed for processing sequential data in an online way.
This ignores the real-world scenario that person images detected
from a multi-camera system arrive sequentially. While there are a few works discussing online re-id, most of them require considerable storage of all data samples that have ever been observed, and this could be unrealistic for processing data from a large camera network. In this work, we present a one-pass person re-id model that adapts the re-id model based on each newly observed sample, and no passed data are directly used for each update. More specifically, we develop a Sketch online
Discriminant Analysis (SoDA) by embedding sketch processing
into Fisher discriminant analysis (FDA). SoDA can efficiently
keep the main data variations of all passed samples in a low rank
matrix when processing sequential data samples, and estimate the
approximate within-class variance (i.e. within-class covariance
matrix) from the sketch data information. We provide theoretical
analysis on the effect of the estimated approximate within-class
covariance matrix. In particular, we derive upper and lower
bounds on the Fisher discriminant score (i.e. the quotient between
between-class variation and within-class variation after feature
transformation) in order to investigate how the optimal feature
transformation learned by SoDA sequentially approximates the
offline FDA that is learned on all observed data. Extensive
experimental results have shown the effectiveness of our SoDA
and empirically support our theoretical analysis.
Index Terms—Online learning, Person re-identification, Discriminant feature extraction
I. INTRODUCTION
Person re-identification (re-id) [51], [1], [13], [22], [31],
[20], [54] is crucially important for successfully tracking
people in a large camera network. It is to match the same
person’s images captured at non-overlapping camera views at
different time. Person re-id by visual matching is inherently
challenging because of the existence of many visually similar
persons and dramatic appearance changes of the same person
caused by the serious cross-camera-view variations such as
illumination, viewpoint, occlusions and background clutter.
Recently, a large number of works [22], [23], [3], [16], [27],
[30], [35], [44], [53] have been reported to solve this challenge.
Wei-Hong Li is with the School of Electronics and Information
Technology, Sun Yat-sen University, Guangzhou, China. E-mail: liweih3@mail2.sysu.edu.cn
Zhuowei Zhong is with the School of Data and Computer Science, Sun
Yat-sen University, Guangzhou, China. E-mail: zhongzhw6@gmail.com
Wei-Shi Zheng is with the School of Data and Computer
Science, Sun Yat-sen University, Guangzhou, China. E-mail:
wszheng@ieee.org/zhwshi@mail.sysu.edu.cn
* Corresponding author.
However, it is largely unsolved to perform online learning
for person re-identification, since most person re-id models
except [25], [39], [29], [37] are only suitable for offline
learning. On one hand, the offline learning mode cannot enable
a real-time update of the person re-id model when a large number of persons are detected in a camera network. An online update
is important to keep the cross-view matching system work on
recent persons of interest, that is, to make the whole re-id system work on sequential data. On the other hand, online
learning is helpful to alleviate the large scale learning problem
(either with high-dimensional feature, or on large-scale data
set, or both) nowadays. By using online learning, especially
the one-pass online learning, it is not necessary to always store
(all) observed/passed data samples.
In this paper, we overcome the limitation of offline person
re-id methods by developing an effective online person re-id model. We propose to embed the sketch processing into
Fisher discriminant analysis (FDA), and the new model is
called Sketch online Discriminant Analysis (SoDA). In SoDA,
the sketch processing preserves the main variations of all
passed data samples in a low-rank sketch matrix, and thus
SoDA enables selecting data variation for acquiring discriminant features during online learning. SoDA enables the newly
learned discriminant model to embrace information from a new
coming data sample in the current round and meanwhile retain
important information learned in previous rounds in a light
and fast manner without directly saving any passed observed
data samples and keeping large-scale covariance matrices,
so that SoDA is formed as an one-pass online adaptation
model. While no passed data samples are saved in SoDA,
we propose to estimate the within-class variation from the
sketch information (i.e. a low-rank sketch matrix), and thus
in SoDA an approximate within-class covariance matrix can
be derived. We have provided in-depth theoretical analysis on
how sketch affects the discriminant feature extraction in an
online way. The rigorous upper and lower bounds on how
SoDA approaches its offline model (i.e. the classical Fisher
Discriminant Analysis [41]) are presented and proved.
Compared to existing online models for person re-id [25],
[39], [29], [37], SoDA is succinct, but it is theoretically
guaranteed and effective. While most existing online re-id
models have to retain all observed passed data samples,
the proposed SoDA relies on the sketch information from
historical data without any explicit storage of passed data
samples, and sketch information will assist our online model
in preventing one-pass online model from being biased by a
new coming data. While a more conventional way for online
learning of FDA is to update both within-class and between-
class covariance matrices directly [33], [48], [38], [26], [34],
[15], we introduce a novel approach to realize online FDA
by mining any within-class information from a sketch data
matrix, and this provides a lighter, more efficient and effective
online learning for FDA. We also find that an extra benefit of
embedding sketch processing in SoDA is to simultaneously
embed dimension reduction as well, so that no extra learning
task on learning dimension reduction technology (e.g. PCA)
is required and SoDA is more flexible when learning on some
high dimensional data [22], [5] in an online manner.
We have conducted extensive experiments on three large-scale person re-identification datasets in order to evaluate the
effectiveness of SoDA for learning person re-identification
model in an online way. Extensive experiments are also
included for comparing SoDA with related online learning
models, even though they were not applied to person re-identification before.
The rest of the paper is organized as follows. In Sec. II,
the related literatures are first reviewed. We elaborate our
online algorithm and analyze the space and time complexity
of SoDA in Sec. III. Then we present theoretical analysis on
the relationship between our SoDA and the offline FDA in
Sec. IV. Experimental results for evaluation and verification
of our theoretical analysis are reported in Sec. V and finally
we conclude the work in Sec. VI.
II. RELATED WORK
Online Person re-identification. While person re-identification has been investigated in a large number
of works [51], [1], [13], [31], [20], [54], [22], [23], [3], [16],
[27], [30], [35], [44], [53], [32], [28], [47], the majority of
them only address it by offline learning; that is, the person re-id model is learned on a fixed training dataset. This ignores the increasing demand of data from a visual surveillance system, since thousands of person images are captured day by day and it is desirable to train a person re-id system on streaming data so as to keep the system up to date.
Recently, only a few works [37], [25], [39], [29] have
been developed towards online processing for person re-identification. The most related work is the incremental distance metric based online learning mechanism (OL-IDM)
proposed in [37]. For updating the KISSME metric [17], the
OL-IDM utilizes the modified Self-Organizing Incremental
Neural Network (SOINN) [8] to produce two pairwise sets:
a similar pairs set and a dissimilar pairs set. Although SOINN
enables learning KISSME [17] on sequential data, SOINN has
to compare the newly observed sample with all the preserved
nodes and adds the newly observed sample as a new node if
it does not appear in the network. This would be costly as
sequential data increase and when feature dimension is high.
Another related work is the human-in-the-loop ones [39],
[25], [29], which proposed incremental method learned with
the involvement of humans’ feedback. Wang et al. [39] assumes that an operator is available to scan the rank list
provided by the proposed algorithm when matching a new
probe sample with existing observed gallery ones, and this
operator will select the true match, strong-negative match,
and weak-negative match for the probe. After having the
human feedback, the algorithm is able to be updated. Martinel
et al. presented a graph-based approach to exploit the most
informative probe-gallery pairs for reducing human efforts and
developed an incremental and iterative approach based on the
feedback [29].
Unlike these models, we design a sketch FDA model called
SoDA for one-pass online learning, without any storage of
passed observed samples, maintaining a small size sketch
matrix on handling streaming data so that the discriminant
projections can be updated efficiently for extracting discriminative features for identifying different individuals.
Thanks to the sketch matrix, our SoDA is capable of
obtaining comparable performance with offline FDA models
on streaming data or large and high dimensional datasets
with very low cost on space and time. Compared to the
related online person re-id models, SoDA is theoretically
sound since the bounds on approximating the offline model are provided.
In particular, compared to Wang et al.’s and Martinel et al.’s
work, our work has the following distinct aspects: Firstly, the
proposed SoDA is developed for the one-pass online learning,
while Wang et al.’s and Martinel et al.’s work cannot work
for one-pass online learning, because the former one requires
human feedback between probe sample and all preserved
gallery samples, and the latter one needs to store all sample
pairs during iterative learning. Secondly, the proposed SoDA
could be orthogonal to the human-in-the-loop work, since we
discuss how to automatically update a person re-identification
model on streaming data without elaborated human interaction
(feedback), and thus our work and the idea of incorporating
more human interaction in human-in-the-loop work can accompany each other.
SoDA vs. Incremental Fisher Discriminant Learning. SoDA
is related to existing incremental/online Fisher Discriminant
Analysis (FDA) methods, which aim to update within-class
and between-class covariance matrix sequentially. Pang et al.
proposed to directly update the between-class and within-class
scatter matrices [33]. However, Pang et al.’s method has to
preserve the whole scatter matrices in the memory, which
becomes impractical for high dimensional data. Ye et al. [48]
and Uray et al. [38] performed online learning by updating
PCA components to derive an approximate update of scatter
matrices. Compared to Pang’s method, Ye’s and Uray’s can
only perform online learning sample by sample, which can
be time consuming for large scale data. Also, Ye’s method
is based on QR decomposition of between-class covariance
matrix, and therefore it would increase computational cost
when the number of classes is large. Moreover, Ye's method is limited to learning discriminant projections in the range space of the between-class covariance matrix rather than the range space of the total covariance matrix [46], which may lose discriminant information. Lu et al. proposed a complete model that picks up the lost discriminant information [26]. But Lu's method can only update the model sample by sample. Peng et al. alternatively proposed a chunk version of Ye's method in
order to process multiple data points at a time [34]. Kim et
Fig. 1. Illustration of our proposed Sketch online Discriminant Analysis (SoDA) (Best viewed in color). (a) In real-world application, images are generated
endlessly from visual surveillance camera network. (b) (t = 0, 1, · · · , T ), every presented image is represented by a d−dimensional row feature vector. (c)
We maintain a low rank sketch matrix to summarize all passed data by matrix sketch: 1) At the beginning, we set B ∈ R^{2ℓ×d}, the sketch matrix, to be a zero
matrix. 2) All rows of B would be filled by 2` samples from top to bottom one by one. 3) we maintain the main data variations in the upper half of B by
sketch. 4) Each row of the lower half of B is set to be all zero and will be replaced by a new sample. (d) After sketch, the between-class and within-class
covariance matrices are constructed. (e) Due to the sketch, we can update a set of discriminant components efficiently only using limited space and time.
al. proposed a sufficient spanning set based incremental FDA
[15] to overcome the limitations in the previous works. Since
it is hard to directly update the discriminant components in
FDA, Yan et al. [45] and Hiraoka et al. [10] modified FDA
in order to get the discriminant components updated. They
proposed iterative methods for directly updating discriminant
projections.
Compared to the above mentioned incremental/online FDA
methods, our proposed SoDA embeds sketch processing into
FDA and therefore mines the within-class scatter information
from a sketch data matrix rather than directly from samples.
This gives the benefit that while the passed data samples are
not necessary to be saved, SoDA is still able to extract useful
within-class information from the compressed data information contained in the sketch matrix. In general, SoDA is an
online version of FDA, and SoDA can not only approximate
the FDA, which optimizes discriminant components on whole
data directly, but also run faster with limited memory. Also,
dimension reduction is naturally embedded into SoDA and
no extra online model for dimension reduction is required. Indepth theoretical investigation is provided in Sec. IV to explain
its rationale and to guarantee its effectiveness.
Although the proposed SoDA can be seen as embedding
sketch processing into FDA, we contribute solid theoretical
analysis on how SoDA will approximate the Batch mode
FDA when estimating the within-class variations from sketch
information, where the lower bound and upper bound are
provided. The theoretical analysis guarantees SoDA to be an
effective and efficient online learning method.
Online Learning. SoDA is an online learning method. In the literature, online learning [2], [6], [12], [40], [11] is known as
a light and rapid means to process streaming data or large-scale
datasets, and it has been widely exploited in many real-world
tasks such as Face Recognition [14], [36], Images Retrieval
[21], [42] and Object Tracking [19], [18]. It enables learning
an up-to-date model based on streaming data. However, most of these online learning based models [6], [18], [19] are not
suitable for person re-identification, since they are incapable
of predicting labels of data samples from unseen classes which
do not appear in the training stage.
III. SKETCH ONLINE DISCRIMINANT ANALYSIS (SODA)
In this section, we present the Sketch online Discriminant Analysis (SoDA) for person re-identification. In a real-world scenario, samples come endlessly and sequentially
from vision system (Figure 1). The number of samples received in each round is random, and the individual sample
obtained is also stochastic. Suppose the tth (t = 1, 2, · · · ) new
coming sample represented as a d−dimensional feature vector
xi ∈ Rd is labelled with class label yi . For convenience, at the
tth round, we denote all passed data (i.e. N training samples
collected in the current and previous rounds) as a training
sample matrix X = [x1, x2, · · · , xN]^T ∈ R^{N×d}, and denote
all the corresponding labels as y = [y1 , y2 , · · · , yN ]T ∈ RN
where yi is the class label of xi and yi ∈ {1, 2, ..., C}.
At each round (t = 1, 2, · · · ), the proposed SoDA maintains
the main variations of all passed data (X ∈ RN ×d ) in a
low rank matrix, which is named as the “sketch matrix”.
Algorithm 1: Sketch online Discriminant Analysis
Input: X = [x1, x2, · · · , xN]^T ∈ R^{N×d}, y ∈ R^N, λ > 0
 1: B ← zero matrix ∈ R^{2ℓ×d}
 2: for each data xi ∈ R^d and label yi do
 3:     use xi^T to replace one zero row of B
 4:     if all samples in X are processed then
 5:         delete all zero rows of B
 6:     end
 7:     if B has no zero rows then
 8:         [U, Σ, V] = SVD(B)
 9:         set ξ as the (ℓ+1)-th largest element Σ_{ℓ+1} of Σ
10:         Σ̂ = sqrt(max(Σ² − I_{2ℓ} ξ², O))
11:         B = Σ̂ V^T   (B contains ℓ rows of non-zero values)
12:     end
13:     mc ← (Nc mc + xi)/(Nc + 1)   (c = 0, yi)
14:     Nc ← Nc + 1   (c = 0, yi)
15: end
16: B ← B+, P = V+
17: Sb = Σ_{c=1}^{C} (Nc/N0)(mc − m0)(mc − m0)^T
18: S̃t = B^T B/N0 − m0 m0^T
19: S̃w = S̃t − Sb
20: Ŝb = P^T Sb P
21: Ŝw = P^T S̃w P
22: [W, Λ] = EVD(Ŝb, Ŝw)
Output: B, W, Λ
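For readers who prefer code to pseudocode, the following is a minimal numpy sketch of the buffer-and-shrink step of Algorithm 1 (Lines 3–12), assuming 2ℓ ≤ d. The class name and its members are our own illustrative choices rather than the authors' implementation, and, for simplicity, rows inserted after the last shrink are ignored by sketch().

```python
# Minimal frequent-directions buffer, mirroring Lines 3-12 of Algorithm 1:
# rows fill a 2l x d buffer; when full, an SVD-based shrink keeps only l
# non-zero rows (the frequent directions). Illustrative code, assumes 2l <= d.
import numpy as np

class FrequentDirections:
    def __init__(self, ell, d):
        self.ell = ell
        self.B = np.zeros((2 * ell, d))   # sketch buffer; lower half kept free
        self.next_row = 0                 # index of the next zero row to fill
        self.Vt = None                    # right singular vectors of the last shrink

    def insert(self, x):
        self.B[self.next_row] = x
        self.next_row += 1
        if self.next_row == 2 * self.ell:  # buffer full -> shrink
            self._shrink()

    def _shrink(self):
        _, sigma, Vt = np.linalg.svd(self.B, full_matrices=False)
        xi2 = sigma[self.ell] ** 2                       # squared (l+1)-th singular value
        sigma_hat = np.sqrt(np.maximum(sigma ** 2 - xi2, 0.0))
        self.B = sigma_hat[:, None] * Vt                 # rows l..2l-1 become zero
        self.next_row = self.ell                         # lower half is free again
        self.Vt = Vt                                     # frequent directions, used as P later

    def sketch(self):
        return self.B[: self.ell]                        # B+, the upper half
```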
The sketch matrix keeps a small number of selected frequent
directions, which are obtained and updated by a matrix sketch
technique during the whole online learning process. While
sketching main data variations, the population mean and the
one of each class are also updated. We further utilize these
updated means and the low rank sketch matrix to estimate
between-class covariance matrix and derive the approximate
within-class covariance matrix after all new coming samples
are compressed into the sketch matrix. Finally, we generate
discriminant components by eigenvalue decomposition for
simultaneously minimizing the approximate within-class variance and maximizing the between-class variance. The whole
procedure of SoDA is illustrated in Figure 1 and presented in
Algorithm 1. The in-depth theoretical investigation to explain
why SoDA can approximate the offline FDA model by sketch
and guarantee its effectiveness on extracting discriminant
components is provided in Sec. IV.
A. Estimating Between-class covariance matrix

During online learning, we keep updating the population mean m0 and the mean of each class mc (c = 1, 2, . . . , C) so as to construct the between-class covariance matrix Sb. When a new coming sample xi with class label yi arrives, the population mean and the mean of class yi are updated by

  mc = (Nc mc + xi)/(Nc + 1),   c = 0, yi,   (1)

and the population number and the number of samples of class yi are also updated by

  Nc = Nc + 1,   c = 0, yi.   (2)

We then use the updated means to estimate the between-class covariance matrix as follows:

  Sb = Σ_{c=1}^{C} (Nc/N0) (mc − m0)(mc − m0)^T.   (3)

B. Estimating Approximate Within-class covariance matrix

For realizing one-pass online learning, we aim to update/form the within-class covariance matrix, which describes the within-class variation, without using any passed observed data samples. Different from previous online FDA approaches, we embed sketch processing into FDA and derive a novel approximate within-class covariance matrix efficiently and effectively. For this purpose, we first employ the sketch technique [24] to compress the passed data samples into a sketch matrix so as to maintain the main variations of the passed data. More specifically, we maintain the main variations of all passed data X in a small-size matrix B ∈ R^{2ℓ×d}, called a sketch matrix, where B is initialized as a zero matrix. Each new coming sample xi^T (i.e. the i-th row of X) replaces a zero row of B from top to bottom until B is full without any all-zero rows. When B is full, we apply Singular Value Decomposition (SVD) on B such that UΣV^T = B, where Σ is a diagonal matrix with singular values on the diagonal in decreasing order. Each row in V^T corresponds to a singular value in Σ; we denote the vectors {vj} of V^T corresponding to the first half of the singular values as frequent directions and the ones corresponding to the lower half of the singular values as unfrequent directions. By employing the sketch algorithm, the frequent directions vj are scaled by sqrt(λ_j² − ξ²) and retained in B, where ξ is the (ℓ+1)-th largest singular value Σ_{ℓ+1} of Σ. In this way, the sketch matrix B is obtained as Σ̂V^T, where Σ̂ = sqrt(max(Σ² − I_{2ℓ}ξ², O)) and O is a zero matrix. Therefore, the sketch matrix B is a 2ℓ × d matrix, where B+, the upper half of B, retains the main variations of the passed data samples, and B−, the lower half of B, is reset to zero.

Although no passed observed data samples are saved, we propose to derive an approximate within-class covariance matrix using the sketch matrix B below:

  S̃w = S̃t − Sb,   (4)

where

  S̃t = B^T B/N0 − m0 m0^T.   (5)

In the above, S̃w is not always the exact within-class covariance matrix but an approximate one. In Sec. IV, we will provide an in-depth theoretical analysis of the bias of this approximation on discriminant feature component extraction.

C. Dimension Reduction and Extraction of Discriminant Components
Normally, after updating the two covariance matrices Sb
and S̃w , it is only necessary to compute the generalized
eigen-vectors of ΛS̃w W = Sb W. However, in person re-identification, some kinds of features are of high dimensionality, such as HIPHOP [5] and LOMO [22], and the size of the two covariance matrices Sb and S̃w is determined by the feature dimensionality. Thus the above eigen-decomposition remains costly when both Sb and S̃w are large.
An intuitive solution is to conduct another online learning
for dimension reduction, which spends extra time and space.
However, SoDA does not require such an extra learning. Due
to sketch, SoDA actually maintains a set of frequent directions
that describe main data variations. And thus we take these
frequent directions as basis vectors, and their span can approximate the data space. Hence, we set P = (V^T)+, the upper half of matrix V^T (Line 16 in Algorithm 1), and the dimension reduction is performed by

  Ŝb = P^T Sb P,   Ŝw = P^T S̃w P,   (6)

where P = [v1, v2, . . . , vk] consists of k frequent directions. In this way, Ŝb and Ŝw become matrices in R^{k×k}, and computing the generalized eigen-vectors becomes much faster. Finally, the generalized eigen-vectors (Line 22 in Algorithm 1) are computed from ΛŜw W = Ŝb W, and they are the discriminant components we pursue.
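To make the above steps concrete, the following is a hedged numpy/scipy sketch of Eqs. (3)–(6) and Line 22 of Algorithm 1. The function name, argument names and the small ridge term are our own illustrative choices and not part of the original method; B_plus denotes the upper half of the sketch matrix and Vt_plus the retained frequent directions.

```python
# Illustrative post-sketch steps: build Sb from running class means, estimate
# the total/within-class scatter from the sketch, project with the frequent
# directions, and solve the reduced generalized eigenproblem.
import numpy as np
from scipy.linalg import eigh

def soda_components(B_plus, Vt_plus, class_means, class_counts, m0, N0, k, reg=1e-6):
    d = B_plus.shape[1]
    # Between-class covariance, Eq. (3)
    Sb = np.zeros((d, d))
    for mc, Nc in zip(class_means, class_counts):
        diff = (mc - m0)[:, None]
        Sb += (Nc / N0) * (diff @ diff.T)
    # Approximate total / within-class covariance from the sketch, Eqs. (4)-(5)
    St_tilde = B_plus.T @ B_plus / N0 - np.outer(m0, m0)
    Sw_tilde = St_tilde - Sb
    # Dimension reduction with the frequent directions, Eq. (6)
    P = Vt_plus[:k].T                     # d x k basis of frequent directions
    Sb_hat = P.T @ Sb @ P
    Sw_hat = P.T @ Sw_tilde @ P
    # Generalized EVD  Sb_hat w = lambda Sw_hat w; the small ridge is for
    # numerical safety and assumes Sw_hat is close to positive definite.
    evals, W = eigh(Sb_hat, Sw_hat + reg * np.eye(k))
    order = np.argsort(evals)[::-1]       # largest Fisher ratios first
    return P @ W[:, order], evals[order]  # components mapped back to R^d
```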
D. Computational Complexity
As presented above, after processing all observed samples,
we maintain B ∈ R`×d , P ∈ Rd×k , mc ∈ Rd and Nc (c = 0,
1, 2, . . . , C). The time and space cost of the rest procedure
is O(d`2 ) (After the whole processing, N0 is equal to N )
and O((` + C)d), respectively. Therefore, the cost of time and
space is O(d`2 ) and O((` + k + C)d), respectively, almost the
same as the cost of sketch algorithm [24].
Fig. 2. (a) is the sketch matrix (B). (c) is the approximate within-class
covariance matrix (S̃w ) generated by SoDA while (d) is the groundtruth one
(Sw ) produced by FDA. (b) is the difference (Sw − S̃w ) of the groundtruth
within-class covariance matrix and the approximate one. It is noteworthy
that the distinction between Sw and S̃w is less than 1 × 10−12 , which
indicates that S̃w estimated by SoDA can approximate the groundtruth one
(Best viewed in color).
IV. THEORETICAL ANALYSIS

In this section, we theoretically show that SoDA approximates FDA in a principled way, although SoDA is formed based on the approximate within-class covariance matrix mined from sketch data information.

A. Fisher Discriminant Analysis

Fisher discriminant analysis (FDA) aims to seek discriminant projections for minimizing the within-class variance and maximizing the between-class variance, which are estimated over the data matrix X and its label set y in an offline way. There are several equivalent criteria JF for the multi-class case. For analysis, we consider the one that maximizes the following criterion:

  JF(W) = (W^T Sb W) / (W^T Sw W),   (7)

where Sb is the between-class covariance matrix and Sw is the within-class covariance matrix. They are given by

  Sb = Σ_{c=1}^{C} (Nc/N) (mc − m0)(mc − m0)^T,   (8)

  Sw = Σ_{c=1}^{C} (Nc/N) Σ_{yi=c} (1/Nc) (xi − mc)(xi − mc)^T,   (9)

where mc and Nc are the data mean and the number of samples of the c-th class, respectively, and N and m0 are the population number and population mean, respectively. And the total covariance matrix is

  St = Sw + Sb = (1/N) Σ_{c=1}^{C} Σ_{yi=c} (xi − m0)(xi − m0)^T.   (10)

Generally, the analysis seeks a set of feature vectors {wj} that maximize the criterion subject to the normalization constraint tr(W^T Sb W) = 1, where W is the matrix whose columns are {wj}. This leads to the computation of generalized eigenvectors, that is, ΛSw W = Sb W, where Λ is a diagonal matrix with the generalized eigenvalues on the diagonal. Here, the eigenvectors corresponding to the largest eigenvalues are used to compress a high dimensional data vector to a low dimensional feature representation.

B. Relation between SoDA and FDA

Before presenting the theoretical analysis, we first define the following notations. Let

  J1F(W) = tr(W^T Sb W) / tr(W^T Sw W),   J2F(W) = tr(W^T Sb W) / tr(W^T S̃w W),   (11)

where J1F(W) is the conventional FDA criterion and J2F(W) is the SoDA criterion obtained by replacing Sw with S̃w, which is mined from the sketch data information.

Let the largest Fisher scores in the above equations be

  J1F(W1) = max_{W∈R^{d×k}} J1F(W) = µ1,   J2F(W2) = max_{W∈R^{d×k}} J2F(W) = µ2.   (12)

Since for optimizing Eqs. (12) we can form a Lagrangian function by imposing the constraint tr(W^T Sb W) = 1 for both criteria [41], we define D = {W = [w1, · · · , wk] ∈ R^{d×k} | tr(W^T Sb W) = 1}, and thus we can reform Eqs. (12) by:

  µ1^{−1} = min_{W1∈D} {J1F(W1)}^{−1} = tr(W1^T Sw W1),
  µ2^{−1} = min_{W2∈D} {J2F(W2)}^{−1} = tr(W2^T S̃w W2).   (13)

In the following sections, we first discuss the relationship between µ1 and µ2. Then this relationship will be used to present a bound for J1F(W2). Note that J1F(W2) measures how well the optimal projection learned by our SoDA approximates the optimal solution that maximizes J1F(W). Note that our analysis will not take any dimension reduction before extracting discriminant components below for discussion. Our analysis can be extended if the same dimension reduction is applied to all methods discussed below.
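As a concrete reading of Eq. (11), the following minimal helper (our own naming, not from the original paper) evaluates the two trace-ratio Fisher scores for a given projection W, so that the batch criterion J1F and the sketch-based criterion J2F can be compared empirically.

```python
# Evaluate the two trace-ratio criteria of Eq. (11) for a fixed projection W:
# J1 uses the batch within-class scatter Sw, J2 uses the sketch-based Sw_tilde.
import numpy as np

def fisher_scores(W, Sb, Sw, Sw_tilde):
    num = np.trace(W.T @ Sb @ W)
    j1 = num / np.trace(W.T @ Sw @ W)        # J_F^1(W)
    j2 = num / np.trace(W.T @ Sw_tilde @ W)  # J_F^2(W)
    return j1, j2
```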
w^T Sw w and w^T S̃w w when constraining w^T Sb w = 1. In addition, since w2^T Sb w2 = 1, we have s0 rb ‖w2‖_2² ≤ 1, i.e. ‖w2‖_2 ≤ (s0 rb)^{−1/2}. Therefore, based on Theorem 1, we have

  µ2^{−1} = w2^T S̃w w2 ≤ w1^T S̃w w1 ≤ w1^T Sw w1 = µ1^{−1},
C. Relationship Between the Maximum Fisher Score of FDA
and that of SoDA
We first present the relationship between the maximum
Fisher score of FDA and the one of SoDA, i.e. the relationship
between µ1 and µ2 . Suppose that matrix X ∈ RN ×d is the
total training sample set consisting of samples acquired at
each time step.
However, it is not intuitive to obtain the relationship between the maximum Fisher score of FDA and the one of
SoDA based on the covariance matrices inferred in Eq. (5).
In order to exploit such a relationship, we first investigate the
Fisher score obtained by Sb and the approximate within-class
covariance matrix S̃w as follows:
S̃w = S̃t − Sb = BT B/N − m0 mT0 − Sb .
(14)
Let Sw be the within-class covariance matrix computed in
batch mode (i.e. for offline FDA). Since it is known that Sw =
St − Sb = XT X/N − m0 mT0 − Sb , it can be verified that
Sw − S̃w
=(XT X/N − m0 mT0 − Sb ) − (BT B/N − m0 mT0 − Sb ) (15)
=(XT X − BT B)/N .
By combining Eq. (25) as stated in the Appendix, it is
not hard to have the following theorem about the relation
between Sw and S̃w , and we visualize the approximation
between the ground-truth within-class covariance matrix and
our approximate one in Figure 2. We assume that Sw , S̃w and
Ŝw are not singular in the following analysis 1 .
Theorem 1. S̃w ⪯ Sw, and ‖Sw − S̃w‖ ≤ 2‖X‖_f² /(Nℓ), where ‖·‖ is the induced norm of a matrix and ‖·‖_f is the Frobenius norm.
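Theorem 1 can be checked numerically on synthetic data. The snippet below is an illustrative check that reuses the FrequentDirections helper sketched in Sec. III: by Eq. (15), Sw − S̃w = (X^T X − B^T B)/N, so it suffices to verify that X^T X − B^T B is positive semi-definite with spectral norm at most 2‖X‖_f²/ℓ.

```python
# Numerical sanity check of Theorem 1 on synthetic data, reusing the
# FrequentDirections class defined in the earlier sketch (Sec. III).
import numpy as np

rng = np.random.default_rng(1)
N, d, ell = 2000, 50, 10
X = rng.standard_normal((N, d))

fd = FrequentDirections(ell, d)       # helper from the Sec. III sketch
for x in X:
    fd.insert(x)
B = fd.sketch()

diff = X.T @ X - B.T @ B              # equals N * (Sw - Sw_tilde) by Eq. (15)
eigvals = np.linalg.eigvalsh(diff)
print("min eigenvalue (should be >= -1e-8):", eigvals.min())
print("spectral norm:", eigvals.max(),
      " bound 2||X||_f^2 / l:", 2 * np.linalg.norm(X, "fro") ** 2 / ell)
```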
Based on the above theorem, we particularly consider the
two-class classification case.
Theorem 2. Considering the two criteria in Eq. (13) when
the discriminant feature transformation is a one-dimensional
vector, i.e. W1 = w1 ∈ Rd and W2 = w2 ∈ Rd , the
relationship between µ1 and µ2 is as follows:

  µ1^{−1} − 2(s0 rb)^{−1/2} ‖X‖_f² /(Nℓ) ≤ µ2^{−1} ≤ µ1^{−1},   (16)
where s0 is the smallest (non-zero) singular value of matrix
Sb and rb = rank(Sb ).
2
Proof. Let D = 2||X||f /(N `). From the Theorem 1, we have
wT (Sw −S̃w )w
||w||2
wT Sw w, wT Sw w ≤
for any nonzero w ∈ Rd , 0 ≤
∀w ∈ Rd , wT S̃w w ≤
≤ D. That is
1
µ−1
Sw w1 ≤ w2 Sw w2
1 =w
2T
(18)
− 21
− 21
= µ−1
.
≤ w S̃w w2 + D(s0 rb )
2 + D(s0 rb )
2
−1
−1
− 12
Then µ1 − 2(s0 rb ) ||X||f /(N `) ≤ µ−1
≤
µ
.
2
1
From the theorem above, we can claim that the largest
Fisher score J2F (w2 ) is always greater than or equal to the
original one J1F (w1 ) after sketch. From another aspect, the
2
− 21
||X||f /(N `) ≤ µ−1
≤ µ−1
inequalities “µ−1
1 − 2(s0 rb )
2
1 ”
means when more rows are set in the sketch matrix B, (i.e.
much larger ` is set), µ2 becomes µ1 , and thus SoDA becomes
exactly the FDA.
For the multi-class case, we can generalize the above proof
below.
Theorem 3. Considering the two criteria in Eq. (13), when
the discriminant feature transformation is a d-dimensional
transformation where d > 1, we have µ1 ≤ µ2 .
Proof. Note that W1 and W2 (∈ Rd×k ) make the two criteria
minimized in Eq. (13), respectively. Let W1 = [w11 , · · · , wk1 ]
and W2 = [w12 , · · · , wk2 ]. Since for any w ∈ Rd , wT Sw w ≥
wT S̃w w by Theorem 1, we have
T
T
2
S̃w W2 ) ≤ tr(W1 S̃w W1 )
µ−1
2 = tr(W
=
k
X
T
wi1 S̃w wi1 ≤
k
X
T
wi1 Sw wi1
i=1
i=1
T
= tr(W1 Sw W1 ) = µ−1
1 .
Hence, the theorem is proved.
D. How Does the Projection Learned by SoDA Optimize the
Original Fisher Criterion Approximately?
In the above, we analyze the quotient values between tr(W^T Sb W)/tr(W^T Sw W) and tr(W^T Sb W)/tr(W^T S̃w W). However, in SoDA, our within-class covariance matrix is estimated by sketch and is not the exact within-class covariance matrix. In the following, we will present the effect of the learned discriminant components obtained by SoDA on minimizing the ground-truth within-class variance. For this purpose, the following theorems are presented.
Theorem 4. For any w ∈ Q = {w ∈ Rd |wT w = 1}, we
have
  w^T S̃w w ≤ w^T Sw w ≤ w^T S̃w w + (2/N) ‖X‖_f² /ℓ.   (19)
(19)
wT S̃w w+D||w||2 . Proof. While the inequality wT S̃w w ≤ wT Sw w is obvious
(17) by using Theorem 1, we focus on the latter one. Since Sw =
Let w1 and w2 be the discriminant vectors that minimize S̃w + (XT X − BT B)/N in Eq. (15), by applying Eq. (25),
T
the Criterion in Eq. (13) under the constraints w1 Sb w1 = 1 we have wT Sw w = wT S̃w w + N1 wT (XT X − BT B)w ≤
2T
2
1T
and w Sb w = 1, respectively. That is w Sw w1 = µ−1
wT S̃w w + N2 ||X||2f /`.
1
−1
2T
2
1
2
and w S̃w w = µ2 , i.e. w and w would minimize
Theorem 5. Considering the two criteria in Eq. (13), we de1 The analysis can be generalized to the case when S̃ is not invertible if
fine D = {W = [w1 , · · · , wk ] ∈ Rd×k |tr(WT Sb W) = 1},
w
the same regularization is imposed on both Sw , S̃w and Ŝw
denote the smallest non-zero singular value of Sb as s0 , and
Fig. 3. Fisher Score comparison on three datasets using JLH feature. (Best viewed in color).
TABLE I
COMPARISON AMONG DIFFERENT ONLINE/INCREMENTAL APPROACHES
Approaches
IFDA [15] Pang’s IFDA [33] IDR/QR [48] OL-IDM [37] Wang
Save within-class
-scatter matrix?
Save between-class
-scatter matrix?
Is an one-pass
algorithm?
Human feedback
Can the model be
trained on streaming data?
Is the model embedded
with dimension reduction?
time
O(ndc)
-O(d3 )
O(nd2 )
complexity
space
2
2
2
-O(d )
O(d )
O(d )
complexity
!
!
%
%
!
%
!
!
!
%
!
%
!
%
!
%
!
%
!
%
!
%
et al. [39] Martinel et al. [29]
SoDA (Ours)
--
--
--
--
%
!
%
%
%
!
%
%
%
%
!
%
!
!
--
--
O(min(`, d)2 max(`, d))
--
--
O((` + k + C)d)
T
let rb = rank(Sb ). Suppose the norm of each data vector xi
(i.e. each row of the data matrix X ∈ RN ×d ) is bounded by
M , that is ||xi ||22 ≤ M. Then we have
1
≤ J1F (W2 ) ≤ µ1 .
(20)
2k
µ−1
+
M/`
1
s0 rb
2
Proof. First, given W ∈ D that minimize
{J2F (W)}−1 .
T
{J1F (W2 )}−1 =tr(W2 Sw W2 )
=
k
X
T
wi2 Sw wi2
i=1
=
k
X
T
||wi2 ||22
wi2
wi2
Sw
2
||wi ||2
||wi2 ||2
||wi2 ||22
wi2
wi2
S̃w
2
||wi ||2
||wi2 ||2
i=1
≤
k
X
T
i=1
+
≤
k
X
(21)
k
X
2
||wi2 ||22 ||X||2f /`
N
i=1
{J1F (W2 )}−1 =tr(W2 Sw W2 )
≤
k
X
T
wi2 S̃w wi2 +
i=1
=µ−1
2 +
2k
(s0 rb )−1 ||X||2f /`
N
(22)
2k
(s0 rb )−1 ||X||2f /`.
N
Pk
2T
2
Note that µ−1
=
2
i=1 wi S̃w wi since it is assumed that
−1
2
2
W ∈ D minimizes {JF (W)} . Thus, under the constraint
T
tr(W2 Sb W2 ) = 1, we have
1
≤ J1F (W2 ) ≤ µ1 ,
(23)
−1
2k
µ2 + N (s0 rb )−1 ||X||2f /`
where the latter equation is obvious since W2 may not be
the optimal projection for mamixizing JF1 (W). Finally, since
µ1 ≤ µ2 and ||xi ||22 ≤ M that means the norm of any data
vector xi (i.e. each row of the data matrix X ∈ RN ×d ) is
bounded by M , we have
1
≤ J1F (W2 ) ≤ µ1 .
(24)
−1
µ1 + s2k
M/`
r
0 b
T
wi2 S̃w wi2
i=1
+
T
2k
||wi2 ||22 ||X||2f /`.
N
T
Since tr(W2 Sb W2 ) = 1, we have wi2 Sb wi2 ≤ 1. Here,
T
for convenience, one can further assume wi2 Sb wi2 > 0,
otherwise a much tighter bound can be inferred. And thus
s0 rb ||wi2 ||22 ≤ 1. So we have
E. Discussion
1) SoDA vs. FDA: The above theorem indicates that 1)
the learned transformation by SoDA may not be the optimal
one for the FDA directly learned on all observed data since
J1F (W2 ) ≤ µ1 , which is obvious and reasonable; 2) however,
there is a lower bound on J1F(W2), since 1/(µ1^{−1} + (2k/(s0 rb)) M/ℓ) ≤
Fig. 4. Comparison on three datasets using JSTL feature. (Best viewed in color).
V. EXPERIMENTS
A. Datasets and Evaluation Settings
Fig. 5. Example images from different person re-id datasets. For each dataset,
two images in a column correspond to the same person.
J1F (W2 ); 3) as long as more and more rows are set in the
sketch matrix B used in SoDA, i.e. ` is larger and larger,
(2k/s0) M/ℓ → 0, and so J1F(W2) ≈ µ1 in such a case. The
latter case is reasonable because although the sketch in SoDA
enables selecting data variation during the online learning,
more data information is kept when a much larger sketch
matrix B is used, and this will be verified in the experiments
(see Figure 3 for example).
2) SoDA vs. Incremental/online models: In Table I, we
compare SoDA with related incremental/online FDA models
in details. A distinct and important characteristic of SoDA is
that it is able to perform one-pass online learning directly
only relying on sketch data information. SoDA does not
have to keep within-class covariance matrix and betweenclass covariance matrix in memory during online learning,
due to embedding sketch processing, which has not been
considered for online learning of FDA before. Moreover, as
compared to the others, SoDA does not need any extra online
learning progress on dimension reduction, which is naturally
embedded. Thus the training cost of SoDA is much lighter.
When applied SoDA to person re-id, we perform the comparison with related online person re-id models. An important
distinction is that no extra human feedback is required, and
SoDA is able to be applied on streaming data in an onepass learning manner. In comparison with OL-IDM, SoDA
has its merits: 1) dimension reduction is naturally embedded in
SoDA; 2) embedding sketch into person re-id model learning
is a more efficient and effective way to maintain the main
variations of data, which has been verified by our experimental
results.
1) Datasets: We extensively evaluated the proposed approach on three large person re-id benchmarks: Market-1501,
SYSU, and ExMarket.
• Market-1501 dataset [51] contains person images collected in front of a campus supermarket at a University.
It consists of 32,643 person images of 1,501 identities.
• SYSU dataset contains totally 48,892 images of 502
pedestrians captured by two cameras. Similar to [4],
we randomly selected 251 identities from two views
as training set which contains 12308 images. And we
randomly selected three images of each person from the
rest 251 identities of both cameras to form the testing set,
where the 753 images of the first camera were used as
query images.
• ExMarket dataset was formed by combining the MARS
dataset [50] and Market-1501 dataset. MARS was formed
as a video dataset for person re-identification. All the
identities from MARS are of a subset of those from
Market. More specifically, for each identity, we extracted
one frame for each five consecutive frames firstly and
combined images extracted from MARS and the ones
from Market-1501 of the same person. Therefore, ExMarket contains 237147 images of 1501 identities, the largest
population size among the three benchmark datasets
tested.
2) Features: In this work, we conducted the evaluation
based on four types of features: 1) JSTL, 2)
LOMO, 3) HIPHOP, 4) JSTL + LOMO + HIPHOP (JLH).
• JSTL is a kind of low-dimensional deep feature representation (R256 ) extracted by a deep convolutional network
[43];
• LOMO is an effective handcraft feature proposed for
person re-id in [22], and it is a 26960-dimensional vector;
• HIPHOP is another recently proposed person re-id feature
(R84096 ) [5] that extracts more view invariant histogram
features from shallow layers of a convolution network.
In addition, since person re-id can benefit from using
multiple different types of appearance features as shown in
[5], [7], [9], [49], [52], we concatenated JSTL, LOMO and
HIPHOP as a high dimensional feature (R111312 ), named JLH
Fig. 6. Comparison on three datasets using LOMO feature. (Best viewed in color).
[CMC curves on Market-1501: Matching Rate (%) vs. Rank for OL-IDM, IDR/QR, IFDA, Pang's IFDA, FDA, and SoDA (Ours).]
Fig. 7. Comparison on three datasets using HIPHOP feature. (Best viewed in color).
[CMC curves on SYSU: Matching Rate (%) vs. Rank for OL-IDM, IDR/QR, IFDA, Pang's IFDA, FDA, and SoDA (Ours).]
Fig. 8. Comparison on three datasets using JLH feature. (Best viewed in color).
in this work for convenience of description. On all datasets, we
report experimental results of SoDA using the concatenated
feature in Table VI. Since LOMO, HIPHOP, and JLH are
of high dimension, for all methods except SoDA, we first
reduced their feature dimension of the three types of feature to
2000, 2000 and 2500, respectively, on all datasets. For SoDA,
we set the sketch size (`) to the (reduced) feature dimension
mentioned above on all datasets.
3) Evaluation protocol: On all datasets, we followed the
standard evaluation settings on person re-identification, i.e.
images of half of the persons were used for training and
images of the rest half were used for testing, so that there is
no overlap in persons between training and testing sets. More
specifically, on Market-1501 dataset, we used the standard
training (12936 images of 750 people) and testing (19732
images of 751 people) sets provided in [51]. On SYSU dataset,
similar to [4], we randomly picked all images of the selected
251 identities from two views to form the training set which
contains 12308 images, and we randomly picked 3 images
of each pedestrian of the rest 251 identities in each view for
forming the gallery and query sets for testing. On ExMarket
dataset, we conducted the same identity split as the Market1501 dataset. The training set contains 112351 images, and the
testing set contains 124796 images, among which 3363 images
are considered as query images and the rest are considered as
gallery images.
On all datasets, the cumulative matching characteristic
(CMC) curves are shown to measure the performance of the
compared methods on re-identifying individuals across different camera views under online setting. In addition to this,
we also report results using another two performance metrics:
1) rank-1 Matching Rate, and 2) mean Average Precision
TABLE II
C OMPARISON WITH FDA ON ALL BENCHMARKS .
Feature
Dataset Method
Market FDA
-1501 SoDA
FDA
SYSU SoDA
ExFDA
Market SoDA
JSTL
rank-1 rank-5 rank-10 rank-20 mAP
57.30 75.53 81.38 86.49 28.57
57.13 74.79 81.18 85.90 28.25
31.21 52.99 61.49 71.85 25.86
31.74 52.86 62.15 71.31 26.04
53.89 68.11 73.13 77.97 22.71
54.93 68.79 73.13 77.46 22.87
LOMO
rank-1 rank-5 rank-10 rank-20 mAP
51.90 74.26 81.12 87.14 23.60
52.41 73.37 81.38 87.17 23.58
46.61 70.78 79.42 86.19 41.81
47.81 70.39 78.75 86.72 41.69
45.64 60.42 66.86 72.89 17.98
46.08 61.31 67.81 73.63 17.77
HIPHOP
rank-1 rank-5 rank-10 rank-20 mAP
60.27 80.52 87.05 91.18 31.45
61.88 81.41 86.70 91.60 33.39
52.86 73.84 81.67 87.78 48.20
53.12 73.97 80.88 87.25 48.48
57.24 71.38 77.11 81.74 27.20
55.76 70.40 76.10 81.59 24.97
JLH
rank-1 rank-5 rank-10 rank-20 mAP
74.20 88.75 92.19 94.80 49.01
75.27 89.28 92.70 95.22 49.82
63.08 80.35 86.32 91.50 56.82
64.81 80.74 87.25 91.77 59.82
66.86 78.18 82.63 86.70 39.00
66.18 78.36 82.48 86.64 37.11
TABLE III
C OMPARISON WITH INCREMENTAL FDA MODELS AND ONLINE METHOD USING JSTL.
Dataset
Market-1501
SYSU
ExMarket
rank-1
Accumulative
rank-1
Accumulative
rank-1
Accumulative
Method
matching rate (%) mAP (%) Time (s)
matching rate (%) mAP (%) Time (s)
matching rate (%) mAP (%) Time (s)
OL-IDM
31.50
10.48
3706.84
12.08
10.29
10588.15
50.24
18.93 1646433.70
IDR/QR
41.15
13.20
803.59
12.88
10.24
247.17
42.70
11.20
6172.79
IFDA
51.45
21.21
38.22
22.97
18.12
12.40
49.91
16.58
394.31
Pang’s IFDA
57.36
28.58
13.68
31.08
25.28
7.65
55.46
22.97
120.94
SoDA
57.13
28.25
7.84
31.74
26.04
4.68
54.93
22.87
50.52
TABLE IV
C OMPARISON WITH INCREMENTAL FDA MODELS AND ONLINE METHOD USING LOMO.
Dataset
Market-1501
SYSU
ExMarket
rank-1
Accumulative
rank-1
Accumulative
rank-1
Accumulative
Method
matching rate (%) mAP (%) Time (s)
matching rate (%) mAP (%) Time (s)
matching rate (%) mAP (%) Time (min)
OL-IDM
3.95
0.73
736707.11
1.06
1.59
743335.02
3.86
0.33
> 1 week
IDR/QR
19.36
5.09
345181.63
6.37
5.16
83903.98
19.92
3.58
74393.24
IFDA
38.75
13.32
314470.08
26.83
22.59
67003.60
35.63
10.43
69668.26
Pang’s IFDA
44.80
18.64
314461.09
35.99
31.82
66646.88
43.50
15.42
69625.84
SoDA
52.41
23.53
2127.47
47.81
41.69
3345.30
46.08
17.77
359.28
TABLE V
C OMPARISON WITH INCREMENTAL FDA MODELS AND ONLINE METHOD USING HIPHOP.
Dataset
Market-1501
SYSU
ExMarket
rank-1
Accumulative
rank-1
Accumulative
rank-1
Accumulative
Method
matching rate (%) mAP (%) Time (s)
matching rate (%) mAP (%) Time (s)
matching rate (%) mAP (%) Time (s)
OL-IDM
11.97
2.22
277104.72
1.46
2.00
252626.33
7.24
0.54
> 1 week
IDR/QR
19.98
6.00
225226.32
10.49
9.34
86513.64
21.97
5.32
2392922.71
IFDA
52.14
21.30
185390.31
25.50
22.32
66202.88
46.08
15.50 2133499.12
Pang’s IFDA
60.42
31.30
185174.97
51.79
47.51
65593.56
54.84
25.11 2135671.23
SoDA
61.88
33.39
3620.00
53.12
48.48
13849.61
55.76
24.97
83319.79
(mAP). mAP first computes the area under the Precision-Recall curve for each query and then calculates the mean
of Average Precision over all query persons. All experiments
were implemented using MATLAB on a machine with CPU E5
2686 2.3 GHz and 256 GB RAM, and the accumulative time
of all compared methods was also computed and reported for
measuring efficiency.
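For reference, the two metrics can be computed as follows. This is a simplified sketch under our own naming, and it omits the usual filtering of same-camera gallery images used in the standard re-id protocol; dist is a query-by-gallery distance matrix and query_ids / gallery_ids hold the person identities.

```python
# CMC matching rate at each rank and mean Average Precision (mAP) for a
# query-by-gallery distance matrix. Simplified: no same-camera filtering.
import numpy as np

def cmc_and_map(dist, query_ids, gallery_ids, max_rank=20):
    order = np.argsort(dist, axis=1)                  # gallery sorted per query
    matches = (gallery_ids[order] == query_ids[:, None]).astype(np.int32)
    cmc, aps = np.zeros(max_rank), []
    for row in matches:
        if not row.any():
            continue                                  # query without any true match
        first_hit = np.argmax(row)                    # rank of the first correct match
        cmc[min(first_hit, max_rank - 1):] += (first_hit < max_rank)
        hits = np.cumsum(row)                         # average precision of this query
        precision = hits / (np.arange(len(row)) + 1)
        aps.append((precision * row).sum() / row.sum())
    n = len(aps)
    return cmc / n, float(np.mean(aps))               # CMC curve, mAP
```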
B. SoDA vs. FDA
In Sec. IV, we provide theoretical analysis on the relation
between SoDA and FDA. In this section, we provide empirical
evaluation on three datasets by the comparison on Fisher Score
between SoDA and FDA in Figure 3. The figure indicates that
by keeping more rows in the sketch matrix, SoDA achieves a Fisher Score closer to that of FDA, and this is
supported by Theorem 5. We also compared SoDA with FDA
on the three datasets in Table II, and the comparison shows
that they work comparably. Therefore the results reported here
have validated that our sketch approach approximates FDA
(i.e. the offline model) for extracting discriminant information
very well, and thus the effectiveness of our model is verified
both theoretically and empirically.
C. SoDA vs. Incremental FDA Model
There are existing works that are related to incremental
learning of FDA, which also process sequential data and update the models online. We compared extensively our method
SoDA with three related online/incremental FDA methods,
including IFDA [15], IDR/QR [48] and Pang’s IFDA [33].
We show the CMC curves of all methods using different types
of features in Figure 4, Figure 6, Figure 7 and Figure 8.
The results illustrate that the proposed SoDA outperformed
the compared incremental FDA methods. For instance, when using
JLH, SoDA outperformed Pang’s IFDA and achieved 75.27%,
64.81% and 66.18% rank-1 matching rate on Market, SYSU
and ExMarket, respectively. We further report mAP and accumulative time in Table III, Table IV, Table V and Table VI.
It suggests that SoDA has better mAP values, especially on
Fig. 9. Effect of the sketch size on accumulative time consumption. (Best viewed in color).
TABLE VI
C OMPARISON WITH INCREMENTAL FDA MODELS AND ONLINE METHOD USING JLH.
Dataset
Market-1501
SYSU
ExMarket
rank-1
Accumulative
rank-1
Accumulative
rank-1
Accumulative
Method
matching rate (%) mAP (%) Time (s)
matching rate (%) mAP (%) Time (s)
matching rate (%) mAP (%) Time (s)
OL-IDM
14.43
2.48
356136.53
3.32
4.91
554908.70
10.84
0.70
> 1 week
IDR/QR
36.70
13.73
251934.68
15.80
12.82
220962.28
39.64
10.85 2479401.99
IFDA
61.19
30.36
203537.09
21.65
18.23
189960.96
56.24
23.46 2032679.17
Pang’s IFDA
71.64
45.15
204406.03
56.31
49.60
189897.24
64.64
34.80 2036601.02
SoDA
75.27
49.82
12952.07
64.81
59.82
9951.20
66.18
37.11
164475.67
TABLE IX
C OMPARISON WITH OFFLINE RE - ID MODELS ON E X M ARKET USING
JLH(%).
Method
CRAFT
MLAPG
KISSME
XQDA
SoDA
(a)
(b)
Fig. 10. Effect of the sketch size on rank-1 Matching Rate. (Best viewed in
color).
rank-1 rank-5 rank-10 rank-20
54.51 69.39 75.56 80.94
50.21 65.29 70.90 77.20
57.42 69.71 74.23 78.83
55.05 68.02 73.10 77.73
66.18 78.36 82.48 86.64
mAP
24.26
25.63
30.03
28.36
37.11
SYSU and spends much less time, where for instance SoDA
gains around 60% reduction on the cost of computation time,
as compared with Pang’s IFDA.
D. SoDA vs. Related Person re-id Models
TABLE VII
C OMPARISON WITH OFFLINE RE - ID MODELS ON M ARKET-1501 USING
JLH (%).
Method
CRAFT
MLAPG
KISSME
XQDA
SoDA
rank-1 rank-5 rank-10 rank-20
71.20 87.35 91.69 94.39
69.33 85.63 90.23 93.82
67.99 83.67 88.93 92.79
67.96 83.91 88.95 93.14
75.27 89.28 92.70 95.22
mAP
44.24
46.16
39.79
43.89
49.82
TABLE VIII
C OMPARISON WITH OFFLINE RE - ID MODELS ON SYSU USING JLH(%).
Method
CRAFT
MLAPG
KISSME
XQDA
SoDA
rank-1 rank-5 rank-10 rank-20
24.70 43.03 55.11 67.73
18.46 35.86 47.01 58.83
62.28 79.81 86.06 90.31
64.14 80.88 86.85 91.90
64.81 80.74 87.25 91.77
mAP
23.31
18.03
56.23
59.12
59.82
Comparison with online re-id model. We compared the
online re-id method OL-IDM [37] that addresses the same
setting as ours in this work. Table III, IV, V and VI tabulate
the comparison results. It is noteworthy that our SoDA obtains
much more stable results on rank-1 matching rate and mAP
performance. Moreover, SoDA is more efficient than OL-IDM,
taking 30 times less accumulative time.
Comparison with related subspace models and classical models. We also compared two related subspace models for
person re-identification: 1) CRAFT [5] ; 2) MLAPG [23],
and two classical methods: 1) KISSME [16] ; 2) XQDA [22],
when the JLH feature was applied on all datasets. All of these
methods were learned in an offline way, and the results of these
methods on all benchmarks using JLH features are presented
in Table VII, VIII and IX. Among all compared methods, the
rank-1 matching rate and mAP of SoDA are the highest, and
its accumulative time is the lowest. This indicates that SoDA
achieves better than or comparable performance to the related offline subspace person re-id models.
E. Further Evaluation of SoDA
We report the performance of SoDA in Figure 10 and Figure
9 when varying the key parameter ℓ.
Effect of the sketch size ` using low dimensional feature.
On all benchmarks, we conducted experiments using JSTL
feature (256−dimensional) for evaluating the effect of the
sketch size ` on low dimensional feature. The experimental
results in Figure 10(a) indicate that the performance of our
proposed SoDA can be improved when ` (i.e. the rank of
B) is larger. That is, the performance is better when more variations of the passed data are retained in the sketch matrix. It
is reasonable because when more data variations are reserved,
the estimated within-class covariance matrix from the sketch
matrix B can approximate the ground-truth one better. However, larger ` indeed increases the accumulative time since
the computation complexity and memory depend on ` when
the number of samples and the dimensionality of features are
determined (Sec. III-D). Fortunately, we empirically find that
good performance and low accumulative time can be achieved
at the same time when setting the rank of the sketch matrix
B to a properly small value, i.e. ` = d = 256.
Effect of the sketch size ℓ using high dimensional features. We also show the effect of ℓ when using high dimensional features, as some recently proposed state-of-the-art person re-id features are of high dimension, such as
LOMO (26960−dimensional), HIPHOP (84096−dimensional)
and also the JLH (111312−dimensional) formed in this work.
High dimensionality will increase the computational and space
complexities (e.g., the whole training data matrix of ExMarket
is a 112351 × 111312 matrix). Instead of conducting another
online learning for dimension reduction, SoDA utilizes a set of
orthogonal frequent directions maintained by the sketch matrix
B for reducing feature dimension. The experimental results
shown in Figure 10(b) and Figure 9 again verify that increasing
the sketch size ` can improve the performance of SoDA but
also increase the accumulative time due to extra computation
for dimension reduction. Also, on high dimensional feature,
setting ` to be a properly small value (e.g. ` = 1000) can
gain a good balance between good performance and low
accumulative computation time.
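To make the dimension-reduction step concrete, here is a minimal sketch; it is an assumption on our part that the reduction simply projects each feature vector onto the leading right singular directions of the sketch matrix B, as the exact projection is not spelled out at this point in the text.

import numpy as np

def reduce_with_sketch(B, X, k):
    """Project feature vectors (rows of X) onto the top-k orthogonal
    directions spanned by the (ell x d) sketch matrix B."""
    _, _, Vt = np.linalg.svd(B, full_matrices=False)  # rows of Vt: frequent directions
    return X @ Vt[:k].T                               # (N x d) -> (N x k)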
VI. C ONCLUSION
We contribute to developing a succinct and effective online person re-identification (re-id) method, namely SoDA. Compared with existing online person re-id models, SoDA performs one-pass online learning without any explicit storage of previously observed data samples, while preserving a small sketch matrix that describes the main variation of the passed data. Moreover, SoDA can be trained efficiently on streaming data with low computational cost and without requiring elaborate human feedback. Compared with related online FDA models, we take a novel approach by embedding sketch processing into FDA: we approximately estimate the within-class variation from a sketch matrix and derive SoDA for extracting discriminant components. More importantly, we have provided an in-depth theoretical analysis of how the sketch information affects the discriminant component analysis. Rigorous upper and lower bounds on how SoDA approaches its offline counterpart (i.e. the classical Fisher Discriminant Analysis) are given and proved. Extensive experimental results have clearly illustrated the effectiveness of SoDA and verified our theoretical analysis.
ACKNOWLEDGEMENT
This research was supported by the NSFC (No. 61472456,
No. 61573387, No. 61522115).
R EFERENCES
[1] E. Ahmed, M. Jones, and T. K. Marks. An improved deep learning
architecture for person re-identification. In CVPR, 2015.
[2] G. Chechik, V. Sharma, U. Shalit, and S. Bengio. Large scale online
learning of image similarity through ranking. JMLR, 11(Mar):1109–
1135, 2010.
[3] Y.-C. Chen, W.-S. Zheng, and J. Lai. Mirror representation for modeling
view-specific transform in person re-identification. In IJCAI, 2015.
[4] Y.-C. Chen, W.-S. Zheng, J.-H. Lai, and P. Yuen. An asymmetric distance model for cross-view feature mapping in person re-identification.
TCSVT, 2016.
[5] Y.-C. Chen, X. Zhu, W.-S. Zheng, and J.-H. Lai. Person re-identification
by camera correlation aware feature augmentation. TPAMI, 2017.
[6] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer.
Online passive-aggressive algorithms. JMLR, 7(Mar):551–585, 2006.
[7] M. Farenzena, L. Bazzani, A. Perina, V. Murino, and M. Cristani. Person
re-identification by symmetry-driven accumulation of local features. In
CVPR, 2010.
[8] S. Furao and O. Hasegawa. An incremental network for on-line
unsupervised classification and topology learning. NN, 19(1):90–106,
2006.
[9] D. Gray and H. Tao. Viewpoint invariant pedestrian recognition with an
ensemble of localized features. In ECCV, 2008.
[10] K. Hiraoka, K.-i. Hidai, M. Hamahira, H. Mizoguchi, T. Mishima,
and S. Yoshizawa. Successive learning of linear discriminant analysis:
Sanger-type algorithm. In ICPR, 2000.
[11] L.-K. Huang, Q. Yang, and W.-S. Zheng. Online hashing. TNNLS, 2017.
[12] P. Jain, B. Kulis, I. S. Dhillon, and K. Grauman. Online metric learning
and fast similarity search. In ANIPS, 2009.
[13] X.-Y. Jing, X. Zhu, F. Wu, X. You, Q. Liu, D. Yue, R. Hu, and B. Xu.
Super-resolution person re-identification with semi-coupled low-rank
discriminant dictionary learning. In CVPR, 2015.
[14] T.-K. Kim, J. Kittler, and R. Cipolla. On-line learning of mutually orthogonal subspaces for face recognition by image sets. TIP, 19(4):1067–
1074, 2010.
[15] T.-K. Kim, B. Stenger, J. Kittler, and R. Cipolla. Incremental linear
discriminant analysis using sufficient spanning sets and its applications.
IJCV, 91(2):216–232, 2011.
[16] M. Koestinger, M. Hirzer, P. Wohlhart, P. M. Roth, and H. Bischof.
Large scale metric learning from equivalence constraints. In CVPR,
2012.
[17] M. Koestinger, M. Hirzer, P. Wohlhart, P. M. Roth, and H. Bischof.
Large scale metric learning from equivalence constraints. In CVPR,
2012.
[18] H. Li, Y. Li, and F. Porikli. Deeptrack: Learning discriminative feature
representations online for robust visual tracking. TIP, 25(4):1834–1848,
2016.
[19] X. Li, C. Shen, A. Dick, Z. M. Zhang, and Y. Zhuang. Online metricweighted linear representations for robust visual tracking. TPAMI,
38(5):931–950, 2016.
[20] X. Li, W.-S. Zheng, X. Wang, T. Xiang, and S. Gong. Multi-scale
learning for low-resolution person re-identification. In ICCV, 2015.
[21] J. Liang, Q. Hu, W. Wang, and Y. Han. Semisupervised online
multikernel similarity learning for image retrieval. TMM, 19(5):1077–
1089, 2017.
[22] S. Liao, Y. Hu, X. Zhu, and S. Z. Li. Person re-identification by local
maximal occurrence representation and metric learning. In CVPR, 2015.
[23] S. Liao and S. Z. Li. Efficient psd constrained asymmetric metric
learning for person re-identification. In ICCV, 2015.
[24] E. Liberty. Simple and deterministic matrix sketching. In SIGKDD,
KDD ’13, 2013.
[25] C. Liu, C. Change Loy, S. Gong, and G. Wang. Pop: Person reidentification post-rank optimisation. In ICCV, 2013.
[26] G.-F. Lu, J. Zou, and Y. Wang. Incremental complete lda for face
recognition. PR, 45(7):2510–2521, 2012.
[27] L. Ma, X. Yang, and D. Tao. Person re-identification over camera
networks using multi-task distance metric learning. TIP, 23(8):3656–
3670, 2014.
[28] N. Martinel, A. Das, C. Micheloni, and A. K. Roy-Chowdhury.
Re-identification in the function space of feature warps. TPAMI,
37(8):1656–1669, 2015.
[29] N. Martinel, A. Das, C. Micheloni, and A. K. Roy-Chowdhury. Temporal
model adaptation for person re-identification. In ECCV, 2016.
[30] A. Mignon and F. Jurie. Pcca: A new approach for distance learning
from sparse pairwise constraints. In CVPR, 2012.
[31] S. Paisitkriangkrai, C. Shen, and A. van den Hengel. Learning to rank
in person re-identification with metric ensembles. In CVPR, 2015.
[32] R. Panda, A. Bhuiyan, V. Murino, and A. K. Roy-Chowdhury. Unsupervised adaptive re-identification in open world dynamic camera networks.
In CVPR, 2017.
[33] S. Pang, S. Ozawa, and N. Kasabov. Incremental linear discriminant
analysis for classification of data streams. TSMCB, 35(5):905–914, 2005.
[34] Y. Peng, S. Pang, G. Chen, A. Sarrafzadeh, T. Ban, and D. Inoue. Chunk
incremental idr/qr lda learning. In IJCNN, 2013.
[35] B. Prosser, W.-S. Zheng, S. Gong, T. Xiang, and Q. Mary. Person re-identification by support vector ranking. In BMVC, 2010.
[36] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified
embedding for face recognition and clustering. In CVPR, 2015.
[37] Y. Sun, H. Liu, and Q. Sun. Online learning on incremental distance
metric for person re-identification. In RB, 2014.
[38] M. Uray, D. Skocaj, P. M. Roth, H. Bischof, and A. Leonardis.
Incremental lda learning by combining reconstructive and discriminative
approaches. In BMVC, 2007.
[39] H. Wang, S. Gong, X. Zhu, and T. Xiang. Human-in-the-loop person
re-identification. In ECCV, 2016.
[40] M. K. Warmuth and D. Kuzmin. Randomized online pca algorithms with
regret bounds that are logarithmic in the dimension. JMLR, 9(Oct):2287–
2320, 2008.
[41] A. R. Webb. Statistical pattern recognition. 2003.
[42] P. Wu, S. C. Hoi, P. Zhao, C. Miao, and Z.-Y. Liu. Online multi-modal
distance metric learning with application to image retrieval. TKDE,
28(2):454–467, 2016.
[43] T. Xiao, H. Li, W. Ouyang, and X. Wang. Learning deep feature
representations with domain guided dropout for person re-identification.
In CVPR, 2016.
[44] F. Xiong, M. Gou, O. Camps, and M. Sznaier. Person re-identification
using kernel-based metric learning methods. In ECCV, 2014.
[45] J. Yan, B. Zhang, S. Yan, Q. Yang, H. Li, Z. Chen, W. Xi, W. Fan, W.-Y.
Ma, and Q. Cheng. Immc: incremental maximum margin criterion. In
SIGKDD, 2004.
[46] J. Yang, A. F. Frangi, J.-y. Yang, D. Zhang, and Z. Jin. Kpca plus lda:
a complete kernel fisher discriminant framework for feature extraction
and recognition. TPAMI, 27(2):230–244, 2005.
[47] H. Yao, S. Zhang, D. Zhang, Y. Zhang, J. Li, Y. Wang, and Q. Tian.
Large-scale person re-identification as retrieval.
[48] J. Ye, Q. Li, H. Xiong, H. Park, R. Janardan, and V. Kumar. Idr/qr: an
incremental dimension reduction algorithm via qr decomposition. TKDE,
17(9):1208–1222, 2005.
[49] L. Zhang, T. Xiang, and S. Gong. Learning a discriminative null space
for person re-identification. In CVPR, 2016.
[50] L. Zheng, Z. Bie, Y. Sun, J. Wang, C. Su, S. Wang, and Q. Tian. Mars:
A video benchmark for large-scale person re-identification. In ECCV,
pages 868–884. Springer, 2016.
[51] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable
person re-identification: A benchmark. In ICCV, 2015.
[52] L. Zheng, S. Wang, L. Tian, F. He, Z. Liu, and Q. Tian. Query-adaptive
late fusion for image search and person re-identification. In CVPR, 2015.
[53] W.-S. Zheng, S. Gong, and T. Xiang. Person re-identification by
probabilistic relative distance comparison. In CVPR, 2011.
[54] W.-S. Zheng, X. Li, T. Xiang, S. Liao, J. Lai, and S. Gong. Partial
person re-identification. In ICCV, 2015.
APPENDIX
Matrix Sketch. The sketch technique we discuss in this work is related to the matrix sketch [24], which is pass-efficient: it reads streaming data at most a constant number of times. The sketch algorithm learns a set of frequent directions from an N × d matrix X ∈ R^{N×d} in a stream, where each row of X is a d-dimensional vector. It maintains a sketch matrix B ∈ R^{ℓ×d} containing ℓ (ℓ ≪ N) rows and guarantees that
    B^T B ⪯ X^T X  and  ‖X^T X − B^T B‖ ≤ 2‖X‖_F^2 / ℓ.    (25)
Such sketch processing is light in both processing time (bounded by O(dℓ^2)) and space (bounded by O(ℓd)).
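For readers who want to experiment with the sketch, the following is a minimal, self-contained rendering of Liberty's FrequentDirections [24]. The halving rule follows that paper, not necessarily the exact variant used inside SoDA, and it assumes ℓ ≤ d.

import numpy as np

def frequent_directions(X, ell):
    """One-pass sketch of an (N x d) matrix X into an (ell x d) matrix B."""
    n, d = X.shape
    B = np.zeros((ell, d))
    next_zero = 0                      # index of the next all-zero row of B
    for x in X:                        # a single pass over the stream of rows
        B[next_zero] = x
        next_zero += 1
        if next_zero == ell:           # sketch is full: shrink it
            _, s, Vt = np.linalg.svd(B, full_matrices=False)
            delta = s[ell // 2] ** 2   # squared median singular value
            s_shrunk = np.sqrt(np.maximum(s ** 2 - delta, 0.0))
            B = s_shrunk[:, None] * Vt # rows with small singular values become zero
            next_zero = ell // 2       # at least half of the rows are now zero
    return B

The shrinking step is what enforces the bound in (25) while keeping only ℓ rows in memory, so no past samples ever need to be stored.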
Wei-Hong Li is currently a postgraduate student majoring in Information and Communication Engineering in the School of Electronics and Information Technology at Sun Yat-sen University. He received the bachelor's degree in intelligence science and technology from Sun Yat-sen University in 2015. His research interests include person re-identification, object tracking, object detection and image-based modeling.
Homepage: https://weihonglee.github.io.
Zhuowei Zhong is a student from Sun Yat-sen University under the joint supervision program of the Chinese University of Hong Kong. He has now graduated and received a BSc degree in computer science. His research interest is in artificial intelligence, especially machine learning and constraint satisfaction problems.
Wei-Shi Zheng is currently a Professor with Sun Yat-sen University. He has joined the Microsoft Research Asia Young Faculty Visiting Programme. He has authored over 90 papers, including over 60 publications in main journals (TPAMI, TNN/TNNLS, TIP, TSMC-B, and PR) and top conferences (ICCV, CVPR, IJCAI, and AAAI). His recent research interests include person association and activity understanding in visual surveillance. He is a recipient of the Excellent Young Scientists Fund of the National Natural Science Foundation of China and of a Royal Society-Newton Advanced Fellowship, U.K.
Homepage: http://isee.sysu.edu.cn/~zhwshi.
Logical Methods in Computer Science
Vol. 5 (3:8) 2009, pp. 1–69
www.lmcs-online.org
Submitted Jan. 2, 2008; Published Sep. 11, 2009
FULL ABSTRACTION FOR NOMINAL GENERAL REFERENCES
NIKOS TZEVELEKOS
Oxford University Computing Laboratory
e-mail address: nikt@comlab.ox.ac.uk
Abstract. Game semantics has been used with considerable success in formulating fully
abstract semantics for languages with higher-order procedures and a wide range of computational effects. Recently, nominal games have been proposed for modelling functional
languages with names. These are ordinary, stateful games cast in the theory of nominal
sets developed by Pitts and Gabbay. Here we take nominal games one step further, by
developing a fully abstract semantics for a language with nominal general references.
Contents
1. Introduction
2. Theory of nominal sets
  2.1. Nominal sets
  2.2. Strong support
3. The language
  3.1. Definitions
  3.2. Categorical semantics
4. Nominal games
  4.1. The basic category G
  4.2. Arena and strategy orders in G
  4.3. Innocence: the category V
  4.4. Totality: the category Vt
  4.5. A monad, and some comonads
  4.6. Nominal games à la Laird
5. The nominal games model
  5.1. Solving the Store Equation
  5.2. Obtaining the νρ-model
  5.3. Adequacy
  5.4. Tidy strategies
  5.5. Observationality
  5.6. Definability and full-abstraction
  5.7. An equivalence established semantically
6. Conclusion
Appendix A. Deferred proofs
References

List of Figures
1. Typing rules.
2. Reduction rules.
3. The semantic translation.
4. The store arena and the type translation.
5. The store monad.
6. Strategies for update, dereferencing and fresh-name creation.
7. A dialogue in innocent store.
8. Store-H's, -Q's and -A's in arena T1.

1998 ACM Subject Classification: F.3.2.
Key words and phrases: game semantics, denotational semantics, monads and comonads, ν-calculus, ML.
Research financially supported by the Engineering and Physical Sciences Research Council, the Eugenides Foundation, the A. G. Leventis Foundation and Brasenose College.
DOI: 10.2168/LMCS-5(3:8)2009    © N. Tzevelekos    CC Creative Commons
1. Introduction
Functional languages constitute a programming paradigm built around the intuitive notion
of a computational function, that is, an effectively specified entity assigning values from a
codomain to elements of a domain in a pure manner : a pure function is not allowed to carry
any notion of state or side-effect. This simple notion reveals great computational power if
the domains considered are higher-order, i.e sets of functions: with the addition of recursive
constructs, higher-order functional computation becomes Turing complete (PCF [42, 37]).
In practice, though, functional programming languages usually contain impure features that
make programming simpler (computational effects), like references, exceptions, etc. While
not adding necessarily to its computational power, these effects affect the expressivity of a
language: two functions which seem to accomplish the same task may have different innerworkings which can be detected by use of effects (e.g. exceptions can distinguish constant
functions that do or do not evaluate their inputs). The study of denotational models for
effects allows us to better understand their expressive power and to categorise languages
with respect to their expressivity.
A computational effect present in most functional programming languages is that of
general references. General references are references which can store not only values of
ground type (integers, booleans, etc.) but also of higher-order type (procedures, higherorder functions) or references themselves. They constitute a very powerful and useful programming construct, allowing us not only the encoding of recursion (see example 3.4) but
also the simulation of a wide range of computational effects and programming paradigms
(e.g. object-oriented programming [3, section 2.3] or aspect-oriented programming [40]).
The denotational modelling of general references is quite demanding since, on top of phenomena of dynamic update and interference, one has to cope with the inherent cyclicity
of higher-order storage. In this paper we provide a fully abstract semantics for a language
with general references called the νρ-calculus.
The νρ-calculus is a functional language with dynamically allocated general references,
reference-equality tests and “good variables”, which faithfully reflects the practice of real
programming languages such as ML [27]. In particular, it extends the basic nominal language of Pitts and Stark [36], the ν-calculus, by using names for general references. That
is, names in νρ are atomic entities which can be (cf. [36]):
created with local scope, updated and dereferenced, tested for equality and
passed around via function application, but that is all.
The fully abstract model of νρ is the first such for a language with general references and
good variables.1
Fully abstract models for general references were given via game semantics in [3] and
via abstract categorical semantics (and games) in [20]. Neither approach used names. The
model of [3] is based on the idea of relaxing strategy conditions in order to model computational effects. In particular, it models references as variables of a read/write product type
and it uses strategies which violate visibility in order to use values assigned to references
previously in a play. The synchronisation of references is managed by cell strategies which
model fresh-reference creation. Because references are modelled by products, and in order
to produce a fully abstract semantics, the examined language needs to include bad variables,
which in turn yield unwanted behaviours affecting severely the expressivity of the language
1In fact, the νρ-calculus and its fully abstract model were first presented in [46], of which the present
paper is an extended and updated version.
and prohibit the use of equality tests for references.2 On the other hand, the approach
in [20] bypasses the bad-variables problem by not including types for references (variables
and references of the same type coincide). This contributes new intuitions on sequential
categorical behaviour (sequoidal category), but we think that it is somehow distanced from
the common notion of reference in functional programming.
The full-abstraction problem has also been tackled via trace semantics in [23]. The
language examined is a version of that in [3] without bad variables. The latter are not needed
since the modelling of references is achieved by names pointing to a store (which is analogous
to our approach). Of relevance is also the fully abstract trace model for a language with
nominal threads and nominal objects presented in [17]. An important difference between
trace models and game models is that the former are defined operationally (i.e. traces are
computed by using the operational semantics), whereas game models are defined in a purely
compositional manner. Nonetheless, trace models and game models have many similarities,
deriving mainly from their sequential-interactive representation of computation, and in
particular there are connections between [23] and the work herein that should be further
examined.
The approach. We model nominal computation in nominal games. These were introduced
independently in [2, 21] for producing fully abstract models of the ν-calculus and its extension with pointers respectively. Here we follow the formulation of [2] with rectifications
pertaining to the issue of unordered state (see remark 4.20).3 Thus, our nominal games
constitute a stateful (cf. Ong [34]) version of Honda-Yoshida call-by-value games [15] built
inside the universe of nominal sets of Gabbay and Pitts [12, 35].
A particularly elegant approach to the modelling of names is by use of nominal
sets [12, 35]. These are sets whose elements involve a finite number of atoms, and
which can be acted upon by finite atom-permutations. The expressivity thus obtained
is remarkable: in the realm (the category) of nominal sets, notions like atom-permutation,
atom-freshness and atom-abstraction are built inside the underlying structure. We therefore use nominal sets, with atoms playing the role of names, as a general foundation for
reasoning about names.
The essential feature of nominal games is the appearance of names explicitly in plays
as constants (i.e. as atoms), which allows us to directly model names and express namerelated notions (name-equality, name-privacy, scope-extrusion, etc.) in the games setting.
Thus nominal games can capture the essential features of nominal computation and, in
particular, they model the ν-calculus. From that model we can move to a model of νρ by
an appropriate effect-encapsulation procedure, that is, by use of a store-monad. A fully
abstract model is then achieved by enforcing appropriate store-discipline conditions on the
games.
2
By “bad variables” we mean read/write constructs of reference type which are not references. They are
necessary for obtaining definability and full-abstraction in [3] since read/write-product semantical objects
may not necessarily denote references.
3The nominal games of [2] use moves attached with finite sets of names. It turns out, however, that
this yields discrepancies, as unordered name-creation is incompatible with the deterministic behaviour of
strategies and, in fact, nominal games in [2] do not form a category. Here (and also in [46]), we recast
nominal games using moves attached with name-lists instead of name-sets. This allows us to restrict our
attention to strong nominal sets (v. definition 2.6), a restriction necessary for overcoming the complications
with determinacy.
The paper is structured as follows. In section 2 we briefly present nominal sets and
some of their basic properties. We finally introduce strong nominal sets, that is, nominal
sets with “ordered involvement” of names, and prove the strong support lemma. In section 3
we introduce the νρ-calculus and its operational semantics. We then introduce the notion
of a νρ-model, which provides abstract categorical conditions for modelling νρ in a setting
involving local-state comonads and a store-monad. We finally show definability and, by use
of a quotienting procedure, full-abstraction in a special class of νρ-models. In section 4
we introduce nominal games and show a series of results with the aim of constructing a
category Vt of total, innocent nominal strategies. In the end of the section we attempt a
comparison with the nominal games presented by Laird in [21, 24]. In section 5 we proceed
to construct a specific fully abstract νρ-model in the category Vt . The basic ingredients
for such a construction have already been obtained in the previous section, except for the
construction of the store-monad, which involves solving a recursive domain equation in Vt .
Once this has been achieved and the νρ-model has been obtained, we further restrict legal
strategies to tidy ones, i.e. to those that obey a specific store-related discipline; for these
strategies we show definability and full-abstraction. We conclude in section 6 with some
further directions.
The contributions of this paper are: a) the identification of strong nominal sets as the
adequate setting for nominal language semantics; b) the abstract categorical presentation
in a monadic-comonadic setting of models of a language with nominal general references;
c) the rectification of nominal games of [2] and their use in constructing a specific such
model; d) the introduction of a game-discipline (tidiness) to capture computation with
names-as-references, leading to a definable and hence fully abstract game model.
2. Theory of nominal sets
We give a short overview of nominal sets, which form the basis of all constructions presented
in this paper; our presentation generally follows [35]. Nominal sets are an inspiring paradigm
of the universality (and reusability) of good mathematics: invented in the 1920’s and 1930’s
by Fraenkel and Mostowski as a model of set theory with atoms (ZFA) for showing its
independence from the Axiom of Choice, they were reused in the late 1990’s by Gabbay
and Pitts [12] as the foundation of a general theory of syntax with binding constructs. The
central notion of nominal sets is that of atoms, which are to be seen as basic ‘particles’
present in elements of nominal sets, and of atom-permutations which can act upon those
elements. Moreover, there is an infinite supply of atoms, yet each element of a nominal
set ‘involves’ finitely many of them, that is, it has finite support with regard to atompermutations.
We will be expressing the intuitive notion of names by use of atoms, both in the abstract
syntax of the language and in its denotational semantics. Perhaps it is not clear to the reader
why nominal sets should be used — couldn’t we simply model names by natural numbers?
Indeed, numerals could be used for such semantical purposes (see e.g. [24]), but they would
constitute an overspecification: numerals carry a linear order and a bottom element, which
would need to be carefully nullified in the semantical definitions. Nominal sets factor out
this burden by providing the minimal solution to specifying names; in this sense, nominal
sets are the intended model for names.
2.1. Nominal sets. Let us fix a countably infinite family (Ai )i∈ω of pairwise disjoint,
countably infinite sets of atoms, and let us denote by PERM(Ai ) the group of finite permutations of Ai . Atoms are denoted by a, b, c and variants; permutations are denoted by π
and variants; id is the identity permutation and (a b) is the permutation swapping a and
b (and fixing all other atoms). We write A for the union of all the Ai ’s. We take
PERM(A) ≜ ⊕_{i∈ω} PERM(A_i)    (2.1)
to be the direct sum of the groups PERM(A_i), so PERM(A) is a group of finite permutations of A which act separately on each constituent A_i. In particular, each π ∈ PERM(A) is an ω-indexed list of permutations, π ∈ ∏_{i∈ω} PERM(A_i), such that (π)_i ≠ id_{A_i} holds for finitely many indices i. In fact, we will write (non-uniquely) each permutation π as a finite
composition
π = π1 ◦ · · · ◦ πn
such that each πi belongs to some PERM(Aji ) — note that ji ’s need not be distinct.
Definition 2.1. A nominal set X is a set |X| (usually denoted X) equipped with an action of PERM(A), that is, a function ◦ : PERM(A) × X → X such that, for any π, π′ ∈ PERM(A) and x ∈ X,
π ◦ (π ′ ◦ x) = (π ◦ π ′ ) ◦ x ,
id ◦ x = x .
Moreover, for any x ∈ X there exists a finite set S such that, for all permutations π,
(∀a ∈ S. π(a) = a) =⇒ π ◦ x = x .
N
For example, A with the action of permutations being simply permutation-application is
a nominal set. Moreover, any set can be trivially rendered into a nominal set of elements
with empty support.
Finite support is closed under intersection and hence there is a least finite support for
each element x of a nominal set; this we call the support of x and denote by S(x).
Proposition and Definition 2.2 ([12]). Let X be a nominal set and x ∈ X. For any
finite S ⊆ A, S supports x iff ∀a, a′ ∈ (A \ S). (a a′ ) ◦ x = x .
Moreover, if finite S, S′ ⊆ A support x then S ∩ S′ also supports x. Hence, we can define
    S(x) ≜ ⋂ { S ⊆_fin A | S supports x } ,
which can be expressed also as:
    S(x) = { a ∈ A | (a b) ◦ x ≠ x for infinitely many b } .
For example, for each a ∈ A, S(a) = {a}. We say that a is fresh for x, written a # x, if a ∉ S(x). x is called equivariant if it has empty support. It follows from the definition that
a # x ⇐⇒ for cofinitely many b. (a b) ◦ x = x .
(2.2)
There are several ways to obtain new nominal sets from given nominal sets X and Y :
• The disjoint union X ⊎Y with permutation-action inherited from X and Y is a nominal
set. This extends to infinite disjoint unions.
• The cartesian product X × Y with permutations acting componentwise is a nominal
set; if (x, y) ∈ X ×Y then S(x, y) = S(x) ∪ S(y).
• The fs-powerset Pfs (X), that is, the set of subsets of X which have finite support, with
permutations acting on subsets of X elementwise. In particular, X ′ ⊆ X is a nominal
subset of X if it has empty support, i.e. if for all x ∈ X ′ and permutation π, π ◦ x ∈ X ′ .
Apart from A, some standard nominal sets are the following.
• Using products and infinite unions we obtain the nominal set
    A# ≜ ⋃_n { a1 . . . an | ∀i, j ∈ 1..n. ai ∈ A ∧ (j ≠ i ⟹ aj ≠ ai) } ,    (2.3)
that is, the set of finite lists of distinct atoms. Such lists we denote by ā, b̄, c̄ and
variants.
• The fs-powerset Pfs (A) is the set of finite and cofinite sets of atoms, and has Pfin (A) as
a nominal subset (the set of finite sets of atoms).
For X and Y nominal sets, a relation R ⊆ X ×Y is a nominal relation if it is a nominal
subset of X×Y . Concretely, R is a nominal relation iff, for any permutation π and (x, y) ∈
X ×Y ,
xRy ⇐⇒ (π ◦ x)R(π ◦ y) .
For example, it is easy to show that # ⊆ A × X is a nominal relation. Extending this
reasoning to functions we obtain the notion of nominal functions.
Definition 2.3 (The category Nom). We let Nom be the category of nominal sets
and nominal functions, where a function f : X → Y between nominal sets is nominal if
f (π ◦ x) = π ◦ f (x) for any π ∈ PERM(A) and x ∈ X.
N
For example, the support function, S( ) : X → Pfin (A) , is a nominal function since
S(π ◦ x) = π ◦ S(x) .
Nom inherits rich structure from Set and is in particular a topos. More importantly, it
contains atom-abstraction mechanisms; we will concentrate on the following.
Definition 2.4 (Nominal abstraction). Let X be a nominal set and x ∈ X. For any
finite S ⊆ A, we can abstract x to S, by forming
[x]S , { y ∈ X | ∃π. (∀a ∈ S ∩ S(x). π(a) = a) ∧ y = π ◦ x } .
N
The abstraction restricts the support of x to S ∩ S(x) by appropriate orbiting of x (note
that [x]S ∈ Pfs (X)). In particular, we can show the following.
Lemma 2.5 ([48]). For any x ∈ X, S ⊆fin A and π ∈ PERM(A),
π ◦ [x]S = [π ◦ x]π ◦ S ∧ S([x]S ) = S(x) ∩ S .
Two particular subcases of nominal abstraction are of interest. Firstly, in case S ⊆ S(x)
the abstraction becomes
[x]S = { y ∈ X | ∃π. (∀a ∈ S. π(a) = a) ∧ y = π ◦ x } .
(∗)
This is the mechanism used in [46]. Note that if S ⊈ S(x) ∧ S(x) ⊈ S then (∗) does not yield
S([x]S ) = S ∩ S(x). The other case is the simplest possible, that is, of S being empty; it
turns out that this last constructor is all we need from nominal abstractions in this paper.
We define:
[x] , { y ∈ X | ∃π. y = π ◦ x } .
(2.4)
2.2. Strong support. Modelling local state in sets of atoms yields a notion of unordered
state, which is inadequate for our intended semantics. Nominal game semantics is defined
by means of nominal strategies for games that model computation. These strategies, however, are deterministic up to choice of fresh names, a feature which is in direct conflict to
unordered state. For example, in unordered state the consecutive creation of two atoms
a, b is modelled by adding the set {a, b} to the local state; on the other hand, by allowing
strategies to play such moves we lose determinism in strategies.4
Ordered state is therefore more appropriate for our semantical purposes and so we
restrict our attention to nominal sets with ordered presence of atoms in their elements.
This notion is described as strong support.5
Definition 2.6. For any nominal set X, any x ∈ X and any S ⊆ A, S strongly supports
x if, for any permutation π,
(∀a ∈ S. π(a) = a) ⇐⇒ π ◦ x = x .
We say that X is a strong nominal set if it is a nominal set with all its elements having
strong support.
N
Compare the last assertion above with that of definition 2.1, which employs only the leftto-right implication. In fact, strong support coincides with weak support when the former
exists.
Proposition 2.7. If X is a nominal set and x ∈ X has strong support S then S = S(x).
Proof: By definition, S supports x, so S(x) ⊆ S. Now suppose there exists a ∈ S \ S(x).
For any fresh b, (a b) fixes S(x) but not S, so it doesn't fix x — a contradiction.
Thus, for example, the set {a, b} ⊆ Ai of the previous paragraph does not have strong
support, since the permutation (a b) does not fix the atoms in its support (the set {a, b})
but still (a b) ◦ {a, b} = {a, b}. On the other hand, {a, b} strongly supports the list ab. In
fact, all lists of (distinct) atoms have strong support and therefore A# is a strong nominal
set (but Pfin (A) is not).
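To make the weak/strong support contrast concrete, here is a tiny, throwaway Python check; atoms are modelled as plain strings purely for experimentation (the paper deliberately keeps atoms abstract, so this representation choice carries no formal weight).

# Transpositions acting on finite structures built from atoms.
def swap(a, b, x):
    """Apply the transposition (a b) to a finite structure x."""
    if isinstance(x, str):                      # an atom
        return b if x == a else a if x == b else x
    if isinstance(x, tuple):                    # ordered data, e.g. atom-lists
        return tuple(swap(a, b, y) for y in x)
    if isinstance(x, frozenset):                # unordered data, e.g. atom-sets
        return frozenset(swap(a, b, y) for y in x)
    return x

atom_set  = frozenset({"a", "b"})
atom_list = ("a", "b")

# (a b) moves both atoms yet fixes the *set* {a, b}, so the extra implication
# demanded by strong support (Definition 2.6) fails for it ...
assert swap("a", "b", atom_set) == atom_set
# ... whereas the *list* ab is moved by any permutation touching a or b:
assert swap("a", "b", atom_list) != atom_list
assert swap("a", "c", atom_list) != atom_list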
The main reason for introducing strong nominal sets is the following result, which is a
specialised version of the Strong Support Lemma of [48] (with S = ∅).
Lemma 2.8 (Strong Support Lemma). Let X be a strong nominal set and let
x1 , x2 , y1 , y2 , z1 , z2 ∈ X. Suppose also that S(yi ) ∩ S(zi ) ⊆ S(xi ) , for i = 1, 2, and that
there exist πy , πz such that
πy ◦ x1 = πz ◦ x1 = x2 ,
πy ◦ y 1 = y 2 ,
πz ◦ z1 = z2 .
Then, there exists some π such that π ◦ x1 = x2 , π ◦ y1 = y2 and π ◦ z1 = z2 .
Proof: Let ∆i , S(zi ) \ S(xi ) , i = 1, 2 , so ∆2 = πz ◦ ∆1 , and let π ′ , πy−1 ◦ πz . By
assumption, π ′ ◦ x1 = x1 , and therefore by strong support π ′ (a) = a for all a ∈ S(x1 ).
Take any b ∈ ∆1 . Then π ′ (b) # π ′ ◦ x1 = x1 and πz (b) ∈ πz ◦ ∆1 = ∆2 , ∴ πz (b) # y2 ,
∴ π ′ (b) # πy−1 ◦ y2 = y1 . Hence,
b ∈ ∆1 =⇒ b, π ′ (b) # x1 , y1 .
4The problematic behaviour of nominal games in weak support is discussed again in remark 4.20.
5An even stricter notion of support is linear support, introduced in [31]: a nominal set X is called linear
if for each x ∈ X there is a linear order <_x of S(x) such that a <_x b ⟹ π(a) <_{π ◦ x} π(b).
Now assume ∆1 = {b1 , ..., bN } and define π1 , ..., πN by recursion:
π0 , id ,
πi+1 , (bi+1 πi ◦ π ′ ◦ bi+1 ) ◦ πi .
We claim that, for each 0 ≤ i ≤ N and 1 ≤ j ≤ i, we have
π i ◦ π ′ ◦ bj = bj ,
πi ◦ x1 = x1 ,
πi ◦ y 1 = y 1 .
We do induction on i; the case of i = 0 is trivial. For the inductive step, if πi ◦ π ′ ◦ bi+1 = bi+1
then πi+1 = πi , and πi+1 ◦ π ′ ◦ bi+1 = πi ◦ π ′ ◦ bi+1 = bi+1 . Moreover, by IH, πi+1 ◦ π ′ ◦ bj =
bj for all 1 ≤ j ≤ i, and πi+1 ◦ x1 = x1 and πi+1 ◦ y1 = y1 . If πi ◦ π ′ ◦ bi+1 = b′i+1 6= bi+1 then,
by construction, πi+1 ◦ π ′ ◦ bi+1 = bi+1 . Moreover, for each 1 ≤ j ≤ i, by IH, πi+1 ◦ π ′ ◦ bj =
(bi+1 b′i+1 ) ◦ bj , and the latter equals bj since bi+1 6= bj implies b′i+1 6= πi ◦ π ′ ◦ bj = bj .
Finally, for any a ∈ S(x1 ) ∪ S(y1 ), πi+1 ◦ a = (bi+1 b′i+1 ) ◦ πi ◦ a = (bi+1 b′i+1 ) ◦ a, by IH,
with a 6= bi+1 . But the latter equals a since π ′ (bi+1 ) 6= a implies that b′i+1 6= πi ◦ a = a, as
required.
Hence, for each 1 ≤ j ≤ N ,
π N ◦ π ′ ◦ bj = bj ,
πN ◦ x1 = x1 ,
πN ◦ y 1 = y 1 .
Moreover, πN ◦ π′ ◦ z1 = z1, as we also have
    b ∈ S(z1) ∩ S(x1) ⟹ πN ◦ π′ ◦ b = πN ◦ b = b
(again by strong support). Thus, considering π ≜ πy ◦ πN^{-1}, we have:
    πy ◦ πN^{-1} ◦ x1 = πy ◦ x1 = x2 ,
    πy ◦ πN^{-1} ◦ y1 = πy ◦ y1 = y2 ,
    πy ◦ πN^{-1} ◦ z1 = πy ◦ πN^{-1} ◦ πN ◦ π′ ◦ z1 = πy ◦ π′ ◦ z1 = πy ◦ πy^{-1} ◦ πz ◦ z1 = z2 ,
as required.
A more enlightening formulation of the lemma can be given in terms of abstractions, as in
the following table. In the context of nominal games later on, the strong support lemma
will guarantee us that composition of abstractions of plays can be reduced to composition
of plays.
Strong Support Lemma.
Let X be a strong nominal set and x1 , x2 , y1 , y2 , z1 , z2 ∈ X. Suppose also that
S(yi ) ∩ S(zi ) ⊆ S(xi ) , for i = 1, 2, and moreover that
[x1 , y1 ] = [x2 , y2 ] ,
[x1 , z1 ] = [x2 , z2 ] .
Then, [x1 , y1 , z1 ] = [x2 , y2 , z2 ].
3. The language
The language we examine, the νρ-calculus, is a call-by-value λ-calculus with nominal general
references. It constitutes an extension of the ν-calculus [36] and Reduced ML [44, chapter
5] in which names are used for general references. It is essentially the same calculus of [23],
that is, the mkvar-free fragment of the language of [3] extended with reference-equality tests
and names.
ā | Γ |− n : N        ā | Γ |− skip : 1        ā | Γ, x : A |− x : A        ā | Γ |− a : [A]   (for a ∈ AA and a ∈ ā)

from ā | Γ |− M : A × B derive ā | Γ |− fst M : A, and likewise ā | Γ |− snd M : B
from ā | Γ |− M : N derive ā | Γ |− pred M : N, and likewise ā | Γ |− succ M : N
from ā | Γ |− M : A and ā | Γ |− N : B derive ā | Γ |− ⟨M, N⟩ : A × B
from ā | Γ |− M : N and ā | Γ |− Ni : A (i = 1, 2) derive ā | Γ |− if0 M then N1 else N2 : A
from ā | Γ, x : A |− M : B derive ā | Γ |− λx.M : A → B
from ā | Γ |− M : A → B and ā | Γ |− N : A derive ā | Γ |− M N : B
from āa | Γ |− M : B derive ā | Γ |− νa.M : B
from ā | Γ |− M : [A] and ā | Γ |− N : A derive ā | Γ |− M := N : 1
from ā | Γ |− M : [A] and ā | Γ |− N : [A] derive ā | Γ |− [M = N] : N
from ā | Γ |− M : [A] derive ā | Γ |− ! M : A

Figure 1: Typing rules.
3.1. Definitions. The syntax of the language is built inside Nom. In particular, we assume
there is a set of names (atoms) AA ∈ (Ai )i∈ω for each type A in the language. Types include
types for commands, naturals and references, product types and arrow types.
Definition 3.1. The νρ-calculus is a typed functional language of nominal references. Its
types, terms and values are given as follows.
TY ∋ A, B ::= 1 | N | [A] | A → B | A × B
TE ∋ M, N ::= x | λx.M | M N | ⟨M, N⟩ | fst M | snd N        (λ-calculus)
            | n | pred M | succ N                             (arithmetic)
            | skip | if0 M then N1 else N2                    (return / if-then-else)
            | a                                               (reference to type A, a ∈ AA)
            | [M = N]                                         (name-equality test)
            | νa.M                                            (ν-abstraction)
            | M := N                                          (update)
            | !M                                              (dereferencing)
VA ∋ V, W ::= n | skip | a | x | λx.M | ⟨V, W⟩
The typing system involves terms in environments ā | Γ, where ā is a list of (distinct) names and Γ is a finite set of variable-type pairs. Typing rules are given in figure 1.
N
The ν-constructor is a name-binder : an occurrence of a name a inside a term M is bound
if it is in the scope of some νa . We follow the standard convention of equating terms up to
α-equivalence, the latter defined with respect to both variable- and name-binding.
Note that TE and VA are strong nominal sets: each name a of type A is taken from
AA and all terms contain finitely many atoms — be they free or bound — which form their
support. Note also the notion of ordered state that is imposed by use of name-lists (instead
of name-sets) in type-environments. In fact, we could have used unordered state at the level
of syntax (and operational semantics) of νρ, and ordered state at the level of denotational
semantics. This already happens with contexts: a context Γ is a set of premises, but JΓK is
an (ordered) product of type-translations. Nevertheless, we think that ordered state does
not add much complication while it saves us from some informality.
The operational semantics of the calculus involves computation in some store environment where created names have their values stored. Formally, we define store environments
S to be lists of the form:
S ::= ǫ | a, S | a :: V, S .
(3.1)
Observe that the store may include names that have been created but remain as yet unassigned a value. For each store environment S we define its domain to be the name-list given
by:
dom(ǫ) , ǫ , dom(a, S) , a, dom(S) , dom(a :: V, S) , a, dom(S) .
(3.2)
We only consider environments whose domains are lists of distinct names. We write
S |=Γ,A M , or simply S |= M , only if dom(S) | Γ |− M : A is valid (i.e., derivable).
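As a purely illustrative aside (not part of the formal development), store environments and their domains can be rendered as ordinary list-like data; names are plain strings here, and the update and dereference operations anticipate the UPD and DRF rules of figure 2 below.

from typing import Any, List, Optional, Tuple

Store = List[Tuple[str, Optional[Any]]]   # [(name, stored value or None), ...]

def dom(S: Store) -> List[str]:
    """dom of (3.2): the (distinct) names created so far, in order."""
    return [a for a, _ in S]

def update(S: Store, a: str, v: Any) -> Store:
    """UPD: S, a(:: W), S' |= a := V  -->  S, a :: V, S' |= skip."""
    return [(b, v if b == a else w) for b, w in S]

def deref(S: Store, a: str) -> Any:
    """DRF: look up the value currently stored at a."""
    return dict(S)[a]

S: Store = []
S = S + [("a", None)]       # NEW: nu a.M extends the store with a fresh, unassigned a
S = update(S, "a", 41)
assert dom(S) == ["a"] and deref(S, "a") == 41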
Definition 3.2. The operational semantics is given in terms of a small-step reduction, the
rules of which are given in figure 2. Evaluation contexts E[ ] are of the form:
E[−] ::= [− = N] , [a = −] , !− , − := N , a := − , − N , (λx.N) − ,
         fst − , snd − , pred − , succ − , if0 − then N1 else N2 , ⟨−, N⟩ , ⟨V, −⟩
N
NEW:  S |= νa.M −→ S, a |= M    (a # S)
EQ:   S |= [a = b] −→ S |= n    (n = 0 if a = b, n = 1 if a ≠ b)
IF0:  S |= if0 n then N1 else N2 −→ S |= Nj    (j = 1 if n = 0, j = 2 if n > 0)
UPD:  S, a(:: W), S′ |= a := V −→ S, a :: V, S′ |= skip
DRF:  S, a :: V, S′ |= ! a −→ S, a :: V, S′ |= V
LAM:  S |= (λx.M) V −→ S |= M{V/x}
PRD:  S |= pred (n+1) −→ S |= n        S |= pred 0 −→ S |= 0
SUC:  S |= succ n −→ S |= n+1
FST:  S |= fst ⟨V, W⟩ −→ S |= V
SND:  S |= snd ⟨V, W⟩ −→ S |= W
CTX:  if S |= M −→ S′ |= M′ then S |= E[M] −→ S′ |= E[M′]

Figure 2: Reduction rules.

We can see that νρ is not strongly normalising with the following example. Recall the
standard CBV encoding of sequencing:
M ; N , (λz.N )M
(3.3)
with z not free in N .
Example 3.3. For each type A, take
stopA , νb.(b := λx.(! b)skip) ;(! b)skip
with b ∈ A1→A . We can see that stopA diverges, since:
|= stopA −→
→ b :: λx.(! b)skip |= (! b)skip −→ b :: λx.(! b)skip |= (λx.(! b)skip)skip
−→ b :: λx.(! b)skip |= (! b)skip .
The great expressive power of general references is seen in the fact that we can encode the
Y combinator. The following example is adapted from [3].
Example 3.4. Taking a ∈ AA→A , define:
YA , λf.νa.(a := λx.f (! a)x) ; ! a .
YA has type ((A → A) → A → A) → A → A and, for any relevant term M and value V ,
we have
|= (YA (λy.M ))V −→
→ a :: λx.(λy.M )(! a)x |= (! a)V
−→ a :: λx.(λy.M )(! a)x |= (λx.(λy.M )(! a)x)V
−→ a :: λx.(λy.M )(! a)x |= (λy.M )(! a)V ,
and also |= (λy.M )(YA (λy.M ))V −→
→ a :: λx.(λy.M )(! a)x |= (λy.M )(! a)V .
For example, setting
addrecx , λx. if0 snd x then x else xhsucc fst x, pred snd xi ,
add , Y(λh.addrech ) ,
S , a :: λx.(λh.addrech )(! a)x ,
where x is a metavariable of relevant type, we have that, for any n, m ∈ N,
|= addhn, mi −→
→ S |= (λh.addrech )(! a)hn, mi −→
→ S |= addrecS(a) hn, mi
−→
→ S |= if0 m then hn, mi else S(a)hsucc fst hn, mi, pred snd hn, mii
−→
→ S |= S(a)hn+1, m−1i −→ S |= (λh.addrech )(! a)hn+1, m−1i
· · · −→
→ S |= (λh.addrech )(! a)hn+m, 0i −→
→ S |= hn+m, 0i .
The notions of observational approximation and observational equivalence are built
around the observable type N. Two terms are equivalent if, whenever they are put inside
a variable- and name-closing context of resulting type N, called a program context, they
reduce to the same natural number. The formal definition follows; note that we usually
omit ā and Γ and write simply M / N .
Definition 3.5. For typed terms ā | Γ |− M : A and ā | Γ |− N : A , define
ā | Γ |− M / N ⇐⇒ ∀C. (∃S′. |= C[M] −→→ S′ |= 0) =⇒ (∃S′′. |= C[N] −→→ S′′ |= 0)
where C is a program context. Moreover, ≅ , / ∩ ' .
N
3.2. Categorical semantics. We now examine sufficient conditions for a fully abstract
semantics of νρ in an abstract categorical setting. Our aim is to construct fully abstract
models in an appropriate categorical setting, pinpointing the parts of structure needed for
such a task. In section 5 we will apply this knowledge in constructing a concrete such model
in nominal games.
Translating each term M into a semantical entity JM K and assuming a preorder “.”
in the semantics, full-abstraction amounts to the assertion:
M / N ⇐⇒ JM K . JN K
(FA)
Note that this formulation is weaker than equational full abstraction, which is given by:
M ≅ N ⇐⇒ JM K = JN K .
(EFA)
Nevertheless, once we achieve (FA) we can construct an extensional model, via a quotienting construction, for which EFA holds. Being a quotiented structure, the extensional
model does not have an explicit, simple description, and for this reason we prefer working
with the intensional model (i.e., the unquotiented one). Of course, an intensional model
satisfying (EFA) would be preferred but this cannot be achieved in our nominal games.
Therefore, our categorical models will be guided by the (FA) formulation.
3.2.1. Monads and comonads. The abstract categorical semantics we put forward is based
on the notions of monads and comonads. These are standard categorical notions (v. [25],
and [8, Triples]) which have been used extensively in denotational semantics of programming
languages. We present here some basic definitions and properties.
Monads. Monads were introduced in denotational semantics through the work of Moggi [29,
30] as a generic tool for encapsulating computational effects. Wadler [49] popularised monads in programming as a means of simulating effects in functional programs, and nowadays
monads form part and parcel of the Haskell programming language [18].
Definition 3.6. A strong monad over a category C with finite products is a quadruple
(T, η, µ, τ), where T is an endofunctor in C and η : Id_C → T, µ : T² → T and τ : (−) × T(−) → T((−) × (−)) are natural transformations such that the following diagrams commute; equivalently, writing α and the unnamed ≅'s for the canonical associativity and unit isomorphisms:
    Tµ_A ; µ_A = µ_{TA} ; µ_A ,        η_{TA} ; µ_A = id_{TA} = Tη_A ; µ_A ,
    (id_A × η_B) ; τ_{A,B} = η_{A×B} ,        (id_A × µ_B) ; τ_{A,B} = τ_{A,TB} ; Tτ_{A,B} ; µ_{A×B} ,
    τ_{1,A} ; T(≅) = ≅ : 1 × TA → TA ,
    τ_{A×B,C} ; T(α) = α ; (id_A × τ_{B,C}) ; τ_{A,B×C} : (A × B) × TC → T(A × (B × C)) .
We say that C has T -exponentials if, for every pair B, C of objects, there exists an object
TC^B such that for any object A there exists a bijection
    Λ^T_{A,B,C} : C(A × B, TC) ≅ C(A, TC^B)
natural in A.
N
Given a strong monad (T, η, µ, τ ), we can define the following transformations.
    τ′_{A,B} ≜ (≅) ; τ_{B,A} ; T(≅) : TA × B → T(A × B) ,
    ψ_{A,B} ≜ τ′_{A,TB} ; Tτ_{A,B} ; µ_{A×B} : TA × TB → T(A × B) ,        (3.4)
    ψ′_{A,B} ≜ τ_{TA,B} ; Tτ′_{A,B} ; µ_{A×B} : TA × TB → T(A × B) .
Moreover, T-exponentials supply us with T-evaluation arrows, that is,
    ev^T_{B,C} ≜ (Λ^T)⁻¹(id_{TC^B}) : TC^B × B → TC ,        (3.5)
so that, for each f : A × B → TC,
    f = (Λ^T(f) × B) ; ev^T_{B,C} .
In fact, T -exponentiation upgrades to a functor (T )− : C op × C → C which takes each
f : A′ → A and g : B ′ → B to
    Tg^f : TB′^A → TB^A ≜ Λ^T( TB′^A × A′ −id×f→ TB′^A × A −ev^T→ TB′ −Tg→ TB ) .        (3.6)
Naturality of ΛTA,B,C in A implies its naturality in B, C too, by use of the above construct.
Comonads. Comonads are the dual notion of monads. They were first used in denotational semantics by Brookes and Geva [9] for modelling programs intensionally, that is, as
mechanisms which receive external computation data and decide on an output. Monadiccomonadic approaches were examined by Brookes and van Stone [10].
Definition 3.7. A comonad on a category C is a triple (Q, ε, δ), where Q is an endofunctor
in C and ε : Q → IdC , δ : Q → Q2 are natural transformations such that the following
diagrams commute; equivalently:
    δ_A ; Qδ_A = δ_A ; δ_{QA} ,        δ_A ; ε_{QA} = id_{QA} = δ_A ; Qε_A .
Now assume C has binary products. We define a transformation ζ̄ : Q((−) × (−)) → (−) × Q(−),
    ζ̄_{A,B} ≜ Q(A × B) −⟨Qπ1, Qπ2⟩→ QA × QB −ε_A × id_{QB}→ A × QB .
Q is called a product comonad if ζ̄ is a natural isomorphism, and is written (Q, ε, δ, ζ)
where ζ is the inverse of ζ̄.
N
It is easy to see that the transformation ζ̄ makes the relevant (dualised) diagrams of definition 3.6 commute, even without stipulating the existence of the inverse ζ. Note that we
write ζ ′ , ζ̄ ′ for the symmetric counterparts of ζ, ζ̄.
Product comonads are a stronger version of “strong comonads” of [10]. A product
comonad Q can be written as:
    Q ≅ Q1 × (−) ,
hence the name. We say that Q1 is the basis of the comonad.⁶
    ⁶Note this is an isomorphism between comonads, not merely between functors.
Monadic-comonadic setting. In the presence of both a strong monad (T, η, µ, τ ) and a product comonad (Q, ε, δ, ζ) in a cartesian category C, one may want to solely consider arrows
from some initial computation data (i.e., some initial state) of type A to some computation
of type B, that is, arrows of type:
QA → T B
T
This amounts to applying the biKleisli construction on C, that is, defining the category CQ
with the same objects as C, and arrows
T
CQ
(A, B) , C(QA, T B) .
For arrow composition to work in the biKleisli category, we need a distributive law between
Q and T , that is, a natural transformation ℓ : QT → T Q making the following diagrams
commute; equivalently:
    Qη_A ; ℓ_A = η_{QA} ,        ℓ_A ; Tε_A = ε_{TA} ,
    Qµ_A ; ℓ_A = ℓ_{TA} ; Tℓ_A ; µ_{QA} ,        ℓ_A ; Tδ_A = δ_{TA} ; Qℓ_A ; ℓ_{QA} .
In this case, composition of f : QA → T B and g : QB → T C is performed as:
    QA −δ→ Q²A −Qf→ QTB −ℓ_B→ TQB −Tg→ T²C −µ_C→ TC
Since we are examining a monadic-comonadic setting for strong monad T and product
comonad Q, a distributive law amounts to a natural transformation
    ℓ : Q1 × T(−) → T(Q1 × (−)) ,
which is therefore given for free: take ℓ ≜ τ_{Q1,(−)}. The distributivity equations follow
straightforwardly from the monadic equations.
Exponentials and the intrinsic preorder. The notion of T -exponentials can be generalised to
the monadic-comonadic setting as follows.
Definition 3.8. Let C be a category with finite products and let (T, η, µ, τ ), (Q, ε, δ) be a
strong monad and comonad, respectively, on C. We say that C has (Q, T )-exponentials
if, for each pair B, C of objects in C there exists an object (Q, T)C^B such that, for each object A, there exists a bijection
    φ_{A,B,C} : C(Q(A × B), TC) ≅ C(QA, (Q, T)C^B)
natural in A.
N
Assume now we are in a monadic-comonadic setting (C, Q, T ) with T a strong monad with
T -exponentials and Q a product comonad. (Q, T )-exponentials then come for free.
Proposition 3.9. In the setting of the previous definition, if T is a strong monad with
exponentials and Q is a product comonad then C has (Q, T )-exponentials defined by:
    (Q, T)C^B ≜ TC^B ,        φ(f) ≜ Λ^T( QA × B −ζ′→ Q(A × B) −f→ TC ) .
φ is a bijection with its inverse sending each g : QA → T C B to the arrow:
    Q(A × B) −ζ̄′→ QA × B −g×id→ TC^B × B −ev^T→ TC .
In the same setting, we can define a notion of intrinsic preorder . Assuming an object
O of observables and a collection O ⊆ C(1, T O) of observable arrows, we can have the
following.
Definition 3.10. Let C, Q, T, O, O be as above. We define . to be the union, over all
objects A, B, of relations . A,B ⊆ C(QA, T B)2 defined by:
    f .A,B g ⇐⇒ ∀ρ ∈ C(Q(TB^A), TO). Λ^{Q,T}(f) ; ρ ∈ O =⇒ Λ^{Q,T}(g) ; ρ ∈ O ,
where Λ^{Q,T}(f) ≜ Q1 −δ→ Q²1 −QΛ^T(ζ′ ; f)→ Q(TB^A) .
N
We have the following enrichment properties.
Proposition 3.11. Let C, Q, T, O, O and . be as above. Then, for any f, g : QA → T B
and any arrow h, if f . g then:
• if h : QB → T B ′ then δ ; Qf ; ℓ ; T h ; µ . δ ; Qg ; ℓ ; T h ; µ ,
• if h : QA′ → T A then δ ; Qh ; ℓ ; T f ; µ . δ ; Qh ; ℓ ; T g ; µ ,
• if h : QA → TC then ⟨f, h⟩ ; ψ . ⟨g, h⟩ ; ψ and ⟨h, f⟩ ; ψ . ⟨h, g⟩ ; ψ ,
• if A = A1 × A2 then Λ^T_{QA1,A2,B}(ζ′ ; f) ; η . Λ^T_{QA1,A2,B}(ζ′ ; g) ; η .
3.2.2. Soundness. We proceed to present categorical models of the νρ-calculus. The approach we take is a monadic and comonadic one, over a computational monad T and
a family of local-state comonads Q = (Qā )ā∈A# , so that the morphism related to each
ā | Γ |− M : A be of the form JM K : Qā JΓK → T JAK. Computation in νρ is store-update and
fresh-name creation, so T is a store monad, while initial state is given by product comonads.
Definition 3.12. A νρ-model M is a structure (M, T, Q) such that:
I. M is a category with finite products, with 1 being the terminal object and A × B the
product of A and B.
II. T is a strong monad (T, η, µ, τ ) with exponentials.
III. M contains an appropriate natural numbers object N equipped with successor and
predecessor arrows and ñ : 1 → N, each n ∈ N. Moreover, for each object A, there is
an arrow cndA : N × T A × T A → T A for zero-equality tests.
IV. Q is a family of product comonads (Qā , ε, δ, ζ)ā∈A# on M such that:
′
(a) the basis of Qǫ is 1, and Qā = Qā whenever [ā] = [ā′ ] (i.e., whenever π ◦ ā = ā′ ),
′
(b) if S(ā′) ⊆ S(ā) then there exists a comonad morphism ā/ā′ : Qā → Qā′ such that ā/ǫ = ε, ā/ā = id and, whenever S(ā′) ⊆ S(ā″) ⊆ S(ā),
        ā/ā″ ; ā″/ā′ = ā/ā′ .
(c) for each āa ∈ A# there exists a natural transformation nuāa : Qā → T Qāa such
that, for each A, B ∈ Ob(M) and āa, ā′a with S(āa) ⊆ S(ā′a), the following diagrams commute.
ā′
Q A
ā′
ā
hid,nuāa i
/ Qā A
′
τ
nuāa
nuāa
′
T Qā a A
ā′a
T āa
/ T Qāa A
/ Qā (A × B)
A × T Qāa B
/ T (Qā A × Qāa A)
āa
T h ā ,idi
(N2)
nuA×B
id×nuB
ζ
A × Qā B
/ Qā A × T Qāa A
/ T Qāa (A × B)
τ ;Tζ
V. Setting AA , Qa 1, for each a ∈ AA , there is a name-equality arrow eqA : AA × AA → N
such that, for any distinct a, b ∈ AA , the following diagram commutes.
∆
Qa 1
/
ab ab
ha, b i
AA × AA o
Qab 1
eqA
!
0̃
1
/
(N1)
!
1̃
No
1
VI. Setting J1K , 1, JNK , N, J[A]K , AA , JA → BK , T JBK JAK , JA × BK , JAK × JBK,
M contains, for each A ∈ TY, arrows
drfA : AA → T JAK
and
updA : AA × JAK → T 1
such that the following diagrams commute,
AA × JAK
hid,updA i ; τ ; ∼
=
/ T (AA × JAK)
T (π1 ; drfA ) ; µ
-
1 T JAK
T π2
hid×π1 ;updA ,id×π2 ;updA i
AA × JAK × JAK
Qab 1 ×
ψ ;∼
=
+
/ T1 × T1
π2
JAK × JBK
ab
ab
h a ×π1 ;updA , b ×π2 ;updB i
/
3 T1
(NR)
ψ;∼
=
T1 × T1
ψ′ ; ∼
=
+
3 T1
and, moreover,
āa
′
(nuāa
A × updB ) ; ψ = (nuA × updB ) ; ψ ,
i.e., updates and fresh names are independent effects.
(SNR)
N
The second subcondition of (N2) above essentially states that, for each object A, nuA can
be expressed as:
    QāA ≅ Qā1 × A −nu1 × id→ TQāa1 × A −τ′→ T(Qāa1 × A) −T(≅)→ TQāaA .
It is evident that the role reserved for nu in our semantics is that of fresh name creation.
Accordingly, nu gives rise to a categorical name-abstraction operation: for any arrow f :
Qāa A → T B in M, we define
    ^ a _ f ≜ QāA −nu_A→ TQāaA −Tf→ T²B −µ→ TB .        (3.7)
The (NR) diagrams give the basic equations for dereferencings and updates (cf. [38, definition 1] and [44, section 5.8]). The first diagram stipulates that by dereferencing an updated
reference we get the value of the update. The second diagram ensures that the value of a
reference is that of the last update: doing two consecutive updates to the same reference
is the same as doing only the last one. The last diagram states that updates of distinct
references are independent effects.
Let us now proceed with the semantics of νρ in νρ-models.
Definition 3.13. Let (M, T, Q) be a νρ-model. Recall the type-translation:
J 1K , 1 ,
JNK , N ,
J[A]K , AA ,
JA → BK , T JBK JAK ,
JA × BK , JAK × JBK .
A typing judgement ā | Γ |− M : A is translated to an arrow JM Kā|Γ : Qā JΓK → T JAK in
M, which we write simply as JM K : Qā Γ → T A, as in figure 3.
N
We note that the translation of values follows a common pattern: for any ā | Γ |− V : B,
we have JV K = |V | ; η , where
    |x| ≜ Qāπ ; ā/ǫ ,        |a| ≜ Qā! ; ā/a ,        |ñ| ≜ Qā! ; ā/ǫ ; ñ ,
    |skip| ≜ Qā! ; ā/ǫ ,     |λx.M| ≜ Λ^T(ζ′ ; JMK) ,  |⟨V, W⟩| ≜ ⟨|V|, |W|⟩ .        (3.8)
We can show the following lemmas, which will be used in the proof of Correctness.
Lemma 3.14. For any ā | Γ |− M : A and S(ā) ⊆ S(ā′), JMK_{ā′|Γ} = ā′/ā ; JMK_{ā|Γ} .
Moreover, if Γ = x1 : B1, ..., xn : Bn, and ā | Γ |− M : A and ā | Γ |− Vi : Bi are derivable,
    JM{V⃗/x⃗}K = QāΓ −⟨id, |V1|, ..., |Vn|⟩→ QāΓ × Γ −ζ′ ; Qāπ2→ QāΓ −JMK→ TA .
Lemma 3.15. For any relevant f, g,
    ^ a _ ( ⟨f, āa/ā ; g⟩ ; ψ ) = ⟨^ a _ f, g⟩ ; ψ : QāA → T(B × C) ,
    ^ a _ ( f ; Tg ; µ ) = ^ a _ f ; Tg ; µ : QāA → TC .
Lemma 3.16. Let ā | Γ |− M : A and ā | Γ |− E[M ] : B be derivable, with E[ ] being an
evaluation context. Then JE[M ]K is equal to:
    QāΓ −⟨id, JMK⟩→ QāΓ × TA −τ→ T(QāΓ × A) −Tζ′→ TQā(Γ × A) −T JE[x]K→ T²B −µ→ TB .
We write S |= M −r→ S′ |= M′ with r ∈ {NEW, SUC, EQ, ..., LAM} if the last non-CTX rule in the related derivation is r. Also, to any store S, we relate the term S̄ of type 1 as:
    ǭ ≜ skip ,    \overline{a, S} ≜ S̄ ,    \overline{a :: V, S} ≜ (a := V ; S̄)
(3.9)
Proposition 3.17 (Correctness). For any typed term ā | Γ |− M : A, and S with
dom(S) = ā, and r as above,
1. if r ∉ {NEW, UPD, DRF} then S |= M −r→ S |= M′ =⇒ JMK = JM′K ,
2. if r ∈ {UPD, DRF} then S |= M −r→ S′ |= M′ =⇒ JS̄ ; MK = JS̄′ ; M′K ,
3. S |= M −NEW→ S, a |= M′ =⇒ JS̄ ; MK = ^ a _ JS̄ ; M′K .
Therefore, S |= M −→→ S′ |= M′ =⇒ JS̄ ; MK = ^ ā′ _ JS̄′ ; M′K , with dom(S′) = āā′ .
    JxK ≜ Qāπ ; ā/ǫ ; η : QāΓ → TA
    JaK ≜ Qā! ; ā/a ; η : QāΓ → TAA
    JñK ≜ Qā! ; ā/ǫ ; ñ ; η : QāΓ → TN
    JskipK ≜ Qā! ; ā/ǫ ; η : QāΓ → T1
    Jsucc MK ≜ JMK ; T succ                              (JMK : QāΓ → TN)
    Jνa.MK ≜ ^ a _ JMK                                   (JMK : QāaΓ → TA)
    J[M = N]K ≜ ⟨JMK, JNK⟩ ; ψ ; T eq                    (JMK, JNK : QāΓ → TAA)
    Jλx.MK ≜ Λ^T(ζ′ ; JMK) ; η                           (JMK : Qā(Γ × A) → TB)
    JM NK ≜ ⟨JMK, JNK⟩ ; ψ ; T ev^T ; µ                  (JMK : QāΓ → T(TB^A), JNK : QāΓ → TA)
    Jfst MK ≜ JMK ; Tπ1                                  (JMK : QāΓ → T(A × B))
    J⟨M, N⟩K ≜ ⟨JMK, JNK⟩ ; ψ                            (JMK : QāΓ → TA, JNK : QāΓ → TB)
    JM := NK ≜ ⟨JMK, JNK⟩ ; ψ ; T upd_A ; µ              (JMK : QāΓ → TAA, JNK : QāΓ → TA)
    J!MK ≜ JMK ; T drf_A ; µ                             (JMK : QāΓ → TAA)
    Jif0 M then N1 else N2K ≜ ⟨JMK, JN1K, JN2K⟩ ; τ′ ; T cnd_A ; µ    (JMK : QāΓ → TN, JNiK : QāΓ → TA)

Figure 3: The semantic translation.
FULL ABSTRACTION FOR NOMINAL GENERAL REFERENCES
19
Proof: The last assertion follows easily from 1-3. For 1-3 we do induction on the size of the
reduction’s derivation. The base case follows from the specifications of definition 3.12 and
lemma 3.14. For the inductive step we have that, for any S, M, E, the following diagram
commutes.
′
2
′
hid,JS̄Ki
τ ;Tζ
/ T QāΓ T hid,JMKi ; T τ / T 2 (Qā Γ × A) T (ζ ; JE[x]K) / T 3 B
/ Qā Γ × T 1
Qā Γ XXXXX
XXXXX
XXXXX
XXXXX
µ
µ
Tµ
XXXXX
hid,JS̄ ; MKi
XXX+
′
τ
/ T (Qā Γ × A) T (ζ ; JE[x]K) / T 2 B
Qā Γ × T AT
TTTT
TTTT
µ
T (ΛT (ζ ′ ; JE[x]K)×id)
ΛT (ζ ′ ; JE[x]K)×id ; τ TTTTT
hΛT (ζ ′ ; JE[x]K) ; η,JS̄ ; MKi ; ψ ′
T)
/ TB
/ T ((A −−⊗ T B) × A)
T
T ev
;µ
By the previous lemma, the upper path is equal to hid, JS̄Ki ; τ ; T ζ ′ ; T JE[M ]K ; µ and therefore to JS̄ ; E[M ]K. Hence, we can immediately show the inductive steps of 1-2. For 3,
NEW
assuming S |= E[M ] −−−→ S, a |= E[M ′ ] and JS̄ ; M K = ^ a _ JS̄ ; M ′ K , we have, using also
lemmas 3.14 and 3.15,
^ a _ JS̄ ; E[M ′ ]K = ^ a _ (hΛT (ζ ′ ; JE[x]K) ; η, JS̄ ; M ′ Ki ; ψ′ ; T evT ; µ)
= ^ a _ (hΛT (ζ ′ ; JE[x]K) ; η, JS̄ ; M ′ Ki ; ψ ′ ) ; T evT ; µ
= hΛT (ζ ′ ; JE[x]K) ; η, ^ a _ JS̄ ; M ′ Ki ; ψ ′ ; T evT ; µ
= hΛT (ζ ′ ; JE[x]K) ; η, JS̄ ; M Ki ; ψ ′ ; T evT ; µ = JS̄ ; E[M ]K .
In order for the model to be sound, we need computational adequacy. This is added explicitly as a specification.
Definition 3.18. Let M be a νρ-model and J K the respective translation of νρ. M is
adequate if
→ S ′ |= 0̃ ,
∃S, b̄. JM K = ^ b̄ _ JS̄ ; 0̃K =⇒ ∃S ′ . ā |= M −→
for any typed term ā | ∅ |− M : N.
N
Proposition 3.19 (Equational Soundness). If M is an adequate νρ-model,
JM K = JN K =⇒ M / N .
3.2.3. Completeness. We equip the semantics with a preorder to match the observational
preorder of the syntax as in (FA). The chosen preorder is the intrinsic preorder with regard
to a collection of observable arrows in the biKleisli monadic-comonadic setting (cf. definition 3.10). In particular, since we have a collection of monad-comonad pairs, we also need
a collection of sets of observable arrows.
Definition 3.20. An adequate νρ-model M = (M, T, Q) is observational if, for all ā:
• There exists Oā ⊆ M(Qā 1, T N) such that, for all ā | ∅ |− M : N,
JM K ∈ Oā ⇐⇒ ∃S, b̄. JM K = ^ b̄ _ JS̄ ; 0̃K .
20
N. TZEVELEKOS
• The induced intrinsic preorder on arrows in M(Qā A, T B) defined by
f .ā g ⇐⇒ ∀ρ : Qā (T B A ) → T N. (Λā (f ) ; ρ ∈ Oā =⇒ Λā (g) ; ρ ∈ Oā )
with Λā (f ) , ΛQ
ā ,T
(f ), satisfies, for all relevant a, ā′ , f, f ′ ,
f .āa f ′ =⇒ ^ a _ f .ā ^ a _ f ′
∧
f .ā f ′ =⇒
ā′
ā
′
; f .a
ā′
ā
; f′ .
N
We write M as (M, T, Q, O).
Recurring to ΛQ
ā ,T
of definition 3.10, we have that Λā (f ) is the arrow:
δ
Qā ΛT (ζ ′ ; f )
Qā 1 −
→ Qā Qā 1 −−−−−−−→ Qā (T B A ) .
(3.10)
Hence, Oā contains those arrows that have a specific observable behaviour in the model, and
the semantic preorder is built over this notion. In particular, terms that yield a number
have observable behaviour.
In order to make good use of the semantic preorder we need it to be a congruence with
regard to the semantic translation. Congruences for νρ, along with typed contexts, are
defined properly in [48]. For now, we state the following.
Lemma 3.21. Let (M, T, Q, O) be an observational νρ-model. Then, for any pair ā | Γ |−
M, N : A of typed terms and any context C such that ā′ | Γ′ |− C[M ], C[N ] : B are valid,
′
JM K .ā JN K =⇒ JC[M ]K .ā JC[N ]K .
Assuming that we translate νρ into an observational νρ-model, we can now show one direction of (FA).
Proposition 3.22 (Inequational Soundness). For typed terms ā | Γ |− M, N : A,
JM K . JN K =⇒ M / N .
Proof: Assume JM K .ā JN K and |= C[M ] −→
→ S ′ |= 0̃ , so JC[M ]K = ^ ā′ _ JS̄ ′ ; 0̃K with
′
′
ā
ā = dom(S ). JM K . JN K implies JC[M ]K . JC[N ]K , and hence JC[N ]K ∈ Oǫ . Thus, by
adequacy, there exists S ′′ such that |= C[N ] −→
→ S ′′ |= 0̃ .
In order to achieve completeness, and hence full-abstraction, we need our semantic translation to satisfy some definability requirement with regard to the intrinsic preorder.
Definition 3.23. Let (M, T, Q, O) be an observational νρ-model and let J K be the semantic translation of νρ to M. M satisfies ip-definability if, for any ā, A, B, there exists
ā
⊆ M(Qā JAK, T JBK) such that:
DA,B
ā
• For each f ∈ DA,B
there exists a term M such that JM K = f .
• For each f, g ∈ M(Qā A, T B),
ā
f .ā g ⇐⇒ ∀ρ ∈ DA→B,N
. (Λā (f ) ; ρ ∈ Oā =⇒ Λā (g) ; ρ ∈ Oā ) .
We write M as (M, T, Q, O, D).
For such a model M we achieve full abstraction.
Theorem 3.24 (FA). For typed terms ā | Γ |− M, N : A,
JM K . JN K ⇐⇒ M / N .
N
FULL ABSTRACTION FOR NOMINAL GENERAL REFERENCES
21
Proof: Soundness is by previous proposition. For completeness (“⇐=”), we do induction
on the size of Γ.
For the base case suppose ā | ∅ |− M / N and take any ρ ∈ D 1→A,N such that
Λā (JM K) ; ρ ∈ Oā . Let ρ = Jā | y : 1 → A |− L : NK , some term L, so Λā (JM K) ; ρ is
Λā (JM K) ; JLK = δ ; Qā |λz.M | ; JLK = J(λy.L)(λz.M )K
for some z : 1. The latter being in Oā implies that it equals ^ b̄ _ JS̄ ; 0̃K, some S. Now,
M / N implies (λy.L)(λz.M ) / (λy.L)(λz.N ) , hence ν b̄.(S̄ ; 0̃) / (λy.L)(λz.N ) , by soundness. But this implies that ā |= (λy.L)(λz.N ) −→
→ S ′ |= 0̃ , so J(λy.L)(λz.N )K ∈ Oā , by
ā
ā
ā
correctness. Hence, Λ (JN K) ; ρ ∈ O , so JM K . JN K, by ip-definability.
For the inductive step, if Γ = x : B, Γ′ then
IH
ā | Γ |− M / N =⇒ ā | Γ′ |− λx.M / λx.N =⇒ Jλx.M K .ā Jλx.N K
=⇒ JM K = J(λx.M )xK .ā J(λx.N )xK = JN K
where the last approximation follows from lemma 3.21.
4. Nominal games
In this section we introduce nominal games and strategies, and construct the basic structure
from which a fully abstract model of νρ will be obtained in the next section. We first
introduce nominal arenas and strategies, which form the category G. We afterwards refine
G by restricting to innocent, total strategies, obtaining thus the category Vt .
Vt is essentially a semantical basis for call-by-value nominal computation in general. In
fact, from it we can obtain not only fully abstract models of νρ, but also of the ν-calculus [2],
the νερ-calculus [47] (νρ+exceptions), etc.
4.1. The basic category G. The basis for all constructions to follow is the category Nom
of nominal sets. We proceed to arenas.
Definition 4.1. A nominal arena A , (MA , IA , ⊢A , λA ) is given by:
• a strong nominal set MA of moves,
• a nominal subset IA ⊆ MA of initial moves,
• a nominal justification relation ⊢A ⊆ MA × (MA \ IA ),
• a nominal labelling function λA : MA → {O, P } × {A, Q}, which labels moves as
Opponent or Player moves, and as Answers or Questions.
An arena A is subject to the following conditions.
(f) For each m ∈ MA , there exists unique k ≥ 0 such that IA ∋ m1 ⊢A · · · ⊢A mk ⊢A m ,
for some ml ’s in MA . k is called the level of m, so initial moves have level 0.
(l1) Initial moves are P-Answers.
(l2) If m1 , m2 ∈ MA are at consecutive levels then they have complementary OP-labels.
(l3) Answers may only justify Questions.
N
We let level-1 moves form the set JA ; since ⊢A is a nominal relation, JA is a nominal subset
of MA . Moves in MA are denoted by mA and variants, initial moves by iA and variants, and
level-1 moves by jA and variants. By I¯A we denote MA \ IA , and by J¯A the set MA \ JA .
22
N. TZEVELEKOS
Note that, although the nominal arenas of [2] are defined by use of a set of weaker
conditions than those above, the actual arenas used there fall within the above definition.
We move on to prearenas, which are the ‘boards’ on which nominal games are played.
Definition 4.2. A prearena is defined exactly as an arena, with the only exception of
condition (l1): in a prearena initial moves are O-Questions.
Given arenas A and B, construct the prearena A → B as:
MA→B , MA + MB
IA→B , IA
λA→B , [ (iA 7→ OQ , mA 7→ λ̄A (mA )) , λB ]
⊢A→B , {(iA , iB )} ∪ { (m, n) | m ⊢A,B n }
N
where λ̄A is the OP -complement of λA .
It is useful to think of the (pre)arena A as a vertex-labelled directed graph with vertex-set
MA and edge-set ⊢A such that the labels on vertices are given by λA (and satisfying (l1-3)).
It follows from (f) that the graph so defined is levelled: its vertices can be partitioned into
disjoint sets L0, L1, L2,. . . such that the edges may only travel from level i to level i + 1
and only level-0 vertices have no incoming edges (and therefore (pre)arenas are directed
acyclic). Accordingly, we will be depicting arenas by levelled graphs or triangles.
The simplest arena is 0 , (∅, ∅, ∅, ∅). Other (flat) arenas are 1 (unit arena), N (arena
of naturals) and Aā (arena of ā-names), for any ā ∈ A# , which we define as
(4.1)
MAā = IAā , Aā ,
where Aā , { π ◦ ā | π ∈ PERM(A) }. Note that for ā empty we get Aǫ = 1, and that we
write Ai for Aa with a being of type i.
More involved are the following constructions. For arenas A, B, define the arenas A⊗B,
A⊥ , A −−⊗ B and A ⇒ B as follows.
MA⊗B , IA ×IB + I¯A + I¯B
M1 = I1 , {∗} ,
MN = IN , N ,
IA⊗B , IA ×IB
λA⊗B , [ ((iA , iB ) 7→ P A) , λA ↾ I¯A , λB ↾ I¯B ]
2
2
⊢A⊗B , { ((iA , iB ), m) | iA ⊢A m ∨ iB ⊢B m } ∪ (⊢A ↾ I¯A ) ∪ (⊢B ↾ I¯B )
B
, IB + IA ×JB + I¯A + I¯B ∩ J¯B
B
, IB
B
, [ (iB 7→ P A) , ((iA , jB ) 7→ OQ) , λ̄A ↾ I¯A , λB ↾ (I¯B ∩ J¯B ) ]
B
, { (iB , (iA , jB )) | iB ⊢B jB } ∪ { ((iA , jB ), m) | iA ⊢A m }
MA
−−⊗
IA
−−⊗
λA
−−⊗
⊢A
−−⊗
2
∪ { ((iA , jB ), m) | jB ⊢B m } ∪ (⊢A ↾ I¯A ) ∪ (⊢B ↾ (I¯B ∩ J¯B )2 )
A B
A⊗B
A B
A −−⊗ B
FULL ABSTRACTION FOR NOMINAL GENERAL REFERENCES
MA⊥ , {∗1 } + {∗2 } + MA
IA⊥ , {∗1 }
23
∗
∗
∗
A
A⊥
A B
A⇒B
λA⊥ , [ (∗1 7→ P A) , (∗2 7→ OQ) , λA ]
⊢A⊥ , {(∗1 , ∗2 ), (∗2 , iA )} ∪ (⊢A ↾ MA 2 )
A ⇒ B , A −−⊗ B⊥
In the constructions above it is assumed that all moves which are not hereditarily justified
by initial moves are discarded. Hence, for example, for any A, B
JB = ∅ =⇒ A −−⊗ B = B
Moreover, we usually identify arenas with graph-isomorphic structures; for example,
1 −−⊗ A = A ,
0 ⇒ A = A⊥ .
Using the latter convention, the construction of A⇒B in the previous definition is equivalent
to A ⇒ B of [15, 2] ; concretely, it is given by:
(4.2)
MA⇒B , {∗} + IA + I¯A + MB
IA⇒B , {∗}
λA⇒B , [ (∗ 7→ P A) , (iA 7→ OQ) , λ̄A , λB ]
⊢A⇒B , {(∗, iA )} ∪ { (iA , m) | iA ⊢A m ∨ m ∈ IB } ∪ (⊢A ↾ I¯A 2 ) ∪ (⊢B ↾ MB 2 )
Of the previous constructors all look familiar apart from −−⊗ (which in [46] appears as ⇒).
˜
The latter can be seen as a function-space constructor merging the contravariant part of its
RHS with its LHS. For example, for any A, B, C, we have
A −−⊗ N = N
and A −−⊗ (B ⇒ C) = (A⊗B) ⇒ C
In the first equality we see that N which appears on the RHS of −−⊗ has no contravariant
part, and hence A is redundant. In the second equality B, which is the contravariant part
of B ⇒ C, is merged with A. This construction will be of great use when considering a
monadic semantics for store.
We move on to describe how are nominal games played. Plays of a game consist of
sequences of moves from some prearena. These moves are attached with name-lists to the
effect of capturing name-environments.
Definition 4.3. A move-with-names of a (pre)arena A is a pair, written mā , where m
is a move of A and ā is a finite list of distinct names (name-list).
N
If x is a move-with-names then its name-list is denoted by nlist(x) and its underlying move
by x ; therefore,
x = xnlist(x) .
We introduce some notation for sequences (and lists).
Notation 4.4 (Sequences). A sequence s will be usually denoted by xy . . . , where x, y, ...
are the elements of s. For sequences s, t,
• s ≤ t denotes that s is a prefix of t, and then t = s(t \ s),
• s t denotes that s is a (not necessarily initial or contiguous) subsequence of t,
• s− denotes s with its last element removed,
• if s = s1 . . . sn then s1 is the first element of s and sn the last. Also,
24
N. TZEVELEKOS
◦ n is the length of s, and is denoted by |s|,
◦ s.i denotes si and s.−i denotes sn+1−i , that is, the i-th element from the end of s
(for example, s.−1 is sn ),
◦ s≤si denotes s1 . . . si , and so does s<si+1 ,
◦ if s is a sequence of moves-with-names then, by extending our previous notation, we
N
have s = snlist(s) , where nlist(s) is a list of length |s| of lists of names.
A justified sequence over a prearena A is a finite sequence s of OP-alternating moves
such that, except for s.1 which is initial, every move s.i has a justification pointer to
some s.j such that j < i and s.j ⊢A s.i ; we say that s.j (explicitly) justifies s.i . A move
in s is an open question if it is a question and there is no answer inside s justified by it.
There are two standard technical conditions that one may want to apply to justified
sequences: well-bracketing and visibility . We say that a justified sequence s is wellbracketed if each answer s.i appearing in s is explicitly justified by the last open question
in s<i (called the pending question). For visibility, we need to introduce the notions of
Player- and Opponent-view . For a justified sequence s, its P-view psq and its O-view
xsy are defined as follows.
pǫq , ǫ
xǫy , ǫ
psxq , psq x
pxq , x
if x a P-move
if x is initial
psxs′ yq , psq xy if y an O-move
expl. justified by x
xsxy , xsyx if x an O-move
′
xsxs yy , xsyxy if y a P-move
expl. justified by x
The visibility condition states that any O-move x in s is justified by a move in xs<xy , and
any P-move y in s is justified by a move in ps<yq. We can now define plays.
Definition 4.5. Let A be a prearena. A legal sequence on A is sequence of moves-withnames s such that s is a justified sequence satisfying Visibility and Well-Bracketing. A
legal sequence s is a play if s.1 has empty name-list and s also satisfies the following Name
Change Conditions (cf. [34]):
(NC1) The name-list of a P-move x in s contains as a prefix the name-list of the move
preceding it. It possibly contains some other names, all of which are fresh for s<x .
(NC2) Any name in the support of a P-move x in s that is fresh for s<x is contained in
the name-list of x.
(NC3) The name-list of a non-initial O-move in s is that of the move justifying it.
The set of plays on a prearena A is denoted by PA .
N
It is important to observe that plays have strong support, due to the tagging of moves with
lists of names (instead of sets of names [2]). Note also that plays are the ǫ-plays of [46].
Now, some further notation.
Notation 4.6 (Name-introduction). A name a is introduced (by Player) in a play s,
written a ∈ L(s), if there exist consecutive moves yx in s such that x is a P-move and
a ∈ S(nlist(x) \ nlist(y)).
N
From plays we move on to strategies. Recall the notion of name-restriction we introduced
in definition 2.4; for any nominal set X and any x ∈ X, [x] = { π ◦ x | π ∈ PERM(A) } .
FULL ABSTRACTION FOR NOMINAL GENERAL REFERENCES
25
Definition 4.7. Let A be a prearena. A strategy σ on A is a set of equivalence classes [s]
of plays in A, satisfying:
• Prefix closure: If [su] ∈ σ then [s] ∈ σ.
• Contingency completeness: If even-length [s] ∈ σ and sx is a play then [sx] ∈ σ.
• Determinacy: If even-length [s1 x1 ], [s2 x2 ] ∈ σ and [s1 ] = [s2 ] then [s1 x1 ] = [s2 x2 ].
We write σ : A whenever σ is a strategy on A.
N
By convention, the empty sequence ǫ is a play and hence, by prefix closure and contingency
completeness, all strategies contain [ǫ] and [iA ]’s. Some basic strategies are the following —
note that we give definitions modulo prefix closure.
Definition 4.8. For any ā′ , ā ∈ A# with S(ā′ ) ⊆ S(ā), n ∈ N and any arena B, define the
following strategies.
• ñ : 1 → N , {[∗ n]}
• !B : B → 1 , {[iB ∗]}
•
ā
ā′
: Aā → Aā , {[ā ā′ ]}
′
• idB : B → B , { [s] | s ∈ PB(1) →B(2) ∧ ∀t ≤even s. t ↾ B(1) = t ↾ B(2) }
N
It is easy to see that the aforedefined are indeed strategies. That definitions are given
modulo prefix closure means that e.g. ñ is in fact:
ñ = { [ǫ], [∗], [∗ n] } .
We proceed to composition of plays and strategies. In ordinary games, plays are composed
by doing “parallel composition plus hiding” (v. [4]); in nominal games we need also take
some extra care for names.
Definition 4.9. Let s ∈ PA→B and t ∈ PB→C . We say that:
• s and t are almost composable, s ` t, if s ↾ B = t ↾ B.
• s and t are composable, s ≍ t, if s ` t and, for any s′ ≤ s, t′ ≤ t with s′ ` t′ :
(C1) If s′ ends in a (Player) move in A introducing some name a then a # t′ .
Dually, if t′ ends in a move in C introducing some name a then a # s′ .
(C2) If both s′ , t′ end in B and s′ ends in a move introducing some name a then a # t′− .
Dually, if t′ ends in a move introducing some name a then a # s′− .
N
The following lemma is taken verbatim from [15], adapted from [7].
Lemma 4.10 (Zipper lemma). If s ∈ PA→B and t ∈ PB→C with s ` t then either
s ↾ B = t = ǫ, or s ends in A and t in B, or s ends in B and t in C, or both s and t end
in B.
Note that in the sequel we will use some standard switching condition results (see e.g. [15, 5])
without further mention. Composable plays are composed as below. Note that we may tag
a move m as m(O) (or m(P ) ) to specify it is an O-move (a P-move).
Definition 4.11. Let s ∈ PA→B and t ∈ PB→C with s ≍ t . Their parallel interaction
s k t and their mix s • t, which returns the final name-list in s k t, are defined by mutual
26
N. TZEVELEKOS
recursion as follows. We set ǫ k ǫ , ǫ , ǫ • ǫ , ǫ , and:
smb̄A • t
smb̄A k t , (s k t)mA
smb̄B • tmc̄B
smb̄B k tmc̄B , (s k t)mB
s • tmc̄C
s k tmc̄C , (s k t)mC
b̄s b̄
smA(P
) • t , (s • t) b̄
b̄s b̄
c̄
smB(P
) • tmB(O) , (s • t) b̄
c̄t c̄
s • tmC(P
) , (s • t) c̄
smb̄A(O) • t , b̄′
c̄t c̄
smb̄B(O) • tmB(P
) , (s • t) c̄
s • tmc̄C(O) , c̄ ′ ,
where b̄s is the name-list of the last move in s, and b̄′ is the name-list of mA(O) ’s justifier
inside s k t ; similarly for c̄t and c̄ ′ .
The composite of s and t is:
s ; t , (s k t) ↾ AC .
The set of interaction sequences of A, B, C is defined as:
ISeq(A, B, C) , { s k t | s ∈ PA→B ∧ t ∈ PB→C ∧ s ≍ t } .
N
When composing compatible plays s and t, although their parts appearing in the common
component (B) are hidden, the names appearing in (the support of) s and t are not lost
but rather propagated to the output components (A and C). This is shown in the following
lemma (the proof of which is tedious but not difficult, see [48]).
Lemma 4.12. Let s ≍ t with s ∈ PA→B and t ∈ PB→C .
(a) If s k t ends in a generalised P-move mb̄ then b̄ contains as a prefix the name-list of
(s k t).−2 . It possibly contains some other names, all of which are fresh for (s k t)− .
(b) If s ; t ends in a P-move mb̄ then b̄ contains as a prefix the name-list of (s ; t).−2 . It
possibly contains some other names, all of which are fresh for (s ; t)− .
(c) If s k t ends in a move mb̄ then b̄ contains as a prefix the name-list of the move explicitly
justifying mb̄ .
(d) If s = s′ mb̄ ends in A and t in B then b̄ s • t,
if s = s′ mb̄ and t = t′ mc̄ end in B then b̄ s • t and c̄ s • t,
if s ends in B and t = t′ mc̄ in C then c̄ s • t.
(e) S(s) ∪ S(t) = S(s k t) = S(s ; t) ∪ S(s • t) .
Proposition 4.13 (Plays compose). If s ∈ PA→B and t ∈ PB→C with s ≍ t, then
s ; t ∈ PA→C .
Proof: We skip visibility and well-bracketing, as these follow from ordinary CBV game
analysis. It remains to show that the name change conditions hold for s ; t. (NC3) clearly
does by definition, while (NC1) is part (b) of previous lemma.
For (NC2), let s ; t end in some P-move ms • t and suppose a ∈ S(ms • t ) and a # (s ; t)− .
Suppose wlog that s = s′ mb̄ , and so (s ; t)− = s′ ; t. Now, if a # s′ • t then, by part (e) of
previous lemma, a # s′ , t and therefore a ∈ b̄ , by (NC2) of s. By part (d) then, a ∈ S(s • t).
Otherwise, a ∈ S(s′ • t) and hence, by part (a), a ∈ S(s • t).
We now proceed to composition of strategies. Recall that we write σ : A → B if σ is a
strategy on the prearena A → B.
FULL ABSTRACTION FOR NOMINAL GENERAL REFERENCES
27
Definition 4.14. For strategies σ : A → B and τ : B → C, their composition is defined as
σ ; τ , { [s ; t] | [s] ∈ σ ∧ [t] ∈ τ ∧ s ≍ t } ,
N
and is a candidate strategy on A → C.
Note that, for any sequence u, if [u] ∈ σ ; τ then u = π ◦ (s ; t) = (π ◦ s) ;(π ◦ t) for some
[s] ∈ σ, [t] ∈ τ, s ≍ t and π. Therefore, we can always assume u = s ; t with [s] ∈ σ, [t] ∈ τ
and s ≍ t. Our next aim is to show that composites of strategies are indeed strategies.
Again, the proofs of the following technical lemmata are omitted for economy (but see [48]).
Lemma 4.15. For plays s1 ≍ t1 and s2 ≍ t2 , if s1 k t1 = s2 k t2 then s1 = s2 and t1 = t2 .
Hence, if s1 k t1 ≤ s2 k t2 then s1 ≤ s2 and t1 ≤ t2 .
Lemma 4.16. Let σ : A → B and τ : B → C be strategies with [s1 ], [s2 ] ∈ σ and [t1 ], [t2 ] ∈
τ . If |s1 k t1 | ≤ |s2 k t2 | and [s1 ; t1 ] = [s2 ; t2 ] then there exists some π such that π ◦ (s1 k t1 ) ≤
s 2 k t2 .
Proposition 4.17 (Strategies compose). If σ : A → B and τ : B → C are strategies
then so is σ ; τ .
Proof: By definition and proposition 4.13, σ ; τ contains equivalence classes of plays. We
need also check prefix closure, contingency completeness and determinacy. The former two
are rather straightforward, so we concentrate on the latter.
Assume even-length [u1 x1 ], [u2 x2 ] ∈ σ ; τ with [u1 ] = [u2 ], say ui xi = si ; ti , [si ] ∈ σ and
[ti ] ∈ τ , i = 1, 2 . By prefix-closure of σ, τ we may assume that si , ti don’t both end in B,
for i = 1, 2.
b̄′
If si end in A then si = s′i nb̄i i and si ; ti = (s′i ; ti )ni i , i = 1, 2 . Now, [s′1 ; t1 ] = [u1 ] =
[u2 ] = [s′2 ; t2 ], so, by lemma 4.16 and assuming wlog that |s′1 k t1 | ≤ |s′2 k t2 |, we have
′′
′
′′′
π ◦ (s′1 k t1 ) ≤ (s′2 k t2 ), ∴ π ◦ s′1 ≤ s′2 , say s′2 = s′′2 s′′′
2 with s2 = π ◦ s1 and s2 in B. Then
b̄
b̄
′
′
′′′
′
1
2
[s′′2 ] = [s′1 ], ∴ [s′′2 (s′′′
2 n2 ).1] = [s1 n1 ], by determinacy of σ, and hence |s2 | = 0, s2 = π ◦ s1
b̄
b̄
and t2 = π ◦ t1 . Moreover, π ′ ◦ s′1 n11 = s′2 n22 , some permutation π ′ . Now we can apply the
Strong Support Lemma, as (C1) implies (S(nb̄i i ) \ S(s′i )) ∩ S(ti ) = ∅. Hence, there exists a
permutation π ′′ such that π ′′ ◦ s1 = s2 and π ′′ ◦ t1 = t2 , ∴ [s1 ; t1 ] = [s2 ; t2 ] , as required.
If si end in B and ti in C, then work similarly as above. These are, in fact, the only cases we
need to check. Because if, say, s2 , t1 end in B, s1 in A and t2 in C then t1 , s2 end in P-moves
−
− −
−
−
and [s−
1 ; t1 ] = [s2 ; t2 ] implies that s1 , t2 end in O-moves in B. If, say, |s1 k t1 | ≤ |s2 k t2 |
−
′
then we have, by lemma 4.16, π ◦ s−
1 ≤ s2 , some permutation π. So if π ◦ s1 = s2 and
′′
′′
′
s2 = s2 s2 , determinacy of σ dictates that s2 .1 be in A, to |s1 ; t1 | = |s2 ; t2 | and s2 ; t2
ending in C.
In order to obtain a category of nominal games, we still need to show that strategy composition is associative. We omit the (rather long) proof and refer the interested reader
to [48].
Proposition 4.18. For any σ : A → B, σ1 : A′ → A and σ3 : B → B ′ ,
idA ; σ = σ = σ ; idB
∧
(σ1 ; σ) ; σ3 = σ1 ;(σ ; σ3 ) .
Definition 4.19. The category G of nominal games contains nominal arenas as objects
and nominal strategies as arrows.
N
28
N. TZEVELEKOS
In the rest of this section let us examine closer the proof of proposition 4.17 in order identify
where exactly is strong support needed, and for which reasons is the nominal games model
of [2] flawed.
Remark 4.20 (The need for strong support). The nominal games presented here
differ from those of [2] crucially in one aspect; namely, the modelling of local state. In [2]
local state is modelled by finite sets of names, so a move-with-names is a move attached
with a finite set of names, and other definitions differ accordingly. The problem is that
thus determinacy is not preserved by strategy composition: information separating freshly
created names may be hidden by composition and hence a composite strategy may break
determinacy by distinguishing between composite plays that are equivalent.
In particular, in the proof of determinacy above we first derived from [s′1 ; t1 ] = [s′2 ; t2 ]
that there exists some π so that π ◦ s′1 = s2 and π ◦ t1 = t2 , by appealing to lemma 4.16;
in the (omitted) proof of that lemma, the Strong Support Lemma needs to be used several
times. In fact, the statement
|s′1 k t1 | = |s′2 k t2 | ∧ [s′1 ; t1 ] = [s′2 ; t2 ] =⇒ ∃π. π ◦ s′1 = s′2 ∧ π ◦ t1 = t2
does not hold in a weak support setting such that of [2]. For take some i ∈ ω and consider
the following AGMOS-strategies (i.e. strategies of [2]).
σ : 1 → Ai , { [∗ a{a,b} ] | a, b ∈ Ai ∧ a 6= b } ,
(4.20:A)
τ : Ai → Ai ⇒ Ai , { [a ∗ c a] | a, c ∈ Ai } .
Then,
[∗ a{a,b} ; a ∗ b] = [∗ ∗{a,b} b{a,b} ] = [∗ ∗{a,b} a{a,b} ] = [∗ a{a,b} ; a ∗ a] ,
yet for no π do we have π ◦ (∗ a{a,b} ) = ∗ a{a,b} and π ◦ (a∗b) = a∗a. As a result, determinacy
fails for σ ; τ since both [∗ ∗{a,b} b{a,b} a{a,b} ], [∗ ∗{a,b} a{a,b} a{a,b} ] ∈ σ ; τ .
Another point where we used the Strong Support Lemma in the proof of determinacy
was in showing (the dual of):
∃π, π ′ . π ◦ (s1 , t′1 ) = (s2 , t′2 ) ∧ π ′ ◦ t′1 nb̄11 = t′2 nb̄22 =⇒ ∃π ′′ . π ′′ ◦ (s1 , t′1 nb̄11 ) = (s2 , t′2 nb̄22 )
i.e.
[s1 , t′1 ] = [s2 , t′2 ] ∧ [t′1 nb̄11 ] = [t′2 nb̄22 ] =⇒ [s1 , t′1 nb̄11 ] = [s2 , t′2 nb̄22 ] .
The above statement does not hold for AGMOS-games. To show this, we need to introduce7
the flat arena Ai ⊙ Ai with MAi ⊙Ai , P2 (Ai ) (the set of 2-element subsets of Ai ). This is
not a legal arena in our setting, since its moves are not strongly supported, but it is in the
AGMOS setting. Consider the following strategies.
σ : Ai ⊗ Ai → Ai ⊙ Ai , { [ (a, b) {a, b}] | a, b ∈ Ai ∧ a 6= b }
τ : Ai ⊙ Ai → Ai , { [{a, b} a] | a, b ∈ Ai ∧ a 6= b }
(4.20:B)
We have that [ (a, b) {a, b}, {a, b}] = [ (a, b) {a, b}, {a, b}] and [{a, b} a] = [{a, b} b] , yet
[ (a, b) {a, b}, {a, b} a] 6= [ (a, b) {a, b}, {a, b} b] .
N
In fact, determinacy is broken since [ (a, b) a], [ (a, b) b] ∈ σ ; τ .
7This is because our presentation of nominal games does not include plays and strategies with non-empty
initial local state. In the AGMOS setting we could have used to the same effect the {a, b}-strategies:
σ : Ai ⊗ Ai → 1 , { [ (a, b){a,b} ∗{a,b} ]{a,b} } ,
τ : 1 → Ai , { [∗{a,b} a
{a,b}
]{a,b} } .
FULL ABSTRACTION FOR NOMINAL GENERAL REFERENCES
29
4.2. Arena and strategy orders in G. G is the raw material from which several subcategories of nominal games will emerge. Still, though, there is structure in G which will be
inherited to the refined subcategories we will consider later on. In particular, we consider
(subset) orderings for arenas and strategies, the latter enriching G over Cpo.8 These will
prove useful for solving domain equations in categories of nominal games.
Definition 4.21. For any arenas A, B and each σ, τ ∈ G(A, B) define σ ⊑ τ ⇐⇒ σ ⊆ τ .
F
S
For each ⊑-increasing sequence (σi )i∈ω take i σi , i σi .
N
F
It is straightforward to see that each such i σi is indeed a strategy: prefix closure, contingency completeness and determinacy easily follow from the fact that the sequences we
consider are ⊑-increasing. Hence, each G(A, B) is a cpo with least element the empty
strategy (i.e. the one containing only [ǫ]). More than that, these cpo’s enrich G.
Proposition 4.22. G is Cpo-enriched wrt ⊑.
Proof: Enrichment amounts to showing the following straightforward assertions.
σ ⊑ σ ′ ∧ τ ⊑ τ ′ =⇒ σ ; τ ⊑ σ ′ ; τ ′
G
G
(σi ; τ )
(σi )i∈ω an ω-chain =⇒ ( σi ) ; τ ⊑
i∈ω
i∈ω
(τi )i∈ω an ω-chain =⇒ σ ;(
G
τi ) ⊑
i∈ω
G
(σ ; τi )
i∈ω
On the other hand, arenas are structured sets and hence also ordered by a ‘subset relation’.
Definition 4.23. For any A, B ∈ Ob(G) define
A E B ⇐⇒ MA ⊆ MB ∧ IA ⊆ IB ∧ λA ⊆ λB ∧ ⊢A ⊆ ⊢B ,
and for any E-increasing sequence (Ai )i∈ω define
[
G
Ai .
Ai ,
i∈ω
i∈ω
If A E B then we can define an embedding-projection pair of arrows by setting:
inclA,B : A → B , { [s] ∈ [PA→B ] | [s] ∈ idA ∨ (odd(|s|) ∧ [s− ] ∈ idA ) } ,
projB,A : B → A , { [s] ∈ [PB→A ] | [s] ∈ idA ∨ (odd(|s|) ∧ [s− ] ∈ idA ) } .
There is also an indexed version of E, for any k ∈ N,
A Ek B ⇐⇒ A E B ∧ { m ∈ MB | level(m) < k } ⊆ MA .
N
F
It is straightforward to see that i∈ω Ai is well-defined, and that E forms a cpo on Ob(G)
with least element the empty arena 0. By inclA,B and projB,A being an embeddingprojection pair we mean that:
inclA,B ; projB,A = idA
∧
projB,A ; inclA,B ⊑ idB
(4.3)
8By cpo we mean a partially ordered set with least element and least upper bounds for increasing ωsequences. Cpo is the category of cpos and continuous functions.
30
N. TZEVELEKOS
Note that in essence both inclA,B and projB,A are equal to idA , the latter seen as a
partially defined strategy on prearenas A → B and B → A. Finally, it is easy to show the
following.
A E B E C =⇒ inclA,B ; inclB,C = inclA,C
(TRN)
4.3. Innocence: the category V. In game semantics for pure functional languages (e.g.
PCF [16]), the absence of computational effects corresponds to innocence of the strategies.
Here, although our aim is to model a language with effects,
our model will use innocent strategies: the effects will still
be achieved, by using monads.
Innocence is the condition stipulating that the strategies
be completely determined by their behaviour on P-views.
In our current setting the manipulation of P-views presents
some difficulties, since P-views of plays need not be plays
themselves. For example, the P-view of the play on the
side (where curved lines represent justification pointers) is
∗ (∗, ∗) ∗ a and violates (NC2). Consequently, we need to
explicitly impose innocence on plays.
1
/ 1⊥ ⊗ A
i
∗
OQ
(∗, ∗)
PA
∗
OQ
∗a
PA
∗
OQ
a
PA
Definition 4.24. A legal sequence s is an innocent play if s.1 has empty name-list and
s also satisfies the following Name Change Conditions:
(NC1) The name-list of a P-move x in s contains as a prefix the name-list of the move
preceding it. It possibly contains some other names, all of which are fresh for s<x .
(NC2′ ) Any name in the support of a P-move x in s that is fresh for ps<xq is contained
in the name-list of x.
(NC3) The name-list of a non-initial O-move in s is that of the P-move justifying it.
The set of innocent plays of A is denoted by PAi .
N
It is not difficult to show now that a play s is innocent iff, for any t ≤ s, ptq is a play. We
can obtain the following characterisation of name-introduction in innocent plays.
Proposition 4.25 (Name-introduction). Let s be an innocent play. A name a is introduced by Player in s iff there exists a P-move x in s such that a ∈ S(x) and a # ps<xq.
Proof: If a is introduced by a P-move x in s then a ∈ nlist(x) and a # nlist(s<x .−1), hence,
by (NC1), a # s<x so a # ps<xq. Conversely, if a ∈ S(x) and a # ps<xq then, by (NC2′ ),
a ∈ nlist(x), while a # ps<xq implies a # nlist(s<x .−1).
Innocent plays are closed under composition (proof omitted, v. [48]).
Proposition 4.26. If s ∈ PA→B , t ∈ PB→C are innocent and s ≍ t then s ; t is innocent.
We now move on to innocent strategies and show some basic properties.
Definition 4.27. A strategy σ is an innocent strategy if [s] ∈ σ implies that s is innocent,
and if even-length [s1 x1 ] ∈ σ and odd-length [s2 ] ∈ σ have [ps1q] = [ps2q] then there exists
x2 such that [s2 x2 ] ∈ σ and [ps1 x1q] = [ps2 x2q].
N
Lemma 4.28. Let σ be an innocent strategy.
(1) If [s] ∈ σ then [psq] ∈ σ.
FULL ABSTRACTION FOR NOMINAL GENERAL REFERENCES
31
(2) If sy is an even-length innocent play and [s], [psyq] ∈ σ then [sy] ∈ σ.
(3) If psyq is even-length with nlist(y) = nlist(s.−1) and [s], [psyq] ∈ σ then [sy] ∈ σ.
(4) If s is an even-length innocent play and, for any s′ ≤even s, [ps′q] ∈ σ then [s] ∈ σ.
Proof: For (1) we do induction on |s|. The base case is trivial. Now, if s = s′ y with y a
P-move then psq = ps′q y and [ps′q] ∈ σ by prefix closure and IH. By innocence, there exists
y ′ such that [ps′q y ′ ] ∈ σ and [ps′q y ′ ] = [psyq], so done. If s = s1 ys2 x and x an O-move
justified by y then [ps1 yq] ∈ σ by prefix closure and IH, hence [ps1 yq x] ∈ σ by contingency
completeness.
For (2) note that by innocence we have [sy ′ ] ∈ σ for some y ′ such that [psyq] = [psy ′q].
Then,
[psq, y] = [psq, y ′ ] ∧ [psq, s] = [psq, s] ∧ (S(y) \ S(psq)) ∩ S(s) = (S(y ′ ) \ S(psq)) ∩ S(s) = ∅ .
Thus we can apply the strong support lemma and get [sy] = [sy ′ ], as required.
For (3) it suffices to show that sy is an innocent play. As s, psq y are plays, it suffices
to show that sy satisfies the name conditions at y. (NC3) and (NC2′ ) hold because psyq a
play. (NC1) also holds, as y is non-introducing.
For (4) we do induction on |s|. The base case is encompassed in psq = s, which is trivial.
For the inductive step, let s = s− x with psq 6= s. By IH and contingency completeness we
have [s− ] ∈ σ, and since [psq] ∈ σ, by (2), [s] ∈ σ.
We can now show that innocent strategies are closed under composition (details in [48]).
Proposition 4.29. If σ : A → B, τ : B → C are innocent strategies then so is σ ; τ .
Definition 4.30. V is the lluf subcategory of G of innocent strategies.
N
Henceforth, when we consider plays and strategies we presuppose them being innocent.
Viewfunctions. We argued previously that innocent strategies are specified by their behaviour on P-views. We formalise this argument by representing innocent strategies by
viewfunctions.
Definition 4.31. Let A be a prearena. A viewfunction f on A is a set of equivalence
classes of innocent plays of A which are even-length P-views, satisfying:
• Even-prefix closure: If [s] ∈ f and t is an even-length prefix of s then [t] ∈ f .
• Single-valuedness: If [s1 x1 ], [s2 x2 ] ∈ f and [s1 ] = [s2 ] then [s1 x1 ] = [s2 x2 ].
Let σ be an innocent strategy and let f be a viewfunction. Then, we can define a corresponding viewfunction and a strategy by:
viewf(σ) , { [s] ∈ σ | |s| even ∧ psq = s } ,
[
strat(f ) ,
stratn (f ) ,
n
where strat0 (f ) , {[ǫ]} and:
strat2n+1 (f ) , { [sx] | sx ∈ PAi ∧ [s] ∈ strat2n (f ) } ,
strat2n+2 (f ) , { [sy] | sy ∈ PAi ∧ [s] ∈ strat2n+1 (f ) ∧ [psyq] ∈ f } .
N
Note in the above definition that, for any even-length s, [s] ∈ strat(f ) implies [psq] ∈ f .
We can show that the conversion functions are well-defined inverses.
32
N. TZEVELEKOS
Proposition 4.32. For any innocent strategy σ, viewf(σ) is a viewfunction. Conversely,
for any viewfunction f , strat(f ) is an innocent strategy. Moreover,
f = viewf(strat(f )) ∧ σ = strat(viewf(σ)) .
Recall the subset ordering ⊑ of strategies given in definition 4.21. It is easy to see that
the ordering induces a cpo on innocent strategies and that V is Cpo-enriched. We can also
show the following.
Corollary 4.33. For all viewfunctions f, g and innocent strategies σ, τ ,
(1) f ⊆ strat(f ) ,
(2) σ ⊆ τ ⇐⇒ viewf(σ) ⊆ viewf(τ ) , f ⊆ g ⇐⇒ strat(f ) ⊆ strat(g) ,
(3) viewf(σ) ⊆ τ ∧ viewf(τ ) ⊆ σ =⇒ σ = τ .
Moreover, ⊑ yields a cpo on viewfunctions, and viewf and strat are continuous with
respect to ⊑.
Notation 4.34 (Diagrams of viewfunctions). We saw previously that innocent strategies can be represented by their viewfunctions. A viewfunction is a set of (equivalence
classes of) plays, so the formal way to express such a construction is explicitly as a set. For
example, we have that
viewf(idA ) = { [sm(1) m(2) ] | [s] ∈ viewf(idA ) ∧ (m ∈ IA ∨ (s.−1 ⊢A m(1) ∧ s.−2 ⊢A m(2) )) } .
The above behaviour is called copycat (v. [4]) and is perhaps the most focal notion in game
semantics.
A more convenient way to express viewfunctions is by means of diagrams. For example,
for idA we can have the following depiction.
idA : A
iA
/A
OQ
iA
PA
The polygonal line in the above depiction stands for a copycat link , meaning that the
strategy copycats between the two iA ’s. A more advanced example of this notation is the
strategy in the middle below.
A⇒B
∗
iA
hA,B : (A ⇒ B)⊗A
PA
(∗, iA )
OQ
A−
OQ
∗
∗
iA
hA,B : (A ⇒ B)⊗A
/ B⊥
(∗, iA )
PA
OQ
∗
∗
OQ
PQ
/ B⊥
iA
PA
OQ
PQ
B
jA
OQ
jA
PQ
iB
OA
iB
PA
Note first that curved lines (and also the line connecting the two ∗’s) stand for justification
pointers. Moreover, recall that the arena A ⇒ B has the form given on the left above, so
the leftmost iA (l-iA ) in the diagram of hA,B has two child components, A− and B. Then,
FULL ABSTRACTION FOR NOMINAL GENERAL REFERENCES
33
the copycat links starting from the l-iA have the following meaning. hA,B copycats between
the A− -component of l-iA and the other iA , and copycats also between the B-component
of l-iA and the lower ∗. That is (modulo prefix-closure),
hA,B , strat{ [ (∗, iA ) ∗ ∗ iA s ] | [ iA iA s ] ∈ viewf(idA ) ∨ [s] ∈ viewf(idB ) } .
Another way to depict hA,B is by cases with regard to Opponent’s next move after l-iA , as
seen on the right diagram above.
Finally, we will sometimes label copycat links by strategies (e.g. in the proof of proposition 4.42). Labelling a copycat link by a strategy σ means that the specified strategy plays
like σ between the linked moves, instead of doing copycat. In this sense, ordinary copycat
links can be seen as links labelled with identities.
4.4. Totality: the category Vt . We introduce the notion of total strategies, specifying
those strategies which immediately answer initial questions without introducing fresh names.
We extend this type of reasoning level-1 moves, yielding several subclasses of innocent
strategies. Note that an arena A is pointed if IA is singleton.
Definition 4.35. An innocent strategy σ : A → B is total if for any [iA ] ∈ σ there exists
[iA iB ] ∈ σ. A total strategy σ : A → B is:
• l4 if whenever [s] ∈ σ and s.−1 ∈ JA then | psq | = 4,
b̄ ] ∈ σ,
• t4 if for any [iA iB jB ] ∈ σ there exists [iA iB jB jA
• tl4 if it is both t4 and l4,
• ttotal if it is tl4 and for any [iA iB jB ] ∈ σ there exists [iA iB jB jA ] ∈ σ.
A total strategy τ : C ⊗A → B is:
• l4* if whenever [s] ∈ τ and s.−1 ∈ JA then | psq | = 4,
b̄ ] ∈ τ ,
• t4* if for any [ (iC , iA )iB jB ] ∈ τ there exists [ (iC , iA )iB jB jA
• tl4* if it is both t4* and l4*.
We let Vt be the lluf subcategory of V of total strategies, and Vtt its lluf subcategory of
ttotal strategies. Vt∗ and Vtt∗ are the full subcategories of Vt and Vtt respectively containing
pointed arenas.
N
The above subclasses of strategies will be demystified in the sequel. For now, we show a
technical lemma. Let us define, for each arena A, the diagonal strategy ∆A as follows.
∆A : A → A⊗A , strat{ [ iA (iA , iA ) s ] | [ iA iA s ] ∈ viewf(idA ) }
(4.4)
Lemma 4.36 (Separation of Head Occurrence). Let A be a pointed arena and let
f : A → B be a t4 strategy. There exists a tl4* strategy f˜ : A⊗A → B such that f = ∆ ; f˜.
Proof: Let us tag the two copies of A in A ⊗ A as A(1) and A(2) , and take
b̄
b̄
˜ viewf(f ) ∧ ∀i. s.i ∈
/ JA(2) } ,
f˜ , strat{ [ (iA , iA )iB jB jA
s ] | [ iA iB jB jA
s]∈
(2)
(2)
˜ is the composition of de-indexing from MA(1) and MA(2) to MA with ∈. Intuitively,
where ∈
f˜ plays the first JA -move of f in A(2) , and then mimics f until the next JA -move of f ,
which is played in A(1) . All subsequent JA -moves are also played in A(1) . Clearly, f˜ is tl4*
and f = ∆ ; f˜.
34
N. TZEVELEKOS
We proceed to examine Vt . Eventually, we will see that it contains finite products and that
it contains some exponentials, and that lifting promotes to a functor.
Lifting and product. We first promote the lifting and tensor arena-constructions to functors.
In the following definition recall L from notation 4.6 and note that we write L(m) # m′ for
L(m) ∩ S(m′ ) = ∅.
Definition 4.37. Let f : A → A′ , g : B → B ′ in Vt . Define the arrows
f ⊗g , strat{ [ (iA , iB ) (iA′ , iB ′ ) s ] |
( [ iA iA′ s ] ∈ viewf(f ) ∧ [iB iB ′ ] ∈ g ∧ L(iA iA′ s) # iB )
∨ ( [ iB iB ′ s ] ∈ viewf(g) ∧ [iA iA′ ] ∈ f ∧ L(iB iB ′ s) # iA ) } ,
f⊥ , strat{ [∗ ∗′ ∗′ ∗ s] | [s] ∈ viewf(f ) } ,
of types f ⊗g : A⊗B → A′ ⊗B ′ and f⊥ : A⊥ → A′⊥ .
N
Let us give an informal description of the above constructions:
• f⊥ : A⊥ → A′⊥ initially plays a sequence of asterisks [∗ ∗′ ∗′ ∗] and then continues playing
like f .
• f ⊗g : A⊗B → A′ ⊗B ′ answers initial moves [ (iA , iB ) ] with f ’s answer to [iA ] and g’s
answer to [iB ]. Then, according to whether Opponent plays in JA′ or in JB ′ , Player
plays like f or like g respectively.
Note that f⊥ is always ttotal. We can show the following.
Proposition 4.38.
⊗
: Vt × Vt → Vt and ( )⊥ : Vt → Vtt∗ are functors.
Moreover, ⊗ yields products and hence Vt is cartesian.
Proposition 4.39. Vt is cartesian: 1 is a terminal object and ⊗ is a product constructor.
Proof: Terminality of 1 is clear. Moreover, it is straightforward to see that ⊗ yields
a symmetric monoidal structure on Vt , with its unit being 1 and its associativity, leftunit, right-unit and symmetry isomorphisms being the canonical ones. Hence, it suffices
to show that there exists a natural coherent diagonal, that is, a natural transformation
∆ : IdVt → ⊗ ◦ hIdVt , IdVt i (where hIdVt , IdVt i is the diagonal functor on Vt ) such that the
following diagrams commute for any A, B in Vt .
∆A ⊗∆B
/ (A⊗A)⊗(B ⊗B)
A⊗B TT
TTTT
TTTT
∼
TTT
=
∆A⊗ B TTTT)
(A⊗B)⊗(A⊗B)
A
qq MMMMM ∼
MM=
MMM
∆A
MM&
/ A⊗1
A⊗A
q
∼
=qqqq
qq
xqqq
1⊗A o
!A ⊗idA
idA ⊗!A
But it is easy to see that the diagonal of (4.4) makes the above diagrams commute. Naturality follows from the single-threaded nature of strategies (v. [14]).
FULL ABSTRACTION FOR NOMINAL GENERAL REFERENCES
π
35
π
1
2
Products are concretely given by triples A ←−
A⊗B −→
B, where
π1 = strat{ [ (iA , iB ) iA s ] | [iA iA s] ∈ viewf(idA ) }
f
g
and π2 similarly, while for each A ←
−C−
→ B we have
hf, gi : C → A⊗B = strat{ [ iC (iA , iB ) s ] |
( [iC iA s] ∈ viewf(f ) ∧ [iC iB ] ∈ viewf(g) )
∨ ( [iC iA ] ∈ viewf(f ) ∧ [iC iB s] ∈ viewf(g) ) } .
Finally, we want to generalise the tensor product to a version applicable to countably many
arguments. In arenas, the construction comprises of gluing countably many arenas together
at their initial moves. The problem that arises then is that the product of infinitely many
(initial) moves need not have finite support, breaking the arena specifications. Nevertheless,
in case we are interested only in pointed arenas, this is easily bypassed: a pointed arena has
a unique initial move, which is therefore equivariant, and the product of equivariant moves
is of course also equivariant.
N
Proposition and Definition 4.40. For pointed arenas {Ai }i∈ω define i Ai by:
]
MNi Ai , {∗} +
λNi Ai , [ (∗ 7→ P A), [λAi i∈ω ]] ,
I¯Ai ,
i
[
⊢Ni Ai , {(†, ∗)} ∪ { (∗, jAi ) | i ∈ ω } ∪
INi Ai , {∗} ,
(⊢Ai ↾ I¯Ai 2 ) .
i
For {fi : Ai → Bi }i∈ω with Ai ’s and Bi ’s pointed define:
O
fi , strat{ [∗ ∗ s] | ∃k. [iAk iBk s] ∈ viewf(fk ) } .
i
N Q
: Vt∗ → Vt∗ is a functor.
Then,
In fact, we could proceed and show that the aforedefined tensor yields general products of
pointed objects, but this will not be of use here.
Partial exponentials. We saw that Vt has products, given by the tensor functor ⊗. We
now show that the arrow constructor yields appropriate partial exponentials, which will be
sufficient for our modelling tasks.
Let us introduce the following transformations on strategies.
Definition 4.41. For all arenas A, B, C with C pointed, define a bijection
∼
=
→ Vt (A, B −−⊗ C)
ΛB
A,C : Vt (A⊗B, C) −
by taking, for each h : A⊗B → C and g : A → B −−⊗ C ,9
−−⊗ C , strat{ [i
ΛB
A iC (iB , jC ) s] | [ (iA , iB ) iC jC s ] ∈ viewf(h) } ,
A,C (h) : A → B
−1
ΛB
A,C (g) : A⊗B → C , strat{ [ (iA , iB ) iC jC s ] | [iA iC (iB , jC ) s] ∈ viewf(g) } .
For each (f, g) : (A, B) → (A′ , B ′ ), define the arrows
A
evA,B : (A −−⊗ B)⊗A → B , ΛA
f
−−⊗
−1
B,B (idA−−⊗B ) ,
−−⊗
g : A′ −−⊗ B → A −−⊗ B ′ , ΛA
A
′
B,A′ −−⊗B ′ (id⊗f
−−⊗
; ev ; g) .
N
9Note the reassignment of pointers that takes place implicitly in the definitions of Λ, Λ−1 , in order e.g. for
(iA , iB ) iC jC s to be a play of viewf(h).
36
N. TZEVELEKOS
It is not difficult to see that Λ and Λ−1 are well-defined and mutual inverses. What is more,
they supply us with exponentials.
Proposition 4.42. Vt has partial exponentials wrt to ⊗, in the following sense. For any
object B, the functor ⊗B : Vt → Vt has a partial right adjoint B −−⊗ : Vt∗ → Vt , that
is, for any object A and any pointed object C the bijection ΛB
A,C is natural in A.
Proof: It suffices to show that, for any
f : A⊗B → C and g : A → B −−⊗ C,
Λ(f )⊗id ; ev = f ,
Λ(f ) ⊗id
A⊗B
A consequence of partial exponentiation is that
: (Vt )
Now, in case g is ttotal, the strategy f
strat(φ), where
−−⊗
/C
(iC , iB )
These equalities are straightforward. For example, the viewfunction of Λ(f )⊗id ; ev is given by
the diagram on the side, which also gives the
viewfunction of f .
−−⊗
ev
(iA , iB )
g ⊗id ; ev = Λ−1 (g) .
op
/ (B −−⊗ C)⊗B
iC
jC
(iB , jC )
f
−−⊗
naturally upgrades to a functor:
× Vt∗ → Vt .
g : A′
−−⊗
B → A −−⊗ B ′ is given concretely by
φ = { [iB iB ′ (iA , jB ′ ) (iA′ , jB ) s] |
([iA iA′ s] ∈ viewf(f ) ∧ [iB iB ′ jB ′ jB ] ∈ g ∧ L(iA iA′ s)#iB , jB ′ )
∨ ([iB iB ′ jB ′ jB s] ∈ viewf(g) ∧ [iA iA′ ] ∈ f ∧ L(iB iB ′ jB ′ jB s)#iA ) }.
That is, f −−⊗ g answers initial moves [iB ] like g and then responds to [iB iB ′ (iA , jB ′ ) ] with
f ’s answer to [iA ] and g’s response to [iB iB ′ jB ′ ] (recall g ttotal). It then plays like f or
like g, according to Opponent’s next move. Note that φ is a viewfunction even if B, B ′ are
not pointed.
A special case of ttotality in the second argument arises in the defined functor:
⇒
: (Vt )op × Vt → Vtt∗ ,
−−⊗
( )⊥ .
(4.5)
Remark 4.43. In the work on CBV games of Honda & Yoshida [15] the following version
of partial exponentiation is shown.
V(A⊗B, C) ∼
(4.6)
= Vt (A, B ⇒ C)
Interestingly, that version can be derived from ours (using also another bijection shown
in [15]),
V(A⊗B, C) ∼
= Vt (A⊗B, C⊥ ) ∼
= Vt (A, B −−⊗ C⊥ ) = Vt (A, B ⇒ C) .
FULL ABSTRACTION FOR NOMINAL GENERAL REFERENCES
37
But also vice versa, if C is pointed then C ∼
= C2 ⇒ C1 , for some arenas C1 , C2 ,10 and
(4.6)
(4.6)
Vt (A⊗B, C2 ⇒C1 ) ∼
= Vt (A, (B ⊗ C2 )⇒C1 ) = Vt (A, B −−⊗(C2 ⇒C1 )) .
= V(A⊗B ⊗C2 , C1 ) ∼
Strategy and arena orders. Recall the orders defined for strategies (⊑) and arenas (E) in
section 4.2. These being subset orderings are automatically inherited by Vt . Moreover,
by use of corollary 4.33 we can easily show that the aforedefined functors are continuous.
Note that, although the strategy order ⊑ is inherited from V, the least element (the empty
strategy) is lost, as it is not total.
Proposition 4.44. Vt and Vtt are PreCpo-enriched wrt ⊑.11 Moreover,
O
Y
( )⊥ : Vt → Vtt∗ ,
( ⊗ ) : Vt × Vt → Vt ,
(
):
Vt∗ → Vt∗ ,
(
−−⊗
) : Vtop × Vtt∗ → Vtt∗ , ( ⇒ ) : Vtop × Vt → Vtt∗
are locally continuous functors.
The order of arenas in Vt is the same as in G, and therefore Ob(Vt ) is a cpo with least
element 0. Note that A E B does not imply that the corresponding projection is a total
strategy — but A E1 B does imply it. In fact,
A E1 B =⇒ projB,A ∈ Vtt (B, A)
∧
A E2 B =⇒ inclA,B ∈ Vtt (A, B) .
Moreover, we have the following.
Proposition 4.45. All of the functors of proposition 4.44 are continuous wrt E . Moreover,
A E A′ ∧ B E B ′ =⇒ inclA,A′ ⊗inclB,B ′ = inclA⊗B,A′ ⊗B ′
A E1 A′ ∧ B E1 B ′ =⇒ projA′ ,A ⊗projB ′ ,B = projA′ ⊗B ′ ,A⊗B
O
∀i ∈ ω. Ai E A′i =⇒
inclAi ,A′i = inclNi Ai ,Ni A′i
i
O
∀i ∈ ω. Ai E A′i =⇒
projA′ ,Ai = projNi A′ ,Ni Ai
i
i
i
A E1 A′ ∧ B E B ′ =⇒ projA′ ,A ⇒ inclB,B ′ = inclA⇒B,A′ ⇒B ′
A E A′ ∧ B E1 B ′ =⇒ inclA,A′ ⇒ projB ′ ,B = projA′ ⇒B ′ ,A⇒B
A E1 A′ ∧ B E2 B ′ =⇒ projA′ ,A −−⊗ inclB,B ′ = inclA
′
′
A E A ∧ B E1 B =⇒ inclA,A′
−−⊗
B,A′ −−⊗B ′
−−⊗
projB ′ ,B = projA′
B ′ ,A−−⊗B
−−⊗
.
Proof: All the clauses are in effect functoriality statements, since the underlying sets of
inclusions and projections correspond to identity strategies.
10 In fact, for C to be expressed as C ⇒ C we need a stronger version of condition (f), namely:
2
1
(f’)
For each m ∈ MA , there exists unique k ≥ 0 and a unique sequence x1 . . . xn ∈ {Q, A}∗ such that
IA ∋ m1 ⊢A · · · ⊢A mk ⊢A m , for some ml ’s in MA with λQA
C (ml ) = xl .
A
In such a case, C1 and C2 are given by taking KC
, { m ∈ MC | ∃jC . jC ⊢C m ∧ λC (m) = P A } and
A
A
A
⊢C1 , ⊢C ↾ (MC1 × I¯C1 ) λC1 , λC ↾ MC1
MC1 , KC + { m ∈ MC | ∃k ∈ KC . k ⊢C · · · ⊢C m } IC1 , KC
MC2 , I¯C \ MC1
λC2 , [iC2 7→ P A, m 7→ λ̄C (m) ] IC2 , JC
⊢C2 , ⊢C ↾ (MC2 × I¯C2 ) .
11 By precpo we mean a cpo which may not have a least element. PreCpo is the category of precpos and
continuous functions.
38
N. TZEVELEKOS
4.5. A monad, and some comonads. We now proceed to construct a monad and a
family of comonads on Vt that will be of use in later sections. Specifically, we will upgrade
lifting to a monad and introduce a family of product comonads for initial state.
Lifting monad. It is a more-or-less standard result that the lifting functor induces a monad.
Definition 4.46. Define the natural transformations up, dn, st as follows.
upA : A → A⊥ = strat{ [iA ∗1 ∗2 iA s] | [iA iA s] ∈ viewf(idA ) }
dnA : A⊥⊥ → A⊥ , strat{ [∗1 ∗′1 ∗′2 ∗2 ∗3 ∗4 s] | [s] ∈ viewf(idA ) }
stA,B : A⊗B⊥ → (A⊗B)⊥ , strat{ [ (iA , ∗1 ) ∗′1 ∗′2 ∗2 iB (iA , iB ) s]
| [ (iA , iB ) (iA , iB ) s] ∈ viewf(idA⊗B ) }
N
(primed asterisks are used for arenas on the RHS, where necessary).
Proposition 4.47. The quadruple (( )⊥ , up, dn, st) is a strong monad on Vt . Moreover,
it yields monadic exponentials by taking (C⊥ )B to be B ⇒ C, for each B, C.
Proof: It is not difficult to see that (( )⊥ , up, dn, st) is a strong monad. Moreover, for
each B, C we have that B ⇒ C = B −−⊗ C⊥ is a ( )⊥ -exponential, because of exponentiation
properties of −−⊗.
Although finding a canonical arrow from A to A⊥ is elementary (upA ), finding a canonical
arrow in the inverse direction is not always possible. In some cases, e.g. A = Ai , there is no
such arrow at all, let alone canonical. An exception occurs when A is pointed, by setting:
puA : A⊥ → A , strat{ [∗ iA jA ∗ iA jA s] | [iA iA jA jA s] ∈ viewf(idA ) } .
(4.7)
Lemma 4.48. puA yields a natural transformation pu : ( )⊥(Vtt∗ ) → IdVtt∗ . Moreover, for
any arenas A, B with B pointed, upA ; puA = idA , puA⊥ = dnA and
puB
st′
ev⊥
puA B = Λ (A −−⊗ B)⊥ ⊗A −−→ ((A −−⊗ B)⊗A)⊥ −−→
B⊥ −−→
B .
−−⊗
Initial-state comonads. Our way of modelling terms-in-local-state will be by using initial
state comonads, in the spirit of intensional program modelling of Brookes & Geva [9]. In
our setting, the initial state can be any list ā of distinct names; we define a comonad for
each one of those lists.
Definition 4.49 (Initial-state comonads). For each ā ∈ A# define the triple (Qā , ε, δ)
and
by taking Qā : Vt → Vt , Aā ⊗
π
2
ε : Qā → IdVt , { εA : Aā ⊗A −→
A},
∆⊗id
δ : Qā → (Qā )2 , { δA : Aā ⊗A −−−−→ Aā ⊗ Aā ⊗A } .
For each S(ā′ ) ⊆ S(ā) define the natural transformation
ā
ā′
′
: Qā → Qā by taking
′
( āā′ )A : Aā ⊗A → Aā ⊗A , ( āā′ )1 ⊗idA ,
where ( āā′ )1 is
ā
ā′
of definition 4.8, that is, ( āā′ )1 , { [ (ā, ∗) (ā′ , ∗) ] } .
N
FULL ABSTRACTION FOR NOMINAL GENERAL REFERENCES
39
Note that Qǫ , the comonad for empty initial state, is the identity comonad. Note also that
we have suppressed indices ā from transformations ε, δ for notational economy.
Clearly, each triple (Qā , ε, δ) forms a product comonad on Vt . Moreover, it is straightforward to show the following.
Proposition 4.50 (Chain rule). For each S(ā′ ) ⊆ S(ā) ∈ A# , the transformation āā′ is
a comonad morphism. Moreover, āǫ = ε : Qā → IdVt , āā = id : Qā → Qā and, for each
′′
S(ā′ ) ⊆ S(ā′′ ) ⊆ S(ā), āā′′ ; āā′ = āā′ .
Finally, for each name-type i, we can define a name-test arrow:
eqi : Ai ⊗ Ai → N , { [ (a, a) 0] } ∪ { [ (a, b) 1] | a 6= b } ,
(4.8)
which clearly makes the (N1) diagram (definition 3.12) commute.
Fresh-name constructors. Combining the monad and comonads defined previously we can
obtain a monadic-comonadic setting (Vt , ( )⊥ , Q), where Q denotes the family (Qā )ā∈A# .
This setting, which in fact yields a sound model of the ν-calculus [2, 48], will be used as
the basis of our semantics of nominal computation in the sequel. Nominal computation of
type A, in name-environment ā and variable-environment Γ, will be translated into the set
of strategies
{ σ : Qā JΓK → JAK ⊥ } .
The lifting functor, representing the monadic part of our semantical setting, will therefore
incorporate the computational effect of fresh-name creation.
We describe in this section the semantical expression of fresh-name creation. Fresh
names are created by means of natural transformations which transform a comonad Qā ,
say, to a monad-comonad composite (Qāa )⊥ .
Definition 4.51. Consider the setting (Vt , ( )⊥ , Q). We define natural transformations
newāa : Qā → (Qāa )⊥ by
newāa ⊗idA
st′
1
ā
āa
−→ (Aāa ⊗A)⊥ ,
newāa
A , A ⊗A −−−−−−−→ (A )⊥ ⊗A −
ā
āa
a
newāa
1 : A ⊗1 → (A ⊗1)⊥ , strat{ [ (ā, ∗) ∗ ∗ (āa, ∗) ] } ,
for each āa ∈ A# .
N
That new is a natural transformation is straightforward: for any f : A → B we can form
the following commutative diagram.
Aā ⊗A
new1 ⊗id
/ (Aāa )⊥ ⊗A
st′
/ (Aāa ⊗A)⊥
id ⊗f
id ⊗f
Aā ⊗B
(id ⊗f )⊥
new1 ⊗id
/ (Aāa )⊥ ⊗B
st′
/ (Aāa ⊗B)⊥
Moreover, we can show the following.
Proposition 4.52. In the setting (Vt , ( )⊥ , Q) with new defined as above, the (N2) diagrams (definition 3.12) commute.
40
N. TZEVELEKOS
The fresh-name constructor allows us to define name-abstraction on strategies by taking:
newāa
pu
σ
C
⊥
^ a _ σ , Qā B −−−B→ (Qāa B)⊥ −→
C⊥ −−→
C.
(4.9)
Name-abstraction can be given an explicit description as follows. For any sequence of
moves-with-names s and any name a # nlist(a), let sa be s with a in the head of all of its
name-lists. Then, for σ as above, we can show that:
viewf(^ a _ σ) = { [ (ā, iB ) iC jC mab̄ sa ] | [ (āa, iB ) iC jC mb̄ s] ∈ viewf(σ) ∧ a # iB , jC }
(4.10)
We end our discussion on fresh-name constructors with a technical lemma stating that
name-abstraction and currying commute.
Lemma 4.53. Let f : Qāa (A⊗B) → C with C a pointed arena. Then,
^ a _ Λ(ζ ′ ; f ) = Λ(ζ ′ ; ^ a _ f ) : Qā A → B −−⊗ C .
Proof: As follows.
′
^ a _ Λ(ζ ′ ; f ) = newāa
A ;(Λ(ζ ; f ))⊥ ; puB
C
−−⊗
′
′
= newāa
A ;(Λ(ζ ; f ))⊥ ; Λ(st ; ev ⊥ ; puC )
′
′
= Λ(newāa
A ⊗idB ;(Λ(ζ ; f ))⊥ ⊗idB ; st ; ev ⊥ ; puC )
′
′
= Λ(newāa
A ⊗idB ; st ;(Λ(ζ ; f )⊗idB )⊥ ; ev ⊥ ; puC )
(N2)
′
′
′
āa
= Λ(newāa
A ⊗idB ; st ;(ζ ; f )⊥ ; puC ) = Λ(ζ ; newA⊗B ; f⊥ ; puC )
and the latter equals Λ(ζ ′ ; ^ a _ f ).
Note that the above result does not imply that ν- and λ-abstractions commute in our
semantics of nominal languages, i.e. that we obtain identifications of the form Jνa.λx.M K =
Jλx.νa.M K. As we will see in the sequel, λ-abstraction is not simply currying, because of
the use of monads.
4.6. Nominal games à la Laird. As aforementioned, there have been two independent
original presentations of nominal games, one due to Abramsky, Ghica, Murawski, Ong and
Stark (AGMOS) [2] and another one due to Laird [21, 24]. Although Laird’s constructions
are are not explicitly based on nominal sets (natural numbers are used instead of atoms),
they constitute nominal constructions nonetheless. In this section we highlight the main
differences between our nominal games, which follow AGMOS, and those of [21, 24].
Laird’s presentation concerns the ν-calculus with pointers, i.e. with references to names.
The main difference in his presentation is in the treatment of name-introduction. In particular, a name does not appear in a play at the point of evaluation of its ν-constructor, but
rather at the point of its first use; let us refer to this condition as name-frugality (cf. [31]).
An immediate result is that strategies are no longer innocent, as otherwise e.g. νa.λx.a
and λx.νa.a would have the same denotation.12 More importantly, name-frugality implies
that strategies capture the examined nominal language more accurately: Opponent is not
expected to guess names he is not supposed to know and thus, for example, the denotations of νa.skip and skip are identical. In our setting, Player is not frugal with his names
12Non-innocence can be seen as beneficial in terms of simplicity of the model, since strategies then
have one condition less. On the other hand, though, innocent strategies are specified by means of their
viewfunctions, which makes their presentation simpler. Moreover, non-innocence diminishes the power of
definability results, as finitary behaviours are less expressive in absence of innocence.
FULL ABSTRACTION FOR NOMINAL GENERAL REFERENCES
41
and therefore the two terms above are identified only at the extensional level (i.e. after
quotienting).13
The major difference between [21] and [24] lies in the modelling of (ground-type, namestoring) store. In [21] the store is modelled by attaching to strategies a global, top-level
(non-monadic), store arena. Then, a good-store-discipline is imposed on strategies via
extra conditions on strategy composition which enforce that hidden store-moves follow the
standard read/write pattern. As a result (and in contrast to our model), the model relies
heavily on quotienting by the intrinsic preorder in order for the store to work properly.
The added accuracy obtained by using frugality conditions is fully exploited in [24],
where a carefully formulated setting of moves-with-store14 allows for an explicit characterisation result, that is, a semantic characterisation of operational equality at the intensional
level. The contribution of using moves-with-store in that result is that the semantics is thus relieved from the (too revealing) internal workings of store: for example, terms like
(a := b) ; λx. ! a ; 0 and (a := b) ; λx.0 are equated semantically at the intensional level, in
contrast to what happens in our model.15 Note, though, that in a setting with higher-order
store such as that of νρ, moves-with-store would not be as simple since stores would need to
store higher-order values, that is, strategies.
Laird’s approach is therefore advantageous in its use of name-frugality conditions, which
allow for more accurate models. At the same time, though, frugality conditions are an extra
burden in constructing a model: apart from the fact that they need to be dynamically preserved in play-composition by garbage collection, they presuppose an appropriately defined
notion of name-use. In [21, 24], a name is considered as used in a play if it is accessible
through the store (in a reflexive transitive manner) from a name that has been explicitly
played. This definition, however, does not directly apply to languages with different nominal effects (e.g. higher-order store). Moreover, frugality alone is not enough for languages
like Reduced ML or the ν-calculus: a name may have been used in a play but may still
be inaccessible to some participant (that is, if it is outside his view [31]). On the other
hand, our approach is advantageous in its simplicity and its applicability to a wide range of
nominal effects (see [48]), but suffers from the accuracy issues discussed above.
5. The nominal games model
We embark on the adventure of modelling νρ in a category of nominal arenas and strategies.
Our starting point is the category Vt of nominal arenas and total strategies. Recall that Vt
is constructed within the category Nom of nominal sets so, for each type A, we have an
arena AA for references to type A.
13Note here, though, that the semantics being too explicit about the created names can prove beneficial:
here we are able to give a particularly concise proof adequacy for νρ (see section 5.3 and compare e.g. with
respective proof in [3]) by exploiting precisely this extra information!
14
Inter alia, frugality of names implies that sequences of moves-with-store have strong support even if
stores are represented by sets!
15In our model they correspond to the strategies (see also section 5):
σ1 , { [ (a, b) ∗ ⊛(∗, ⊛)(n, ⊛) a c 0] } ,
σ2 , { [ (a, b) ∗ ⊛(∗, ⊛)(n, ⊛) 0] } .
Thus, the inner-workings of the store revealed by σ1 (i.e. the moves a c) differentiate it from σ2 . In fact, in our
attempts to obtain an explicit characterisation result from our model, we found store-related inaccuracies
to be the most stubborn ones.
The semantics is monadic in a store monad built around a store arena ξ, and comonadic in an initial state comonad. The store monad is defined on top of the lifting monad
(see definition 4.46) by use of a side-effect monad constructor, that is,
T A , ξ −−⊗ (A ⊗ ξ)⊥
i.e. T A = ξ ⇒ A ⊗ ξ .
Now, ξ contains the values assigned to each name (reference), and thus it is of the form
    ⊗_{A∈TY} (A_A ⇒ JAK)
where JAK is the translation of each type A. Thus, a recursive (wrt type-structure) definition of the type-translation is not possible because of the following cyclicity.
    JA → BK = JAK −−⊗ (ξ ⇒ JBK ⊗ ξ) ,    ξ = ⊗_{A∈TY} (A_A ⇒ JAK)    (SE)
Rather, both ξ and the type-translation have to be computed as the least solution to the above domain equation. By the way, observe that JA → BK = JAK ⊗ ξ ⇒ JBK ⊗ ξ .
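To make the cyclicity concrete, here is a minimal sketch in Haskell of the shape of the mutual recursion, with toy stand-ins TypeName, Sem and StoreState assumed for the paper's types, translations and store arena; it illustrates only the dependency, not the games model itself.

```haskell
-- Illustrative only: semantic values mention the store, and the store assigns
-- semantic values to names -- the same cycle as in (SE).
data TypeName = TUnit | TNat | TRef TypeName | TArrow TypeName TypeName

data Sem
  = SUnit
  | SNat Int
  | SName Int
  | SFun (Sem -> StoreState -> Maybe (Sem, StoreState))
      -- an arrow type is read as a store-passing, possibly undefined function

type StoreState = [(Int, (TypeName, Sem))]
      -- the store: each allocated name carries its type and its stored value
```

Haskell's recursive data declarations accept this directly; the paper instead has to solve the corresponding domain equation, which is the task of the next subsection.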
5.1. Solving the Store Equation. The full form of the store equation (SE) is:
    J1K = 1 ,    JNK = N ,    J[A]K = A_A ,    JA × BK = JAK ⊗ JBK ,
    JA → BK = JAK −−⊗ (ξ ⇒ JBK ⊗ ξ) ,    ξ = ⊗_A (A_A ⇒ JAK) .
This can be solved either as a fixpoint equation in the cpo of nominal arenas or as a domain
equation in the PreCpo-enriched category Vt . We follow the latter approach, which provides
the most general notion of canonical solution (and which incorporates the solution in the
cpo of nominal arenas, analogously to [26]). It uses the categorical constructions of [43, 11]
for solving recursive domain equations, as adapted to games in [26].
Definition 5.1. Define the category
    C ≜ Vt × ∏_{A∈TY} Vt
with objects D of the form (Dξ , ⟨DA⟩A∈TY) and arrows f of the form (fξ , ⟨fA⟩A∈TY).
Now take F : C^op × C → C to be defined on objects by F(D, E) ≜ (ξD,E , ⟨JAK D,E⟩A∈TY), where:
    J1K D,E ≜ 1 ,    JNK D,E ≜ N ,    J[A]K D,E ≜ A_A ,    JA × BK D,E ≜ JAK D,E ⊗ JBK D,E ,
    JA → BK D,E ≜ DA −−⊗ (ξE,D ⇒ EB ⊗ ξD,E) ,    ξD,E ≜ ⊗_{A∈TY} (A_A ⇒ EA)
and similarly for arrows, with F(f, g) ≜ (ξf,g , ⟨JAK f,g⟩A∈TY) .
Now (SE) has been reduced to:
D = F (D, D)
(SE∗ )
where F is a locally continuous functor wrt the strategy ordering (proposition 4.44), and
continuous wrt the arena ordering (proposition 4.45). The solution to (SE∗ ) is given via a
local bilimit construction to the following ω-chain in C.16
16Recall that we call an arrow e : A → B an embedding if there exists eR : B → A such that
e ; eR = idA ∧ eR ; e ⊑ idB .
Definition 5.2. In C form the sequence (Di )i∈ω taking D0 as below and Di+1 , F (Di , Di ).
    D0,N ≜ N ,    D0,1 ≜ 1 ,    D0,A→B ≜ 1 ,    D0,A×B ≜ D0,A ⊗ D0,B ,    D0,[A] ≜ A_A ,    D0,ξ ≜ ⊗_A (A_A ⇒ 0) .
Moreover, define arrows ei : Di → Di+1 and ei^R : Di+1 → Di as:
    e0 ≜ inclD0,D1 ,    e0^R ≜ projD1,D0 ,    ei+1 ≜ F(ei^R, ei) ,    ei+1^R ≜ F(ei, ei^R) .
The above inclusion and projection arrows are defined componentwise. In fact, there is a hidden lemma here which allows us to define the projection arrow, namely that D0 E1 D1 (which means D0,ξ E1 D1,ξ and D0,A E1 D1,A for all A).
    (∆)    D0 --e0--> D1 --e1--> D2 --e2--> D3 --e3--> ···
Thus, we have formed the ω-chain ∆. We show that ∆ is a E-increasing sequence of objects
and embeddings, and proceed to the main result.
Lemma 5.3. For (ei, ei^R)i∈ω as above and any i ∈ ω,
    ei = inclDi,Di+1  ∧  ei^R = projDi+1,Di .
Proof: It is easy to see that Di E1 Di+1 , all i ∈ ω, so the above are well-defined. We now
do induction on i; the base case is true by definition. The inductive step follows easily from
proposition 4.45.
Theorem 5.4. We obtain a local bilimit (D∗, ⟨ηi⟩i∈ω) for ∆ by taking:
    D∗ ≜ ⊔_i Di ,    ηi ≜ inclDi,D∗ (each i ∈ ω).
Hence, idD∗ : F(D∗, D∗) → D∗ is a minimal invariant for F .
Proof: First, note that D0 E1 Di , for all i ∈ ω, implies that all Di ’s share the same initial
moves, and hence Di E1 D ∗ . Thus, for each i ∈ ω, we can define ηiR , projD∗ ,Di , and
hence each ηi is an embedding. We now need to show the following.
(1) (D∗, ⟨ηi⟩i∈ω) is a cone for ∆,
(2) for all i ∈ ω, ηi^R ; ηi ⊑ ηi+1^R ; ηi+1 ,
(3) ⊔_{i∈ω} (ηi^R ; ηi) = idD∗ .
For 1, we need to show that, for any i, inclDi,D∗ = inclDi,Di+1 ; inclDi+1,D∗ , which follows from (TRN). For 2 we essentially need to show that idDi ⊆ idDi+1 , and for 3 that ∪_i idDi = idD∗ ; these are both straightforward.
From the local bilimit (D∗, ⟨ηi⟩i∈ω) we obtain a minimal invariant α : F(D∗, D∗) → D∗ by taking (see e.g. [1]):
    α ≜ ⊔_i αi ,    αi ≜ F(ηi, ηi^R) ; ηi+1 = projF(D∗,D∗),Di+1 ; inclDi+1,D∗    (by prop. 4.45).
Moreover, D∗ = F(D∗, D∗) by the Tarski-Knaster theorem, and therefore αi = ηi+1^R ; ηi+1, which implies α = idD∗ .
Given an ω-chain ∆ = (Di, ei)i∈ω of objects and embeddings, a cone for ∆ is an object D together with a family (ηi : Di → D)i∈ω of embeddings such that, for all i ∈ ω, ηi = ei ; ηi+1 . Such a cone is a local bilimit for ∆ if, for all i ∈ ω,
    ηi^R ; ηi ⊑ ηi+1^R ; ηi+1  ∧  ⊔_{i∈ω} (ηi^R ; ηi) = idD .
Thus, D∗ is the canonical solution to D = F(D, D), and in particular it solves:
    DA→B = DA −−⊗ (Dξ ⇒ DB ⊗ Dξ) ,    Dξ = ⊗_A (A_A ⇒ DA) .
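The way D∗ is reached, by iterating F from a least element and taking the least upper bound of the resulting chain, has the familiar shape of a least-fixed-point computation. The following Haskell sketch shows that shape on finite data; lfp, Rel and closeStep are illustrative stand-ins and do not involve arenas at all.

```haskell
import qualified Data.Set as Set

-- Generic Kleene-style iteration: apply a monotone step until nothing changes
-- (terminates only on finite ascending chains).
lfp :: Eq a => (a -> a) -> a -> a
lfp step x = let x' = step x in if x' == x then x else lfp step x'

-- Toy instance: transitive closure of a finite relation, computed as a least
-- fixed point -- the same shape of computation, on sets instead of arenas.
type Rel = Set.Set (Int, Int)

closeStep :: Rel -> Rel -> Rel
closeStep edges r =
  Set.unions [ edges, r
             , Set.fromList [ (a, c) | (a, b)  <- Set.toList r
                                     , (b', c) <- Set.toList edges
                                     , b == b' ] ]

transClosure :: Rel -> Rel
transClosure edges = lfp (closeStep edges) edges
```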
Definition 5.5. Taking D∗ as in the previous theorem define, for each type A,
    ξ ≜ D∗_ξ ,    JAK ≜ D∗_A .
The arena ξ and the translation of compound types are given explicitly in the following figure. ξ is depicted by means of unfolding it to ⊗_A (A_A ⇒ JAK): it consists of an initial move ⊛ which justifies each name-question a ∈ A_A, all types A, with the answer to the latter being the denotation of A (and modelling the stored value of a). Note that we reserve the symbol “⊛” for the initial move of ξ. ⊛-moves in type-translations can be seen as opening a new store.
[Arena diagrams omitted in this text rendering.]
Figure 4: The store arena and the type translation.
The store monad T . There is a standard construction (v. [28]) for defining a monad of A-side-effects (any object A) starting from a given strong monad with exponentials. Here we define a store monad, i.e. a ξ-side-effects monad, from the lifting monad as follows.
    T : C → C ≜ ξ ⇒ (− ⊗ ξ)
    ηA : A → T A ≜ Λ( A ⊗ ξ --up--> (A ⊗ ξ)⊥ )
    µA : T²A → T A ≜ Λ( T²A ⊗ ξ --ev--> (T A ⊗ ξ)⊥ --ev⊥--> (A ⊗ ξ)⊥⊥ --dn--> (A ⊗ ξ)⊥ )    (5.1)
    τA,B : A ⊗ T B → T (A ⊗ B) ≜ Λ( A ⊗ T B ⊗ ξ --id⊗ev--> A ⊗ (B ⊗ ξ)⊥ --st--> (A ⊗ B ⊗ ξ)⊥ )
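Reading T A = ξ ⇒ (A ⊗ ξ)⊥ as "state over lifting", the data of (5.1) can be sketched as follows, with Xi an assumed toy stand-in for the store arena; this is an analogy for orientation only, not the strategies themselves.

```haskell
type Xi       = [(Int, Int)]            -- toy store (names to numeric values)
type StoreT a = Xi -> Maybe (a, Xi)     -- T A read as state-over-lifting

etaS :: a -> StoreT a                   -- η_A: return the value, leave the store alone
etaS a s = Just (a, s)

muS :: StoreT (StoreT a) -> StoreT a    -- µ_A: run the outer computation, then the inner one
muS mm s = case mm s of
  Nothing      -> Nothing
  Just (m, s') -> m s'

tauS :: a -> StoreT b -> StoreT (a, b)  -- τ_{A,B}: strength, pairing a pure value with an effect
tauS a mb s = fmap (\(b, s') -> ((a, b), s')) (mb s)
```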
A concrete description of the store monad is given in figure 5 (the diagrams of strategies
depict their viewfunctions, as described in notation 4.34). For the particular case of ⊛-moves which appear as second moves in T A's, let us recall the convention we are following.
Looking at the diagram for T A (figure 5), we see that ⊛ justifies a copy of ξ − (left) and
a copy of A⊗ξ (right). Thus, a copycat link connecting to the lower-left of a ⊛ expresses
a copycat concerning the ξ − justified by ⊛ (e.g. the link between the first two ⊛-moves in
the diagram for µA ), and similarly for copycat links connecting to the lower-right of a ⊛.
[Diagrams of the strategies ηA, T f, µA and τA,B, given by their viewfunctions, omitted in this text rendering.]
Figure 5: The store monad.
Thus, for example, µA is given by:
µA = strat( { [∗ ∗ ⊛ ⊛ s] | [⊛ ⊛ s] ∈ viewf(idξ ) }
∪ { [∗ ∗ ⊛ ⊛ (∗, ⊛′ ) ⊛′ s] | [⊛′ ⊛′ s] ∈ viewf(idξ ) ∨ [s] ∈ viewf(idA⊗ξ ) } ) .
A consequence of lifting being a strong monad with exponentials is that the store monad is
also a strong monad with exponentials. T -exponentials are given by:
    T B^A ≜ A −−⊗ T B ,    ΛT (f : A⊗B → T C) ≜ Λ(f ) .    (5.2)
Moreover, for each arena A we can define an arrow:
    αA ≜ A⊥ --(ηA)⊥--> (T A)⊥ --pu_{T A}--> T A .    (5.3)
The transformation pu was introduced in (4.7). Using lemma 4.48 we obtain αA = Λ(st′A,ξ). Moreover, we can show that α : (−)⊥ → T is a monad morphism.
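In the same toy state-over-lifting reading used above, αA promotes a possibly-undefined value to a stateful computation that leaves the store untouched; one can check directly that this commutes with the units and multiplications, which is the monad-morphism property just claimed. Xi and StoreT are the illustrative stand-ins of the previous sketch, not the paper's arenas.

```haskell
type Xi       = [(Int, Int)]
type StoreT a = Xi -> Maybe (a, Xi)

alphaS :: Maybe a -> StoreT a
alphaS Nothing  _ = Nothing        -- an undefined value stays undefined
alphaS (Just a) s = Just (a, s)    -- a defined value is returned, the store is unchanged
```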
5.2. Obtaining the νρ-model. Let us recapitulate the structure that we have constructed
thus far to the effect of obtaining a νρ-model in Vt . Our numbering below follows that of
definition 3.12.
I. Vt is a category with finite products (proposition 4.39).
II. The store monad T is a strong monad with exponentials.
III. Vt contains adequate structure for numerals.
IV. There is a family (Qā , ε, δ, ζ)ā∈A# of product comonads, with each Qā having basis Aā
(see section 4.5), which fulfils specifications (a,b). There are also fresh-name constructors, newāa : Qā → (Qāa )⊥ , which satisfy (N2).
46
N. TZEVELEKOS
V. There are name-equality arrows, eqA for each type A, making the (N1) diagram commute (section 4.5).
From new we can obtain a fresh-name transformation for the store monad.
Definition 5.6. For each āa ∈ A#, define a natural transformation nu^{āa} : Qā → T Qāa by:
    nu^{āa}_A ≜ Qā A --new^{āa}_A--> (Qāa A)⊥ --α_{Qāa A}--> T Qāa A .
Moreover, for each f : Qāa A → T B, take ^ a _ f ≜ Qā A --nu^{āa}_A--> T Qāa A --T f--> T²B --µB--> T B .
Each arrow nu^{āa}_A is explicitly given by (note we use the same conventions as in (4.10)):
    nu^{āa}_A = strat{ [ (ā, iA) ∗ ⊛ (āa, i^a_A, ⊛) s^a ] | a # iA ∧ ([iA iA s] ∈ viewf(idA) ∨ [⊛ ⊛ s] ∈ viewf(idξ)) }
and diagrammatically as in figure 6. Moreover, using the fact that α is a monad morphism and lemma 4.48 we can show that, in fact, ^ a _ f is given exactly as in (4.9), that is, ^ a _ f = newA ; f⊥ ; pu_{T B} .
Finally, α being a monad morphism implies also the following.
Proposition 5.7. The nu transformation satisfies the (N2) diagrams of definition 3.12.
What we are only missing for a νρ-model is update and dereferencing maps.
Definition 5.8. For any type A we define the following arrows in Vt,
    drfA ≜ strat{ [ a ∗ ⊛ a iJAK (iJAK, ⊛) s ] | [⊛ ⊛ s] ∈ viewf(idξ) ∨ [iJAK iJAK s] ∈ viewf(idJAK) } ,
    updA ≜ strat( { [ (a, iJAK) ∗ ⊛ b b s ] | [⊛ ⊛ b b s] ∈ viewf(idξ) ∧ b # a }
                ∪ { [ (a, iJAK) ∗ ⊛ a iJAK s ] | [iJAK iJAK s] ∈ viewf(idJAK) } ) ,
depicted also in figure 6.
These strategies work as follows. updA responds with the answer (∗, ⊛) to the initial sequence (a, iJAK ) ∗ ⊛ and then:
• for any name b # a that is asked by O to (∗, ⊛) (which is a store-opening move), it copies
b under the store ⊛ (opened by O) and establishes a copycat link between the two b’s;
• if O asks a to (∗, ⊛), it answers iJAK and establishes a copycat link between the two iJAK ’s.
On the other hand, drfA does not immediately answer to the initial sequence a ∗ ⊛ but
rather asks (the value of) a to ⊛. Upon receiving O’s answer iJAK , it answers (iJAK , ⊛) and
establishes two copycat links. We can show by direct computation the following.
Proposition 5.9. The (NR) and (SNR) diagrams of definition 3.12 commute.
We have therefore established the following.
Theorem 5.10. (Vt , T, Q) is a νρ-model.
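As a rough operational counterpart of the arrows updA, drfA and nu^{āa}_A, the following Haskell sketch implements update, dereferencing and fresh-name creation in the toy state-over-lifting reading of T used earlier; Name, Val and Xi are assumptions of the sketch, and freshness is tracked by a counter rather than by the comonads Qā.

```haskell
type Name     = Int
data Val      = VUnit | VNum Int | VName Name deriving (Eq, Show)
type Xi       = (Name, [(Name, Val)])      -- next fresh name, plus the current bindings
type StoreT a = Xi -> Maybe (a, Xi)

drf :: Name -> StoreT Val                  -- read a name; undefined if it was never assigned,
drf a (n, bs) = fmap (\v -> (v, (n, bs))) (lookup a bs)   -- mirroring how !a blocks on a fresh name

upd :: Name -> Val -> StoreT ()            -- write a name, shadowing any earlier binding
upd a v (n, bs) = Just ((), (n, (a, v) : filter ((/= a) . fst) bs))

new :: StoreT Name                         -- create a fresh, uninitialised name
new (n, bs) = Just (n, (n + 1, bs))
```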
[Diagrams of drfA : A_A → T JAK, updA : A_A ⊗ JAK → T 1 and nu^{āa}_A : Qā A → T Qāa A omitted in this text rendering.]
Figure 6: Strategies for update, dereferencing and fresh-name creation.
We close this section with a discussion on how the store-effect is achieved in our innocent
setting, and with some examples of translations of νρ-terms in Vt .
Remark 5.11 (Innocent store). The approach to the modelling of store which we
have presented differs fundamentally from previous such approaches in game semantics.
Those approaches, be they for basic or higher-order store [6, 3], are based on the following
methodology. References are modelled by read/write product types, and fresh-reference
creation is modelled by a “cell” strategy which creates the fresh cell and imposes a good
read/write discipline on it. In order for a cell to be able to return the last stored value,
innocence has to be broken since each read-request hides previous write-requests from the
P-view. Higher-order cells have to also break visibility in order to establish copycat links
between read- and write-requests.
Here instead we have only used innocent strategies and a monad on a store ξ. Because of the monad, an arena JAK contains several copies of ξ, therefore several stores are opened inside a play. The read/write discipline is then kept in an interactive way: when a participant asks (the value of) a name a at the last (relevant) store,17 the other participant either answers with a value or asks himself a at the penultimate store, and so on until one of the participants answers or the first store in the play is reached. At each step, a participant answers the question a only if he updated the value of a before opening the current store (of that step, i.e. the last store in the participant's view) — note that this behaviour does not break innocence. If no such update was made by the participant then he simply passes a to the previous store and establishes a copycat link between the two a's. These links ensure that when an answer is eventually obtained then it will be copycatted all the way to answer the original question a. Thus, we innocently obtain a read/write discipline: at each question a, the last update of a is returned.
    P – What's the value of a?
    O – I don't know, you tell me: what's the value of a?
    P – I don't know, you tell me: what's the value of a?
    ...
    O – I don't know, you tell me: what's the value of a?
    P – I know it, it is v.
    ...
    O – I know it, it is v.
    P – I know it, it is v.
    O – I know it, it is v.
Figure 7: A dialogue in innocent store.
17 i.e. at the last store-opening move played by the other participant.
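The dialogue of figure 7 can be mimicked by a small lookup over "layers" of updates, each layer containing only the updates made since the previous store was opened; the question is passed down until some layer knows the answer. Layer, Name and Val are illustrative assumptions; in the model the passing is realised by copycat links, not by an algorithm.

```haskell
type Name  = Int
type Val   = Int
type Layer = [(Name, Val)]     -- the updates made on top of the previous store

-- Newest layer first: ask each layer in turn, as in the dialogue of figure 7.
askLayers :: Name -> [Layer] -> Maybe Val
askLayers _ []           = Nothing                 -- the first store is reached: no answer
askLayers a (top : rest) = case lookup a top of
  Just v  -> Just v                                -- "I know it, it is v."
  Nothing -> askLayers a rest                      -- "I don't know, you tell me."
```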
Example 5.12. Consider the typed terms:
ǫ | ∅ |− νa.a := hfst ! a, snd ! ai ,
b | ∅ |− b := λx.(! b)skip ,
b | ∅ |− (! b)skip
with a ∈ AN×N and b ∈ A1→B . Their translations in Vt are as follows.
[Strategy diagrams omitted in this text rendering: the translations are strategies of types 1 → T 1, A1→B → T 1 and A1→B → T JBK respectively.]
In the first example we see that, although the strategy is looking up the fresh (and therefore
uninitialised) reference a, the play does not deadlock: if Opponent answered the question
aa then the play would proceed as depicted. In practice, however, Opponent will never
be able to answer that question and the play will halt indeed (this is because Opponent
must play tidily, see section 5.4). Moreover, from the latter two examples we can compute
JstopB K : 1 → T JBK = { [∗ ∗ ⊛] } .
5.3. Adequacy. We proceed to show that Vt is adequate (v. definition 3.18). First we
characterise non-reducing terms as follows.
Lemma 5.13. Let ā | ∅ |− M : A be a typed term. M is a value iff there exists a store S
such that S |= M has no reducts and [(ā, ∗) ∗ ⊛ (iA , ⊛)b̄ ] ∈ JS̄ ; M K , for some iA , b̄.
Proof: The “only if”-part is straightforward. For the “if”-part assume that M is a non-value
and take any S such that S |= M has no reducts. We show by induction on M that there
exist no iA , b̄ such that [(ā, ∗) ∗ ⊛ (iA , ⊛)b̄ ] ∈ JS̄ ; M K. The base case follows trivially from
M not being a value. Now, for the inductive step, the specifications of S |= M (and M )
imply that either M ≡ ! a with a not having a value in S, or M ≡ E[K] with E an evaluation
context and K a non-value typed as ā | ∅ |− K : B and such that S |= K non-reducing.
In case of M ≡ ! a, we have that [(ā, ∗) ∗ ⊛ a] ∈ JS̄ ; M K, which proves the claim because
of determinacy. On the other hand, if M ≡ E[K] then, as in proof of proposition 3.17, we
have:
JS̄ ; M K = hΛ(ζ ′ ; JE[x]K), JS̄ ; KKi ; τ ; T ev ; µ = hid, JS̄ ; KKi ; τ ; T (ζ ′ ; JE[x]K) ; µ
By IH, there are no iB , c̄ such that [(ā, ∗) ∗ ⊛ (iB , ⊛)c̄ ] ∈ JS̄ ; KK, which implies that there
are no iA , b̄ such that [(ā, ∗) ∗ ⊛ (iA , ⊛)b̄ ] ∈ JS̄ ; M K.
Because of the previous result, in order to show adequacy it suffices to show that, whenever
JM K = ^ b̄ _ JS̄ ; 0̃K, there is no infinite reduction sequence starting from ā |= M . We will
carry out the following reasoning.
• Firstly, since the calculus without DRF reductions is strongly normalising — this is inherited from strong normalisation of the ν-calculus — it suffices to show there is no reduction
sequence starting from ā |= M and containing infinitely many DRF reduction steps.
• In fact, the problem can be further reduced to showing that, whenever [(ā, ∗)∗⊛ (0, ⊛)b̄ ] ∈
JM K, there is no reduction sequence starting from ā |= M and containing infinitely many
NEW reduction steps. The latter clearly holds, since M cannot create more than |b̄| fresh
names in that case, because of correctness.
The reduction to this simpler problem is achieved as follows. For each term M , we
construct a term M ′ by adding immediately before each dereferencing in M a fresh-name construction. The result is that, whenever there is a sequence with infinitely many
DRF’s starting from S |= M , there is a sequence with infinitely many NEW’s starting
from S |= M ′ . The reduction is completed by finally showing that, whenever we have [(ā, ∗) ∗ ⊛ (0, ⊛)b̄ ] ∈ JM K, we also have [(ā, ∗) ∗ ⊛ (0, ⊛)b̄′ ] ∈ JM ′ K.
The crucial step in the proof is the reduction to “the simpler problem”, and particularly
showing the connection between JM K and JM ′ K described above. The latter is carried out
by using the observational equivalence relation on strategies, defined later in this section.
Note, though, that a direct proof can also be given (see [48]).
Proposition 5.14 (Adequacy). (Vt , T, Q) is adequate.
Proof: This follows from O-adequacy (lemma 5.28), which is proved independently.
Hence, (Vt , T, Q) is a sound model for νρ and thus, for all terms M, N ,
JM K = JN K =⇒ M / N .
5.4. Tidy strategies. Leaving adequacy behind, the route for obtaining a fully abstract
model of νρ proceeds to definability. That is, we aim for a model in which elements with
finite descriptions correspond to translations of νρ-terms.
However, Vt does not satisfy such a requirement: it includes (finitary) store-related
behaviours that are disallowed in the operational semantics of νρ. In fact, our strategies
treat the store ξ like any other arena, while in νρ the treatment of store follows some basic
guidelines. For example, if a store S is updated to S ′ then the original store S is not
accessible any more (irreversibility). In strategies we do not have such a condition: in
a play there may be several ξ’s opened, yet there is no discipline on which of these are
accessible to Player whenever he makes a move. Another condition involves the fact that
a store either ‘knows’ the value of a name or it doesn’t know it. Hence, when a name is
asked, the store either returns its value or it deadlocks: there is no third option. In a play,
however, when Opponent asks the value of some name, Player is free to evade answering
and play somewhere else!
To disallow such behaviours we will constrain total strategies with further conditions,
defining thus what we call tidy strategies. But first, let us specify store-related moves inside
type-translating nominal arenas.
Definition 5.15. Consider Vνρ , the full subcategory of Vt with objects given by:
Ob(Vνρ ) ∋ A, B ::= 1 | N | Aā | A⊗B | A −−⊗ T B
For each such arena A we define its set of store-Handles, HA , as follows.
    H1 = HN = HAā ≜ ∅ ,    HA⊗B ≜ HA ∪ HB ,
    H_{A −−⊗ T B} ≜ {(iA, ⊛A), (iB, ⊛B)} ∪ HA ∪ HB ∪ HξA ∪ HξB ,    with Hξ ≜ ∪_C HJCK ,
where we write A −−⊗ T B as A −−⊗ (ξA ⇒ B ⊗ ξB), and ξ as ⊗_C (A_C ⇒ JCK).
In an arena A ∈ Ob(Vνρ), a store-Handle justifies (all) questions of the form a, which we call store-Questions. Answers to store-Questions are called store-Answers.
Note in particular that, for each type A, we have JAK, Qā JAK, T JAK ∈ Ob(Vνρ), assuming that T JAK is equated with 1 −−⊗ T JAK. Note also there is a circularity in H_{A −−⊗ T B} in the above definition. In fact, it is a definition by induction: we take HA ≜ ∪_{i∈ω} H^i_A and,
    H^i_1 = H^i_N = H^i_{Aā} = H^0_{A −−⊗ T B} ≜ ∅ ,    H^i_{A⊗B} ≜ H^i_A ∪ H^i_B ,
    H^{i+1}_{A −−⊗ T B} ≜ {(iA, ⊛A), (iB, ⊛B)} ∪ H^i_A ∪ H^i_B ∪ H^{i+1}_{ξA} ∪ H^{i+1}_{ξB} ,    with H^{i+1}_ξ ≜ ∪_C H^i_{JCK} .
Intuitively, store-H's are store-opening moves, while store-Q's and store-A's are obtained from unfolding the store structure. In figure 8 we give examples of store-related moves in a simple arena. From now on we work in Vνρ, unless stated otherwise. A first property we can show is that a move is exclusively either initial or an element of the aforedefined move-classes.
[Diagram of the arena T 1 = ξ ⇒ 1⊗ξ with its store-H's, store-Q's and store-A's marked, omitted in this text rendering.]
Figure 8: Store-H's -Q's -A's in arena T 1.
Proposition 5.16. For any A ∈ Ob(Vνρ ),
MA = IA ⊎ HA ⊎ { m ∈ MA | m a store-Q} ⊎ {m ∈ MA | m a store-A } .
Proof: We show that any m ∈ MA belongs to exactly one of the above sets. We do
induction on the level of m, l(m), inside A and on the size of A, |A|, specified by the
inductive definition of Ob(Vνρ ). If m is initial then, by definition, it can’t be a store-H.
Neither can it be a store-Q or store-A, as these moves presuppose non-initiality.
Assume l(m) > 0. If A is base then trivial, while if A = A1 ⊗A2 then use the IH on
(l(m), |A|). Now, if A = A1 −−⊗ T A2 then let us write A as A1 −−⊗ (ξ1 ⇒ A2 ⊗ξ2 ); we have the
following cases.
• If m = (iA1 , ⊛1 ) ∈ HA then m is a question and not a store-Q, as store-Q's are names.
• If m = (iA2 , ⊛2 ) ∈ HA then m is an answer and not a store-A, as its justifier is (iA1 , ⊛1 ).
• If m is in A1 or in A2 then use the IH.
• If m is in ξ1 then it is either some store-Q a to (iA1 , ⊛1 ) (and hence not a store-H or
store-A), or it is in some JCK. In the latter case, if m initial in JCK then a store-A in
JAK and therefore not a store-H, as m not a store-H in JCK by IH (on l(m)). If m is
non-initial in JCK then use the IH and the fact that store-H’s -Q’s -A’s of JCK are the
same in JAK.
• Similarly if m is in ξ2 .
The notion of store-handles can be straightforwardly extended to prearenas.
Definition 5.17. Let A, B ∈ Ob(Vνρ ). The set HA→B of store-handles in prearena A → B
is HA ∪ HB . Store-Q’s and store-A’s are defined accordingly.
N
Using the previous proposition, we can see that, for any A and B, the set MA→B can be
decomposed as:
IA ⊎ IB ⊎ HA→B ⊎ { m ∈ MA→B | m a store-Q } ⊎ { m ∈ MA→B | m a store-A }
(5.4)
We proceed to define tidy strategies. We endorse the following notational convention. Since
stores ξ may occur in several places inside a (pre)arena we may use parenthesised indices
to distinguish identical moves from different stores. For example, the same store-question q
may be occasionally denoted q(O) or q(P ) , the particular notation denoting the OP-polarity
of the move. Moreover, by O-store-H’s we mean store-H’s played by Opponent, etc.
Definition 5.18 (Tidy strategies). A total strategy σ is tidy if whenever odd-length
[s] ∈ σ then:
(TD1) If s ends in a store-Q q then [sx] ∈ σ , with x being either a store-A to q introducing
no new names, or a copy of q. In particular, if q = aā with a # psq− then the latter
case holds.
(TD2) If [sq(P ) ] ∈ σ with q a store-Q then q(P ) is justified by last O-store-H in psq.
(TD3) If psq = s′ q(O) q(P) t y(O) with q a store-Q then [s y(P)] ∈ σ, where y(P) is justified by psq.−3 .
(TD1) states that, whenever Opponent asks the value of a name, Player either immediately
answers with its value or it copycats the question to the previous store-H. The former case
corresponds to Player having updated the given name lastly (i.e. between the previous O-store-H and the last one). The latter case corresponds to Player not having done so and
hence asking its value to the previous store configuration, starting thus a copycat between
the last and the previous store-H. Hence, the store is, in fact, composed by layers of stores
— one on top of the other — and only when a name has not been updated in the top layer
is Player allowed to search for it in layers underneath. We can say that this is the nominal
games equivalent of a memory cell (cf. remark 5.11). (TD3) further guarantees the above-described behaviour. It states that when Player starts a store-copycat then he must copycat
the store-A and all following moves he receives, unless Opponent chooses to play elsewhere.
(TD2) guarantees the multi-layer discipline in the store: Player can see one store at each
time, namely the last played by Opponent in the P-view.
The following straightforward result shows that (TD3), as stated, provides the intended
copycat behaviour.
Proposition 5.19. Let σ be a tidy strategy. If [s′ q(O) q(P ) t] ∈ σ is an even-length P-view
and q is a store-Q then q(O) q(P ) t is a copycat.
Proof: We do induction on |t|. The base case is straightforward. For the inductive step, let
t = t′ xz. Then, by prefix closure, [s′ q(O) q(P ) t′ x] ∈ σ, this latter a P-view. By IH, q(O)q(P ) t′ is
a copycat. Moreover, by (TD3), [s′ q(O) q(P ) t′ xx] ∈ σ with last x justified by (q(O) q(P ) t′ x).−3,
thus s′ q(O) q(P ) t′ xx a copycat. Now, by determinacy, [s′ q(O) q(P ) t′ xx] = [s′ q(O) q(P ) t′ xz], so
there exists π such that π ◦ x = x ∧ π ◦ x = z, ∴ x = z, as required.
A good store discipline would guarantee that store-Handles OP-alternate in a play. This
indeed happens in P-views played by tidy strategies. In fact, such P-views have canonical
decompositions, as we show below.
Proposition 5.20 (Tidy Discipline). Let σ : A → B be a tidy strategy and [s] ∈ σ with
psq = s. Then, s is decomposed as in the following diagram.
[Diagram omitted in this text rendering: an automaton over the move classes (initial moves iA, iB, store-H's, store-Q's and store-A's of either polarity) together with a copycat state CC, describing the possible shapes of s.]
(by CC we mean the state that, when reached by a sequence s = psq, the rest of s is copycat.)
Proof: The first two transitions are clear. After them neither P nor O can play initial
moves, so all remaining moves in s are store-H -Q -A’s. Assume now O has just played a
question x0 which is a store-H and the play continues with moves x1 x2 x3 ... .
x1 cannot be a store-A, as this would not be justified by x0 , breaching well-bracketing.
If x1 is a store-Q then x2 must be a store-A, by P-view. If x1 is an answer-store-H then x2
is an OQ, while if x1 a question-store-H then x2 is either a store-Q or a store-H.
If x2 is a store-Q then, by (TD1), x3 either a store-A or a store-Q, the latter case
meaning transition to the CC state. If x2 is not a store-Q then x3 can’t be a store-A: if
x3 were a store-A justified by q 6= x2 then, as q wouldn’t have been immediately answered,
s≥q would be a copycat and therefore we would be in the CC state right after playing q.
Finally, if x3 is a store-A then x4 must be justified by it, so it must be a Q-store-H.
Corollary 5.21 (Good Store Discipline). Let [s] ∈ σ with σ tidy and psq = s. Then:
• The subsequence of s containing its store-H’s is OP-alternating and O-starting.
• If s.−1 = q is a P-store-Q then either q is justified by last store-H in s, or s is in copycat
mode at q.
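The first bullet of corollary 5.21 can be stated as an executable check on an abstracted play: keep only the store-H's and verify that their polarities alternate starting with O. The Move and Polarity types below are illustrative simplifications of the actual moves of nominal arenas.

```haskell
data Polarity = O | P deriving (Eq, Show)
data Move     = Move { polarity :: Polarity, isStoreH :: Bool }

-- Does the store-H subsequence of the play alternate O, P, O, P, ... ?
goodStoreDiscipline :: [Move] -> Bool
goodStoreDiscipline play = alternatesFrom O (map polarity (filter isStoreH play))
  where
    alternatesFrom _ []       = True
    alternatesFrom q (x : xs) = x == q && alternatesFrom (other q) xs
    other O = P
    other P = O
```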
Observe that strategies that mostly do copycats are tidy; in particular, identities are tidy.
Moreover, tidy strategies are closed under composition (proof delegated to the appendix).
Proposition 5.22. If σ : A → B and τ : B → C are tidy strategies then so is σ ; τ .
N
Definition 5.23. T is the lluf subcategory of Vνρ of tidy strategies.
Finally, we need to check that all structure required for a sound νρ-model passes from Vt to
T . It is not difficult to see that all such structure which does not handle the store remains
safely within the tidy universe. On the other hand, strategies for update and dereferencing
are tidy by construction. (A fully formal proof is given in [48].)
Proposition 5.24. T forms an adequate νρ-model by inheriting all the necessary structure
from Vt .
Henceforth, by strategies we shall mean tidy strategies, unless stated otherwise.
5.5. Observationality. Strategy equality is too fine grained to capture contextual equivalence in a complete manner. For example, even simple contextual equivalences like
skip ≅ νa.skip
are not preserved by the semantical translation, since strategies include in their name-lists
all introduced names, even useless ones. For similar reasons, equivalences like
νa.νb.M ≅ νb.νa.M
are not valid semantically. In fact, it is not only because of the treatment of name-creation
that the semantics is not complete. Terms like
a := 1 ; λx. ! a ; 2 ≅ a := 1 ; λx.2
are distinguished because of the ‘explicit’ way in which the store works.
So there are many ways in which our semantics is too expressive for our language.
We therefore proceed to a quotienting by the intrinsic preorder and prove full-abstraction
in the extensional model. Following the steps described in section 3.2, in this section we
introduce the intrinsic preorder on T and show that the resulting model is observational.
Full-abstraction is then shown in the following section.
Definition 5.25. Expand T to (T , T, Q, O) by setting, for each ā ∈ A# ,
Oā , { f ∈ T (Qā 1, T N) | ∃b̄. [(ā, ∗) ∗ ⊛ (0, ⊛)b̄ ] ∈ f } .
Then, for each f, g ∈ T (Qā A, T B), f .ā g if
∀ρ : Qā (A −−⊗ T B) → T N. (Λā (f ) ; ρ ∈ Oā =⇒ Λā (g) ; ρ ∈ Oā ) .
Thus, the observability predicate O is a family (Oā)ā∈A#, and the intrinsic preorder . is a family (.ā)ā∈A#. Recall that by Λā(f) we mean Λ^{Qā,T}(f), that is,
    Λā(f) = Qā 1 --δ--> Qā Qā 1 --Qā Λ(ζ′ ; f)--> Qā (A −−⊗ T B) .
Note in particular that f ⊑ g implies Λā(f) ; ρ ⊑ Λā(g) ; ρ, for any relevant ρ, and therefore:
    f ⊑ g =⇒ f .ā g    (5.5)
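Abstracting away the games, the intrinsic preorder has the shape of a testing preorder: f is below g whenever every test that observes f also observes g. A minimal sketch follows, in which observes and the (finite) list of tests are assumptions standing in for composition with a test arrow ρ and for membership in Oā.

```haskell
-- f is below g iff no test observes f without also observing g.
below :: (prog -> test -> Bool)   -- "observes": does running the program against this test succeed?
      -> [test]                   -- the tests ranged over (finitely many in this sketch)
      -> prog -> prog -> Bool
below observes tests f g =
  all (\rho -> not (observes f rho) || observes g rho) tests
```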
The intrinsic preorder is defined by use of test arrows ρ, which stand for possible program
contexts. As the following result shows, not all such tests are necessary.
Lemma 5.26 (tl4 tests suffice). Let f, g ∈ T (Qā 1, B) with B pointed. The following are
equivalent (recall definition 4.35).
I.
∀ρ : Qā B → T N. δ ; Qā f ; ρ ∈ Oā =⇒ δ ; Qā g ; ρ ∈ Oā
II. ∀ρ : Qā B → T N. ρ is tl4 =⇒ (δ ; Qā f ; ρ ∈ Oā =⇒ δ ; Qā g ; ρ ∈ Oā )
Hence, for each ā and f, g ∈ T (Qā A, T B), f .ā g iff
∀ρ : Qā (A −−⊗ T B) → T N. ρ is tl4 =⇒ (Λā (f ) ; ρ ∈ Oā =⇒ Λā (g) ; ρ ∈ Oā ) .
Proof: I ⇒ II is trivial. Now assume II holds and let ρ : Qā B → T N be any strategy such
that δ ; Qā f ; ρ ∈ Oā . Then, there exist [s] ∈ δ ; Qā f and [t] ∈ ρ such that [s ; t] = [(ā, ∗) ∗
⊛ (0, ⊛)b̄ ] ∈ (δ ; Qā f ) ; ρ. We show by induction on the number of JB -moves appearing in
s k t that δ ; Qā g ; ρ ∈ Oā .
If no such moves appear then t = (ā, iB ) ∗ ⊛ (0, ⊛)b̄ , so done. If n + 1 such moves
appear then ρ is necessarily t4, as B is pointed, so by lemma 4.36 there exists tl4* strategy ρ̃ such that ρ = ∆ ; ρ̃. It is not difficult to see that ρ being tidy implies that ρ̃ is
tidy. Moreover, δ ; Qā f ; ρ = δ ; Qā f ; ∆ ; ρ̃ = δ ; Qā f ;hid, Qā ! ; δ ; Qā f i ; ρ̃ = δ ; Qā f ; ρ′ ,
with ρ′ being hid, Qā ! ; δ ; Qā f i ; ρ̃. Now, by definition of ρ̃, [(ā, ∗) ∗ ⊛ (0, ⊛)b̄ ] = [s′ ; t′ ] ∈
δ ; Qā f ; ρ′ with s′ k t′ containing n JB -moves so, by IH, δ ; Qā g ; ρ′ ∈ Oā . But δ ; Qā g ; ρ′ =
δ ; Qā g ;hid, Qā ! ; δ ; Qā f i ; ρ̃ = δ ; Qā f ;hQā ! ; δ ; Qā g, idi ; ρ̃ = δ ; Qā f ; ρ′′ , where ρ′′ is given
by hQā ! ; δ ; Qā g, idi ; ρ̃. But ρ′′ is tl4, thus, by hypothesis, Oā ∋ δ ; Qā g ; ρ′′ = δ ; Qā g ; ρ , as
required.
We can now prove the second half of observationality.
Lemma 5.27. For any morphism f : Qāa 1 → B, with B pointed, and any tl4 morphism ρ : Qā B → T N,
    δ ; Qā ^ a _ f ; ρ ∈ Oā ⇐⇒ δ ; Qāa f ; āa/ā ; ρ ∈ Oāa .
Moreover, for each ā and relevant a, ā′, f, g,
    f .āa g =⇒ ^ a _ f .ā ^ a _ g ,    f .ā g =⇒ ā′/ā ; f .ā′ ā′/ā ; g .
Proof: For the first part, ρ being tl4 and B being pointed imply that there exists some b̄ # ā and a total strategy ρ′ such that ρ = ^ b̄ _ ρ′. Now let δ ; Qā ^ a _ f ; ρ ∈ Oā, so there exists [s ; t] = [(ā, ∗) ∗ ⊛ (0, ⊛)^{b̄ac̄}] ∈ (δ ; Qā ^ a _ f) ; ρ, and let s = (ā, ∗) (ā, iB) jB m^{ad̄} s′ and t = (ā, iB) ∗ ⊛ jB^{b̄} t′. Letting s^{ra} be s^{nlist(s)ra}, we can see that [(āa, ∗) iB jB m^{d̄} s′^{ra}] ∈ f and thus [s′′] ≜ [(āa, ∗) (ā, iB) jB m^{d̄} s′^{ra}] ∈ δ ; Qāa f ; āa/ā . Hence, [s′′ ; t] = [(āa, ∗) ∗ ⊛ (0, ⊛)^{b̄c̄}] ∈ δ ; Qāa f ; āa/ā ; ρ, as required. The converse is shown similarly.
For the second part, suppose f .āa g : Qāa A → T B and take any tl4 morphism
ρ : Qā (A −−⊗ T B) → T N. Then,
    Λā(^ a _ f) ; ρ ∈ Oā ⇐⇒ δ ; Qā Λ(ζ′ ; ^ a _ f) ; ρ ∈ Oā ⇐⇒ δ ; Qā ^ a _ (Λ(ζ′ ; f)) ; ρ ∈ Oā    (by lemma 4.53)
    ⇐⇒ δ ; Qāa Λ(ζ′ ; f) ; āa/ā ; ρ ∈ Oāa
    =⇒ δ ; Qāa Λ(ζ′ ; g) ; āa/ā ; ρ ∈ Oāa    (since f .āa g)
    ⇐⇒ Λā(^ a _ g) ; ρ ∈ Oā .
For the other claim, let us generalise the fresh-name constructors new to:
    ā/ā′ : Aā → (Aā′)⊥ ≜ { [ (ā, ∗) ∗ ∗ (ā′, ∗)^{ā′ r ā} ] }
for any S(ā) ⊆ S(ā′). The above yields a natural transformation of type Qā → (Qā′)⊥. It is easy to see that, for any h : Qā′ 1 → T N, h ∈ Oā′ iff ā/ā′ ; h⊥ ; pu ∈ Oā and, moreover, that
    ⟨ā/ā′ , id⟩ ; st′ = ā/ā′ ; ⟨id, ā′/ā⟩⊥ : Aā → (Aā′ ⊗ Aā)⊥
commutes. Hence, if f .ā g then
    δ ; Qā′ Λ(ζ′ ; ā′/ā ; f) ; ρ ∈ Oā′
    ⇐⇒ δ ; Qā′ ā′/ā ; Qā Λ(ζ′ ; f) ; ρ ∈ Oā′
    ⇐⇒ ā/ā′ ; (δ ; Qā′ ā′/ā ; Qā Λ(ζ′ ; f) ; ρ)⊥ ; pu ∈ Oā
    ⇐⇒ δ ; Qā Λ(ζ′ ; f) ; ā/ā′ ; ρ⊥ ; pu ∈ Oā
    =⇒ δ ; Qā Λ(ζ′ ; g) ; ā/ā′ ; ρ⊥ ; pu ∈ Oā    (since f .ā g)
    ⇐⇒ δ ; Qā′ Λ(ζ′ ; ā′/ā ; g) ; ρ ∈ Oā′ ,
as required.
In order to prove that T is observational, we are only left to show that
JM K ∈ Oā ⇐⇒ ∃b̄, S. JM K = ^ b̄ _ JS̄ ; 0K
for any ā | ∅ |− M : N. The “⇐=” direction is trivial. For the converse, because of
correctness, it suffices to show the following generalisation of adequacy.
Lemma 5.28 (O-Adequacy). Let ā | ∅ |− M : N be a typed term. If JM K ∈ Oǫ then
there exists some S such that ā |= M −→→ S |= 0.
Proof: The idea behind the proof is given above proposition 5.14. It suffices to show that,
for any such M , there is a non-reducing sequent S |= N such that ā |= M −→
→ S |= N ;
therefore, because of Strong Normalisation in the ν-calculus, it suffices to show that there
is no infinite reduction sequence starting from ā |= M and containing infinitely many DRF
reduction steps.
To show the latter we will use an operation on terms adding new-name constructors just
before dereferencings. The operation yields, for each term M , a term (M )◦ the semantics of
which is equivalent to that of M . On the other hand, ā |= (M )◦ cannot perform infinitely
many DRF reduction steps without creating infinitely many new names. For each term M ,
define (M )◦ by induction as:
    (a)◦ ≜ a ,    (x)◦ ≜ x ,    (λx.M)◦ ≜ λx.(M)◦ ,    (M N)◦ ≜ (M)◦ (N)◦ ,    ... ,
    (! N)◦ ≜ νa. !(N)◦ ,    some a # N .
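The (·)◦ operation is a straightforward traversal of the syntax tree; a minimal Haskell sketch over an assumed fragment of the νρ syntax is given below. The name "a_fresh" is a placeholder: a faithful implementation would generate a name not free in the subterm, as a # N requires.

```haskell
data Term
  = Var String | Name String | Lam String Term | App Term Term
  | New String Term            -- νa.M
  | Deref Term                 -- !M
  | Assign Term Term           -- M := N
  deriving Show

degree :: Term -> Term         -- (·)◦ : insert νa. immediately before every dereferencing
degree t = case t of
  Var x      -> Var x
  Name a     -> Name a
  Lam x m    -> Lam x (degree m)
  App m n    -> App (degree m) (degree n)
  New a m    -> New a (degree m)
  Deref n    -> New "a_fresh" (Deref (degree n))   -- "a_fresh" stands for some a # N
  Assign m n -> Assign (degree m) (degree n)
```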
We show that J(M)◦K ⋍ JMK, by induction on M; the base cases are trivial. The induction step follows immediately from the IH and the fact that ⋍ is a congruence, in all cases except for M being ! N. In the latter case we have that J(M)◦K = ^ a _ ( āa/ā ; J!(N)◦K ), while the IH implies that JMK ⋍ J!(N)◦K. Hence, it suffices to show that for each f : Qā A → T B we have f ⋍ ^ a _ ( āa/ā ; f ). Indeed, for any relevant ρ which is tl4,
    Λā(^ a _ ( āa/ā ; f )) ; ρ ∈ Oā ⇐⇒ δ ; Qāa Λ(ζ′ ; āa/ā ; f) ; āa/ā ; ρ ∈ Oāa    (by lemma 5.27)
    ⇐⇒ δ ; Qāa āa/ā ; āa/ā ; Qā Λ(ζ′ ; f) ; ρ ∈ Oāa
    ⇐⇒ āa/ā ; Λā(f) ; ρ ∈ Oāa ⇐⇒ Λā(f) ; ρ ∈ Oā .
Now, take any ā | ∅ |− M : N and assume JM K ∈ Oā , and that ā |= M diverges using
infinitely many DRF reduction steps. Then, ā |= (M )◦ diverges using infinitely many
NEW reduction steps. However, since J(M )◦ K ⋍ JM K, we have J(M )◦ K ∈ Oā and therefore
[(ā, ∗) ∗ ⊛ (0̃, ⊛)b̄ ] ∈ J(M )◦ K for some b̄. However, ā |= (M )◦ reduces to some S |= M ′ using
|b̄|+ 1 NEW reduction steps, so J(M )◦ K = ^ c̄ _ JS̄ ; M ′ K with |c̄| = |b̄|+ 1, contrary to determinacy.
We have therefore shown observationality.
Proposition 5.29 (Observationality). (T , T, Q, O) is observational.
5.6. Definability and full-abstraction. We now proceed to show definability for T , and
through it ip-definability. According to the results of section 3.2.3, this will suffice for full
abstraction.
We first make precise the notion of finitary strategy, that is, of (tidy) strategy with
finite description, by introducing truncation functions that remove inessential branches
from a strategy’s description.
Definition 5.30. Let σ : A → B in T and let [s] ∈ viewf(σ) be of even length. Define
trunc(s) and trunc′ (s) by induction as follows.
    trunc(ǫ) = trunc′(ǫ) ≜ ǫ
    trunc(x(O) y(P) s′) ≜ ǫ , if x = y are store-Q's ;    xy trunc(s′) , otherwise.
    trunc′(x(O) y(P) s′) ≜ ǫ , if x = y are store-Q's ;    ǫ , if x a store-Q, y a store-A and s′ = ǫ ;
                            ǫ , if x ∈ IA, y ∈ IB and s′ = ǫ ;    xy trunc′(s′) , otherwise.
Moreover, say σ is finitary if trunc(σ) is finite, where
    trunc(σ) ≜ { [trunc(s)] | [s] ∈ viewf(σ) ∧ |s| > 3 } .
Finally, for any [t] ∈ σ define:
    σ≤t ≜ strat{ [s] ∈ viewf(σ) | ∃ t′ ≤ t. trunc′(s) = pt′q } .
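On a toy representation of even-length P-views as lists of (O-move, P-move) pairs, the truncation functions can be sketched as below; MoveKind and the pairing are illustrative simplifications of the actual plays and their name-lists.

```haskell
data MoveKind = Initial | StoreH | StoreQ String | StoreA | Other
  deriving (Eq, Show)

-- trunc: cut a view at the first Player copy of a store question.
trunc :: [(MoveKind, MoveKind)] -> [(MoveKind, MoveKind)]
trunc [] = []
trunc ((x, y) : s)
  | sameStoreQ x y = []
  | otherwise      = (x, y) : trunc s

-- trunc': additionally drop a lone, immediately answered store question and a
-- lone default initial answer (the answers forced by totality).
trunc' :: [(MoveKind, MoveKind)] -> [(MoveKind, MoveKind)]
trunc' [] = []
trunc' ((x, y) : s)
  | sameStoreQ x y                         = []
  | isStoreQ x && y == StoreA && null s    = []
  | x == Initial && y == Initial && null s = []
  | otherwise                              = (x, y) : trunc' s

sameStoreQ :: MoveKind -> MoveKind -> Bool
sameStoreQ (StoreQ a) (StoreQ b) = a == b
sameStoreQ _          _          = False

isStoreQ :: MoveKind -> Bool
isStoreQ (StoreQ _) = True
isStoreQ _          = False
```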
Hence, finitary are those strategies whose viewfunctions become finite if we delete all the
store-copycats and all default initial answers — the latter dictated by totality. Moreover,
the strategy σ≤t is the strategy we are left with if we truncate viewf(σ) by removing all
its branches of size greater than 3 that are not contained in t, except for the store-copycats
which are left intact and for the store-A’s branches which are truncated to the point of
leaving solely the store-A, so that we retain tidiness. Note that, in general, trunc′ (s) ≤
trunc(s) ≤ s. We can then show the following (proof in [48]).
Proposition 5.31. If σ is a strategy and [t] ∈ σ is even-length then σ≤t is a finitary strategy
with [t] ∈ σ≤t and σ≤t ⊑ σ.
We proceed to show definability. The proof is facilitated by the following lemma, the proof
of which is delegated to the appendix. Note that for economy we define strategies by means
of their viewfunctions modulo totality and even-prefix closure. Moreover, we write σ ↾ i
for the (total) restriction of a strategy σ to an initial move i, and srb̄ for s with b̄ removed
from all of its name-lists.
Lemma 5.32 (Decomposition Lemma). Let σ : Qā JAK → T JBK be a strategy. We can
decompose σ as follows.
1. If there exists an iA(0) such that ∃x0. [(ā, iA(0)) ∗ ⊛ x0] ∈ σ then
    σ = ⟨[x =ā iA(0)], ⟨σ0, σ′⟩⟩ ; cnd ,    with cnd : N ⊗ (T JBK)² → T JBK ,
where:
    [x =ā iA(0)] : Qā JAK → N ≜ { [ (ā, iA(0)) 0 ] } ∪ { [ (ā, iA) 1 ] | [ (ā, iA) ] ≠ [ (ā, iA(0)) ] } ,
    σ0 : Qā JAK → T JBK ≜ strat{ [ (ā, iA(0)) s ] ∈ viewf(σ) } ,
    σ′ : Qā JAK → T JBK ≜ strat{ [ (ā, iA) s ] ∈ viewf(σ) | [ (ā, iA) ] ≠ [ (ā, iA(0)) ] } .
2. If there exists iA(0) such that ∀iA. (∃x0. [(ā, iA) ∗ ⊛ x0] ∈ σ) ⇐⇒ [(ā, iA)] = [(ā, iA(0))] , then σ = ^ b̄ _ σb̄ where:
    σb̄ : Qāb̄ JAK → T JBK ≜ strat{ [ (āb̄, iA(0)) ∗ ⊛ m0 s^{rb̄} ] | [ (ā, iA(0)) ∗ ⊛ m0^{b̄} s ] ∈ viewf(σ) } .
3. If there exist iA(0), m0 such that ∀iA, x. [(ā, iA) ∗ ⊛ x] ∈ σ ⇐⇒ [(ā, iA) x] = [(ā, iA(0)) m0] , then one of the following is the case.
(a) m0 = a, a store-Q of type C under ⊛, in which case σ = σ′ ↾ (ā, iA(0)) where
    σ′ : Qā JAK → T JBK ≜ ⟨id, φ⟩ ; τ ; T ζ′ ; T σa ; µ ,
    σa : Qā (JAK ⊗ JCK) → T JBK ≜ strat{ [ (ā, iA(0), iC) ∗ ⊛ s ] | [ (ā, iA(0)) ∗ ⊛ a iC s ] ∈ viewf(σ) } ,
    φ : Qā JAK → T JCK ≜ Qā ! ; aā ; drfC , if a ∈ S(ā) ;    Qā πj ; ǫ ; drfC , if a # ā .
(b) m0 = jA ∨ m0 = (iB, ⊛) , a store-H, in which case if [ (ā, iA(0)) ∗ ⊛ m0 a iC ] ∈ σ, for some store-Q a and store-A iC, then
    σ = ⟨∆, σa⟩ ; τ ; T(id ⊗ φ ; τ) ; µ ; T σ′ ; µ : Qā JAK → T JBK ,
where:
    σa : Qā JAK → T JCK ≜ strat{ [ (ā, iA(0)) ∗ ⊛ (iC, ⊛) s ] | [ (ā, iA(0)) ∗ ⊛ m0 a iC s ] ∈ viewf(σ) ∨ [ ⊛ ⊛ s ] ∈ viewf(idξ) } ,
    σ′ : Qā JAK → T JBK ≜ strat( { [ (ā, iA(0)) ∗ ⊛ m0 y s ] ∈ viewf(σ) | y ≠ a } ∪ { [ (ā, iA(0)) ∗ ⊛ m0 a s ] | [ ⊛ ⊛ a s ] ∈ viewf(idξ) } ) ,
    φ : Qā JAK ⊗ JCK → T 1 ≜ (Qā ! ; aā) ⊗ idJCK ; updC , if a ∈ S(ā) ;    (Qā πj ; ǫ) ⊗ idJCK ; updC , if a # ā .
In both cases above, we take j = min{ j | (iA(0) )j = a }.
The proof of definability is a nominal version of standard definability results in game semantics. In fact, using the Decomposition Lemma we reduce the problem of definability of
a finitary strategy σ to that of definability of a finitary strategy σ0 of equal length, with
σ0 having no initial effects (i.e. fresh-name creation, name-update or name-dereferencing).
On σ0 we then apply almost verbatim the methodology of [15] — itself based on previous
proofs of definability.
Theorem 5.33 (Definability). Let A, B be types and σ : Qā JAK → T JBK be finitary.
Then σ is definable.
Proof: We do induction on (|trunc(σ)|, kσk), where we let kσk , max{ |L(s)| | [s] ∈
viewf(σ) }, i.e. the maximum number of names introduced in any play of trunc(σ). If
|trunc(σ)| = 0 then σ = JstopB K ; otherwise, there exist x0 , iA(0) such that [(ā, iA(0) ) ∗
⊛ x0 ] ∈ σ . By Decomposition Lemma,
    σ = ⟨[x =ā iA(0)], ⟨σ0, σ′⟩⟩ ; cnd
with |trunc(σ′)| < |trunc(σ)| and (0, 0) < (|trunc(σ0)|, ‖σ0‖) ≤ (|trunc(σ)|, ‖σ‖) , so by IH there exists term M′ such that JM′K = σ′. Hence, if there exist terms M0, N0 with JM0K ↾ (ā, iA(0)) = σ0 and JN0K = [x =ā iA(0)] ; η , then we can see that
σ = Jif0 N0 then M0 else M ′ K .
We first construct N0 . Assume that A = A1 × A2 × · · · × An with Ai ’s non-products,
and similarly B = B1 × · · · × Bm . Moreover, assume without loss of generality that A is
segmented in four parts: each of A1 , ..., Ak is N; each of Ak+1 , ..., Ak+i , ..., Ak+k′ is [A′′′
i ]; each
of Ak+k′ +1 , ..., Ak+k′ +i , ..., Ak+k′ +k′′ is A′i → A′′i ; and the rest are all 1. Take z̄, z̄ ′ , z̄ ′′ , z̄ ′′′ to
be variable-lists of respective types. Define φ0 , φ′0 by:
    φ0 ≜ κ1, ..., κk , with (κ1, ..., κk) being the initial N-segment of iA(0) ,
    φ′0 ≜ κ′1, ..., κ′k′ , with each κ′i ≜ (iA(0))k+i , if (iA(0))k+i ∈ S(ā) ;
                                          z′j , if (iA(0))k+i # ā ∧ j = min{ j < i | (iA(0))k+i = (iA(0))k+j } ;
                                          fresh(i) , otherwise .
fresh(i) is a meta-constant denoting that Opponent has played a fresh name in Ak+i . If
the same fresh name is played in several places inside iA(0) then we regard its leftmost
occurrence as introducing it — this explains the second item in the cases-definition above.
Now, define
    N0 ≜ [⟨z̄, z̄′⟩ = ⟨φ0, φ′0⟩] where:
    [⟨z̄, z̄′⟩ = ⟨~κ, ~κ′⟩] ≜ [z1 = κ1] ∧ ··· ∧ [zk = κk] ∧ [z′1 = κ′1] ∧ ··· ∧ [z′k′ = κ′k′] ,
    [z′ = fresh(i)] ≜ [z′ ≠ a1] ∧ ··· ∧ [z′ ≠ a|ā|] ∧ [z′ ≠ z′1] ∧ ··· ∧ [z′ ≠ z′i−1] ,
with the logical connectives ∧ and ¬ defined using if0's, and [zi = κi] using pred's, in the standard way. It is not difficult to show that indeed JN0K = [x =ā iA(0)] ; η .
We proceed to find M0 . By second part of Decomposition Lemma, σ0 = ^ b̄ _ σb̄ with
b̄ = nlist(x0 ), |trunc(σb̄ )| = |trunc(σ0 )| and kσb̄ k = kσ0 k − |b̄| . If |b̄| > 0 then, by IH, there
exists term Mb̄ such that JMb̄ K = σb̄ , so taking
M0 , ν b̄.Mb̄
we have σ0 = JM0 K .
Assume now |b̄| = 0, so x0 = m0 . σ0 satisfies the hypotheses of the third part of the
Decomposition Lemma. Hence, if m0 = a, a store-Q of type C under ⊛, then
σ0 = (hid, φi ; τ ; T ζ ′ ; T σa ; µ) ↾ (ā, iA(0) )
with trunc(σa ) < trunc(σ0 ) . Then, by IH, there exists ā | Γ, y : C |− Ma : B such that
σa = JMa K , and taking
    M0 ≜ (λy.Ma)(! a) , if a ∈ S(ā) ;    (λy.Ma)(! z′j) , if a # ā ∧ j = min{ j | a = (iA(0))k+j }
we have σ0 = JM0 K ↾ (ā, iA(0) ).
Otherwise, m0 = jA ∨ m0 = (iB , ⊛), a store-H. If there exists an a ∈ AC such that σ0
answers to [iA(0) ∗ ⊛ m0 a] then, by Decomposition Lemma,
σ0 = h∆, σa i ; τ ; T (id⊗φ ; τ ) ; µ ; T σ ′ ; µ
with |trunc(σa )| , |trunc(σ ′ )| < |trunc(σ0 )| . By IH, there exist ā | Γ |− Ma : C and
ā | Γ |− M ′ : B such that σa = JMa K and σ ′ = JM ′ K. Taking
    M0 ≜ (a := Ma); M′ , if a ∈ S(ā) ;    (z′j := Ma); M′ , if a # ā ∧ j = min{ j | a = (iA(0))k+j }
we obtain σ0 = JM0 K . Note here that σa blocks initial moves [ā, iA ] 6= [ā, iA(0) ] and hence
we do not need the restriction.
We are left with the case of m0 being as above and σ0 not answering to any store-Q,
which corresponds to the case of Player not updating any names before playing m0 .
If m0 = (iB , ⊛) then we need to derive a value term hV1 , ..., Vm i (as B = B1 × · · · × Bm ).
For each p, if Bp is a base or reference type then we can choose a Vp canonically so that
its denotation be iBp (the only interesting such case is this of iBp being a name a # ā,
where we take Vp to be zj′ , for j = min{ j | a = (iA(0) )k+j }). Otherwise, Bp = Bp′ → Bp′′
and from σ0 we obtain the (tidy) viewfunction f : Qā (JAK ⊗JBp′ K) → T JBp′′ K by:
f , { [ (ā, iA(0) , iBp′ ) ∗ ⊛ s ] | [ (ā, iA(0) ) ∗ ⊛ (iB , ⊛) (iBp′ , ⊛) s ] ∈ viewf(σ0 ) }.
Note that, for any [(ā, iA ) ∗ ⊛ (iB , ⊛) (iBp′ , ⊛) s] ∈ viewf(σ0 ), s cannot contain store-Q’s
justified by ⊛ , as these would break (TD2). Hence, f fully describes σ0 after (iBp′ , ⊛) . By
IH, there exists ā | Γ, y : Bp′ |− N : Bp′′ such that JN K = strat(f ) ; take then Vp , λy.N .
Hence, taking
M0 , hV1 , ..., Vm i
we obtain σ0 = JM0 K ↾ (ā, iA(0) ).
If m0 = jA , played in some Ak+k′ +i = A′i → A′′i , then m0 = (iA′i , ⊛) . Assume that A′i =
A′i,1 × · · · × A′i,ni with A′i,p ’s being non-products. Now, O can either ask some name a
(which would lead to a store-CC), or answer at A′′i , or play at some A′i,p of arrow type,
say A′i,p = Ci,p → C′i,p . Hence,
    viewf(σ0) = fA ∪ ∪_{p=1}^{ni} fp , where:
    fA ≜ f0 ∪ { [ (ā, iA(0)) ∗ ⊛ (iA′i, ⊛) (iA′′i, ⊛) s ] ∈ viewf(σ0) } ,
    fp ≜ f0 ∪ { [ (ā, iA(0)) ∗ ⊛ (iA′i, ⊛) (iCi,p, ⊛) s ] ∈ viewf(σ0) } ,
    f0 ≜ { [ (ā, iA(0)) ∗ ⊛ (iA′i, ⊛) s ] | [⊛ ⊛ s] ∈ viewf(idξ) }
and where we assume fp , f0 if A′i,p is not an arrow type. It is not difficult to see that
fA , fp are viewfunctions. Now, from fA we obtain:
fA′ : Qā (JAK ⊗JA′′i K) → T JBK , { [ (ā, iA(0) , iA′′i ) ∗ ⊛ s ] |
[(ā, iA(0) ) ∗ ⊛ (iA′i , ⊛) (iA′′i , ⊛) s ] ∈ fA } .
It is not difficult to see that fA′ is indeed a viewfunction (note that P cannot play a store-Q under ⊛ on the RHS once (iA′′i, ⊛) is played, by tidiness). By IH, there exists some ā | Γ, y : A′′i |− MA : B such that JMAK = strat(fA′).
From each fp ≠ f0 we obtain a viewfunction fp′ : Qā (JAK ⊗ JCi,pK) → T JC′i,pK by:
    fp′ ≜ { [ (ā, iA(0), iCi,p) ∗ ⊛ s ] | [ (ā, iA(0)) ∗ ⊛ (iA′i, ⊛) (iCi,p, ⊛) s ] ∈ fp } .
By IH, there exists some ā | Γ, y′ : Ci,p |− Mp : C′i,p such that JMpK = strat(fp′) , so
take Vp , λy ′ .Mp . For each A′i,p of non-arrow type, the behaviour of σ0 at A′i,p is fully
described by (iA′i )p , so we choose Vp canonically as previously. hV1 , ..., Vni i is now of type
A′i and describes σ0 ’s behaviour in A′i .
Now, taking
M0 , (λy.MA )(zi′′ hV1 , ..., Vni i)
we obtain σ0 = JM0 K ↾ (ā, iA(0) ).
Finally, using the definability result and proposition 5.31 we can now show the following.
Corollary 5.34. T = (T , T, Q, O) satisfies ip-definability.
Proof: For each ā, A, B, define Dā_{A,B} ≜ { f : Qā JAK → T JBK | f is finitary } . By definability, every f ∈ Dā_{A,B} is definable. We need also show:
    (∀ρ ∈ Dā_{A→B,N}. Λā(f) ; ρ ∈ Oā =⇒ Λā(g) ; ρ ∈ Oā) =⇒ f .ā g .
Assume the LHS assertion holds and let Λā (f ) ; ρ ∈ Oā , some ρ : Qā (JAK −−⊗ T JBK) → T N.
Then, let [s ; t] = [(ā, ∗) ∗ ⊛ (0, ⊛)b̄ ] ∈ Λā (f ) ; ρ , [s] ∈ Λā (f ) and [t] ∈ ρ. By proposition 5.31,
[t] ∈ ρ≤t , so Λā(f) ; ρ≤t ∈ Oā . Moreover, ρ≤t ∈ Dā_{A→B,N} , so Λā(g) ; ρ≤t ∈ Oā , by hypothesis.
Finally, ρ≤t ⊑ ρ implies Λā (g) ; ρ≤t ⊑ Λā (g) ; ρ , hence the latter observable, so f .ā g.
Hence, we have shown full abstraction.
Theorem 5.35. T = (T , T, Q, O) is a fully abstract model of νρ.
5.7. An equivalence established semantically. In this last section we prove that the
following terms M and N are equivalent. The particular equivalence exemplifies the fact
that exceptional behaviour cannot be simulated in general by use of references, even of
higher-order.
M , λf. stop : (1 → 1) → 1 ,
N , λf. f skip ; stop : (1 → 1) → 1 .
By full-abstraction, it suffices to show JM K ⋍ JN K, where the latter are given as follows.
[Diagrams of JM K and JN K, both of type 1 → T ((1 −−⊗ T 1) −−⊗ T 1), omitted in this text rendering; the bottom links in them mark deadlocks.]
Bottom links stand for deadlocks: if Opponent plays a move (∗, ⊛)(2) under the last ∗ in JM K
(thus providing the function f ) then Player must play JstopK, i.e. remain idle. Similarly
for JN K: if Opponent gives an answer to (∗, ⊛)(3) (providing thus the outcome of f skip)
then Player deadlocks the play.
We have that JM K ⊑ JN K and therefore, by (5.5), JM K . JN K . Conversely, let ρ :
T ((1 −−⊗ T 1) −−⊗ T 1) → T N be a tl4 tidy strategy such that [∗ ∗ ⊛ (0, ⊛)ā ] ∈ JN K ; ρ for
some ā. Then, because of the form of JN K, ρ can only play initial moves up to (∗, ⊛)(1) ,
then possibly ask some names to (∗, ⊛)(1) , and finally play (0, ⊛)ā . Crucially, ρ cannot play
(∗, ⊛)(2) under ∗: this would introduce a question that could never be answered by JN K,
and therefore ρ would not be able to play (0, ⊛)ā without breaking well-bracketing. Hence,
JM K and ρ can simulate the whole interaction and therefore [∗ ∗ ⊛ (0, ⊛)ā ] ∈ JM K ; ρ.
6. Conclusion
Until recently, names used to be bypassed in Denotational Semantics: most approaches focussed on the effect achieved by use of names rather than names themselves. Characteristic
of this attitude was the ‘object-oriented’ modelling of references [6, 3] and exceptions [19]
as products of their effect-related methods (in the spirit of [39]). These approaches were
unsatisfactory to some extent, due to the need for ‘bad’ syntactic constructors in the examined languages. Moreover, they could not apply to the simplest nominal language, the
ν-calculus [36], since there the achieved effect could not be given an extensional, name-free
description. These issues revealed the need that names be treated as a proper computational
effect [44], and led to the advent of nominal games [2, 21].
In this paper we have taken some further steps in the semantics of nominal computation
by examining the effect of (nominal) general references. We have shown that nominal games
provide a framework expressive enough that, by use of appropriate monadic (and comonadic)
constructions, one can model general references without moving too far from the model of
the ν-calculus [2]. This approach can be extended to other nominal effects too; e.g. in [47]
it is applied to exceptions (with and without references). Moreover, we have examined
abstract categorical models for nominal computation, and references in particular (in the
spirit of [45, 44]).
There are many threads in the semantics of nominal computation which need to be
pursued further. Firstly, there are many nominal games models to build yet: research in
this direction has already been undertaken in [24, 22, 47, 31]. By constructing models for
more nominal languages we better understand the essential features of nominal computation (e.g. name-availability [31]) and build stronger intuitions on nominal games. Another
direction for further research is that of characterising the nominal effect — i.e. the computational effect that rises from the use of names — in abstract categorical terms. Here we
have pursued this task to some extent by introducing the monadic-comonadic description
of nominal computation, but it is evident that the description needs further investigation.
We see that there are more monad-comonad connections to be revealed, which will simplify
and further substantiate the presentation. The work of Schöpp which examines categories
with names [41] seems to be particularly helpful in this direction.
A direction which has not been pursued here is that of decidability of observational
equivalence in nominal languages. The use of denotational methods, and game semantics
in particular, for attacking the problem has been extremely successful in the ‘non-nominal’
case, having characterised decidability of (fragments of) Idealized Algol [13, 34, 32]. It
would therefore be useful to ‘nominalise’ that body of work and apply it to nominal calculi.
Already from [32] we can deduce that nominal languages with ground store are undecidable,
and from [36] we know that equivalence is decidable for programs of first-order type in the
ν-calculus, but otherwise the problem remains open.
Acknowledgements. I would like to thank Samson Abramsky for his constant encouragement,
support and guidance. I would also like to thank Andy Pitts, Andrzej Murawski, Dan
Ghica, Ian Stark, Luke Ong, Guy McCusker, Jim Laird, Paul Levy, Sam Sanjabi and the
anonymous reviewers for fruitful discussions, suggestions and criticisms.
Appendix A. Deferred proofs
I. Proof of closure of tidiness under composition.
Lemma A.1. Let σ : A → B and τ : B → C be tidy strategies, and let [s ; t] ∈ σ ; τ , [s] ∈ σ
and [t] ∈ τ , with ps k tq = s k t ending in a generalised O-move in AB and x, an O-move,
being the last store-H in psq. Let x appear in s k t as x̃. Then, x̃ is the last store-H in s k t
and if x is in A then all moves after x̃ in s k t are in A. Similarly for BC and t.
Proof: We show the (AB, s) case, the other case being entirely dual. Let s = s1 xs2 and let
x appear in s k t as some x̃. If x is in A then we claim that s2 is in A. Suppose otherwise,
so s = s1 xs21 ys22 with s21 in A and y a P-move in B. Since x appears in psq, the whole
of s21 y appears in it, as it is in P-view mode already. Since x is last store-H in psq, s21 y is
store-H-less. If y a store-Q then it should be justified by last O-store-H in ps<yq, that is x,
which is not possible as x is in A. Thus, y must be a store-A, say to some O-store-Q q in
B. Now, since q wasn’t immediately answered by P, tidiness dictates that psq be a copycat
from move q and on. But then the move following x in s must be a copy of x in B, .
Hence, s2 is in A and therefore it appears in psq, which implies that it is store-H-less. Thus,
x̃ is last store-H in s k t.
If x is in B then we do induction on |s k t|. The base case is encompassed in the case of
s2 being empty, which is trivial. So let s2 = s21 ys22 z with y justifying z (since x appears
in psq, z has to be justified in s2 ). z is not a store-H and neither is it a store-Q, as then
y would be a store-H after x in psq. Thus z a store-A and y a store-Q, the latter justified
by last O-store-H in ps<yq = psq<y , that is x, so y, z in B. Now, s = s1 xs21 ys22 z and
t = t1 x′ t21 y ′ t22 z ′ ; we claim that s21 and t21 are store-H-less. Indeed, s<y k t<y′ ends in a
generalised O-move in AB and x is still the last store-H in ps<yq , from which we have, by
IH, that x̃ is the last store-H in s<y k t<y′ .
Thus, s k t = (s1 k t1 )x̃v ỹuz̃ with v store-H-less. It suffices to show that u is also store-H-less. In fact, u = ỹ · · · ỹ z̃ · · · z̃, with ỹ and z̃ each repeated n times, for some n ≥ 0. Indeed, by tidiness of τ , (t22 z ′ ).1 is either
an answer to y ′ , whence t22 = u = ǫ, or a copy of it under the last O-store-H in pt≤y′q. If
the latter is in B then σ reacts analogously, and so on, so there is initially a sequence ỹ . . . ỹ
in u, played in B. As u finite, at some point σ (or τ ) either answers y (y ′ ) or copycats it
in A (in C). In the latter case, O immediately answers, as s (t) is in P-view mode in A (in
C). Hence, in either cases there is an answer that is copycatted to all open ỹ in u, yielding
thus the required pattern. Therefore, u is store-H-less.
Lemma A.2. Let σ : A → B and τ : B → C be tidy strategies, and let [s ; t] ∈ σ ; τ ,
[s] ∈ σ and [t] ∈ τ , with ps k tq = s k t ending in a generalised O-move. If there exists i ≥ 1
and store-Q’s q̃1 , ..., q̃i with q̃ = q̃j , all 1 ≤ j ≤ i, and q̃1 , ..., q̃i−1 in B and q̃i in AC and
[(s k t)q̃1 ...q̃i ] ∈ σ k τ , then q̃i is justified by the last O-store-H in s ; t.
Proof: By induction on |s k t|. The base case is encompassed in the case of s ; t containing
at most one O-store-H, which is trivial. Now let without loss of generality (s k t)q̃1 ...q̃i = (sq1 ...qi ) k (tq1′ ...qi−1′ ) with [sq1 ...qi ] ∈ σ and [tq1′ ...qi−1′ ] ∈ τ , and let each qj be justified by xj and each qj′ by x′j . Moreover, by hypothesis, xj = x′j , for 1 ≤ j ≤ i − 1, and therefore
each such pair xj , x′j appears in s k t as some x̃j , the latter justifying q̃j in s k t.
Now, assume without loss of generality that s k t ends in AB. Then, by tidiness of σ
and τ we have that, for each j ≥ 1,
q2j+1 = q2j ,    q2j′ = q2j−1′ ,    qj = qj′ .
For each j ≥ 1, q2j+1 is a P-move of σ justified by some store-H, say x2j+1 . By tidiness of
σ, x2j+1 is the last O-store-H in ps<q2j+1q = ps≤q2jq, and therefore x2j+1 is the last store-H
in ps<x2jq. Then, by previous lemma, x̃2j+1 is the last store-H in s<x2j k t<x′2j = (s k t)<x̃2j .
Similarly, x̃2j is the last store-H in (s k t)<x̃2j−1 . Hence, the store-H subsequence of (s k t)≤x̃1
ends in x̃i ...x̃1 .
Now, by tidiness of σ, x1 is the last O-store-H in psq. If x1 is also the last store-H in
psq then, by previous lemma, x̃1 is the last store-H in s k t, hence x̃i is the last store-H in
s ; t. Otherwise, by corollary 5.21, q1 is a copy of s.−1 = q0 . If q0 is in A then its justifier is
s.−2 = x0 and, because of CC-mode, the store-H subsequence of s k t ends in x̃i ...x̃1 x̃0 , so x̃i
is the last O-store-H in s ; t. If q0 is in B then we can use the IH on s− k t− and q̃0 , q̃1 , ..., q̃i ,
and obtain that x̃i is the last O-store-H in s− ; t− = s ; t.
Proposition A.3. If σ : A → B and τ : B → C are tidy strategies then so is σ ; τ .
Proof: Take odd-length [s ; t] ∈ σ ; τ with not both s and t ending in B, ps k tq = s k t and
|s ; t| odd. We need to show that s ; t satisfies (TD1-3). As (TD2) is a direct consequence
of the previous lemma, we need only show the other two conditions. Assume without loss
of generality that s ; t ends in A.
For (TD1), assume s ; t ends in a store-Q q̃. Then s ends in some q, which is justified
by the P-store-H s.−2 = x (also in A). q is either answered or copied by σ ; in particular,
if q̃ = aā with a # ps ; tq− = s− ; t then a # s− , t , so σ copies q. If σ answers q with z then
z doesn’t introduce new names, so [(s ; t)z̃] ∈ σ ; τ with nlist(z̃) = nlist(q̃) and z̃ = z , as
required.
Otherwise, let σ copy q as q1 , say, under last O-store-H in psq, say x1 . If x1 is in B
then sq1 ≍ tq1′ , with q1 , q1′ in B and q1′ being q1 with name-list that of its justifier, say x′1 ,
where x1 = x′1 . Now [tq1′ ] ∈ τ and it ends in a store-Q, so τ either answers it or copies it
under last O-store-H in ptq1′q. In particular, if q = aā with a # ps ; tq then, as above, a # t
and τ copies q1′ . This same reasoning can be applied consecutively, with copycats attaching
store-Q’s to store-H’s appearing each time earlier in s and t. As the latter are finite and
initial store-H’s are third moves in s and t, at some point either σ plays qi in A or answers it
in B, or τ plays qi′ in C or answers it in B. If an answer occurs then it doesn’t introduce new
names (by tidiness), so it is copycatted back to q closing all open qj ’s and qj′ ’s. Otherwise,
we need only show that, for each j, q̃j = q̃, which we do by induction on j: q̃1 = q^(s • t, ǫ) and q̃j+1 = q^((s≤qj ) • (t≤qj′ ), ǫ) = q̃j = q̃, where the middle equality holds by the IH. This proves (TD1).
For (TD3), assume s ; t = uq̃(O) q̃(P ) v ỹ with q̃(O)q̃(P ) v a copycat. Then, either both
q̃(O) , q̃(P ) are in A, or one is in A and the other in C. Let’s assume q̃(O) in A and q̃(P ) in C
— the other cases are shown similarly. Then, q̃(O) her(editarily)-justifies ỹ, and let s.−1 = y
be justified by some x in s. Now, as above, q̃(O) q̃(P ) is witnessed by some q̃(O)q̃1 . . . q̃i q̃(P ) in
s k t, with odd i ≥ 1 and all q̃j ’s in B. We show by induction on 1 ≤ k ≤ i that there exist
x1 , ..., xk , x′1 , ..., x′k , y1 , ..., yk , y1′ , ..., yk′ in B such that (sy1 . . . yk k ty1′ . . . yk′ ) ∈ σ k τ and, for
each relevant j ≥ 1,
yj = yj′ = y ,    y1 = y ,    y2j = y2j+1 ,    y2j−1′ = y2j′ ,    xj = x′j
with qj her-justifying xj in s and xj justifying yj (and qj′ her-justifying x′j in t and x′j
justifying yj′ ), and x̃j+1 , x̃j consecutive in s k t, and x̃1 , x̃ also consecutive.
For k = 1, let s = s1 q(O) q1 s2 y. Now, q̃(O) her-justifying ỹ implies that q(O) her-justifies
y, hence it appears in psq. Thus psq = s′1 q(O) q1 s′2 y, so, by (original definition of) tidiness,
[sy1 ] ∈ σ with y1 = y justified by x1 = psq .−3 = s.−3. Then, [ty1′ ] ∈ τ with y1′ = y1 .
By proposition 5.19, q(O) q1 s′2 is a copycat, so q1 her-justifies x1 and therefore x1 , y1 in B.
Finally, x = psq .−2 = s.−2 is a P-move so x̃1 , x̃ are consecutive in s k t.
For even k > 1 we have, by IH, that (sy1 . . . yk−1 k ty1′ . . . yk−1′ ) ∈ σ k τ with yk−1′ an O-move her-justified by qk−1′ , itself an O-move. Then, qk−1′ appears in pty1′ ...yk−1′q, so pty1′ ...yk−1′q = t1 qk−1′ qk′ t2 yk−1′ , thus (by tidiness) [ty1′ ...yk−1′ yk′ ] ∈ τ with yk′ = yk−1′ justified by xk′ = pty1′ ...yk−1′q .−3. Now, qk−1′ qk′ t2 is a copycat so qk′ her-justifies xk′ . Moreover, xk′ , xk−1′ are consecutive in ptq, so, as xk−1′ is a P-move, they are consecutive in t, and therefore x̃k , x̃k−1
consecutive in s k t. Finally, [sy1 . . . yk−1 yk ] ∈ σ with yk = yk′ . The case of k odd is entirely
dual.
Now, just as above, we can show that there exist xi+1′ , yi+1′ in C such that [ty1′ ...yi′ yi+1′ ] ∈ τ and yi+1′ justified by xi+1′ , xi+1′ her-justified by q(P ) , etc. Then [(s ; t)ỹi+1 ] ∈ σ ; τ with
x̃i+1 , x̃i , ..., x̃1 , x̃ consecutive in s k t, so x̃i+1 = (s ; t).−3. Finally, as above, ỹi+1 = ỹj = ỹ,
all j, as required.
II. Proof of Decomposition Lemma 5.32: 1 is straightforward: we just partition σ into σ0
ā
and σ ′ and recover it by use of [x = iA(0) ] and cnd. For 2, we just use the definition of
name-abstraction for strategies and the condition on σ.
For 3, it is clear that m0 is either a store-Q a under ⊛, or a store-H jA , or a store-H
(iB , ⊛).
In case m0 = a with a ∈ AC , we define σa : Qā (JAK ⊗JCK) → T JBK , strat(fa ) ,
where
fa , { [ (ā, iA(0) , iC ) ∗ ⊛ s ] | [ (ā, iA(0) ) ∗ ⊛ a iC s ] ∈ viewf(σ) } .
To see that fa is a viewfunction it suffices to show that its elements are plays, and for
that it suffices to show that they are legal. Now, for any [(ā, iA(0) , iC ) ∗ ⊛ s] ∈ fa with
[(ā, iA(0) ) ∗ ⊛ a iC s] ∈ viewf(σ), (ā, iA(0) , iC ) ∗ ⊛ s is a justified sequence and satisfies wellbracketing, as its open Q’s outside s are the same as those in (ā, iA(0) ) ∗ ⊛ a iC s , i.e. ⊛.
Moreover, visibility is obvious. Hence, fa is a viewfunction, and it inherits tidiness from σ.
Moreover, we have the following diagram.
[Diagram: the composite Qā JAK −(hid, φi ; τ ; T ζ ′ )→ T Qā (JAK ⊗JCK) −(T σa )→ T 2 JBK −(µ)→ T JBK , with the play (ā, iA(0) ) ∗ ⊛ a iC of σ related by copycat links to the play (ā, iA(0) , iC , ⊛) (∗, ⊛) ⊛ of σa .]
Because of the copycat links, we see that
viewf(hid, φi ; τ ; T ζ ′ ; T σa ; µ) ↾ (ā, iA(0) )
= {[(ā, iA(0) ) ∗ ⊛ a iC s] | [(ā, iA(0) , iC ) ∗ ⊛ s] ∈ viewf(σa )} = viewf(σ) ,
as required. Note that the restriction to initial moves [ā, iA(0) ] taken above is necessary in
case φ contains a projection (in which case it may also answer other initial moves).
In case m0 = jA (so m0 a store-H) and [(ā, iA(0) ) ∗ ⊛ m0 a iC ] ∈ σ, we have that
σ = strat(fa ∪ (f ′ \ fa′ )) ,
where fa , f ′ are viewfunctions of type Qā JAK → T JBK, so that fa determines σ’s behaviour
if O plays a at the given point, and f ′ \ fa′ determines σ’s behaviour if O plays something
else. That is,
fa , { [ (ā, iA(0) ) ∗ ⊛ jA a iC s ] ∈ viewf(σ) }
fa′ , { [ (ā, iA(0) ) ∗ ⊛ jA a s ] | [⊛ ⊛ a s] ∈ viewf(idξ ) }
f ′ , fa′ ∪ { [ (ā, iA(0) ) ∗ ⊛ jA y s ] ∈ viewf(σ) | y 6= a } .
f ′ differs from viewf(σ) solely in the fact that it doesn’t answer a but copycats it instead;
it is a version of viewf(σ) which has forgotten the name-update of a. On the other hand,
fa contains exactly the information for this update. It is not difficult to see that f ′ , fa are
indeed viewfunctions. We now define
fa′′ : Qā JAK → T JCK , { [ (ā, iA(0) ) ∗ ⊛(iC , ⊛) s ] |
[ (ā, iA(0) ) ∗ ⊛jA a iC s ] ∈ fa ∨ [⊛ ⊛ s] ∈ viewf(idξ ) }
σa : Qā JAK → T JCK , strat(fa′′ )
σ ′ : Qā JAK → T JBK , strat(f ′ )
σ ′′ : Qā JAK → T JBK , h∆, σa i ; τ ; T (id⊗φ ; τ ) ; µ ; ≅ ; T σ ′ ; µ .
We can see that σ ′ is a tidy strategy. For σa , it suffices to show that fa′′ is a viewfunction,
since tidiness is straightforward. For that, we note that even-prefix closure and singlevaluedness are clear, so it suffices to show that the elements of fa′′ are plays.
So let [(ā, iA(0) ) ∗ ⊛ (iC , ⊛) s] ∈ fa′′ with [(ā, iA(0) ) ∗ ⊛ jA a iC s] ∈ viewf(σ). We have
that (ā, iA(0) ) ∗ ⊛ (iC , ⊛) s is a justified sequence, because s does not contain any moves
justified by jA or a. In the former case this holds because we have a P-view, and in the
latter because a is a closed (answered) Q. Note also that there is no move in s justified by
⊛: such a move (iB , ⊛) would be an A ruining well-bracketing as jA is an open Q, while a
store-Q under ⊛ is disallowed by tidiness as s.1 is an O-store-H. Finally, well-bracketing,
visibility and NC’s are straightforward.
We now proceed to show that σ = σ ′′ . By the previous analysis on fa′′ we have that
σa = σa′ ; η (modulo totality) where σa′ is the possibly non-total strategy
σa′ : Qā JAK → JCK , strat{ [ (ā, iA(0) ) iC s ] | [ (ā, iA(0) ) ∗ ⊛ jA a iC ] ∈ fa } ,
and hence σ ′′ ↾ (ā, iA(0) ) = h∆, σa′ i ; id⊗φ ; τ ; ≅ ; T σ ′ ; µ . Analysing the behaviour of the
latter composite strategy and observing that the response of σ ′′ to inputs different than
[ā, iA(0) ] is merely the initial answer ∗ imposed by totality, we obtain:
viewf(σ ′′ ) = { [ (ā, iA(0) ) ∗ ⊛ jA a s ], [(ā, iA(0) ) ∗ ⊛ jA y s] ∈ viewf(σ ′′ ) | y 6= a }
= { [ (ā, iA(0) ) ∗ ⊛ jA a iC s ] | [ (ā, iA(0) ) ∗ ⊛ (iC , ⊛) s] ∈ fa′′ ∧ s.1 ∈ JJCK }
∪ { [ (ā, iA(0) ) ∗ ⊛ jA y s ] ∈ f ′ | y 6= a }
= fa ∪ (f ′ \ fa′ ) = viewf(σ)
as required.
In case x = (iB , ⊛) we work similarly as above.
References
[1] Abramsky, S. Domain theory. Lecture Notes, Oxford University Computing Laboratory, 2007.
[2] Abramsky, S., Ghica, D., Murawski, A., Ong, L., and Stark, I. Nominal games and full abstraction for the nu-calculus. In LICS ’04: Proceedings of the 19th Annual IEEE Symposium on Logic in
Computer Science (Turku, 2004), IEEE Computer Society Press, pp. 150–159.
[3] Abramsky, S., Honda, K., and McCusker, G. A fully abstract game semantics for general references. In LICS ’98: Proceedings of the 13th Annual IEEE Symposium on Logic in Computer Science
(Indianapolis, 1998), IEEE Computer Society Press, pp. 334–344.
[4] Abramsky, S., and Jagadeesan, R. Games and full completeness for multiplicative linear logic.
Journal of Symbolic Logic 59, 2 (1994), 543–574.
[5] Abramsky, S., Jagadeesan, R., and Malacaria, P. Full abstraction for PCF. Information and
Computation 163, 2 (2000), 409–470.
[6] Abramsky, S., and McCusker, G. Linearity, Sharing and State: a fully abstract game semantics for
Idealized Algol. In O’Hearn and Tennent [33], pp. 297–329. Vol. 2, 1997.
[7] Baillot, P., Danos, V., and Ehrhard, T. Believe it or not, AJM’s games model is a model of classical
linear logic. In LICS ’97: Proceedings of the 12th Annual IEEE Symposium on Logic in Computer
Science (Warsaw, 1997), IEEE Computer Society Press, pp. 68–75.
[8] Barr, M., and Wells, C. Category theory for computing science, third ed. Les Publications CRM,
Montreal, 1999.
[9] Brookes, S., and Geva, S. Computational comonads and intensional semantics. In Applications of
Categories in Computer Science: Proceedings LMS Symposium (Durham, 1991), vol. 177, Cambridge
University Press, pp. 1–44.
[10] Brookes, S., and van Stone, K. Monads and comonads in intensional semantics. Tech. Rep. CMUCS-93-140, Carnegie Mellon University, 1993.
[11] Freyd, P. J. Recursive types reduced to inductive types. In LICS’90: Proceedings of the 5th Annual
IEEE Symposium on Logic in Computer Science (Philadelphia, 1990), IEEE CS Press, pp. 498–507.
[12] Gabbay, M. J., and Pitts, A. M. A new approach to abstract syntax with variable binding. Formal
Aspects of Computing 13 (2002), 341–363.
[13] Ghica, D. R., and McCusker, G. Reasoning about Idealized Algol using regular languages. In
ICALP ’00: Proceedings of 27th International Colloquium on Automata, Languages and Programming
(Geneva, 2000), vol. 1853 of LNCS, Springer-Verlag, pp. 103–116.
[14] Harmer, R. Games and full abstraction for nondeterministic languages. DPhil thesis, University of
London, 1999.
[15] Honda, K., and Yoshida, N. Game-theoretic analysis of call-by-value computation. Theoretical Computer Science 221, 1–2 (1999), 393–456.
[16] Hyland, J. M. E., and Ong, C.-H. L. On full abstraction for PCF: I, II, III. Information and
Computation 163, 2 (2000), 285–408.
[17] Jeffrey, A., and Rathke, J. A fully abstract may testing semantics for concurrent objects. In
LICS ’02: Proceedings of the 17th Annual IEEE Symposium on Logic in Computer Science (Copenhagen, 2002), IEEE Computer Society Press, pp. 101–112.
[18] Jones, S. P. Haskell 98 Language and Libraries: The Revised Report. Cambridge University Press,
May 2003.
[19] Laird, J. A fully abstract game semantics of local exceptions. In LICS ’01: Proceedings of the 16th
Annual IEEE Symposium on Logic in Computer Science (Boston, 2001), IEEE CS Press, p. 105.
[20] Laird, J. A categorical semantics of higher order store. In CTCS ’02: Category Theory and Computer
Science (Ottawa, 2002), vol. 69 of Electronic Notes in Theoretical Computer Science, pp. 209–226.
[21] Laird, J. A game semantics of local names and good variables. In FoSSaCS ’04: Proceedings of the 7th
International Conference on Foundations of Software Science and Computation Structures (Barcelona,
2004), vol. 2987 of Lecture Notes in Computer Science, Springer, pp. 289–303.
[22] Laird, J. Game semantics for higher-order concurrency. In FSTTCS ’06: Proceedings of the 26th
International Conference on Foundations of Software Technology and Theoretical Computer Science
(Kolkata, 2006), vol. 4337 of Lecture Notes in Computer Science, Springer, pp. 417–428.
[23] Laird, J. A fully abstract trace semantics for general references. In ICALP ’07: Proceedings of the
34th International Colloquium on Automata, Languages and Programming (Wroclaw, 2007), vol. 4596
of Lecture Notes in Computer Science, Springer-Verlag, pp. 667–679.
[24] Laird, J. A game semantics of names and pointers. Annals of Pure and Applied Logic 151 (2008), 151–
169. GaLoP ’05: First Games for Logic and Programming Languages Workshop (post-proceedings).
[25] Mac Lane, S. Categories for the working mathematician, second ed., vol. 5 of Graduate texts in
mathematics. Springer Verlag, 1998.
[26] McCusker, G. Games and Full Abstraction for a Functional Metalanguage with Recursive Types.
Distinguished Dissertations. Springer-Verlag, London, 1998.
[27] Milner, R., Tofte, M., and Macqueen, D. The Definition of Standard ML. MIT Press, 1997.
[28] Moggi, E. Computational lambda calculus and monads. Tech. Rep. ECS-LFCS-88-86, University of
Edinburgh, 1988.
[29] Moggi, E. Computational lambda-calculus and monads. In LICS ’89: Proceedings of 4th Annual IEEE
Symposium on Logic in Computer Science (Pacific Grove, 1989), IEEE CS Press, pp. 14–23.
[30] Moggi, E. Notions of computation and monads. Information and Computation 93, 1 (1991), 55–92.
[31] Murawski, A., and Tzevelekos, N. Full abstraction for Reduced ML. In FoSSaCS ’09: Proceedings
of the 12th International Conference on Foundations of Software Science and Computation Structures
(York, 2009), vol. 5504 of Lecture Notes in Computer Science, Springer, pp. 32–47.
[32] Murawski, A. S. On program equivalence in languages with ground-type references. In LICS ’03:
Proceedings of the 18th IEEE Symposium on Logic in Computer Science (Ottawa, 2003), pp. 108–117.
[33] O’Hearn, P. W., and Tennent, R. D., Eds. ALGOL-like Languages. Birkhäuser, 1997.
[34] Ong, C.-H. L. Observational equivalence of third-order Idealized Algol is decidable. In LICS ’02:
Proceedings of the 17th Annual IEEE Symposium on Logic in Computer Science (Copenhagen, 2002),
IEEE Computer Society Press, pp. 245–256.
[35] Pitts, A. M. Nominal logic, a first order theory of names and binding. Information and Computation
186 (2003), 165–193.
[36] Pitts, A. M., and Stark, I. D. B. Observable properties of higher order functions that dynamically
create local names, or: What’s new? In MFCS ’93: Proceedings of 18th International Symposium on
Mathematical Foundations of Computer Science (Gdańsk, 1993), vol. 711 of Lecture Notes in Computer
Science, Springer-Verlag, Berlin, pp. 122–141.
[37] Plotkin, G. D. LCF considered as a programming language. Theoretical Computer Science 5 (1977),
223–255.
[38] Plotkin, G. D., and Power, J. Notions of computation determine monads. In FoSSaCS ’02: Proceedings of the 5th International Conference on Foundations of Software Science and Computation
Structures (Grenoble, 2002), Springer-Verlag, pp. 342–356.
[39] Reynolds, J. C. The essence of Algol. In Proceedings of the International Symposium on Algorithmic
Languages (Amsterdam, 1981), North-Holland, pp. 345–372. Reprinted in [33, vol. 1, pages 67–88].
[40] Sanjabi, S. B., and Ong, C.-H. L. Fully abstract semantics of additive aspects by translation. In
AOSD ’07: Proceedings of the 6th international conference on Aspect-oriented software development
(Vancouver, 2007), ACM, pp. 135–148.
[41] Schöpp, U. Names and Binding in Type Theory. DPhil thesis, University of Edinburgh, 2006.
[42] Scott, D. S. A type-theoretical alternative to ISWIM, CUCH, OWHY. Theoretical Computer Science
121, 1-2 (1993), 411–440. First written in 1969 and circulated privately.
[43] Smyth, M. B., and Plotkin, G. D. The category-theoretic solution of recursive domain equations.
SIAM Journal on Computing 11, 4 (1982), 761–783.
[44] Stark, I. D. B. Names and Higher-Order Functions. PhD thesis, University of Cambridge, Dec. 1994.
Also available as Technical Report 363, University of Cambridge Computer Laboratory.
[45] Stark, I. D. B. Categorical models for local names. Lisp and Symbolic Computation 9, 1 (Feb. 1996),
77–107.
[46] Tzevelekos, N. Full abstraction for nominal general references. In LICS ’07: Proceedings of the 22nd
Annual IEEE Symposium on Logic in Computer Science (Wroclaw, 2007), IEEE Computer Society
Press, pp. 399–410.
[47] Tzevelekos, N. Full abstraction for nominal exceptions and general references. In GaLoP ’08: Games
for Logic and Programming Languages (Budapest, 2008). Journal version submitted to APAL.
[48] Tzevelekos, N. Nominal game semantics. DPhil thesis, Oxford University, 2008.
[49] Wadler, P. The essence of functional programming. In POPL ’92: Conference Record of the 19th
ACM Symposium on Principles of Programming Languages (Albuquerque, 1992), pp. 1–14.
This work is licensed under the Creative Commons Attribution-NoDerivs License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nd/2.0/ or send a letter to Creative Commons, 171 Second St, Suite 300, San Francisco, CA 94105, USA, or Eisenacher Strasse 2, 10777 Berlin, Germany.
Privacy-Enhanced Architecture for Occupancy-based
HVAC Control
Ruoxi Jia1 , Roy Dong2 , S. Shankar Sastry2 , Costas J. Spanos1
Department of Electrical Engineering and Computer Sciences
University of California, Berkeley
arXiv:1607.03140v1 [cs.CR] 11 Jul 2016
ruoxijia@berkeley.edu,roydong@eecs.berkeley.edu
sastry@eecs.berkeley.edu,spanos@berkeley.edu
ABSTRACT
Large-scale sensing and actuation infrastructures have allowed buildings to achieve significant energy savings; at the
same time, these technologies introduce significant privacy
risks that must be addressed. In this paper, we present
a framework for modeling the trade-off between improved
control performance and increased privacy risks due to occupancy sensing. More specifically, we consider occupancybased HVAC control as the control objective and the location traces of individual occupants as the private variables.
Previous studies have shown that individual location information can be inferred from occupancy measurements. To
ensure privacy, we design an architecture that distorts the
occupancy data in order to hide individual occupant location
information while maintaining HVAC performance. Using
mutual information between the individual’s location trace
and the reported occupancy measurement as a privacy metric, we are able to optimally design a scheme to minimize
privacy risk subject to a control performance guarantee. We
evaluate our framework using real-world occupancy data:
first, we verify that our privacy metric accurately assesses
the adversary’s ability to infer private variables from the
distorted sensor measurements; then, we show that control
performance is maintained through simulations of building
operations using these distorted occupancy readings.
CCS Concepts
•Security and privacy → Privacy protections;
•Computing methodologies → Control methods; Modeling methodologies;
1. This research is funded by the Republic of Singapore’s National Research Foundation through a grant to the Berkeley Education Alliance for Research in Singapore (BEARS)
for the Singapore-Berkeley Building Efficiency and Sustainability in the Tropics (SinBerBEST) Program. BEARS has
been established by the University of California, Berkeley as
a center for intellectual excellence in research and education
in Singapore.
2. This research is supported in part by FORCES (Foundations Of Resilient CybEr-Physical Systems), which receives
support from the National Science Foundation (NSF award
numbers CNS-1238959, CNS-1238962, CNS-1239054, CNS1239166).
Keywords
Energy; privacy; model predictive control; HVAC; optimization; occupancy
1.
INTRODUCTION
Large-scale sensing and actuation infrastructures have endowed buildings with the intelligence to perceive the status of their environment, energy usage, and occupancy, and
to provide fine-grained and responsive controls over heating, cooling, illumination, and other facilities. However, the
information that is collected and harnessed to enable such
levels of intelligence may potentially be used for undesirable
purposes, thereby raising the question of privacy. To spotlight the value of building sensory data and its potential for
exploitation in the inference of private information, we consider as a motivating example the occupancy data, i.e., the
number of occupants in a given space over time.
Occupancy data is a key component to perform energyefficient and user-friendly building management. Particularly, it offers considerable potential for improving energy
efficiency of the heating, ventilation, and air conditioning
(HVAC) system, a significant source of energy consumption
which contributes to more than 50% of the energy consumed
in buildings [12]. Recent papers [4, 24, 13] have demonstrated substantial energy savings of up to 40% by enabling
intelligent HVAC control in response to occupancy variations. The value of occupancy data in building management
has also inspired extensive research on occupancy sensing [9,
19, 20, 23, 35] as well as a number of commercial products
which can provide high accuracy occupancy data.
While people have enjoyed the benefits brought by occupancy data, the privacy risks potentially posed by the
data are largely overlooked (Figure 1). In effect, location
traces of individual occupants can be inferred from the occupancy data with some auxiliary information [34]. Throughout this paper, we refer to the individual location trace as
the private information to be protected. The contextual
information attached to location traces tells much about
the individuals’ habits, interests, activities, and relationships [25]. It can also reveal their personal or corporate secrets, expose them to unwanted advertisement and locationbased spams/scams, cause social reputation or economic
Figure 1: An overview of the problem of individual occupant location recovery. The building manager collects occupancy data to enable intelligent HVAC controls adapted to occupancy variations. However, an adversary with malicious intent may exploit occupancy data in combination with the auxiliary information to infer private details about the indoor locations of building users.
damage, make them victims of blackmail or even physical
violence [31].
At a first glance, it is surprising that occupancy data may
incur risks of privacy breach, since it only reports the number of occupants in a given space over time without revealing
the identities of the occupants. To illustrate why it is possible to infer location traces from seemingly “anonymized”
occupancy data, consider the following scenario. We start
by observing two users in one room and then one of them
leaves the room and enters another room. We cannot tell
which one of the two made this transition by observing the
occupancy change. However, if the one who left entered an
private office, the user can be identified with high probability based on the ownership of the office. Although a change
in occupancy data may correspond to location shifts of many
possible potential users, the knowledge of where the individuals mostly spend their time rules out many possibilities and
renders the individual who made the transition identifiable.
It has been shown in [34] that by simply combining some
ancillary information, such as an office directory and user
mobility patterns, individual location traces can be inferred
from the occupancy data with the accuracy of more than
90%. It is, therefore, the objective of this paper to enable
an occupancy-based HVAC control system that provides privacy features for each user on a par with thermal comfort
and energy efficiency.
A simple yet effective way to preserve privacy is to obfuscate occupancy data by injecting noise to make the data
itself less informative. This approach has been widely used
in privacy disclosure control of various databases, ranging
from healthcare [7], geolocation [2], web-browsing behavior
data [14], etc. While reducing the risk of privacy breach,
this approach would also deteriorate the utility of the data.
There have been attempts to balance learning the statistics
of interest reliably with safeguarding the private information [32]. Cryptography [8] and access control [33] are also
effective means to ease privacy concerns, but they do not
provide protection against all privacy breaches. There may
be insiders who can access the private, decrypted data, or
the building manager may not want to have access to (and
responsibility for) the private data.
The objective of this paper cannot be attained by simply
extending the techniques developed previously. Our task is
more challenging. Firstly, as opposed to learning some fixed
statistics from static data in most database applications, the
data is used for controlling a highly complex and dynamic
system in our case, and the control performance relies on
the data fidelity. With highly accurate occupancy data, the
infrastructure can correctly sense the environment and enable proper response to occupancy variations; nevertheless,
the location privacy is sacrificed. On the other hand, the
usage of severely distorted occupancy data reduces the risks
of privacy leakage, but may lead to even higher levels of
energy consumption and discomfort. Essentially, we need
to address the trade-off between the performance of a controller on a dynamical system, and, similarly, privacy of a
time-varying signal, i.e. the location traces of individual
occupants. Secondly, from the perspective of the building
manager, the building performance is paramount: adding
the privacy feature into the HVAC control system should
not impair the performance of HVAC controller in terms
of energy efficiency and thermal comfort. To achieve this,
the injected noise should be calculated to minimally affect
performance of the controller, while maximizing the amount
privacy gained from the distortion.
In this paper we develop a method which minimizes the
privacy risks incurred by collection of occupancy data while
guaranteeing the HVAC system operating in a “nearly” optimal condition. Our solution relies on an occupancy distortion mechanism, which informs the building manager how
to distort occupancy data before any form of storage or
transport of the data. We draw the inspiration from the
information-theoretic approach in [29, 10] for characterizing
the privacy-utility trade-off, and choose the mutual information (MI) between reported occupancy measurements and
individual location traces as our privacy metric. The design
problem of finding the optimal occupancy distortion mechanism is cast as an optimization problem where the privacy
risk is minimized for a set of constraints on the controller
performance. This allows us to find points on the Pareto
frontier in the utility-privacy trade-off, and to further analyze the economic side of privacy concerns [30]. The formulation can be easily generalized to resolve the tension between privacy and data utility in other cases where a control
system utilizes some privacy-sensitive information as one of
the control inputs, although in this paper we limit our focus to addressing the privacy concern of occupancy-based
HVAC controller. In addition, our work here is complementary to the work being done in the cryptography communities: we can use our distortion mechanism to process sensor
measurements, and then transmit the processed measurements across secure channels. Our work also serves as a
complement for the privacy-preserving access control protocol in [33], as it provides distortion mechanisms against
adversaries who might be able to subvert the protocol while
still retaining the benefits for the occupancy data.
The main contributions of our paper are as follows:
• We present a systematic methodology to characterize
the privacy loss and control performance loss.
• We develop a holistic and tractable framework to balance the privacy pursuit and control performance.
• We evaluate the trade-off between privacy and HVAC
control performance using the real-world occupancy
data and simulated building dynamics.
The rest of the paper is organized as follows: Section 2
reviews the existing work on occupancy-based control algorithms and privacy metrics. Section 3 describes the models
connecting location and occupancy, and the HVAC system
model that will be considered in this paper. In Section 4 we
present a framework for quantifying the trade-off between
privacy and controller performance. We will evaluate the
framework and demonstrate its practical values based on
experimental studies in Section 5. Section 6 concludes the
paper.
2.
2.1
RELATED WORK
Occupancy-based HVAC control
Occupancy-based HVAC systems exploit real-time occupancy measurements to condition the space appropriate to
usage. The occupancy-based controllers in the existing work
can be categorized into two types: rule-based controller and
optimization-based controller or model predictive control
(MPC). The rule-based controller uses an “if condition then
action” logic for decision making in accordance with occupancy variations [13, 4]. MPC is a more advanced control
scheme, which employs a model of building thermal dynamics in order to predict the future evolution of the system, and
solves an optimization problem in real-time to determine
control actions [27]. A number of papers including [16, 17,
3] analyzed in large-scale simulative or experimental studies
the energy saving potential in building climate control by
using MPC, which was shown to be well-suited for building
applications. This leads to our choice of MPC to exemplify
the trade-off between controller performance and privacy.
Occupancy information can be leveraged in different ways
in an MPC-based controller. One approach is to build an occupancy model to predict future occupancy based on which
the MPC optimizes control actions [5]. Another method is to
use the instantaneous occupancy measurement and consider
it to be constant during the control horizon of MPC [15].
This method has been demonstrated to achieve comparable
performance with the MPC that exploits occupancy predictions. We will thus without loss of generality follow the
latter set-up to avoid explicit modeling of occupancy.
2.2
Privacy
Privacy, although not a new topic, has recently developed
renewed interest, due in no small part to new technologies
and modern infrastructures collecting and storing unprecedented amounts of data. Since privacy is an abstract and
subjective concept, it is necessary to develop proper measures for privacy before any privacy protection technique is
discussed.
Differential privacy [11] is one of the most popular metrics for privacy from the area of statistical databases. It is
is typically assured by adding appropriately chosen random
noise to the database output. However, calculating optimal
noise for differential privacy is very difficult, and research
on the applications of differential privacy mostly assumes
the injected noise to be an additive zero-mean Gaussian
or Laplacian random variable, which offers no guarantee
on data utility. As mentioned in the introduction, in our
case the performance of HVAC control systems is crucial:
as such, our work is an effort to maintain control efficacy by
optimally designing noise distribution to maximize privacy
subject to a performance guarantee.
Recently, MI has become a popular privacy metric [29, 10,
18]. Intuitively, MI reflects the change in the uncertainty of
a private variable due to the observation of a public random
variable. In fact, it is the only metric of information leakage that satisfies the data processing inequality [18]. Unlike
differential privacy, this requires some modeling of the adversary’s available ancillary information; however, in practice, we can suppose an adversary with access to a large
amount of ancillary information, which gives a bound on any
weaker adversary’s performance. A framework for characterizing privacy-utility trade-off based on MI was proposed
in [10], where the MI between a private variable and a distorted measurement is minimized subject to the bound on
the value of an exogenous distortion metric that measures
the utility loss from replacing a true measurement with a
distorted measurement. Our work is an extension of [10]
to the situations where dynamics at present. We propose a
method to abstract out control performance of a dynamical
system into a distortion metric, as well as a set of reasonable assumptions for the probabilistic dependencies between
occupancy and location data, which allow us to re-write our
privacy metric on time-series data into a static situation akin
to that developed in [10].
3.
PRELIMINARIES
This section collects the concepts we need before introducing the theoretical framework that characterizes the tradeoff between privacy and control performance in Section 4.
Two models are described: the occupancy-location model
that formulates the relationship between occupancy observations and individual location traces, and the model for the
HVAC system. We will first consider an occupancy detection system that can collect noise-free or true occupancy,
which is then processed by a distortion mechanism into the
obfuscated data that the controller observes. We will see
the distortion can be similarly applied to noisy occupancy,
as elaborated in Section 4.
3.1
Occupancy-location model
Suppose the building of interest consists of N zones represented by Z = {z0 , z1 , · · · , zN }, where a special zone z0
is added to refer to the outside of the building. Let O =
{o1 , · · · , oM } denote the set of occupants. The location of
occupant om at time k is a random variable denoted by
(m)
Xk which takes values in the set Z, for m = 1, · · · , M .
The true occupancy of zone zn at time k is denoted by Ykn ,
n = 0, 1, · · · , N . Ykn takes values from {0, 1, · · · , M }, where
M is the total number of occupants in the building. Note
that the true occupancy and individual location traces are
P
(m)
connected by Ykn = M
= zn ], where 1[·] is the
m=1 1[Xk
indicator function.
Additionally, we suppose that the controller observes a
distorted version of the true occupancy, denoted by Vkn which
takes values from {0, 1, . . . , M }. P(Vkn |Ykn ) represents the
distortion mechanism we wish to design. If no distortion
on the occupancy data is applied, then Vkn = Ykn . We fur(1:M )
(1)
(M )
ther define some shorthands: Xk
:= {Xk , · · · , Xk },
1:N
1
N
Vk := {Vk , · · · , Vk }.
We make the following assumptions.
Assumption 1. The location traces for different occupants
Q
(1:M )
(m)
are mutually independent: P(Xk
)= M
).
m=1 P(Xk
Assumption 2. The location trace for any given occupant
om , m ∈ {1, . . . , M }, has the first-order Markov property:
(m)
(m)
(m)
(m)
P(Xk |Xk−1 , Xk−2 , . . . , X1 )
=
(m)
(m)
P(Xk |Xk−1 )
(1)
Assumption 3. The true occupancy Ykn is a sufficient stat(1:M )
istics for Vkn , i.e., P(Vkn |Xk
) = P(Vkn |Ykn ).
Assumption 3 is naturally justified since the distribution
of Vkn depends only on the value of Ykn in our distortion
mechanism. The first two assumptions are necessary to design the optimal distortion method, but we will show that
our distortion method will work on the real-world occupancy dataset, which provides a support for Assumption 1
and 2. These assumptions allow us to model occupancy
and location traces via the Factorial Hidden Markov model
(FHMM), illustrated in Figure 2. The FHMM consists of
several independent Markov chains evolving in parallel, representing the location trace of each occupant. Since we only
observe the aggregate occupancy information, the location
traces are considered to be hidden states.
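To make the occupancy-location model concrete, the following minimal numpy sketch simulates independent first-order Markov location traces (Assumptions 1 and 2), aggregates them into per-zone occupancy counts, and recovers a count-based maximum-likelihood estimate of one occupant's transition matrix. It is an illustration rather than the paper's code: the number of zones, the random transition matrices and the seed are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 3, 4, 200          # indoor zones (index 0 is z0 = outside), occupants, time steps

# Hypothetical per-occupant transition matrices A[m][i, j] = P(X_{k+1} = z_j | X_k = z_i);
# each row is a probability distribution over the N + 1 zones.
A = rng.dirichlet(np.ones(N + 1), size=(M, N + 1))

def simulate_traces(A, K, rng):
    """Independent first-order Markov location traces, one row per occupant."""
    M, S = A.shape[0], A.shape[1]
    X = np.zeros((M, K), dtype=int)          # everyone starts outside (zone 0)
    for m in range(M):
        for k in range(1, K):
            X[m, k] = rng.choice(S, p=A[m, X[m, k - 1]])
    return X

def occupancy_from_traces(X, N):
    """True occupancy Y_k^n = sum_m 1[X_k^(m) = z_n] for the indoor zones z_1..z_N."""
    Y = np.zeros((N, X.shape[1]), dtype=int)
    for n in range(1, N + 1):
        Y[n - 1] = (X == n).sum(axis=0)
    return Y

X = simulate_traces(A, K, rng)
Y = occupancy_from_traces(X, N)

# Count-based maximum-likelihood estimate of occupant 0's transition matrix,
# the simplest instance of the MLE/MAP learning step mentioned above.
counts = np.zeros((N + 1, N + 1))
for k in range(1, K):
    counts[X[0, k - 1], X[0, k]] += 1
A_hat = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
```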
Figure 2: The graphical model representation of the FHMM model.

The FHMM model can be specified by the transition probabilities and emission probabilities. The transition probabilities describe the mobility pattern of an occupant, which is denoted as a (N + 1) × (N + 1) transition matrix. We define the transition matrix for occupant om as A(m) = [a_ij^(m)], i, j = 0, 1, · · · , N , where a_ij^(m) = P(X_{k+1}^(m) = zj | X_k^(m) = zi) for k = 0, 1, · · · , K − 1. The transition parameters can be learned from the occupancy data based on maximum likelihood estimation. If the prior knowledge about the past location traces is also available, it can be encoded as the prior distribution of transition parameters from a Bayesian point of view, and then the transition parameters can be learned via maximum a posteriori (MAP) estimation. We refer the readers to [34] for the details of parameter learning.
The emission probabilities characterize the conditional distribution of distorted occupancy given the location of each occupant, defined by
P(V_k^{1:N} | X_k^{(1:M)}) = ∏_{n=1}^{N} P(V_k^n | X_k^{(1:M)}) = ∏_{n=1}^{N} P(V_k^n | Y_k^n)    (2)
The above equalities result from Assumption 3, which, in other words, indicates that the distorted occupancy depends on individual location traces only via the true occupancy.
3.2 HVAC system model
Suppose the thermal comfort of the building space of interest is regulated by the HVAC system shown in Figure 3, which provides a system-wide Air Handling Unit (AHU) and Variable Air Volume (VAV) boxes distributed at the zones. In this type of HVAC system, the outside air is conditioned at the AHU to a setpoint temperature Ta by the cooling coil inside. The conditioned air, which is usually cold, is then supplied to all zones via the VAV box at each zone. The VAV box controls the supply air flow rate to the thermal zone, and heats up the air using the reheat coils at the box, if required. The control inputs are temperature and flow rate of the air supplied to the zone by its VAV box. The AHU outlet air temperature setpoint Ta is assumed to be constant in this paper. The HVAC system models described in the subsequent paragraphs will follow [22, 5, 15] closely1.
Figure 3: A schematic of a typical multi-zone commercial building with a VAV-based HVAC system.
State model. With reference to the notations in Table 1, the continuous time dynamics for the temperature T^n of zone zn can be expressed as
C^n dT^n/dt = R^n · T + Q^n + ṁ_s^n c_p (T_s^n − T^n)    (3)
where the superscript n indicates that the associated quantities are attached to zone zn. T := [T^1, · · · , T^N] is a vector of temperature of all N zones. R^n indicates the heat transfer among different zones and outside. Q^n is the thermal load, which can be obtained by applying a thermal coefficient co to the number of occupants V^n, i.e., Q^n = co V^n. The control inputs U^n := [ṁ_s^n, T_s^n] are the supply air mass flow rate and temperature. Assuming ṁ_s^n, T_s^n and Q^n are zero-order held at sample rate ∆t, we can discretize (3) using the trapezoidal method and obtain a discrete-time model, which can be expressed as
C^n (T_{k+1}^n − T_k^n)/∆t = R^n · T_k + co V_k^n + ṁ_{s,k}^n c_p (T_{s,k}^n − (T_{k+1}^n + T_k^n)/2)    (4)
where k is the discrete time index and T_k^n = T_t^n |_{t=k∆t}; Q_k^n, ṁ_{s,k}^n and T_{s,k}^n are similarly defined.
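As a quick check of the discretized model, the sketch below solves eq. (4) explicitly for T_{k+1} and steps a single zone forward under a constant control input. It uses the Table 1 values and, as a simplification made only for this sketch, treats R^n as a scalar heat-transfer coefficient rather than a vector over neighbouring zones.

```python
# Table 1 values (per zone): dt [s], cp [kJ/(kg K)], C [kJ/K], co [kW], R [kW/K].
dt, cp, C, co, R = 60.0, 1.0, 1000.0, 0.1, 0.0

def zone_temp_step(T, V, m_s, T_s):
    """One trapezoidal step of eq. (4), solved for T_{k+1}."""
    a = C / dt            # thermal capacity per time step
    b = m_s * cp / 2.0    # half of the supply-air heat-exchange term
    return ((a - b + R) * T + co * V + m_s * cp * T_s) / (a + b)

# Example: one zone over two hours with 2 occupants and a fixed VAV setting.
T = 26.0
for _ in range(120):
    T = zone_temp_step(T, V=2, m_s=0.5, T_s=14.0)
print(round(T, 2))   # driven towards the supply-air temperature plus internal gains
```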
Cost function. The control objective is to condition the room while minimizing the energy cost. The power consumption at time k consists of reheating power P_{h,k}^n = (c_p /η_h ) ṁ_{s,k}^n (T_{s,k}^n − Ta ), cooling power P_{c,k}^n = (c_p /η_c ) ṁ_{s,k}^n (To − Ta ) and fan power P_{f,k}^n = β ṁ_{s,k}^n , where η_h and η_c capture the efficiencies for heating and cooling side, respectively. β stands for a system dependent constant. We introduce several parameters to reflect utility pricing, re for electricity and rh for heating fuel. These parameters may vary over time. Therefore, the total utility cost of zone zn from time k = 1, · · · , K is J^n = Σ_{k=1}^{K} (re,k P_{f,k}^n + rh,k P_{h,k}^n + re,k P_{c,k}^n ) ∆t.
1. Controlling the flow rate is actually more preferable in building codes in consideration of energy efficiency. Herein, we consider both reheat temperature and flow rate are controllable, while the HVAC model with flow rate as the only control input is a simple application of our model.
Param.   Meaning                          Value & Units
∆t       Discretization step              60 s
cp       Thermal capacity of air          1 kJ/(kg · K)
Cn       Thermal capacity of the env.     1000 kJ/K
co       Thermal load per person          0.1 kW
R        Heat transfer vector             0 kW/K
ηh       Heating efficiency               0.9
ηc       Cooling efficiency               4
β        System parameter                 0.5 kW · s/kg
re       Electricity price                1.5 · 10−4 $/kJ
rh       Heating fuel price               5 · 10−6 $/kJ
T̄        Upper bound of comfort zone      26◦ C
T̲        Lower bound of comfort zone      24◦ C
Ta       AHU outlet air temperature       12.8◦ C
m̲s       Minimum air flow rate            0.0084 kg/s
m̄s       Maximum air flow rate            1.5 kg/s
Th       Heating coil capacity            40◦ C
Table 1: Parameters used in the HVAC controller.
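The power terms and the utility cost introduced in the cost-function paragraph can be evaluated directly from the Table 1 parameters. In the sketch below, the outside-air temperature To and the flat utility prices are assumptions made only for illustration.

```python
# Table 1 parameters; T_out (outside-air temperature at the AHU) is assumed here.
cp, eta_h, eta_c, beta = 1.0, 0.9, 4.0, 0.5
T_a, T_out = 12.8, 30.0
r_e, r_h, dt = 1.5e-4, 5e-6, 60.0

def step_powers(m_s, T_s):
    """Reheating, cooling and fan power (kW) of one zone at one time step."""
    P_h = cp / eta_h * m_s * (T_s - T_a)
    P_c = cp / eta_c * m_s * (T_out - T_a)
    P_f = beta * m_s
    return P_h, P_c, P_f

def utility_cost(m_traj, Ts_traj):
    """Total utility cost J^n of one zone over a control-input trajectory."""
    cost = 0.0
    for m_s, T_s in zip(m_traj, Ts_traj):
        P_h, P_c, P_f = step_powers(m_s, T_s)
        cost += (r_e * P_f + r_h * P_h + r_e * P_c) * dt
    return cost

print(utility_cost([0.5] * 120, [14.0] * 120))   # cost of two hours at a fixed input, in $
```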
4.
PRIVACY-ENHANCED CONTROL
With the HVAC model established, we can now develop
the mathematical framework to discuss a privacy-enhanced
architecture. We will first introduce MI as the metric we use
throughout the paper to quantify privacy, and then present a
method to optimally design the distortion mechanism which
minimizes the privacy loss within a pre-specified constraint
on control performance.
4.1
Privacy metric
Definition 1. [6] For random variables X and V , the mutual information is given by:
I(X; V ) = H(X) − H(X|V )
(5)
where H(X) and H(X|V ) represent entropy and conditional entropy, respectively. Let PX (x) = P(X = x); H(X) and H(X|V ) are defined as
H(X) = − Σ_x PX (x) log(PX (x))    (6)
H(X|V ) = − Σ_v PV (v) Σ_x PX|V (x|v) log PX|V (x|v)    (7)
Constraints. The system states and control inputs are
subject to the following constraints:
C1: T̲ ≤ T_k^n ≤ T̄ , comfort range;
C2: m̲s ≤ ṁ_{s,k}^n ≤ m̄s , minimum ventilation requirement and maximum VAV box capacity;
C3: T_{s,k}^n ≥ Ta , heating coils can only increase temperature;
C4: T_{s,k}^n ≤ Th , heating coil capacity.
These constraints hold at all times k and all zones {zn }_{n=1}^{N} .
MPC controller. Knitting together the models described above, we present an MPC-based control strategy for the HVAC system to efficiently accommodate occupancy variations. In this control algorithm, we assume the predicted occupancy during the optimization horizon to be the same as the instantaneous occupancy observed at the beginning of the control horizon. It was shown in [15] that the control algorithm with this assumption can achieve comparable performance with an MPC that constructs an explicit occupancy model to predict occupancy for future time steps.
Let U_{1:K}^{1:N} be the shorthand for {U_k^n | k = 1, · · · , K, n = 1, · · · , N }. The optimal control inputs for the next K time steps are obtained by solving min over U_{1:K}^{1:N} of Σ_{n=1}^{N} J^n , subject to the inequality constraints C1-C4, the equality constraint (4), and T_1^n = T_init^n , ∀n = 1, · · · , N , where T_init^n is the initial temperature of zone zn at each MPC iteration. We can see that the optimal control input is a function of the distorted occupancy that the controller sees and the initial temperature. We express this relationship explicitly by denoting the optimal control action at zone zn as U_MPC^n (V^n , T_init^n ). In addition, the energy cost incurred by applying the optimal control action is denoted by J_MPC^n (U_MPC^n (V^n , T_init^n ), Y^n ), where the second argument stresses that the actual control cost depends on the real occupancy.
Remark. Entropy measures uncertainty about X, and conditional entropy can be interpreted as the uncertainty about
X after observing V . By the definition above, MI is a measure of the reduction in uncertainty about X given knowledge of V . We can see that it is a natural measure of privacy
since it characterizes how much information one variable
tells about another. It is also worth noting that inference
technologies evolve and MI as a privacy metric does not depend on any particular adversarial inference algorithm [29]
as it models the statistical relationship between two variables.
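As a concrete reading of Definition 1, the helper below computes I(X; V) in bits from a joint probability table; it is a generic sketch, not code from the paper, but it is exactly the kind of quantity used below as the privacy-loss metric.

```python
import numpy as np

def mutual_information(p_joint):
    """I(X; V) in bits for a joint pmf p_joint[x, v] (rows: private variable X,
    columns: observed variable V)."""
    p_joint = np.asarray(p_joint, dtype=float)
    p_joint = p_joint / p_joint.sum()
    p_x = p_joint.sum(axis=1, keepdims=True)
    p_v = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log2(p_joint[mask] / (p_x @ p_v)[mask])).sum())

print(mutual_information(np.outer([0.5, 0.5], [0.25, 0.75])))  # independent: ~0 bits
print(mutual_information(np.eye(2) / 2))                       # identity channel: 1 bit
```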
In this paper, we will be using the MI between location
(1:M )
traces and occupancy observations, i.e., I(Xk
; Vk1:N ), as
a metric of privacy loss. This metric reflects the reduction
(1:M )
in uncertainty about location traces Xk
due to observa1:N
tions of Vk . As a proof of concept, we will verify that this
metric serves as an accurate proxy for an adversary’s ability
to infer individual location traces in the experiments. We
further introduce some assumptions which allow us to simplify the expression of the privacy loss and obtain a form of
MI that has direct relationship with the distortion mechanism P (Vkn |Ykn ) we wish to design.
Based on results in ergodic theory [21], we know that
the probability distribution of individual location traces will
converge to a unique stationary distribution under very mild
assumptions2 . For more details on stationary distributions,
we refer the readesr to [21]. This observation justifies the
following:
(m)
Assumption 4. The Markov chains Xk
have a unique
stationary distribution for all occupants om and are distributed according to those stationary distributions for all
time steps k.
Combining this assumption and the occupancy-location
model we presented in the preceding section, we present a
2
Since there are only finitely many zones, a sufficient condition is the existence of a path from zi to zj with positive
probability for any two zones zi and zj .
proposition that allows us to greatly simplify the form of the
privacy loss:
Proposition 1. By Assumption 3, we have that:
(1:M )
I(Xk
; Vk1:N ) = I(Yk1:N ; Vk1:N )
(8)
By Assumption 4, we have that I(Yk1:N ; Vk1:N ) is a constant for all k, so we will drop the subscript: I(Y 1:N ; V 1:N ).
Finally, by the various conditional independences introduced in Assumption 3:
I(Y 1:N ; V 1:N ) =
N
X
I(Y n ; V n )
(9)
n=1
Remark. The result that I(Yk1:N ; Vk1:N ) is a constant value
for all k allows us to design a single distortion mechanism
P (V n |Y n ) for all time steps (note that we drop the subscript k to indicate the time-homogeneity of the distortion
mechanism). By Proposition 1, minimization of privacy loss
(1:M )
I(Xk
; Vk1:N ) can be conducted by minimizing a simpler
P
n
n
expression N
n=1 I(Y ; V ).
4.2
Optimal distortion design
We wish to find a distortion mechanism P (Y n |V n ) that
can produce some perturbed occupancy data with minimum
information leakage, while the performance of the controller
using the perturbed occupancy data is on a par with that
using true occupancy. To be specific, we will bound the
difference of energy costs incurred by the controllers seeing
distorted and real occupancy data.
Let Tinit1 and Tinit2 be initial temperature of the controller using distorted and real occupancy, respectively. Ren
n
n
n
n
n
n
n
call that UM
P C (V , Tinit ) and JM P C (UM P C (V , Tinit ), Y )
stand for the optimal control actions and the associated cost
based on the distorted occupancy; correspondingly, if the
controller sees the real occupancy data, the optimal control
n
n
n
action and the associated cost will be UM
P C (Y , Tinit ) and
n
n
n
n
n
(Y
,
T
),
Y
),
respectively.
We
denote
the
(U
JM
init
MP C
PC
resulting temperature after applying optimal control actions
n
n
n
n
n
as TM
P C (UM P C (V , Tinit ), Y ), where the second argument
emphasizes that the temperature evoluation depends on the
true occupancy. We introduce the following constraints:
∀|Tinit1 − Tinit2 | ≤ ∆0T , y = 0, · · · , M , n = 1, · · · , N ,
C5: Cost difference constraint
n
n
n
EP(V n |Y n =y) JM
P C UM P C (Tinit1 , V ), y −
n
n
JM
U
(T
,
y),
y
≤∆
(10)
init2
PC
MP C
C6: Resulting temperature constraint
n
n
n
EP(V n |Y n =y) TM
P C UM P C (Tinit1 , V ), y −
n
n
TM
≤ ∆T
P C UM P C (Tinit2 , y), y
(11)
C5 states that the cost difference between using the distorted occupancy measurements V n and using the ground
truth occupancy measurements Y n is bounded by ∆ in expectation, for any possible value of Y n . The cost difference
can be regarded as the control performance loss due to the
usage of distorted data, and ∆ stands for the tolerance on
the control performance loss. C5 alone is a one-step performance guarantee, that is, it only bounds the cost difference
associated with a single MPC iteration. In practice, MPC is
repeatedly solved from the new initial temperature, yielding
new control actions and temperature trajectories. In order
to offer a guarantee for future cost difference, we introduce
another constraint C6 on the resulting temperature difference of one MPC iteration. The idea is that the resulting
temperature will become the new initial temperature of the
next MPC iteration. If the resulting temperature difference
between using distorted occupancy data and using true occupancy data is bounded within a small interval ∆T , in the
next MPC iteration C5 will provide a bound on cost difference for new initial temperatures that do not differ too
much, since the cost difference constraint C5 is imposed to
hold for all |Tinit1 − Tinit2 | ≤ ∆0T . Typically, ∆0T is set to be
similar to ∆T , but a small value of ∆0T is preferred in order
to assure the feasibility of the optimization problem (since
the number of constraints increases with ∆0T ).
Now, we are ready to present the main optimization for
privacy-enhanced HVAC controller by combining the privacy
metric and performance constraint just presented. Suppose
the assumptions of Proposition 1 hold. Given the control
performance loss tolerance ∆, the optimal distortion mechanism is given by solving:
min
n
n
P(V |Y )
n=1,··· ,N
N
X
I(Y n ; V n )
(12)
n=1
subject to the constraint C5-C6. ∆ serves as a knob to
adjust the balance between privacy and the controller performance loss. Increasing ∆ leads to larger feasible set for
the optimization problem, and thus a smaller value of MI (or
privacy loss) is expected. Using the methodology presented
in Section 3, we are able to calculate the terms inside the
expectation in (10) and (11) for all |Tinit1 − Tinit2 | ≤ ∆0T
and y = 0, · · · , M . Treating these as constants, calculating
the optimal privacy-aware sensing mechanism is a convex
optimization program, and can be efficiently solved. Additionally, since the constraints are enforced for each zone,
the optimization (12) can actually be decomposed to N
sub-problems and thus we can solve the optimal distortion
scheme separately for each zone.
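Because the problem decomposes per zone, each sub-problem can be written down almost verbatim in a convex-optimization modelling language. The sketch below is an illustration under stated assumptions rather than the paper's implementation (which is in MATLAB): it uses cvxpy with an exponential-cone-capable solver, a synthetic occupancy marginal, a synthetic precomputed cost-difference matrix and tolerance ∆, and only a C5-style constraint (a C6-style constraint on the resulting temperature is analogous).

```python
import cvxpy as cp
import numpy as np

M = 4                                    # maximum occupancy, so Y^n, V^n take values 0..M
p_y = np.full(M + 1, 1.0 / (M + 1))      # assumed marginal of the true occupancy Y^n

# cost_diff[y, v]: expected extra MPC cost when the controller sees v while the truth
# is y, precomputed offline by simulating the controller; synthetic numbers here.
cost_diff = 0.05 * np.abs(np.subtract.outer(np.arange(M + 1), np.arange(M + 1)))
delta = 0.08                             # tolerance on the control performance loss

Q = cp.Variable((M + 1, M + 1), nonneg=True)   # Q[y, v] = P(V^n = v | Y^n = y)
p_v = p_y @ Q                                   # induced marginal of the reported occupancy

# I(Y; V) = sum_y sum_v p(y) Q[y, v] log(Q[y, v] / p(v)), written with rel_entr so
# that the objective is recognised as convex (DCP) in Q.
mi = sum(cp.sum(cp.rel_entr(p_y[y] * Q[y, :], p_y[y] * p_v)) for y in range(M + 1))

constraints = [cp.sum(Q, axis=1) == 1]
for y in range(M + 1):                          # C5-style expected cost-difference bound
    constraints.append(Q[y, :] @ cost_diff[y, :] <= delta)

prob = cp.Problem(cp.Minimize(mi), constraints)
prob.solve()
print(prob.value, np.round(Q.value, 3))
```

In use, the cost-difference entries would be obtained by running the MPC controller for every pair (y, v) over the admissible initial temperatures, and the optimal Q would then be applied online by randomizing each occupancy reading according to its row.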
Remark on noisy occupancy data. In the preceding privacy-enhanced framework, we consider the occupancy
can be accurately detected. In practice, the occupancy data
may be noisy itself, and thereby the distortion mechanism
will be designed based on noisy occupancy Wkn instead of
true occupancy Ykn . In effect, the distortion designed using
noisy occupancy provides an upper bound on the privacy
loss. That is, in practice we could use noisy occupancy to
design the distortion mechanism and the realized privacy
loss can only be lower than the minimum privacy loss obtained from the optimization. Note that we have the Markov
relationship: Ykn → Wkn → Vkn when the distortion is applied to noisy data. Then the proof follows from the data
processing inequality [6].
5.
5.1
EVALUATION
Experiment Setup
Occupancy dataset. The occupancy data used in this
paper is from the Augsburg Indoor Location Tracking Bench-
mark [28], which includes location traces for 4 users in a office building with 15 zones. The location data in the benchmark dataset was recorded every second over a period of 4 to
9 weeks. Since the dataset contains some missing observations due to technical issues or the vacation interruption, we
finally use the dataset from November 5th to 24th in our experiment, during which the location traces of all the 4 users
are complete, and subsample the dataset with 1-minute resolution. The ground truth occupancy data was synthesized
by aggregating the locations trace of each user. Table 2
shows two statistics of the benchmark dataset. Notably, of
all transitions per day, 66.7% to 84.6% either start from or
end at one’s own office, and office location can divulge one’s
identity. This sheds light on why location traces of individual users can be actually inferred from the “anonymized”
occupancy data.
User | avg # of transitions per day | avg % of transitions from/to office per day
1 | 9.3 | 84.6%
2 | 20.2 | 75.4%
3 | 9.9 | 66.7%
4 | 7.6 | 75.5%
Table 2: The average number of transitions each user made in each workday, and the average percentage of transitions from or to one's office.
Adversary inference. We consider the adversary to be
an insider with authorized building automation system access. One can think of it as the worst case of privacy breach,
because insiders not only learn the ancillary information that is publicly available, but are also familiar with building operation
policies. To be specific, the following auxiliary information
is assumed to be available to the adversary: (1) Building
directory and occupant mobility patterns, encoded by the
transition matrix of each occupant3 ; (2) Occupancy distortion mechanism designed by building manager.
The adversary attempts to reconstruct the most probable
location trace given the occupancy data and the auxiliary
information. That is, the attack is to find the MAP estimate of the location traces given the other information. The standard approach to finding the MAP in an HMM is the Viterbi algorithm. However, Viterbi is infeasible in the FHMM case, as the location traces to be solved reside in an exponentially large state space (N M × K). We propose a fast inference method based on Mixed Integer Programming, which allows us to evaluate the adversary's inference attack more efficiently. Interested readers are referred to the code implementation of this paper for the details of the fast inference algorithm.
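For reference, the standard Viterbi recursion mentioned above can be sketched for a single occupant's HMM as follows; the paper's actual attack uses the MIP-based inference over the full FHMM, so this is only an illustrative baseline with hypothetical inputs.

```python
# Minimal Viterbi MAP decoder for a single occupant's HMM (illustrative only).
# trans[i, j] = P(next location j | current i); emit[i, o] = P(observation o | location i);
# prior[i] = P(initial location i). All inputs are hypothetical placeholders.
import numpy as np

def viterbi(obs, prior, trans, emit):
    n_states, T = trans.shape[0], len(obs)
    logp = np.log(prior + 1e-12) + np.log(emit[:, obs[0]] + 1e-12)
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans + 1e-12)   # scores[i, j]
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(emit[:, obs[t]] + 1e-12)
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]                                     # MAP location trace
```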
Controller parameters. Without loss of generality, we
consider the zones have the same thermal properties. The
comfort range of temperature in the zones is defined to be
within 24 − 26◦ C as in [26]. The minimum flow rate is set to
be 0.084kg/s to fulfill the minimum ventilation requirement
for 25m2 -sized zone as per ASHRAE ventilation standard
62.1-2013 [1]. The optimization horizon of the MPC is 120
min, and the control commands are solved for and updated
every 15 min [15]. Other design parameters are shown in
Table 1, which basically follows the choices in [22].
³ In the experiment, we use 4 days' occupancy data and 2 days' location traces to learn these parameters, and the rest for evaluating our framework.
Platform. The algorithms are implemented in MATLAB; the interior-point algorithm is used to solve the bilinear optimization problem in the MPC. To encourage research on privacy-preserving controllers, the code involved in this paper will be open-sourced at http://people.eecs.berkeley.edu/~ruoxijia/code.
5.2 Results
5.2.1 MI as proxy for privacy
We solve the MI optimization for different tolerance levels of control performance deterioration due to the usage of
the distorted data, i.e., ∆, and obtain a set of optimal distortion designs and corresponding optimal values of MI. We
then randomly perturb the true occupancy data using the
different distortion designs, and infer location traces from
the perturbed occupancy data. Monte Carlo (MC) simulations are carried out to assess results under the random distortion design. The inference accuracy is defined to be the
ratio between the counts of correct location predictions over
the total time steps. Figure 4 demonstrates the monotonically increasing relationship between adversarial location inference accuracy and MI, which justifies the usage of MI as
a measure of privacy loss. When the adversary has perfect
occupancy data, individual location traces can be inferred
with accuracy of 96.81%. On the contrary, when the MI approaches zero, the adversary tends to estimate the location
of each user to be constantly outside of the building, which
is the best estimate the adversary can generate based on
the uninformative occupancy data since people spend most
of their time in a day outside. In this case, the inference
accuracy is 77% but the adversary actually has no knowledge about users’ movement. This serves as a baseline of
the adversarial location inference performance.
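For reference, the MI values reported on the x-axis of Figure 4 can be computed from the occupancy prior and a distortion matrix with a generic helper such as the following (assumed for illustration, not taken from the paper's code).

```python
# Sketch: mutual information I(Y; V) in bits, given an occupancy prior p_y and a
# distortion matrix Q with Q[y, v] = P(V = v | Y = y).
import numpy as np

def mutual_information_bits(p_y, Q):
    joint = p_y[:, None] * Q                 # P(Y = y, V = v)
    q_v = joint.sum(axis=0)                  # P(V = v)
    indep = p_y[:, None] * q_v[None, :]
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / indep[mask])))
```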
Figure 4: The adversary location inference accuracy increases as MI increases. The black line and the band around
it show the mean and standard deviation of inference accuracy across ten MC simulations, respectively. The black
square shows the location inference accuracy if the adversary sees true occupancy data. The black triangle gives the
accuracy when the adversary outputs a constant location
estimate.
5.2.2 Utility-Privacy Trade-off
Figure 5 shows the variation of privacy loss and controller
performance loss with respect to different choices of ∆, which
is the theoretical guarantee on controller performance loss.
Figure 5: The changes of MI and the actual control cost difference between using true and perturbed occupancy data, as the theoretical control cost difference guarantee ∆ changes. The blue dotted line and error bars show the mean and standard deviation of the actual control cost difference across ten MC simulations, respectively.
It is evident that privacy loss and control performance loss
exhibit opposite trends as ∆ changes. The privacy loss, measured by MI, monotonically decreases as ∆ gets larger. This
is the manifestation of the intrinsic utility-privacy trade-off
embedded in the main optimization problem (12). As the
performance constraint ∆ is more relaxed, a smaller value of
MI can be attained and thus privacy can be better preserved.
The actual performance loss, measured by the HVAC control
cost difference (between using distorted and true data) averaged across different MPC iterations and different zones,
generally increases with ∆ and is upper bounded by ∆. This
indicates that the theoretical constraint on controller performance loss in our framework is effective and can actually
provide a guarantee on the actual controller performance.
We can see that the bound is far from tight, since the framework enforces the controller performance constraints for every possible true occupancy value to ensure robustness, while in practice the occupancy distribution is sharply peaked about the mean occupancy.
Figure 6 visualizes the distortion mechanism obtained by
solving the MI under different choices of the tolerance on
the control performance loss ∆. It can be clearly seen that
the mechanism creates a higher level of distortion as ∆ increases. When ∆ is small, the resulting distortion matrix
assigns most of the probability mass to the diagonal, i.e., the occupancy is very likely to be kept unperturbed. As ∆ gets larger, the distortion mechanism tends toward identical rows, in which case the distribution of the distorted occupancy data is invariant to the true occupancy, and the MI between the true and perturbed occupancy, i.e., the
privacy loss, tends to be zero. We also plot the temperature evolution under different distortion levels. Since we
enforce a hard constraint on temperature, we can see that
the zone temperature stays within the comfort zone for all
∆’s. However, larger ∆ would lead to a larger deviation
from the temperature controlled using the true occupancy.
Figure 6: Illustration of the distortion matrix P(V|Y) under different controller performance guarantees ∆. The row index corresponds to the value of Y, while the column index corresponds to V. The zone temperature traces resulting from the controllers using occupancy data randomly distorted by the different distortion matrices are also shown.
5.2.3 Comparison with Other Methods
We compare the performance of the HVAC controller using our optimally perturbed data against controllers using unperturbed occupancy data, a fixed occupancy schedule, and data randomly perturbed by other distortion methods. In Figure 7a we plot the privacy loss and control cost for controllers that use the various forms of occupancy data. Fixed
occupancy schedule (assuming maximum occupancy during
working hours and zero otherwise) exposes zero information
about individual location traces, but cannot adapt to occupancy variations and thus incurs considerable control cost.
The controller based on clean occupancy data is most cost-effective but discloses the maximum private information. One of the random distortion methods to be compared is the uniform distortion scheme, in which the true occupancy is perturbed to some value between zero and the maximum occupancy
with equal probability. We carry out 10 MC simulations to
obtain the control cost incurred under this random perturbation scheme. It can be seen that the uniform distortion
scheme protects the private information with compromised
controller performance.
A natural question is whether current occupancy sensing systems provide intrinsic privacy-preserving features, since occupancy estimation errors always exist. Can we use a cheaper, inaccurate occupancy sensor to acquire privacy? As suggested by the occupancy sensing results
in [19], the estimation noise of a real occupancy sensing system can be modeled by a multinomial distribution which has
most probability mass at zero. Inspired by this, we use the
following multinomial distortion schemes to imitate a real
occupancy sensing system with disparate accuracies acc,
    P(V^n | Y^n = y) =  { acc,            V^n = y
                        { (1 − acc)/2,    V^n = y − 1 or y + 1, if y ≠ 0    (13)
                        { (1 − acc)/2,    V^n = 1 or 2,          if y = 0
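A minimal sketch of sampling from (13) is given below (a hypothetical helper; the boundary case where y equals the maximum occupancy, which (13) leaves implicit, is clipped here as an assumption).

```python
# Sketch of the multinomial distortion in (13), imitating a sensor with accuracy acc.
import numpy as np

def multinomial_distort(y, acc, max_occ, rng):
    if y == 0:
        vals, probs = [0, 1, 2], [acc, (1 - acc) / 2, (1 - acc) / 2]
    else:
        # Clipping y + 1 at max_occ is an assumption for the boundary case.
        vals = [y, y - 1, min(y + 1, max_occ)]
        probs = [acc, (1 - acc) / 2, (1 - acc) / 2]
    return int(rng.choice(vals, p=probs))
```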
Again, MC simulations are performed to evaluate the control performance under this random perturbation, and the
results are shown in Figure 7a. It can be seen that when
the privacy loss is relatively large (or data is slightly distorted), the control cost of our optimal noising scheme and
the multinomial noising scheme do not differ too much. This
is because at this level of privacy loss the two distortion
schemes behave similarly, as shown in Figure 6, where the
occupancy is kept untainted with high probability. But as the privacy loss decreases, our optimal noising scheme's intelligent noise placement begins to significantly improve control performance. In addition, our optimal distortion Pareto dominates the other schemes.
Figure 7: Comparison of the privacy-utility trade-off of controllers using different forms of occupancy data, evaluated based on (a) real-world occupancy data and (b) synthesized data.
To investigate the scalability of our proposed scheme, we
create synthetic data that simulates location traces for 15 occupants based on the Augsburg dataset. We extract the occupants’ movement profile, i.e., transition parameters, from
the original dataset and randomly assign the profiles to synthesized occupants. An occupant randomly chooses the next
location according to the movement profile. The privacy-utility curve evaluated on this larger synthesized dataset is
illustrated in Figure 7b, which demonstrates that the optimality of our distortion scheme is preserved when the experiment is scaled up. We can see that the privacy loss of the
controller using the unperturbed occupancy gets lower when
incorporating more occupants. Although privacy risks are
lower as we scale up the experiment, since with more people sharing the space it is more difficult to identify each individual, adding distortion to occupancy measurements
can preserve the privacy even further as shown in Figure 7b.
6. CONCLUSIONS
In this paper, we present a tractable framework to model the trade-off between privacy and controller performance in a holistic manner. We take the occupancy-based HVAC controller as an example, where the objective is to utilize occupancy data to enable smart control of the HVAC system while protecting individual location information from being inferred from the occupancy data. We use MI as the measure of privacy loss, and formulate the privacy-utility trade-off as a convex optimization problem that minimizes the privacy loss subject to a pre-specified controller performance constraint. By solving the optimization problem, we obtain a mechanism that injects an optimal amount of noise into the occupancy data to enhance privacy with a control performance guarantee. We verify our framework using real-world occupancy data and simulated building dynamics. It is shown that our theoretical framework is able to provide guidelines for practical privacy-enhanced occupancy-based HVAC system design, and reaches a better balance of privacy and control performance compared with other occupancy-based controllers.
7. REFERENCES
[1] ANSI/ASHRAE Standard 62.1-2013: Ventilation for
Acceptable Indoor Air Quality. American Society of
Heating, Refrigerating and Air-Conditioning
Engineers, 2013.
[2] M. E. Andrés, N. E. Bordenabe, K. Chatzikokolakis,
and C. Palamidessi. Geo-indistinguishability:
Differential privacy for location-based systems. In
Proceedings of the 2013 ACM SIGSAC conference on
Computer & communications security, pages 901–914.
ACM, 2013.
[3] A. Aswani, N. Master, J. Taneja, D. Culler, and
C. Tomlin. Reducing transient and steady state
electricity consumption in hvac using learning-based
model-predictive control. Proceedings of the IEEE,
100(1):240–253, 2012.
[4] B. Balaji, J. Xu, A. Nwokafor, R. Gupta, and
Y. Agarwal. Sentinel: occupancy based hvac actuation
using existing wifi infrastructure within commercial
buildings. In Proceedings of the 11th ACM Conference
on Embedded Networked Sensor Systems, page 17.
ACM, 2013.
[5] A. Beltran and A. E. Cerpa. Optimal hvac building
control with occupancy prediction. In Proceedings of
the 1st ACM Conference on Embedded Systems for
Energy-Efficient Buildings, pages 168–171. ACM,
2014.
[6] T. M. Cover and J. A. Thomas. Elements of
information theory. John Wiley & Sons, 2012.
[7] F. K. Dankar and K. El Emam. Practicing differential
privacy in health care: A review. Transactions on
Data Privacy, 6(1):35–67, 2013.
[8] W. Diffie and M. E. Hellman. Privacy and
authentication: An introduction to cryptography.
Proceedings of the IEEE, 67(3):397–427, 1979.
[9] B. Dong, B. Andrews, K. P. Lam, M. Höynck,
R. Zhang, Y.-S. Chiou, and D. Benitez. An
information technology enabled sustainability test-bed
(itest) for occupancy detection through an
environmental sensing network. Energy and Buildings,
42(7):1038–1046, 2010.
[10] F. du Pin Calmon and N. Fawaz. Privacy against
statistical inference. In 2012 50th Annu. Allerton
Conf. on Commun., Control, and Computing
(Allerton), pages 1401–1408, Oct 2012.
[11] C. Dwork. Differential privacy. In Proc. of the Int.
Colloq. on Automata, Languages and Programming,
pages 1–12. Springer, 2006.
[12] U. EIA. Annual energy review. Energy Information
Administration, US Department of Energy:
Washington, DC www. eia. doe. gov/emeu/aer, 2011.
[13] V. L. Erickson and A. E. Cerpa. Occupancy based
demand response hvac control strategy. In Proceedings
of the 2nd ACM Workshop on Embedded Sensing
Systems for Energy-Efficiency in Building, pages 7–12.
ACM, 2010.
[14] L. Fan, L. Bonomi, L. Xiong, and V. Sunderam.
Monitoring web browsing behavior with differential
privacy. In Proceedings of the 23rd international
conference on World wide web, pages 177–188. ACM,
2014.
[15] S. Goyal, H. A. Ingley, and P. Barooah.
Occupancy-based zone-climate control for
energy-efficient buildings: Complexity vs.
performance. Applied Energy, 106:209–221, 2013.
[16] D. Gyalistras and M. Gwerder. Use of weather and
occupancy forecasts for optimal building climate
control (opticontrol): Two years progress report main
report. Terrestrial Systems Ecology ETH Zurich R&D
HVAC Products, Building Technologies Division,
Siemens Switzerland Ltd, Zug, Switzerland, 2010.
[17] J. Hu and P. Karava. Model predictive control
strategies for buildings with mixed-mode cooling.
Building and Environment, 71:233–244, 2014.
[18] J. Jiao, T. A. Courtade, K. Venkat, and T. Weissman.
Justification of logarithmic loss via the benefit of side
information. IEEE Transactions on Information
Theory, 61(10):5357–5365, Oct 2015.
[19] M. Jin, N. Bekiaris-Liberis, K. Weekly, C. Spanos, and
A. Bayen. Sensing by proxy: Occupancy detection
based on indoor co2 concentration. UBICOMM 2015,
page 14, 2015.
[20] M. Jin, R. Jia, Z. Kang, I. C. Konstantakopoulos, and
C. J. Spanos. Presencesense: Zero-training algorithm
for individual presence detection based on power
monitoring. In Proceedings of the 1st ACM Conference
on Embedded Systems for Energy-Efficient Buildings,
pages 1–10. ACM, 2014.
[21] O. Kallenberg. Foundations of Modern Probability.
Springer, 2002.
[22] A. Kelman and F. Borrelli. Bilinear model predictive
control of a hvac system using sequential quadratic
programming. In Ifac world congress, volume 18,
pages 9869–9874, 2011.
[23] M. A. A. H. Khan, H. Hossain, and N. Roy.
Infrastructure-less occupancy detection and semantic
localization in smart environments. In proceedings of
the 12th EAI International Conference on Mobile and
Ubiquitous Systems, pages 51–60. ICST (Institute for
Computer Sciences, Social-Informatics and
Telecommunications Engineering), 2015.
[24] W. Kleiminger, S. Santini, and F. Mattern. Smart heating control with occupancy prediction: how much can one save? In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct Publication, pages 947–954. ACM, 2014.
[25] M. Lisovich, D. Mulligan, and S. Wicker. Inferring personal information from demand-response systems. IEEE Security & Privacy, 8:11–20, 2010.
[26] S. Nagarathinam, A. Vasan, V. Ramakrishna P, S. R. Iyer, V. Sarangan, and A. Sivasubramaniam. Centralized management of hvac energy in large multi-ahu zones. In Proceedings of the 2nd ACM International Conference on Embedded Systems for Energy-Efficient Built Environments, pages 157–166. ACM, 2015.
[27] F. Oldewurtel, A. Parisio, C. N. Jones, D. Gyalistras, M. Gwerder, V. Stauch, B. Lehmann, and M. Morari. Use of model predictive control and weather forecasts for energy efficient building climate control. Energy and Buildings, 45:15–27, 2012.
[28] J. Petzold. Augsburg indoor location tracking benchmarks. 2004.
[29] S. R. Rajagopalan, L. Sankar, S. Mohajer, and H. V. Poor. Smart meter privacy: A utility-privacy framework. In Smart Grid Communications (SmartGridComm), 2011 IEEE International Conference on, pages 190–195, Oct 2011.
[30] L. J. Ratliff, C. Barreto, R. Dong, H. Ohlsson, A. Cárdenas, and S. S. Sastry. Effects of risk on privacy contracts for demand-side management. arXiv:1409.7926v3, 2015.
[31] R. Shokri, G. Theodorakopoulos, J.-Y. Le Boudec, and J.-P. Hubaux. Quantifying location privacy. In Security and Privacy (SP), 2011 IEEE Symposium on, pages 247–262. IEEE, 2011.
[32] J. Soria-Comas, J. Domingo-Ferrer, D. Sánchez, and S. Martínez. Enhancing data utility in differential privacy via microaggregation-based k-anonymity. The VLDB Journal, 23(5):771–794, 2014.
[33] H. Wang, L. Sun, and E. Bertino. Building access control policy model for privacy preserving and testing policy conflicting problems. Journal of Computer and System Sciences, 80(8):1493–1503, 2014. Special Issue on Theory and Applications in Parallel and Distributed Computing Systems.
[34] X. Wang and P. Tague. Non-invasive user tracking via passive sensing: Privacy risks of time-series occupancy measurement. In Proceedings of the 2014 Workshop on Artificial Intelligent and Security Workshop, pages 113–124. ACM, 2014.
[35] Z. Yang and B. Becerik-Gerber. Cross-space building occupancy modeling by contextual information based learning. In Proceedings of the 2nd ACM International Conference on Embedded Systems for Energy-Efficient Built Environments, pages 177–186. ACM, 2015.
Fundamental Limits of Covert Communication over
MIMO AWGN Channel
Amr Abdelaziz and C. Emre Koksal
Department of Electrical and Computer Engineering
The Ohio State University
Columbus, Ohio 43201
arXiv:1705.02303v4 [] 13 Mar 2018
Abstract
Fundamental limits of covert communication
have been studied for different models of scalar channels. It was shown that,
over n independent channel uses, O(√n) bits can be transmitted reliably over a public channel while achieving an arbitrarily
low probability of detection (LPD) by other stations. This result is well known as the square-root law and even to achieve this
diminishing rate of covert communication, all existing studies utilized some form of secret shared between the transmitter and the
receiver. In this paper, we establish the limits of LPD communication over the MIMO AWGN channel. In particular, using relative
entropy as our LPD metric, we study the maximum codebook size for which the transmitter can guarantee reliability and LPD
conditions are met. We first show that, the optimal codebook generating input distribution under δ-PD constraint is the zero-mean
Gaussian distribution. Then, assuming channel state information (CSI) on only the main channel at the transmitter, we derive the
optimal input covariance matrix, hence, establishing scaling laws of the codebook size. We evaluate the codebook scaling rates
in the limiting regimes for the number of channel uses (asymptotic block length) and the number of antennas (massive MIMO).
We show that, in the asymptotic block-length regime, the square-root law still holds for the MIMO AWGN channel. Meanwhile, in the massive MIMO limit, the codebook size, while it scales linearly with √n, scales exponentially with the number of transmitting antennas.
Further, we derive equivalent results when no shared secret is present. For that scenario, in the massive MIMO limit, higher covert
rate up to the non-LPD constrained capacity still can be achieved, yet, with much slower scaling compared to the scenario with
shared secret. The practical implication of our result is that, MIMO has the potential to provide a substantial increase in the file
sizes that can be covertly communicated subject to a reasonably low delay.
Index Terms
LPD communication, Covert MIMO Communication, MIMO physical layer security, LPD Capacity.
I. INTRODUCTION
Conditions for secure communication under a passive eavesdropping attack fall in two broad categories: 1) low probability
of intercept (LPI). 2) low probability of detection (LPD). Communication with LPI requires the message exchanged by two
legitimate parties to be kept secret from an illegitimate adversary. Meanwhile, LPD constrained communication is more
restrictive as it requires the adversary to be unable to decide whether communication between legitimate parties has taken
place. Fundamental limits of LPD constrained communication over scalar AWGN has been established in [1] where the
square-root law for LPD communication was established. Assuming a shared secret of sufficient length between transmitter
and receiver, the square-root law states that, over n independent channel uses of an AWGN channel, the transmitter can send O(√n) bits reliably to the receiver while keeping an arbitrarily low probability of detection at the adversary. In this paper, we study the
fundamental limits of communication with LPD over MIMO AWGN channels.
Consider the scenario in which a transmitter (Alice) wishes to communicate to a receiver (Bob) while being undetected by a
passive adversary (Willie) when all nodes are equipped with multiple antennas. To that end, Alice wishes to generate a codebook that satisfies both reliability, in terms of a low error probability ε over her channel to Bob, and, at the same time, ensures a certain maximum PD, namely δ, at Willie. Denote the maximum possible size of such a codebook by Kn(δ, ε). In this paper, we are interested in establishing the fundamental limits of Kn(δ, ε) in the asymptotic block-length regime and in the limit of a large number of transmitting antennas. First we show that the maximum codebook size is attained when the codebook is generated according to the zero-mean circularly symmetric complex Gaussian distribution. We establish this result building upon the Principle of Minimum Relative Entropy [2] and Information Projection [3].
Some of our findings can be summarized as follows. For an isotropic Willie channel, we show that Alice can transmit O(N√(n/M)) bits reliably in n independent channel uses, where N and M are the number of active eigenmodes of Bob and
Willie channels, respectively. Further, we evaluate δ-PD rates in the limiting regimes for the number of channel uses (asymptotic
block length) and the number of antennas (massive MIMO). We show that, while the square-root law still holds for the MIMO
AWGN, the number of bits that can be transmitted covertly scales exponentially with the number of transmitting antennas.
This work was submitted in part to IEEE CNS-2017.
This work was in part supported by the National Science Foundation under Grants NSF NeTs 1618566 and 1514260 and Office of Naval Research under
Grant N00014-16-1-2253.
TABLE I
SUMMARY OF RESULTS

Result | Main Channel | Adversary Channel | Shared Secret | Kn(δ, ε) Scales Like
Theorems 2&3 | Deterministic | Bounded spectral norm | Yes | N √(n/M)
Theorem 4 | Deterministic, unit rank | Deterministic, unit rank | Yes | √n / cos²(θ)
Theorems 5&6 | Deterministic | Bounded spectral norm | No | N/M
Theorem 7 | Deterministic, unit rank | Deterministic, unit rank | No | 1 / cos²(θ)
Theorem 8 | Deterministic, unit rank | Unit rank, chosen uniformly at random | Yes | (√n / (K²√Na)) (1 + c/√n)^{(Na−2)/2}
Theorem 9 | Deterministic, unit rank | Unit rank, chosen uniformly at random | No | (1 / (K√Na)) (1 + c/√n)^{(Na−2)/2}
(Here K is a universal constant and c is a constant independent of n and Na.)

More precisely, for a unit rank MIMO channel, we show that Kn(δ, ε) scales as (√n / (K²√Na)) (1 + c/√n)^{(Na−2)/2}, where Na is the number of transmitting antennas, K is a universal constant, and c is a constant independent of n and Na. Further, we derive the scaling of Kn(δ, ε) with no shared secret between Alice and Bob. In particular, we show that achieving a better covert rate is a resource arms race between Alice, Bob, and Willie. Alice can transmit O(N/M) bits reliably in n independent channel uses, i.e., the covert rate is on the order of the ratio between the active eigenmodes of the two channels. The practical implication of our findings is that MIMO has the potential to provide a substantial increase in the file sizes that can be covertly communicated
subject to a reasonably low delay. The results obtained in this paper are summarized in Table I¹.
The contributions of this work can be summarized as follows:
• Using the Principle Minimum Relative Entropy [2] and Information Projection [3], we show that the Kn (δ, ) is achievable
when the codebook is generated according to zero mean complex Gaussian distribution in MIMO AWGN channels.
• With the availability of only the main CSI to Alice, we evaluate the optimal input covariance matrix under the assumption
that Willie channel satisfies a bounded spectral norm constraint [4], [5]. Singular value decomposition (SVD) precoding
is shown to be the optimal signaling strategy and the optimal water-filling strategy is also provided.
• We evaluate the block-length and massive MIMO asymptotics for Kn (δ, ). We show that, while the square-root law
cannot be avoided, Kn (δ, ) scales exponentially with the number of antennas. Thus, MIMO has the potential to provide
a substantial increase in the file sizes that can be covertly communicated subject to a reasonably low delay.
• We evaluate scaling laws of Kn (δ, ) when there is no shared secret between Alice and Bob in both limits of large block
length and massive MIMO.
Related Work. Fundamental limits of covert communication have been studied in literature for different models of scalar
channels. In [6], LPD communication over the binary symmetric channel was considered. It was shown that, square-root law
holds for the binary symmetric channel, yet, without requiring a shared secret between Alice and Bob when Willie channel is
significantly noisier. Further, it was shown that Alice achieves a non-diminishing LPD rate, exploiting Willie’s uncertainty about
his own channel transition probabilities. Recently in [7], LPD communication was studied from a resolvability perspective for
the discrete memoryless channel (DMC). Therein, a trade-off between the secret length and asymmetries between Bob and
Willie channels has been studied. Later in [8], the exact capacity (using relative entropy instead of total variation distance
as LPD measure) of DMC and AWGN have been characterized. For a detailed summary of the recent results for different
channel models on the relationship between secret key length, LPD security metric and achievable LPD rate, readers may
refer to Table II in [6]. LPD communication over MIMO fading channel was first studied in [9]. Under different assumption
of CSI availability, the author derived the average power that satisfies the LPD requirement. However, the authors did not
obtain the square-root law, since the focus was not on the achievable rates of reliable LPD communication. Recently in [10],
LPD communication with multiple antennas at both Alice and Bob is considered when Willie has only a single antenna over
Rayleigh fading channel. An approximation to the LPD constrained rate when Willie employs a radiometer detector and has
uncertainty about his noise variance was presented. However, a full characterization of the capacity of MIMO channel with
LPD constraint was not established.
¹ θ is the angle between the right singular vectors of the main and adversary channels in the unit rank channel model.
Despite not explicitly stated, the assumption of keeping the codebook generated by Alice secret from Willie (or at least a
secret of sufficient length [1], [7]) is common in all aforementioned studies of covert communication. Without this assumption,
LPD condition cannot be met along with arbitrarily low probability of error at Bob. This is because, when Willie is informed
about the codebook, he can decode the message using the same decoding strategy as that of Bob [1]. Only in [6], square-root
law was obtained over binary symmetric channel without this assumption when Willie channel is significantly noisier than that
of Bob, i.e., when there is a positive secrecy rate over the underlying wiretap channel. Despite the availability of the codebook
at Willie, [6] uses the total variation distance as the LPD metric.
In short, the square root law is shown to be a fundamental upper limitation that cannot be overcome unless the attack model
is relaxed to cases such as the lack of CSI or the lack of the knowledge of when the session starts at Willie. Here, we do not
make such assumptions on Willie and solely take advantage of increasing spatial dimension via the use of MIMO.
II. SYSTEM MODEL AND PROBLEM STATEMENT
In the rest of this paper we use boldface uppercase letters for vectors/matrices. Meanwhile, (.)∗ denotes conjugate of complex
number, (.)† denotes conjugate transpose, IN denotes identity matrix of size N , tr(.) denotes matrix trace operator, |A| denotes
the determinant of matrix A, and 1_{m×n} denotes an m × n matrix of all 1's. We say A ⪰ B when the difference A − B is positive semi-definite. The mutual information between two random variables x and y is denoted by I(x; y), while lim denotes
the limit inferior. We use the standard order notation f (n) = O(g(n)) to denote an upper bound on f (n) that is asymptotically
tight, i.e., there exist a constant m and n0 > 0 such that 0 ≤ f (n) ≤ mg(n) for all n > n0 .
A. Communication Model
We consider the MIMO channel scenario in which a transmitter, Alice, with Na ≥ 1 antennas aims to communicate with a
receiver, Bob, having Nb ≥ 1 antennas without being detected by a passive adversary, Willie, equipped with Nw ≥ 1 antennas.
The discrete baseband equivalent channel for the signal y and z, received by Bob and Willie, respectively, can be written as:
y = Hb x + eb ,
z = Hw x + ew ,
(1)
where x ∈ CNa ×1 is the transmitted signal vector constrained by an average power constraint E[tr(xx† )] ≤ P . Also,
Hb ∈ CNb ×Na and Hw ∈ CNw ×Na are the channel coefficient matrices for Alice-Bob and Alice-Willie channels respectively.
Throughout this paper, unless otherwise noted, Hb and Hw are assumed deterministic, also, we assume that Hb is known to all
parties, meanwhile, Hw is known only to Willie. We define N , min{Na , Nb } and M , min{Na , Nw }. Finally, eb ∈ CNb ×1
and ew ∈ CNw ×1 are an independent zero mean circular symmetric complex Gaussian random vectors for both destination
and adversary channels, respectively, where eb ∼ CN(0, σb² I_{Nb}) and ew ∼ CN(0, σw² I_{Nw}).
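As a concrete illustration of the model in (1), a short simulation sketch follows; the dimensions, power, and noise levels below are arbitrary placeholders, not values from the paper.

```python
# Sketch of one use of the MIMO channels in (1) with circularly symmetric
# complex Gaussian noise. All parameters are illustrative assumptions.
import numpy as np

def cscg(rng, shape, var):
    # Circularly symmetric complex Gaussian with per-entry variance `var`.
    return np.sqrt(var / 2) * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

rng = np.random.default_rng(0)
Na, Nb, Nw = 4, 2, 2
Hb, Hw = cscg(rng, (Nb, Na), 1.0), cscg(rng, (Nw, Na), 1.0)
x = cscg(rng, (Na,), 0.1)                  # transmitted vector with small average power
y = Hb @ x + cscg(rng, (Nb,), 1.0)         # Bob's observation
z = Hw @ x + cscg(rng, (Nw,), 1.0)         # Willie's observation
```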
We further assume Hw lies in the set of matrices with bounded spectral norms:
    Sw = { Hw : ‖Hw‖_op ≤ √γw } = { Ww ≜ Hw† Hw : ‖Ww‖_op ≤ γw },    (2)
where kAkop is the operator (spectral) norm of A, i.e., the maximum eigenvalue of A. The set Sw incorporates all possible
Ww that is less than or equal to γw Î (in positive semi-definite sense) with no restriction on its eigenvectors, where Î is diagonal
matrix with the first M diagonal elements equal to 1 while the rest Na − M elements of the diagonal are zeros. Observe that,
kWw kop represents the largest possible power gain of Willie channel. Unless otherwise noted, throughout this paper we will
assume that Hw ∈ Sw .
B. Problem Statement
Our objective is to establish the fundamental limits of reliable transmission over Alice to Bob MIMO channel, constrained
by the low detection probability at Willie. The scalar AWGN channel has been studied in [8]; we use a formulation that
follows closely the one used therein while taking into consideration the vector nature of the MIMO channel. Alice employs a
stochastic encoder with blocklength2 nNa , where n is the number of channel uses, for message set M consists of:
1) An encoder M 7→ CnNa , m 7→ xn where x ∈ CNa .
2) A decoder CnNb 7→ M, yn 7→ m̂ where y ∈ CNb .
Alice chooses a message M from M uniformly at random to transmit to Bob. Let us denote by H0 the null hypothesis under
which Alice is not transmitting and denote by P0 the probability distribution of Willie’s observation under the null hypothesis.
² Note that, when Alice has nNa bits to transmit, two alternative options are available to her. Either she splits the incoming stream into Na streams of n bits each and uses each stream to select one of 2^n messages for each single antenna, or she uses the entire nNa bits to choose from 2^{nNa} messages. The latter alternative provides a gain factor of Na in the error exponent, of course, at the expense of much greater complexity [11], [12]. However, in the restrictive LPD scenario, Alice would choose the latter alternative so as to achieve the best decoding performance at Bob.
Conversely, let H1 be the true hypothesis under which Alice is transmitting her chosen message M and let P1 be the probability
distribution of Willie’s observation under the true hypothesis. Further, define type I error α to be the probability of mistakenly
accepting H1 and type II error β to be the probability of mistakenly accepting H0 . For the optimal hypothesis test generated
by Willie we have [13]
α + β = 1 − V(P0 , P1 ),
(3)
where V(P0, P1) is the total variation distance between P0 and P1, given by
    V(P0, P1) = (1/2) ‖ p0(x) − p1(x) ‖_1,    (4)
where p0(x) and p1(x) are, respectively, the densities of P0 and P1 and ‖·‖_1 is the L1 norm. The total variation distance between P0 and P1 is related to the Kullback–Leibler divergence (relative entropy) by the well-known Pinsker's inequality [14]:
    V(P0, P1) ≤ √( (1/2) D(P0 ‖ P1) ),    (5)
where
    D(P0 ‖ P1) = E_{P0}[ log P0 − log P1 ].    (6)
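Numerically, (4)–(6) and Pinsker's inequality are easy to sanity-check; a minimal sketch with two arbitrary discrete distributions (chosen only for illustration):

```python
# Sketch: total variation (4), relative entropy (6), and Pinsker's inequality (5)
# for two arbitrary discrete distributions p0, p1.
import numpy as np

p0 = np.array([0.7, 0.2, 0.1])
p1 = np.array([0.5, 0.3, 0.2])
tv = 0.5 * np.abs(p0 - p1).sum()            # V(P0, P1)
kl = np.sum(p0 * np.log(p0 / p1))           # D(P0 || P1) in nats
assert tv <= np.sqrt(0.5 * kl)              # Pinsker's inequality (5)
```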
Note that, since the channel is memoryless, across n independent channel uses, we have
    D(P0^n ‖ P1^n) = n D(P0 ‖ P1)    (7)
by the chain rule of relative entropy. Accordingly, for Alice to guarantee a low detection probability at Willie’s optimal detector,
she needs to bound V(P0^n, P1^n) from above by some δ chosen according to the desired probability of detection. Consequently, she ensures that the sum of error probabilities at Willie is bounded as α + β ≥ 1 − δ. Using (5), Alice can achieve her goal by designing her signaling strategy (based on the amount of information available) subject to
    D(P0 ‖ P1) ≤ 2δ²/n.    (8)
Throughout this paper, we adopt (8) as our LPD metric. Thus, the input distribution used by Alice to generate the codebook
has to satisfy (8). As in [8], our goal is to find the maximum value of log |M| for which a random codebook of length nNa
exists and satisfies (8) and whose average probability of error is at most . We denote this maximum by Kn (δ, ) and we
define
    L ≜ lim_{ε↓0} lim_{n→∞} Kn(δ, ε) / √(2nδ²).    (9)
Note that L has unit nats. We are interested in the characterization of L under different conditions of Bob and Willie
channels in order to derive scaling laws for the number of covert bits over MIMO AWGN channel. We first give the following
Proposition which provides a general expression for L by extending Theorem 1 in [8] to the MIMO AWGN channel with
infinite input and output alphabet.
Proposition 1. For the considered MIMO AWGN channel,
    L = max_{{fn(x)}: tr(En[x x†]) ≤ P} lim_{n→∞} √(n/(2δ²)) I(fn(x), fn(y)),  subject to: D(P0^n ‖ P1^n) − 2δ² ≤ 0,    (10)
where {fn (x)} is a sequence of input distributions over CNa and En [·] denotes the expectation with respect to fn (x).
Before we give the proof of Proposition 1, we would like to highlight why the second moment constraint on the input signal
is meaningful in our formulation. It was explicitly stated in [8] that, an average power constraint on the input signal is to be
superseded by the LPD constraint. The reason is that, the LPD constraint requires the average power to tend to zero as the
block length tends to ∞. However, unlike the single antenna setting, over a MIMO channel, there exist scenarios in which the
LPD constraint can be met without requiring the Alice to reduce her power. In the sequel, we will discuss such scenarios in
which the power constraint remains active.
Proof. First, using the encoder/decoder structure described above, we see that the converse part of Theorem 1 in [8] can be
directly applied here. Meanwhile, the achievability part there was derived based on the finiteness of input and output alphabet.
It was not generalized to the continuous alphabet input over scalar AWGN channel. Rather, the achievability over AWGN
channel was shown for Gaussian distributed input in Theorem 5. Here, we argue that, showing achievability for Gaussian
distributed input is sufficient and, hence, we give an achievability proof in Appendix B that follows closely the proof of Theorem 5 in [8]. Unlike the non-LPD-constrained capacity, which attains its maximum when the underlying input distribution is zero
mean complex Gaussian, it is not straightforward to infer what input distribution is optimal. However, using the Principle
Minimum Relative Entropy [2] and Information Projection [3], we verify that, the distribution P1 that minimizes D(P0 k P1 )
is the zero mean circularly symmetric complex Gaussian distribution.
Further, we provide a more convenient expression for L in the following Theorem which provides an extension of Corollary
1 in [8] to the MIMO AWGN channel.
Theorem 1. For the considered MIMO AWGN channel,
    L = lim_{n→∞} √(n/(2δ²)) max_{fn(x): tr(En[x x†]) ≤ P} I(fn(x), fn(y)),  subject to: D(P0^n ‖ P1^n) − 2δ² ≤ 0,    (11)
where fn (x) is the input distribution over CNa and En [·] denotes the expectation with respect to fn (x).
Proof. The proof is given in Appendix C.
Now, since we know that the zero-mean circularly symmetric complex Gaussian input distribution is optimal, the only remaining task is to characterize the covariance matrix, Q = E[x x†], of the optimal input distribution. Accordingly, (11) can be rewritten as:
    L = lim_{n→∞} √(n/(2δ²)) max_{Q ⪰ 0, tr(Q) ≤ P} log | I_{Na} + Wb Q / σb² |,  subject to: D(P0^n ‖ P1^n) − 2δ² ≤ 0,    (12)
where Wb ≜ Hb† Hb. Further, we can evaluate the relative entropy at Willie as follows (see Appendix A for a detailed derivation):
    D(P0 ‖ P1) = log | (1/σw²) Hw Q Hw† + I_{Nw} | + tr{ ( (1/σw²) Hw Q Hw† + I_{Nw} )^{-1} } − Nw.    (13)
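The relative entropy in (13) is straightforward to evaluate numerically for a candidate input covariance; a sketch (with illustrative inputs) that also checks the per-use LPD budget in (8):

```python
# Sketch: evaluate D(P0 || P1) via (13) for a candidate covariance Q and compare it
# against the LPD budget 2*delta^2/n from (8). Inputs are illustrative assumptions.
import numpy as np

def relative_entropy_at_willie(Hw, Q, sigma_w2):
    Nw = Hw.shape[0]
    A = Hw @ Q @ Hw.conj().T / sigma_w2 + np.eye(Nw)
    _, logdet = np.linalg.slogdet(A)                       # log |A|
    return logdet + np.trace(np.linalg.inv(A)).real - Nw   # (13)

# Example usage of the check in (8):
# D = relative_entropy_at_willie(Hw, Q, sigma_w2)
# assert D <= 2 * delta**2 / n
```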
In this paper, we are mainly concerned with characterizing L when Alice knows only Hb . To that end, let us define:
    Cpd(δ) ≜ max_{Q ⪰ 0, tr(Q) ≤ P} log | I_{Na} + Wb Q / σb² |,  subject to: D(P0^n ‖ P1^n) − 2δ² ≤ 0.    (14)
Clearly, L = lim_{n→∞} √(n/(2δ²)) Cpd(δ). In what follows, we characterize Cpd(δ) and, hence, L under different models of Hb and Hw.
Remark 1. Observe that, since Bob and Willie channels are different, Willie does not observe the same channel output as
Bob. Hence, there exist situations in which D(Pn0 k Pn1 ) does not increase without bound as n tends to infinity. In the next
Section, we provide some examples.
III. MOTIVATING EXAMPLES
Consider the scenario in which both of Willie’s and Bob’s channels are of unit rank. Accordingly, we can write H◦ = λ◦ v◦ u†◦ ,
where v◦ ∈ CN◦ and u◦ ∈ CNa are the left and right singular vectors of H◦ where the subscript ◦ ∈ {e, b} used to denote
Bob and Willie channels respectively.
Under the above settings, consider the scenario in which Alice has a prior (non-causal) knowledge about both channels.
Alice task is to find Q∗ that solve (14). Note that, since both channels are of unit rank, so is Q∗ and it can be written as
Q∗ = Pth q∗ q∗† where Pth ≤ P is the power threshold above which she will be detected by Willie. Now suppose that Alice
choose q∗ to be the solution of the following optimization problem:
    max_{q: ‖q‖ = 1} ⟨q†, ub⟩  subject to ⟨q†, uw⟩ = 0,    (15)
whose solution is given by
    q∗ = (I − uw uw†) ub / ‖ (I − uw uw†) ub ‖.    (16)
The beamforming direction q∗ is known as null steering (NS) beamforming [15], that is, transmission in the direction orthogonal
to Willie’s direction while maintaining the maximum possible gain in the direction of Bob. Recall that Willies channel is of unit
rank and is in the direction uw , thus, the choice of Q = Q∗ implies that Hw Q∗ H†w = 0. Accordingly, Σ1 = Σ0 , i.e, Willie
is kept completely ignorant by observing absolutely no power from Alice’s transmission. More precisely, D (Pn0 k Pn1 ) = 0.
However, this doesn’t mean that Alice can communicate at the full rate to Bob as if Willie was not observing, rather, the
LPD constraint forced Alice to sacrifice some of its power to keep Willie oblivious of their transmission. More precisely, the
effective power seen by Bob scales down with cosine the angle between ub and uw . In the special case when < ub , uw >= 0,
Alice communicate at the full rate to Bob without being detected by Willie. In addition, the codebook between Alice and Bob
need not to be kept secret from Willie. That is because the power observed at Willie from Alice transmission is, in fact, zero.
Fig. 1. Radiation pattern as a function of the number of transmitting antennas. When the number of antennas gets large, ⟨ub, uw⟩ → 0.
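As a quick numerical illustration of the null-steering direction in (16), the following sketch uses random directions as stand-ins for ub and uw (the values are placeholders, not from the paper):

```python
# Sketch of the null-steering beamformer in (16): project Bob's direction ub onto
# the orthogonal complement of Willie's direction uw, then renormalize.
import numpy as np

rng = np.random.default_rng(1)
Na = 8
ub = rng.standard_normal(Na) + 1j * rng.standard_normal(Na); ub /= np.linalg.norm(ub)
uw = rng.standard_normal(Na) + 1j * rng.standard_normal(Na); uw /= np.linalg.norm(uw)

proj = np.eye(Na) - np.outer(uw, uw.conj())      # I - uw uw^H
q_star = proj @ ub
q_star /= np.linalg.norm(q_star)

assert abs(np.vdot(uw, q_star)) < 1e-10          # Willie observes no power: <uw, q*> = 0
gain_to_bob = abs(np.vdot(ub, q_star))           # equals sqrt(1 - |<ub, uw>|^2)
```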
IV. Cpd(δ) WITH SECRET CODEBOOK
With uncertainty about Willie’s channel, Hw ∈ Sw , it is intuitive to think that Alice should design her signaling strategy
against the worst (stronger) possible Willie channel. We first derive the worst case Willie channel, then, we establish the saddle
point property of the considered class of channels in the form of min max = max min, where the maximum is taken over all
admissible input covariance matrices and the minimum is over all Hw ∈ Sw . Thus, we show that Cpd (δ) equals to the Cpd (δ)
evaluated at the worst possible Hw .
A. Worst Willie Channel and Saddle Point Property
To characterize Cpd(δ) when Hw ∈ Sw, we first need to establish the worst-case Cpd(δ), denoted by Cpd^w(δ). Suppose we have obtained Cpd(δ) for every possible state of Hw; then Cpd^w(δ) is the minimum Cpd(δ) over all possible states of Hw. First, let us define
    R(Ww, Q, δ) = log | I_{Na} + Wb Q / σb² |.    (17)
We give Cpd^w(δ) in the following proposition.
Proposition 2. Consider the class of channels in (2); for any Q ⪰ 0 satisfying tr{Q} ≤ P and Ww ∈ Sw we have:
    Cpd^w(δ) = min_{Ww ∈ Sw} max_{Q ⪰ 0, tr(Q) ≤ P} R(Ww, Q, δ)  subject to: D(P0^n ‖ P1^n) − 2δ² ≤ 0
             = max_{Q ⪰ 0, tr(Q) ≤ P} R(γw Î, Q, δ)  subject to: D(P0^n ‖ P1^n) − 2δ² ≤ 0,    (18)
i.e., the worst Willie channel is isotropic.
Proof. See Appendix D.
Proposition 2 establishes Cpd^w(δ). The following proposition proves that Cpd(δ) = Cpd^w(δ) by establishing the saddle point property of the considered class of channels.
Proposition 3. (Saddle Point Property.) Consider the class of channels in (2); for any Q ⪰ 0 satisfying tr{Q} ≤ P and Ww ∈ Sw we have:
    Cpd(δ) = min_{Ww ∈ Sw} max_{Q ⪰ 0, tr(Q) ≤ P} R(Ww, Q, δ)
           = max_{Q ⪰ 0, tr(Q) ≤ P} min_{Ww ∈ Sw} R(Ww, Q, δ)
           = Cpd^w(δ),  subject to: D(P0^n ‖ P1^n) − 2δ² ≤ 0.    (19)
Proof. By realizing that, for any feasible Q, the function D(P0 ‖ P1(Ww)) is monotonically increasing in Ww, we have that
    min_{Ww ∈ Sw} R(Ww, Q, δ) = R(γw Î, Q, δ),  subject to: D(P0^n ‖ P1^n) − 2δ² ≤ 0.    (20)
Hence, the required result follows by using Proposition 2.
B. Evaluation of Cpd (δ)
In light of the saddle point property established in the previous Section, in this Section we characterize Cpd (δ) by solving
(19) for the optimal signaling strategy, Q∗ . We give the main result of this Section in the following theorem.
Theorem 2. The eigenvalue decomposition of the capacity achieving input covariance matrix that solves (14) is given by
Q∗ = Ub ΛU†b where Ub ∈ CNa ×Na is the matrix whose columns are the right singular vectors of Hb and Λ is a diagonal
matrix whose diagonal entries, Λii , are given by the solution of
    λ = (σb² λi^{-1}(Wb) + Λii)^{-1} + η [ (σw²/γw + Λii)^{-2} − (σw²/γw + Λii)^{-1} ],    (21)
where λ and η are constants determined from the constraints tr{Q} ≤ P and (8), respectively. Moreover,
    Cpd(δ) = Σ_{i=1}^{N} log( 1 + Λii λi(Wb) / σb² ),    (22)
where λi is the ith nonzero eigenvalue of Wb.
Proof. See Appendix E.
The result of Theorem 2 provides the full characterization of Cpd (δ) of the considered class of channels. It can be seen that,
the singular value decomposition (SVD) precoding [12] is the optimal signaling strategy except for the water filling strategy
in (21) which is chosen to satisfy both power and LPD constraints. Unlike both MIMO channel without security constraint
and MIMO wiretap channel, transmission with full power is, indeed, not optimal. Let
    Pth ≜ Σ_i Λii    (23)
be the maximum total power that is transmitted by Alice. An equivalent visualization of our problem is that Alice needs to
choose a certain power threshold, Pth , to satisfy the LPD constraint. However, again, Pth is distributed along the eigenmodes
using conventional water filling solution. Although it is not straightforward to obtain a closed form expression3 for Cpd (δ)
and, hence, L, we could obtain both upper and lower bounds on Cpd (δ) which leads to upper and lower bounds on L. Based
on the obtained bounds, we give the square-root law for MIMO AWGN channel in the following Theorem.
³ Using Mathematica, Λii was found to be an expression of almost 30 lines, which does not provide the required insights here.
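For concreteness, the conventional water-filling step referenced above can be sketched as follows, assuming the power budget Pth has already been fixed to meet the LPD constraint; the bisection tolerance and the helper itself are illustrative assumptions, not the paper's derivation.

```python
# Sketch: water-filling of a power budget P_th over Bob's eigenmodes lambda_i(Wb),
# i.e., Lambda_ii = max(mu - sigma_b2 / lambda_i, 0) with the level mu set by bisection.
import numpy as np

def water_fill(lambdas, sigma_b2, P_th, tol=1e-9):
    noise = sigma_b2 / np.asarray(lambdas, dtype=float)   # inverse channel gains
    lo, hi = 0.0, noise.max() + P_th
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise, 0.0).sum() > P_th:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - noise, 0.0)       # diagonal entries Lambda_ii
```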
Theorem 3 (Square-root Law of MIMO AWGN channel). For the considered class of channels, the following bounds on
Cpd (δ) holds
    Σ_{i=1}^{N} log( 1 + √2 σw² δ λi(Wb) / (σb² γw √(nM)) ) ≤ Cpd(δ) ≤ Σ_{i=1}^{N} log( 1 + √2 σw² ξ δ λi(Wb) / (σb² γw √(nM)) ),    (24)
where ξ ≥ 1 is a function of δ that approaches 1 as δ goes to 0. Moreover,
    Σ_{i=1}^{N} σw² λi(Wb) / (σb² γw √M) ≤ L ≤ Σ_{i=1}^{N} σw² ξ λi(Wb) / (σb² γw √M).    (25)
Proof. We give both achievability and converse results in Appendix F.
Theorem 3 extends the square-root law for the scalar AWGN channel to the MIMO AWGN channel. In particular, it states that Alice can transmit a maximum of O(N√(n/M)) bits reliably to Bob in n independent channel uses while keeping Willie's
sum of error probabilities lower bounded by 1 − δ. The interesting result here is that, the gain in covert rate scales linearly
with number of active eigenmodes of Bob channel. Meanwhile, it scales down with the square-root of the number of active
eigenmodes over Willie channel. This fact will be of great importance when we study the case of massive MIMO limit. Further,
the bounds on L in (25) can, with small effort, generate the result of Theorem 5 in [8] by setting N = M = 1, λ(Wb ) = γw
and σb = σw .
It is worth mentioning that, in some practical situations, the compound MIMO channel can be too conservative for resource
allocation. In particular, the bounded spectral norm condition in (2) not only leads us to the worst case Willie channel, but it
also does not restrict its eigenvectors leaving the beamforming strategy used by Alice (SVD precoding) to be of insignificant
gain in protecting against Willie. Although we believe that the eigenvectors of Willie channel plays an important role in the
determination of the achievable covert rate, the ignorance of Alice about Willie channel leaves the compound framework as
our best option.
V. UNIT RANK MIMO CHANNEL
As pointed out in the previous Section, the distinction between the eigenvectors of Bob and Willie channels would have a
considerable effect on the achievable covert rate. However, unavailability of Willie’s CSI left the compound framework as the
best model for Hw. In this Section, we consider the case when either Hw or both Hw and Hb are of unit rank. This scenario not only models the case when both Bob and Willie have a single antenna, but it also covers the case when they have a strong line of sight with Alice. Moreover, this scenario allows us to evaluate the effect of the eigenvectors of Hw and Hb on the
achievable covert rate.
A. Unit Rank Willie Channel
In this Section we analyze the scenario in which only Willie's channel is of unit rank. In this case, we can write Hw = λw^{1/2} vw uw†, where vw ∈ C^{Nw} and uw ∈ C^{Na} are the left and right singular vectors of Hw. Accordingly, Ww = λw uw uw†,
and the product Ww Q has only one non zero eigenvalue. The nonzero eigenvalue λ(Ww Q) is loosely upper bounded by
λw λmax (Q). Accordingly, following the same steps of the proof of Theorem 3 we can get (assuming well conditioned Bob
channel, i.e. λi (Wb ) = λb for all i)
    N log( 1 + √2 σw² δ λb / (σb² λw √n) ) ≤ Cpd(δ) ≤ N log( 1 + √2 σw² ξ δ λb / (σb² λw √n) ),    (26)
consequently,
    N σw² λb / (σb² λw) ≤ L ≤ N σw² ξ λb / (σb² λw).    (27)
Again, Bob gets O(N√n) bits in n independent channel uses. We note also that the achievable covert rate increases linearly with N.
B. Both Channels are of Unit Rank
Consider the case when both Hb and Hw are of unit rank. In this case, we have N = 1. However, setting N = 1 in the
results established so far will yield a loose bounds on the achievable LPD constrained rate. The reason is that, the bound
λ(We Q) ≤ λw λmax (Q) is, in fact, too loose especially for large values of Na . Although it is hard to establish a tighter upper
bound on λ(We Q) when Q is of high rank, it is straightforward to obtain the exact expression for λ(We Q) when Q is of
unit rank (which is the case when rank{Hb} = 1). Given that Hb = λb^{1/2} vb ub†, Alice will set Q = Pth ub ub†. Accordingly, we have
    λ(We Q) = λw Pth |⟨ub, uw⟩|² = λw Pth cos²(θ),    (28)
where θ is the angle between ub and uw . We give the main result of this Section in the following theorem.
Theorem 4. If rank{Hb } = rank{He } = 1, then,
    min{ log( 1 + √2 σw² δ λb / (σb² λw cos²(θ) √n) ), C } ≤ Cpd(δ) ≤ min{ log( 1 + √2 σw² ξ δ λb / (σb² λw cos²(θ) √n) ), C },    (29)
where C is the non-LPD-constrained capacity of the Alice-to-Bob channel. Accordingly,
    L = ∞ if θ = π/2, and σw² λb / (σb² λw cos²(θ)) ≤ L ≤ σw² ξ λb / (σb² λw cos²(θ)) otherwise.    (30)
Proof. Follows directly by substituting (28) into (13) and following the same steps as in the proof of Theorem 3 while realizing
that Pth ≤ P .
Theorem 4 proves that Alice can transmit a maximum of O(√n / cos²(θ)) bits reliably to Bob in n independent channel
uses while keeping Willie’s sum of error probabilities lower bounded by 1 − δ. In the statement of Theorem 4, the minimum
is taken since the first term diverges as θ → π/2, i.e., when ub and uw are orthogonal. In such case, we will have L = ∞.
This fact proves that, over MIMO channel Alice can communicate at full rate to Bob without being detected by Willie. An
interesting question is, how rare is the case that ub and uw are orthogonal? When the angles of the vectors (i.e., the antenna orientations) are chosen uniformly at random, as the number of antennas at Alice gets large, we will see in Section VII that cos²(θ) approaches 0 exponentially fast with the number of antennas at Alice.
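A quick Monte Carlo sketch illustrates how |⟨ub, uw⟩|² behaves for randomly oriented unit vectors in C^{Na}: for a uniformly random direction, cos²(θ) follows a Beta(1, Na − 1) law, so the tail probability P(cos²(θ) > ε) = (1 − ε)^{Na−1} decays exponentially in Na. The script and its parameters are illustrative only.

```python
# Sketch: empirical tail probability P(cos^2(theta) > eps) for random complex directions,
# compared with the closed form (1 - eps)^(Na - 1).
import numpy as np

rng = np.random.default_rng(2)
eps, trials = 0.1, 20000
for Na in (2, 4, 8, 16, 32):
    u = rng.standard_normal((trials, Na)) + 1j * rng.standard_normal((trials, Na))
    v = rng.standard_normal((trials, Na)) + 1j * rng.standard_normal((trials, Na))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    cos2 = np.abs(np.sum(u.conj() * v, axis=1)) ** 2
    print(Na, (cos2 > eps).mean(), (1 - eps) ** (Na - 1))
```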
Remark 2. It should not be inferred from the results in this Section that the unit rank Bob channel can offer covert rate better
than that of higher rank. In fact, we used a loose upper bound on the eigenvalue of Willie’s channel for higher rank case. That
is due to the technical difficulty in setting tight bounds to the power received by Willie when Bob channel has higher rank.
Also, we see that unit rank channel offers better covert rate than that shown under the compound settings for Willie channel.
That is because, unlike the scenario of this Section, compound settings does not restrict the eigenvectors of Willie’s channel.
VI. Cpd(δ) WITHOUT SHARED SECRET
So far, we have established fundamental limits of covert communication over MIMO AWGN channel under the assumption
that the codebook generated by Alice is kept secret from Willie (or at least a secret of sufficient length). In this Section, we
study the LPD communication problem without this assumption. The assumption of keeping the codebook generated by Alice
secret from Willie (or at least a secret of sufficient length [1], [7]) is common in all studies of covert communication. Without
this assumption, LPD condition cannot be met along with arbitrarily low probability of error at Bob, since, when Willie is
informed about the codebook, he can decode the message using the same decoding strategy as that of Bob [1]. Also we note
that, achieving covertness does not require a positive secrecy rate over the underlying wiretap channel. Indeed, recalling the
expression for relative entropy at Willie
    D(P0 ‖ P1) = log | (1/σw²) Hw Q Hw† + I_{Nw} |    [first term: Willie's channel capacity]
                 + tr{ ( (1/σw²) Hw Q Hw† + I_{Nw} )^{-1} } − Nw,    [second term: ≤ 0, Willie's penalty due to codebook ignorance]    (31)
we observe that, the first term in (31) is the channel capacity of Willie channel with the implicit assumption of the knowledge
of the codebook generated by Alice. In particular, the first term in (31) equals to I(x; (z, Hw )). Meanwhile, it can be easily
verified that the remaining difference term in (31) is always non positive. This term represents Willie’s penalty from his
ignorance of the codebook. Analogous result for the scalar AWGN channel can be found in [1] for which the same arguments
can be made. This in fact provides an interpretation to the scenario on which the secrecy capacity of the main channel may
be zero, meanwhile, Alice still can covertly communicate to Bob. Let us define
Kn (δ, )
√
.
n 2δ 2
(32)
n
Cpd (δ).
L̂ = lim √
n→∞
2δ 2
(33)
L̂ , lim lim
↓0 n→∞
Following Proposition 1 and theorem 1, we can show that
√
Observe that, in the definition of L̂, we used the normalization over n instead of n. Now suppose that Alice chooses Q
such that the first term in (31), which is the capacity of Willie's channel, is upper bounded by 2δ²/n. This signaling strategy satisfies the LPD metric (8) and, thus, achieves covertness. Moreover, we have lim_{n→∞} I(x; (z, Hw)) = 0, so the strong secrecy condition is also met. In particular, if I(x; (z, Hw)) ≤ 2δ²/n, Willie can reliably decode at most 2δ² nats of Alice's message in n independent channel uses. It is worth mentioning that requiring lim_{n→∞} I(x; (z, Hw)) = 0 is more restrictive than the strong secrecy condition. In principle, if Alice has a message m to transmit, the strong secrecy condition requires lim_{n→∞} I(m; (z, Hw)) = 0. Meanwhile, since m = f^{−1}(x) for some encoding function f : m ↦ x, we have I(m; (z, Hw)) ≤ I(x; (z, Hw)). Now, C_pd(δ) without any shared secret can be reformulated as follows:
C_pd(δ) = max_{Q ⪰ 0, tr(Q) ≤ P} log|I_{N_a} + W_b Q / σ_b²|   (34)
Subject to: log|I_{N_a} + W_w Q / σ_w²| − 2δ²/n ≤ 0.   (35)
In light of the saddle point property established in Section IV, we characterize Cpd (δ) without shared secret by solving (34)
for the optimal signaling strategy, Q∗ . We give the main result of this Section in the following theorem.
Theorem 5. The eigenvalue decomposition of the capacity-achieving input covariance matrix that solves (34) is given by Q∗ = u_b Λ u_b†, where u_b ∈ C^{N_a×N_a} is the matrix whose columns are the right singular vectors of H_b and Λ is a diagonal matrix whose diagonal entries Λ_ii are given by the solution of
λ = (σ_b² λ_i^{−1}(W_b) + Λ_ii)^{−1} − η (σ_w²/γ_w + Λ_ii)^{−1},   (36)
where λ and η are constants determined from the constraints tr {Q} ≤ P and (8), respectively. Moreover,
C_pd(δ) = Σ_{i=1}^{N} log(1 + Λ_ii λ_i(W_b) / σ_b²),   (37)
where λ_i is the ith non-zero eigenvalue of W_b.
Proof. See Appendix G.
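To make the structure of (36) concrete, the following Python sketch (ours, not from the paper) solves the per-eigenmode condition numerically for a fixed, assumed pair of multipliers (λ, η); in the theorem these multipliers are instead tuned so that tr{Q} ≤ P and the covertness constraint (8) hold, a step this sketch does not perform. All numerical values are illustrative placeholders.

import numpy as np

# Solve, for each eigenvalue lambda_i(Wb), the scalar condition of (36):
#   lam = (sigma_b^2 / lambda_i + L)^(-1) - eta * (sigma_w^2 / gamma_w + L)^(-1)
# for L = Lambda_ii >= 0, using a simple grid search (illustration only).
sigma_b2, sigma_w2, gamma_w = 1e-2, 1e-2, 1e-3   # assumed channel parameters
lam, eta = 0.05, 1e-4                            # assumed multipliers (not optimized here)
eig_Wb = [1e-3, 5e-4, 2e-4]                      # assumed non-zero eigenvalues of Wb

grid = np.linspace(0.0, 1e3, 200001)             # candidate Lambda_ii values
for lam_i in eig_Wb:
    rhs = 1.0 / (sigma_b2 / lam_i + grid) - eta / (sigma_w2 / gamma_w + grid)
    best = grid[np.argmin(np.abs(rhs - lam))]
    print(f"lambda_i(Wb) = {lam_i:.1e} -> Lambda_ii approx {best:.3f}")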
Theorem 5 provides the full characterization of the Cpd (δ) of the considered class of channels without requiring any shared
secret between Alice and Bob. Again, it is not straightforward to obtain a closed form expression for Cpd (δ). Thus, we obtain
both upper and lower bounds on Cpd (δ) as we did in Section IV. Based on the obtained bounds, we give the square-root law
for MIMO AWGN channel without shared secret in the following theorem.
Theorem 6. For the considered class of channels without any shared secret between Alice and Bob, the following bounds on C_pd(δ) hold:
Σ_{i=1}^{N} log(1 + 2σ_w² δ² λ_i(W_b) / (σ_b² γ_w n M)) ≤ C_pd(δ) ≤ Σ_{i=1}^{N} log(1 + 2σ_w² ξ δ² λ_i(W_b) / (σ_b² γ_w n M)),   (38)
where ξ = nM / (nM − 2δ²). Accordingly,
Σ_{i=1}^{N} √2 σ_w² δ λ_i(W_b) / (σ_b² γ_w M) ≤ L̂ ≤ Σ_{i=1}^{N} √2 σ_w² ξ δ λ_i(W_b) / (σ_b² γ_w M).   (39)
Proof. We give both achievability and converse results in Appendix H.
Theorem 6 extends the result of Theorem 3 to the scenario when Alice and Bob do not share any form of secret. It proves
that Alice can transmit a maximum of O(N/M ) bits reliably to Bob in n independent channel uses while keeping Willie’s
sum of error probabilities lower bounded by 1 − δ.
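As a quick numerical illustration (ours) of how tight the bounds in (38)-(39) are, the sketch below evaluates both sides of (38) for placeholder channel parameters; since ξ = nM/(nM − 2δ²) is essentially 1 for any realistic block length, the two bounds nearly coincide.

import numpy as np

sigma_w2, sigma_b2, gamma_w = 1e-2, 1e-2, 1e-3   # assumed channel parameters
delta, n, M = 1e-2, 1e9, 4                       # detectability target, block length, Willie eigenmodes
eig_Wb = np.array([1e-3, 8e-4, 5e-4])            # assumed non-zero eigenvalues of Wb (N = 3)

xi = n * M / (n * M - 2 * delta**2)
lower = np.sum(np.log1p(2 * sigma_w2 * delta**2 * eig_Wb / (sigma_b2 * gamma_w * n * M)))
upper = np.sum(np.log1p(2 * sigma_w2 * xi * delta**2 * eig_Wb / (sigma_b2 * gamma_w * n * M)))
print(f"xi = {xi:.12f}")
print(f"bounds of (38) on C_pd(delta): [{lower:.3e}, {upper:.3e}] (natural log used here)")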
Now let us consider the case when both Hb and Hw are of unit rank. Under this assumption, we give converse and
achievability results of Cpd (δ) over Alice to Bob channel without a shared secret in the following theorem.
Theorem 7. If rank{Hb } = rank{He } = 1, then, δ-PD constrained capacity over Alice to Bob channel without a shared
secret between Alice and Bob is bounded as
min{ log(1 + 2σ_w² δ² λ_b / (σ_b² λ_w cos²(θ) n)), C } ≤ C_pd(δ) ≤ min{ log(1 + 2σ_w² ξ δ² λ_b / (σ_b² λ_w cos²(θ) n)), C },   (40)
where ξ is as defined in Theorem 6 and C is the non LPD constrained capacity of Alice to Bob channel. Accordingly,
L̂ = ∞, if θ = π/2;
√2 σ_w² δ λ_b / (σ_b² λ_w cos²(θ)) ≤ L̂ ≤ √2 σ_w² ξ δ λ_b / (σ_b² λ_w cos²(θ)), otherwise.   (41)
Proof. Follows directly by substituting (28) into (35) and following the same steps as in the proof of Theorem 6 while realizing
that Pth ≤ P .
Again, in (40), the minimum is taken since the first term diverges as θ → π/2, i.e., when ub and uw are orthogonal. The theorem proves that Alice can transmit a maximum of O(1/cos²(θ)) bits reliably to Bob in n independent channel uses while keeping Willie's sum of error probabilities lower bounded by 1 − δ. This proves that, over the MIMO channel, Alice can communicate at full rate to Bob without being detected by Willie and without requiring Alice and Bob to have any form of shared secret.
VII. COVERT COMMUNICATION WITH MASSIVE MIMO
In Theorems 4 and 7, it was shown that Alice can communicate at full rate with Bob without being detected by Willie
whenever cos(θ) = 0 regardless of the presence of a shared secret. In this Section, we study the behavior of covert rate as
the number of antennas scale, which we call the massive MIMO limit, with and without codebook availability at Willie. In
particular, the high beamforming capability of the massive MIMO system can provide substantial gain in the achievable LPD
rate. However, a quantitative relation between the achievable LPD rate and the number of transmitting antennas seems to be
unavailable. To that end, we address the question: how does the achievable LPD rate scale with the number of transmitting
antennas? We also study how the presence of a shared secret between Alice and Bob affects the scaling of the covert rate
in the massive MIMO limit. Before we answer these questions, we state some necessary basic results on the inner product of
unit vectors in higher dimensions [16].
A. Basic Foundation
In this Section, we reproduce some established results on the inner product of unit vectors in higher dimensions.
Lemma 1 ([16, Proposition 1]). Let a and b be any two vectors on the unit sphere in C^p chosen uniformly at random, and let θ = cos^{−1}(⟨a, b⟩) be the angle between them. Then
Pr( |θ − π/2| ≤ ζ ) ≥ 1 − K √p (cos ζ)^{p−2}   (42)
for all p ≥ 2 and ζ ∈ (0, π/2], where K is a universal constant.
Lemma 1 states that the probability that any two vectors chosen uniformly at random are (nearly) orthogonal approaches one exponentially fast with the dimension p. Indeed, note that, for any 0 < a < 1, a^p has the same decay rate as (2 − a)^{−p}. Thus, the probability that θ is within ζ of π/2 scales like (2 − cos(ζ))^{p−2}/(K √p), up to 1.
Corollary 1. Let a and b be any two vectors on the unit sphere in C^p chosen uniformly at random and let θ = cos^{−1}(⟨a, b⟩) be the angle between them. Let A, B ∈ C^{p×p} be two matrices of unit rank generated as A = λ_a aa† and B = λ_b bb†. Then, the probability that the eigenvalue of the product, λ(AB), approaches 0 grows to 1 exponentially fast with the dimension p.
Proof. It can be easily verified that λ(AB) = λ_a λ_b cos²(θ). Using Lemma 1, we see that the probability that θ approaches π/2 increases exponentially with p. Hence, the probability that cos(θ) approaches 0 increases at the same order, and so does that of cos²(θ).
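The concentration behind Lemma 1 and Corollary 1 is easy to check numerically. The following Python sketch (ours) draws pairs of uniform unit vectors in C^p and estimates how cos²(θ) collapses towards 0 as p grows.

import numpy as np

rng = np.random.default_rng(0)

def random_unit_vector(p):
    # Uniform point on the complex unit sphere in C^p.
    v = rng.standard_normal(p) + 1j * rng.standard_normal(p)
    return v / np.linalg.norm(v)

for p in [2, 8, 32, 128, 512]:
    cos2 = np.array([abs(np.vdot(random_unit_vector(p), random_unit_vector(p))) ** 2
                     for _ in range(2000)])
    print(f"p = {p:4d}: mean cos^2(theta) = {cos2.mean():.4f}, "
          f"Pr[cos^2(theta) > 0.1] = {(cos2 > 0.1).mean():.4f}")

For uniform unit vectors in C^p the expected value of cos²(θ) is 1/p, so both printed quantities fall off quickly with p, consistent with the exponential rate stated in Lemma 1.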
B. Massive MIMO Limit With Shared Secret
In the previous subsection it was demonstrated that, in high dimensions, any two independent vectors chosen uniformly at random are nearly orthogonal with very high probability. More generally, using spherical invariance [16], given u_b, the result of Lemma 1 still holds for any u_w chosen uniformly at random in C^{N_a}. This typically models the scenario in which Alice knows her channel to Bob and models u_w as a uniform random unit vector.
Recall that, when Alice has nN_a bits to transmit, two alternative options are available to her. Either she splits the incoming stream into N_a streams of n bits each and uses each stream to select one of 2^n messages for each single antenna, or she uses the entire nN_a bits to choose from 2^{nN_a} messages. The latter alternative provides a gain factor of N_a in the error exponent, of course, at the expense of much greater complexity [11], [12]. However, in the restrictive LPD scenario, Alice would choose the latter alternative so as to achieve the best decoding performance at Bob. Therefore, we need to analyze the LPD rate in the limiting case of the product nN_a when both n and N_a grow. Of course, the number of antennas at Alice is a physical resource which cannot be compared to n, which can approach ∞ very fast. The more interesting question is how fast cos²(θ) approaches 0 as N_a increases. As illustrated in Corollary 1, we know that cos(θ) approaches 0 exponentially fast with N_a. Consequently, cos²(θ) also approaches 0 exponentially fast with N_a. For proper handling of the scaling of K_n(δ, ε) in the massive MIMO limit, let us define
S ≜ lim_{ε↓0} lim_{nN_a→∞} K_n(δ, ε) / (N_a √(2nδ²)).   (43)
Note that, in the definition of S, both n and N_a are allowed to grow without bound, in contrast to L, in which only n was allowed to grow while N_a was treated as constant. Now observe that, following Proposition 1 and Theorem 1, we can show that
S = lim_{nN_a→∞} N_a √(n/(2δ²)) C_pd(δ).   (44)
We give the result of the massive MIMO limit with a pre-shared secret between Alice and Bob in the following Theorem.
Theorem 8. Assume that rank{H_b} = rank{H_e} = 1. Given u_b, for any u_w chosen uniformly at random, C_pd(δ) is as given in Theorem 4 and
S = ∞.   (45)
Moreover, K_n grows like √(n/(K² N_a)) (1 + c/√n)^{(N_a−2)/2}, where K is a universal constant and c = √2 σ_w² δ / (λ_w P).
Proof. Combining the result of Theorem 4 and Corollary 1, multiplying (29) by N_a √(n/(2δ²)) and taking the limit as both n and N_a tend to infinity, we obtain
lim_{N_a→∞} lim_{n→∞} N_a √(n/(2δ²)) C_pd(δ) = lim_{N_a→∞} N_a σ_w² λ_b / (σ_b² λ_w cos²(θ)) = ∞,   (46)
where the last equality follows since cos²(θ) → 0 as N_a → ∞. On the other hand, we can also verify that
lim_{n→∞} lim_{N_a→∞} N_a √(n/(2δ²)) C_pd(δ) = ∞.   (47)
To show how K_n scales in this massive MIMO limit, we first note that, for fixed N_a, K_n scales like √n. Also note that S = ∞ implies that the LPD constraint becomes inactive and the full non-LPD capacity is achieved. This happens when C_pd(δ) = C. The question we address now is: how does C_pd(δ) behave in between these two extreme regimes? Following the
same steps of the proof of Theorem 3, we can obtain the following bound on Pth :
P_th ≤ min{ √2 σ_w² δ / (√n λ_w cos²(θ)), P }.   (48)
Thus, we have Pth = P , and hence Cpd (δ) = C, when
P ≤ √2 σ_w² δ / (√n λ_w cos²(θ)),   (49)
or, equivalently,
cos²(θ) ≤ √2 σ_w² δ / (√n λ_w P)  ⟹  |θ − π/2| ≤ π/2 − cos^{−1}(√(√2 σ_w² δ / (√n λ_w P))).   (50)
This happens with probability no less than
Pr(C_pd(δ) = C) ≥ 1 − K √N_a (1 − √2 σ_w² δ / (√n λ_w P))^{(N_a−2)/2},   (51)
where (51) follows by setting ζ in Lemma 1 equal to the RHS of (50) and using the basic trigonometric facts cos(π/2 − x) = sin(x) and sin(cos^{−1}(x)) = √(1 − x²). It can be seen that the probability that C_pd(δ) = C scales as (1 + g)^{(N_a−2)/2}/(K √N_a), up to 1, where g = √2 σ_w² δ / (√n λ_w P).
Theorem 8 states that Alice can communicate at full rate to Bob while satisfying the LPD constraint (8). Note that, the limit
in both orders yields S = ∞.
As N_a → ∞, the radiation pattern of a wireless MIMO transmitter becomes extremely directive (a pencil beam). We call this limit the wired limit of wireless MIMO communication. In the wired limit, Willie cannot detect Alice's transmission unless he wiretaps this virtual wire. Theorem 8 provides a rigorous characterization of the wired limit of wireless MIMO communication. In principle, it answers the fundamental question: how fast does the LPD-constrained rate increase with the number of antennas at Alice? It can be seen that the probability that Alice fully utilizes the channel scales like 2^{(N_a−2)/2}/(K √(N_a n)), up to 1, using the same justification given after Lemma 1.
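To visualize how quickly the bound in (51) kicks in, the short sketch below (ours) evaluates 1 − K √N_a (1 − g)^{(N_a−2)/2} for an arbitrary illustrative value of g and with the unspecified universal constant K simply set to 1; it is meant only to show the exponential climb towards 1, not to reproduce any particular operating point of the paper.

import numpy as np

K, g = 1.0, 0.1      # placeholder values: K is the unspecified constant of Lemma 1
for Na in [10, 50, 100, 200, 400]:
    bound = 1 - K * np.sqrt(Na) * (1 - g) ** ((Na - 2) / 2)
    print(f"N_a = {Na:4d}: lower bound on Pr(C_pd(delta) = C) >= {max(bound, 0.0):.6f}")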
C. Massive MIMO Limit Without Shared Secret
In Section VI it was shown that only a diminishing covert rate, O(N/M), can be achieved without requiring a shared secret between Alice and Bob. Again, we note that this diminishing rate was shown to be achievable when Willie's channel is isotropic. Also, we have shown that, in the massive MIMO limit, the achievable covert rate grows exponentially with the number of transmitting antennas when there is a shared secret between Alice and Bob. Thus, it is also instructive to consider the LPD communication problem without a shared secret in the massive MIMO limit. As illustrated in Section III, if Alice has CSI of both channels, not only can she communicate covertly and reliably at full rate whenever the eigendirections of both channels are orthogonal, but she also does not need a shared secret to achieve this rate. Building on our analysis in Section VII, we give the massive MIMO limit of the δ-PD capacity when there is no shared secret between Alice and Bob.
Now, let us consider the scenario in which u_w is chosen uniformly at random and fixed once chosen. For proper handling of the scaling of K_n(δ, ε) in the massive MIMO limit without a shared secret, let us define
Ŝ ≜ lim_{ε↓0} lim_{nN_a→∞} K_n(δ, ε) / (nN_a √(2δ²)).   (52)
Observe that, unlike S, K_n(δ, ε) is normalized by n instead of √n in the expression of Ŝ. Now, following Proposition 1 and Theorem 1, we can show that
Ŝ = lim_{nN_a→∞} (nN_a / √(2δ²)) C_pd(δ).   (53)
We give the result of this scenario in the following theorem.
Theorem 9. Assume that rank{H_b} = rank{H_e} = 1 and suppose that there is no shared secret between Alice and Bob. Given u_b, for any u_w chosen uniformly at random, C_pd(δ) is as given in Theorem 4 and
Ŝ = ∞.   (54)
Moreover, K_n grows like √(1/(K² N_a)) (1 + c/n)^{(N_a−2)/2}, where K is a universal constant and c = √2 σ_w² δ / (λ_w P).
Proof. The proof follows exactly the same steps as in the proof of Theorem 8.
Again, it can be seen that Alice achieves the maximum achievable non-LPD rate even under the LPD constraint. However, the rate at which C_pd(δ) converges to C is much slower than in the case with a shared secret codebook. Hence, it can be deduced from Theorems 7 and 9 that, in the limit of large N_a, Alice can transmit O(n) bits in n independent channel uses while satisfying the LPD constraint without the need for any form of shared secret. Nevertheless, it has to be considered that the number of antennas required at Alice under this scenario is much larger than when she shares a secret of sufficient length with Bob. The following numerical example demonstrates the covert rates in the massive MIMO limit with and without a shared secret between Alice and Bob.
Example 1. Assume that Alice intends to use the channel n = 10⁹ times over a channel of bandwidth 10 MHz, hence √n = 3.1623 × 10⁴. Suppose that Alice is targeting δ = 10⁻². Let σ_w² = σ_b² = 10⁻² and λ_w = λ_b = 10⁻³. Assume that Alice is targeting SNR = 15 dB at Bob, hence P = 316.228. Then, for N_a = 100 it can be verified that Alice can transmit O(n) covert bits instead of O(√n). Observe that Alice needed only N_a = 100 to communicate covertly at near full rate to Bob. Also note that, at 6 GHz, a two-dimensional array of 100 elements can fit within the area of a single sheet of paper. See Fig. 2 for the relation between the δ-PD capacity and the number of transmitting antennas for different values of the number of antennas at Willie, with δ = 10⁻².
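The quantities quoted in Example 1 can be reproduced directly; the sketch below (ours) recomputes the transmit power implied by the 15 dB SNR target (assuming the received SNR at Bob is P λ_b / σ_b², which is not spelled out in the example), the value of √n, and the cos²(θ) threshold of (50) below which P_th = P.

import numpy as np

n, delta = 1e9, 1e-2
sigma_w2 = sigma_b2 = 1e-2
lambda_w = lambda_b = 1e-3

snr_target = 10 ** (15 / 10)                  # 15 dB target SNR at Bob
P = snr_target * sigma_b2 / lambda_b          # assumes received SNR = P * lambda_b / sigma_b^2
cos2_threshold = np.sqrt(2) * sigma_w2 * delta / (np.sqrt(n) * lambda_w * P)

print(f"P = {P:.3f} (Example 1 states 316.228), sqrt(n) = {np.sqrt(n):.4e}")
print(f"cos^2(theta) threshold from (50): {cos2_threshold:.3e}")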
Fig. 2. The relation between the achievable covert rate with and without a shared secret (plotted in log scale), in bits per second, and N_a for different values of N_w, with target δ = 10⁻². It shows that Alice can communicate at near full rate with N_a around 100 when she shares a secret with Bob. A large gap can be observed when there is no shared secret.
As can be seen from Fig. 2, Alice can achieve a covert rate very close to the non-LPD-constrained capacity of her channel to Bob with N_a ≥ 100 and a block length of n = 10⁹. We also see that there is a significant gap (nearly 4 orders of magnitude) between the achievable covert rates with and without a pre-shared secret. Although both rates converge to C as N_a → ∞, without a shared secret the number of antennas required to achieve near full rate is significantly greater than that required when Alice and Bob share a secret of sufficient length. For practical purposes, this result leaves the massive MIMO limit of the δ-PD capacity without a shared secret of theoretical interest only.
VIII. DISCUSSION
Impact of CSI level. Throughout this paper we have assumed perfect CSI at Alice about her channel to Bob. In practical scenarios, this assumption might not hold, as CSI always suffers from imperfection due to, e.g., channel estimation error or a non-error-free CSI feedback link. Despite these potential impairments, we argue that the case of imperfect CSI at Alice does not affect the obtained results, because we have also assumed that Willie has perfect CSI about his channel. Further, the case in which Alice has absolutely no CSI is of special interest, as communication under such critical conditions may not allow Bob to share his CSI with Alice, especially when dealing with a passive Bob. In this scenario, we can verify that Alice can transmit O(N√n/M) bits in n independent channel uses (details are omitted here). We see that the covert rate scales with N/M, compared to N/√M when Alice has CSI. More interestingly, in the massive MIMO limit with absolutely no CSI at Alice, we can verify that the covert rate tends to 0 even as N_a → ∞. While massive MIMO was recognized as the most favorable scenario for Alice when she has CSI, it may make matters worse when she has absolutely no CSI. On the contrary, when Alice has CSI about both channels, covert rates up to the non-LPD-constrained rate of the channel may be achieved (under certain conditions, see Section III) without the need for a shared secret. As is the case for the MIMO channel with no secrecy constraint, CSI availability plays an important role in achieving higher covert rates.
Impact of Willie's ignorance. All results obtained in this paper assume that Willie has perfect CSI about his channel and that he is aware of his channel noise statistics. Ignorance of Willie about one of these parameters is expected to have a positive impact on the achievable covert rate. For example, in [6] it was shown that O(n) bits can be transmitted reliably with low probability of detection over a BSC whose error probability is unknown to Willie except that it is drawn from a known interval. This scenario is a subject for future research.
Length of the Shared Secret. In our analysis, we have considered scenarios in which the entire codebook is either available or unavailable at Willie. Meanwhile, secrets of shorter length were reported to be enough for fulfilling LPD requirements. For the scalar AWGN channel, it was shown that the required secret length is of order O(√n log n) [1]. A similar result was established for DMCs in [8]. In this work, while we showed that there is a significant gap between the achievable covert rates with and without a shared secret, the minimum length of the required shared secret has not been addressed.
IX. SUMMARY AND CONCLUSIONS
We have established the limits of LPD communication over the MIMO AWGN channel. In particular, using relative entropy as our LPD metric, we studied the maximum codebook size, K_n(δ, ε), for which Alice can guarantee that the reliability and LPD conditions are met. We first showed that the optimal codebook-generating input distribution under the δ-PD constraint is the zero-mean Gaussian distribution. We based our arguments on the principle of minimum relative entropy. For an isotropic Willie channel, we showed that Alice can transmit O(N√(n/M)) bits reliably in n independent channel uses, where N and M are the numbers of active eigenmodes of Bob's and Willie's channels, respectively. Further, we evaluated the scaling of K_n(δ, ε) in the limiting regimes of the number of channel uses (asymptotic block length) and the number of antennas (massive MIMO). We showed that, while the square-root law still holds for the MIMO AWGN channel, the number of bits that can be transmitted covertly scales exponentially with the number of transmitting antennas. More precisely, for a unit-rank MIMO channel, we showed that K_n(δ, ε) scales as √(n/(K² N_a)) (1 + c/√n)^{(N_a−2)/2}, where N_a is the number of transmitting antennas, K is a universal constant and c is a constant independent of n and N_a. We also derived the scaling of K_n(δ, ε) with no shared secret between Alice and Bob. In particular, we showed that achieving a better covert rate is a resource arms race between Alice, Bob and Willie: Alice can transmit O(N/M) bits reliably in n independent channel uses, i.e., the covert rate is on the order of the ratio between the numbers of active eigenmodes of the two channels. Despite this diminishing rate, in the massive MIMO limit Alice can still achieve covert rates up to the non-LPD-constrained capacity of her channel to Bob, yet with a significantly greater number of antennas. Although the covert rates both with and without a shared secret converge to the non-LPD-constrained capacity as N_a → ∞, numerical evaluations showed that, without a shared secret, the number of antennas required to achieve near full rate can be orders of magnitude greater. The practical implication of our result is that MIMO has the potential to provide a substantial increase in the file sizes that can be covertly communicated subject to a reasonably low delay.
REFERENCES
[1] B. A. Bash, D. Goeckel, and D. Towsley, "Limits of reliable communication with low probability of detection on AWGN channels," IEEE Journal on Selected Areas in Communications, vol. 31, no. 9, pp. 1921–1930, 2013.
[2] A. D. Woodbury, "Minimum relative entropy, Bayes and Kapur," Geophysical Journal International, vol. 185, no. 1, pp. 181–189, 2011.
[3] I. Csiszár and F. Matúš, "Information projections revisited," IEEE Transactions on Information Theory, vol. 49, no. 6, pp. 1474–1490, 2003.
[4] R. F. Schaefer and S. Loyka, "The secrecy capacity of compound Gaussian MIMO wiretap channels," IEEE Transactions on Information Theory, vol. 61, no. 10, pp. 5535–5552, 2015.
[5] A. Abdelaziz, A. Elbayoumy, C. E. Koksal, and H. El Gamal, "On the compound MIMO wiretap channel with mean feedback," in 2017 IEEE International Symposium on Information Theory (ISIT), Aachen, Germany, Jun. 2017.
[6] P. H. Che, M. Bakshi, and S. Jaggi, "Reliable deniable communication: Hiding messages in noise," in 2013 IEEE International Symposium on Information Theory (ISIT). IEEE, 2013, pp. 2945–2949.
[7] M. R. Bloch, "Covert communication over noisy channels: A resolvability perspective," IEEE Transactions on Information Theory, vol. 62, no. 5, pp. 2334–2354, 2016.
[8] L. Wang, G. W. Wornell, and L. Zheng, "Fundamental limits of communication with low probability of detection," IEEE Transactions on Information Theory, vol. 62, no. 6, pp. 3493–3503, 2016.
[9] A. O. Hero, "Secure space-time communication," IEEE Transactions on Information Theory, vol. 49, no. 12, pp. 3235–3249, 2003.
[10] S. Lee, R. J. Baxley, J. B. McMahon, and R. S. Frazier, "Achieving positive rate with undetectable communication over MIMO Rayleigh channels," in 2014 IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM). IEEE, 2014, pp. 257–260.
[11] R. G. Gallager, Information Theory and Reliable Communication. Springer, 1968, vol. 2.
[12] E. Telatar, "Capacity of multi-antenna Gaussian channels," European Transactions on Telecommunications, vol. 10, no. 6, pp. 585–595, 1999.
[13] E. L. Lehmann and J. P. Romano, Testing Statistical Hypotheses. Springer Science & Business Media, 2006.
[14] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley & Sons, 2012.
[15] B. Friedlander and B. Porat, "Performance analysis of a null-steering algorithm based on direction-of-arrival estimation," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 4, pp. 461–466, 1989.
[16] T. T. Cai, J. Fan, and T. Jiang, "Distributions of angles in random packing on spheres," Journal of Machine Learning Research, vol. 14, no. 1, pp. 1837–1864, 2013.
[17] D. Tse and P. Viswanath, Fundamentals of Wireless Communication. Cambridge University Press, 2005.
APPENDIX A
KULLBACK–LEIBLER DIVERGENCE AT WILLIE
Assuming that Willie is informed about his own channel to Alice, Willie's observation when Alice is silent is distributed as P₀, while it follows the distribution P₁ whenever Alice is active, where
P₀ = |πΣ₀|^{−1} exp(−z† Σ₀^{−1} z),   P₁ = |πΣ₁|^{−1} exp(−z† Σ₁^{−1} z),   (55)
where Σ₀ = σ_w² I_{N_w} and Σ₁ = H_w Q H_w† + σ_w² I_{N_w}. Here, Q = E[xx†] is the covariance matrix of the signal transmitted by Alice.
Note that, the choice of Q is highly dependent on the amount of CSI available at Alice. Thus, we evaluate the KL divergence
at Willie in general, then, in the following Sections we will discuss the effect of CSI availability at Alice on the performance
of Willie’s optimal detector. Assuming that Willie channel is known and fixed, the KL divergence between P0 and P1 is given
as follows:
D = E_{P₀}[log P₀ − log P₁]
  = E_{P₀}[ −log|πΣ₀| − z†Σ₀^{−1}z + log|πΣ₁| + z†Σ₁^{−1}z ]
  = log(|Σ₁|/|Σ₀|) + E_{P₀}[ z†Σ₁^{−1}z − z†Σ₀^{−1}z ]
  = log|Σ₁Σ₀^{−1}| + E_{P₀}[ z†(Σ₁^{−1} − Σ₀^{−1})z ]
  = log|Σ₁Σ₀^{−1}| + E_{P₀}[ tr{(Σ₁^{−1} − Σ₀^{−1}) zz†} ]
  = log|Σ₁Σ₀^{−1}| + tr{Σ₁^{−1}Σ₀} − tr{Σ₀^{−1}Σ₀}
  = log|Σ₁Σ₀^{−1}| + tr{Σ₁^{−1}Σ₀} − N_w.   (56)
Now observe that Σ₁Σ₀^{−1} = (1/σ_w²) H_w Q H_w† + I_{N_w}, and that |(1/σ_w²) H_w Q H_w† + I_{N_w}| = |(1/σ_w²) W_w Q + I_{N_a}|, where we define W_w ≜ H_w† H_w; we note that the non-zero eigenvalues of H_w Q H_w† and W_w Q are identical. Hence, we can write:
D(P₀ ‖ P₁) = log|(1/σ_w²) H_w Q H_w† + I_{N_w}| + tr{((1/σ_w²) H_w Q H_w† + I_{N_w})^{−1}} − N_w
  = log Π_{i=1}^{N_w} (1 + λ_i(W_w Q)/σ_w²) + Σ_{i=1}^{N_w} (1 + λ_i(W_w Q)/σ_w²)^{−1} − N_w
  = Σ_{i=1}^{N_w} [ log(1 + λ_i(W_w Q)/σ_w²) + (1 + λ_i(W_w Q)/σ_w²)^{−1} − 1 ]
  = Σ_{i=1}^{N_w} [ log(1 + λ_i(W_w Q)/σ_w²) − (1 + (λ_i(W_w Q)/σ_w²)^{−1})^{−1} ],   (57)
where λ_i(W_w Q) is the ith eigenvalue of W_w Q.
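As a sanity check on the algebra above, the following Python sketch (ours) evaluates D(P₀ ‖ P₁) both directly from (56) and via the eigenvalue form (57) for a randomly generated channel and input covariance; the two values agree up to numerical error. The dimensions and noise power are placeholders.

import numpy as np

rng = np.random.default_rng(1)
Nw, Na, sigma_w2 = 4, 3, 0.5                      # arbitrary illustrative sizes / noise power

Hw = (rng.standard_normal((Nw, Na)) + 1j * rng.standard_normal((Nw, Na))) / np.sqrt(2)
A = rng.standard_normal((Na, Na)) + 1j * rng.standard_normal((Na, Na))
Q = A @ A.conj().T                                # an arbitrary positive-definite input covariance

Sigma0 = sigma_w2 * np.eye(Nw)
Sigma1 = Hw @ Q @ Hw.conj().T + Sigma0

# (56): log|Sigma1 Sigma0^{-1}| + tr{Sigma1^{-1} Sigma0} - Nw
D_direct = (np.linalg.slogdet(Sigma1 @ np.linalg.inv(Sigma0))[1]
            + np.trace(np.linalg.inv(Sigma1) @ Sigma0).real - Nw)

# (57): sum over the eigenvalues of Ww Q, with Ww = Hw^dagger Hw
x = np.linalg.eigvals(Hw.conj().T @ Hw @ Q).real / sigma_w2
D_eig = np.sum(np.log1p(x) - 1.0 / (1.0 + 1.0 / x))

print(D_direct, D_eig)                            # should match up to floating-point error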
APPENDIX B
ACHIEVABILITY PROOF OF PROPOSITION 1
We show the achievability for Gaussian input. As discussed in [8], the sequence {Kn } is achievable provided that:
lim_{n→∞} K_n/√n ≤ P-liminf_{n→∞} (1/√n) log( f^{×n}(z^n|x^n) / f^{×n}(z^n) ),   (58)
where P − lim inf denotes the limit inferior in probability, namely, the largest number such that the probability that the random
variable in consideration is greater than this number tends to one as n tends to infinity. Meanwhile, f ×n (xn ) denotes the nth
extension of the probability density function of a random vector x. Thus, we need to show
(1/√n) log( f^{×n}(z^n|x^n) / f^{×n}(z^n) ) − √n I(f_n(x), f_n(z|x)) → 0   (59)
in probability as n tends to infinity. First observe that,
f^{×n}(z^n|x^n) = Π_{i=1}^{n} |πΣ₀|^{−1} exp(−(z_i − x_i)† Σ₀^{−1} (z_i − x_i)),
f^{×n}(z^n) = Π_{i=1}^{n} |πΣ₁|^{−1} exp(−z_i† Σ₁^{−1} z_i),   (60)
where Σ₀ = σ_w² I_{N_w}, Σ₁ = H_w Q H_w† + σ_w² I_{N_w}, and Q = E[xx†] is chosen such that (8) is satisfied. Note that Q has to be a decreasing function of n. Then,
f^{×n}(z^n|x^n) / f^{×n}(z^n) = |Σ₁Σ₀^{−1}|^{n} × exp( Σ_{i=1}^{n} tr(Σ₁^{−1} z_i z_i†) − Σ_{i=1}^{n} tr(Σ₀^{−1} e_i e_i†) ).   (61)
Accordingly,
(1/√n) log( f^{×n}(z^n|x^n) / f^{×n}(z^n) ) = √n log|Σ₁Σ₀^{−1}| + (1/√n)( Σ_{i=1}^{n} tr(Σ₁^{−1} z_i z_i†) − Σ_{i=1}^{n} tr(Σ₀^{−1} e_i e_i†) ),   (62)
whose expectation can be found as
E[ (1/√n) log( f^{×n}(z^n|x^n) / f^{×n}(z^n) ) ] = √n log|Σ₁Σ₀^{−1}| + (1/√n)( Σ_{i=1}^{n} tr(Σ₁^{−1}Σ₁) − Σ_{i=1}^{n} tr(Σ₀^{−1}Σ₀) )
  = √n log|Σ₁Σ₀^{−1}|
  = √n log|I_{N_w} + (1/σ_w²) H_w Q H_w†|
  = √n I(f_n(x), f_n(z|x)).   (63)
It then follows by Chebyshev’s inequality that, for any constant a > 0,
Pr( | (1/√n) log( f^{×n}(z^n|x^n) / f^{×n}(z^n) ) − √n I(f_n(x), f_n(z|x)) | ≥ a ) ≤ (1/a²) var( (1/√n) log( f^{×n}(z^n|x^n) / f^{×n}(z^n) ) ),   (64)
and, it remains to show that
lim_{n→∞} var( (1/√n) log( f^{×n}(z^n|x^n) / f^{×n}(z^n) ) ) = 0.   (65)
Note that,
var( (1/√n) log( f^{×n}(z^n|x^n) / f^{×n}(z^n) ) ) = var( (1/√n)( Σ_{i=1}^{n} z_i†Σ₁^{−1}z_i − Σ_{i=1}^{n} e_i†Σ₀^{−1}e_i ) )
  = (1/n) Σ_{i=1}^{n} var( z_i†Σ₁^{−1}z_i − e_i†Σ₀^{−1}e_i )
  = var( z_i†Σ₁^{−1}z_i − e_i†Σ₀^{−1}e_i )
  = E[ ( z_i†Σ₁^{−1}z_i − e_i†Σ₀^{−1}e_i )² ].   (66)
Now observe that, since Q → 0 as n tends to infinity, we can verify that each term in (66) tends to zero as n tends to infinity.
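The following self-contained Monte Carlo sketch (ours) illustrates the fact exploited around (63)-(65): for z = Hx + e with x ~ CN(0, Q) and e ~ CN(0, σ²I), the per-sample information density log f(z|x) − log f(z) has mean log|I + HQH†/σ²| and finite variance, which is exactly what the Chebyshev argument needs. For simplicity it uses an explicit channel matrix H and a diagonal Q; all numerical values are placeholders.

import numpy as np

rng = np.random.default_rng(2)
Nw, Na, sigma2, trials = 3, 2, 0.5, 200_000       # illustrative sizes

H = (rng.standard_normal((Nw, Na)) + 1j * rng.standard_normal((Nw, Na))) / np.sqrt(2)
Q = np.diag([0.8, 0.3]).astype(complex)           # diagonal input covariance for easy sampling
S0 = sigma2 * np.eye(Nw)
S1 = H @ Q @ H.conj().T + S0
S0i, S1i = np.linalg.inv(S0), np.linalg.inv(S1)

# draw x ~ CN(0, Q) and e ~ CN(0, sigma2 I), form z = Hx + e
x = (rng.standard_normal((trials, Na)) + 1j * rng.standard_normal((trials, Na))) / np.sqrt(2)
x = x * np.sqrt(np.diag(Q).real)
e = (rng.standard_normal((trials, Nw)) + 1j * rng.standard_normal((trials, Nw))) * np.sqrt(sigma2 / 2)
z = x @ H.T + e

# per-sample information density: log|S1 S0^{-1}| + z^dag S1^{-1} z - e^dag S0^{-1} e
const = np.linalg.slogdet(S1 @ S0i)[1]
dens = (const
        + np.einsum('ij,jk,ik->i', z.conj(), S1i, z).real
        - np.einsum('ij,jk,ik->i', e.conj(), S0i, e).real)

target = np.linalg.slogdet(np.eye(Nw) + H @ Q @ H.conj().T / sigma2)[1]
print("empirical mean:", dens.mean(), " log|I + HQH^dag/sigma2|:", target)
print("empirical variance:", dens.var())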
APPENDIX C
PROOF OF THEOREM 1
We show that the limit in (11) always exists. Note that, for every n, f_n(x) is zero-mean Gaussian. Let Q*_n = E_n[xx†] be
Q*_n = arg max_{Q ⪰ 0, tr(Q) ≤ P} I(f_n(x), f_n(y)),   (67)
where the maximum is subject to (8). Hence, we have
max_{Q ⪰ 0, tr(Q) ≤ P} I(f_n(x), f_n(y)) = log|I + H_b Q*_n H_b† / σ_b²|.   (68)
Now, we have two cases to consider:
1) D(Pn0 k Pn1 ) = 0. In this case, δ can be made 0 causing the limit to be infinity.
2) D(Pn0 k Pn1 ) > 0. In this case, Q∗n has to be a decreasing function of n, otherwise, the constraint (8) can not be met. In
this case, the limit is ≥ 0 and < ∞.
In either case, the limit exists and, hence, the limit can be used in place of the limit inferior; also, the order of the limit and the maximum can be interchanged.
APPENDIX D
PROOF OF PROPOSITION 2
It is enough to show that
W*_w = arg max_{W_w ∈ S_w} D(P₀ ‖ P₁(W_w)) = γ_w I,   (69)
hence, we need to show that the function D(P₀ ‖ P₁(W_w)) is monotonically increasing in W_w, i.e., D₁ ≜ D(P₀ ‖ P₁(W_w1)) ≥ D(P₀ ‖ P₁(W_w2)) ≜ D₂ whenever W_w1 ⪰ W_w2. Recalling the expression of D(P₀ ‖ P₁(W_w)) in (13), we note that the function log|I + WQ| is monotonically increasing in W for any Q, while the second term in (13) is negative and decreases monotonically in W. Nevertheless, we have
D₁ − D₂ = E_{P₀}[ log P₀ − log P₁(W_w1) − log P₀ + log P₁(W_w2) ]
  = E_{P₀}[ log P₁(W_w2) − log P₁(W_w1) ]
  = −E_{P₀}[ log( P₁(W_w1) / P₁(W_w2) ) ]
  ≥(a) −log E_{P₀}[ P₁(W_w1) / P₁(W_w2) ]
  ≥(b) 0,   (70)
where (a) follows from Jensen's inequality using the convexity of −log, and (b) follows because, with some standard matrix algebra as in Appendix A, we can show that, for W_w1 ⪰ W_w2, we have E_{P₀}[ P₁(W_w1) / P₁(W_w2) ] ≤ 1.
APPENDIX E
PROOF OF THEOREM 2
We solve (14) for W_w = W*_w. Without loss of generality, assume that N_w ≥ N_a, hence W*_w = γ_w I_{N_w}. For N_w < N_a, we can set γ_w = 0 for the N_a − N_w minimum eigenvalues of Q*. Plugging W*_w into (13), we get⁴
D_n = n log|I_{N_a} + (γ_w/σ_w²) Q| + tr{(I_{N_a} + (γ_w/σ_w²) Q)^{−1}} − N_w.   (71)
Now observe that
|I + W_b Q| ≤ Π_{i=1}^{N_a} (1 + λ_i(W_b) λ_i(Q)),   (72)
with equality if and only if W_b and Q have the same eigenvectors; note that this choice does not affect (71) since W*_w is isotropic. Hence, the eigenvectors of Q* are the same as the eigenvectors of W_b, which are the same as the right singular vectors of H_b.
Now, we form the following Lagrange dual problem:
L = log|I_{N_a} + (1/σ_b²) W_b Q| + λ(tr(Q) − P) − tr(MQ) + η( D − 2δ²/n ),   (73)
where λ, η ≥ 0 are the Lagrange multipliers that penalize violating the power and LPD constraints, respectively, and M ⪰ 0 penalizes the violation of the constraint Q ⪰ 0. The associated KKT conditions can be expressed as
λ, η ≥ 0,   λ(tr(Q) − P) = 0,   MQ = 0,   Q ⪰ 0,   M ⪰ 0,   tr(Q) ≤ P,   (74)
where the equality constraints in (74) are the complementary slackness conditions. Note that (14) is not a concave problem in general. Thus, the KKT conditions are not sufficient for optimality. Yet, since the constraint set is compact and convex and the objective function is continuous, the KKT conditions are necessary for optimality. Hence, we proceed by finding the stationary points of the gradient of the dual Lagrange problem in the direction of Q and obtain the stationary points that solve the KKT conditions. By inspecting the objective function at these points, the global optimum can be identified. To identify the stationary
points of the Lagrangian (73), we get its gradient with respect to Q as follows:
∇_Q L = −(I_{N_a} + (1/σ_b²) W_b Q)^{−1} W_b/σ_b² + λ I_{N_a} − M + (η γ_w/σ_w²)( (I_{N_a} + (γ_w/σ_w²) Q)^{−1} − (I_{N_a} + (γ_w/σ_w²) Q)^{−2} )
  = −(I_{N_a} + (1/σ_b²) W_b Q)^{−1} W_b/σ_b² + λ I_{N_a} − M + η( (σ_w²/γ_w I_{N_a} + Q)^{−1} − (σ_w²/γ_w)(σ_w²/γ_w I_{N_a} + Q)^{−2} ).   (75)
⁴ For simplicity, we write D_n instead of D(P₀^n ‖ P₁^n).
Assume, without loss of generality, that Q ≻ 0; then, from MQ = 0, it follows that M = 0. Now, from ∇_Q L = 0 we obtain
λ I_{N_a} = (σ_b² W_b^{−1} + Q)^{−1} + η( (σ_w²/γ_w)(σ_w²/γ_w I_{N_a} + Q)^{−2} − (σ_w²/γ_w I_{N_a} + Q)^{−1} ).   (76)
Since Q* and W_b have the same eigenvectors, the eigenvalues Λ_ii of Q* can be found from
λ = (σ_b² λ_i^{−1}(W_b) + Λ_ii)^{−1} + η( (σ_w²/γ_w)(σ_w²/γ_w + Λ_ii)^{−2} − (σ_w²/γ_w + Λ_ii)^{−1} ),   (77)
as required.
APPENDIX F
PROOF OF THEOREM 3
Achievability. Starting from (71), we obtain
D ≤(a) tr{(γ_w/σ_w²) Q} + tr{(I_{N_a} + (γ_w/σ_w²) Q)^{−1}} − N_w
  =(b) Σ_{i=1}^{N_w} [ γ_w Λ_ii/σ_w² + 1/(1 + γ_w Λ_ii/σ_w²) − 1 ]
  = Σ_{i=1}^{N_w} [ γ_w Λ_ii/σ_w² − (γ_w Λ_ii/σ_w²)/(1 + γ_w Λ_ii/σ_w²) ]
  ≤(c) N_w [ γ_w P_th/(N σ_w²) − (γ_w P_th/(N σ_w²))/(1 + γ_w P_th/(N σ_w²)) ]
  = N_w ( γ_w² P_th²/(N² σ_w⁴) ) / ( 1 + γ_w P_th/(N σ_w²) ),   (78)
where (a) follows from the inequality log|A| ≤ tr{A − I}, (b) is straightforward matrix algebra, and (c) follows since the RHS is maximized for Λ_ii = P_th/N for all i. For Alice to ensure (8), we need the RHS of (78) to be less than or equal to 2δ².
After some manipulation, if Alice sets
P_th ≤ √2 N σ_w² δ / (γ_w √(n N_w)),   (79)
we can verify that (8) is satisfied. Now let Alice set Pth for (79) to be met with equality. Given that choice of Pth , the LPD
constraint is met and the rest of the problem is that of choosing the input distribution to maximize the achievable rate. The
solution of the problem is then the SVD precoding with conventional water filling [12], [17] as follows:
Λ_ii = (µ − σ_b² λ_i^{−1}(W_b))⁺ for 1 ≤ i ≤ N,   Λ_ii = 0 for N < i ≤ N_a,   (80)
where N = min{N_a, N_b}, λ_i is the ith non-zero eigenvalue of W_b and x⁺ = max{0, x}. Further, µ is a constant chosen to satisfy the power constraint tr{Λ} = P_th. Accordingly, the following rate is achievable over the Alice-to-Bob channel:
R_pd(δ) = Σ_{i=1}^{N} log( 1 + (µ − σ_b² λ_i^{−1}(W_b))⁺ λ_i(W_b)/σ_b² ) = Σ_{i=1}^{N} [ log( µ λ_i(W_b)/σ_b² ) ]⁺.   (81)
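The water level µ in (80) has no closed form, but it is easy to compute numerically. The sketch below (ours) bisects on µ so that the allocated power sums to P_th and then reports the corresponding rate in the spirit of (81); the eigenvalues and the power budget are placeholder values.

import numpy as np

def water_fill(eig_Wb, sigma_b2, P_th, iters=100):
    # Bisection on the water level mu so that sum_i (mu - sigma_b2/lambda_i)^+ = P_th.
    noise = sigma_b2 / eig_Wb
    lo, hi = 0.0, noise.max() + P_th              # the correct mu lies in this interval
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise, 0.0).sum() < P_th:
            lo = mu
        else:
            hi = mu
    alloc = np.maximum(mu - noise, 0.0)
    rate = np.sum(np.log2(1.0 + alloc * eig_Wb / sigma_b2))
    return alloc, rate

eig_Wb = np.array([1e-3, 5e-4, 1e-4])             # assumed non-zero eigenvalues of Wb
alloc, rate = water_fill(eig_Wb, sigma_b2=1e-2, P_th=2.0)
print("per-mode power:", alloc, "| total:", alloc.sum(), "| rate [bits/use]:", rate)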
However, it is technically difficult to expand (81) to check the applicability of the square-root law. Therefore, we obtain an achievable rate assuming that Alice splits P_th equally across the active eigenmodes of her channel to Bob. Note that this rate is indeed achievable since it is less than or equal to (81). Also note that, when the Alice-to-Bob channel is well conditioned, the power allocation in (80) turns into equal power allocation. Then, the following rate is achievable:
R(δ) = Σ_{i=1}^{N} log( 1 + √2 σ_w² δ λ_i(W_b) / (σ_b² γ_w √(n N_w)) ).   (82)
Now suppose that Alice uses a code of rate R̂ ≤ R(δ), then, Bob can obtain
n R̂ ≤ n Σ_{i=1}^{N} log( 1 + √2 σ_w² δ λ_i(W_b) / (σ_b² γ_w √(n N_w)) ) ≤(a) Σ_{i=1}^{N} √(n/N_w) √2 σ_w² δ λ_i(W_b) / (σ_b² γ_w ln 2)   (83)
bits in n independent channel uses, where (a) follows from ln(1 + x) ≤ x; note that the inequality is met with equality for sufficiently large n. Now, assume that λ_i(W_b) = λ_b for all 1 ≤ i ≤ N, i.e., Bob's channel is well conditioned. Then, Bob can obtain
n R(δ) ≤ N √(n/N_w) √2 σ_w² δ λ_b / (σ_b² γ_w ln 2)   (84)
bits in n independent channel uses since the inequality is met with equality for sufficiently large n.
Converse. To show the converse, we assume the most favorable scenario for the Alice-to-Bob channel, namely that H_b is well conditioned. That is because the rate Alice can achieve over a well-conditioned channel to Bob is an upper bound on what can be achieved over any other channel of the same Frobenius norm [17]. Then, Alice will split her power equally across the active eigenmodes of her channel to Bob. Now, let us choose ξ ≥ 1 such that
log|I_{N_a} + (γ_w/σ_w²) Q| − N_w ≥ tr{(γ_w/σ_w²) Q} − ξ N_w,   (85)
and note that for small D, ξ is a function of δ that approaches 1 as δ → 0. Hence, combining (71) with (85) we obtain:
D ≥(a) tr{(γ_w/σ_w²) Q} + tr{(I_{N_a} + (γ_w/σ_w²) Q)^{−1}} − ξ N_w
  =(b) N_w [ γ_w P_th/(N σ_w²) + 1/(1 + γ_w P_th/(N σ_w²)) − ξ ].   (86)
Following the same steps as in the achievability proof, we can ensure that
P_th ≤ √2 N ξ σ_w² δ / (γ_w √(n N_w)),   (87)
otherwise, Alice cannot ensure that (8) is satisfied. Now let Alice set P_th equal to the RHS of (87). Then, we can verify that
C_pd(δ) ≤ Σ_{i=1}^{N} log( 1 + √2 ξ σ_w² δ λ_b / (σ_b² γ_w √(n N_w)) ).   (88)
Now suppose that Alice uses a code of rate R(δ) ≤ Cpd (δ), then, Bob can obtain
n R(δ) ≤ n N log( 1 + √2 ξ σ_w² δ λ_b / (σ_b² γ_w √(n N_w)) ) ≤ N √(n/N_w) √2 ξ σ_w² δ λ_b / (σ_b² γ_w ln 2)   (89)
(89)
APPENDIX G
PROOF OF THEOREM 5
We solve (34) for W_w = W*_w. Without loss of generality, assume that N_w ≥ N_a, hence W*_w = γ_w I_{N_w}. For N_w < N_a, we can set γ_w = 0 for the N_a − N_w minimum eigenvalues of Q*. Now observe that
|I + W_b Q| ≤ Π_{i=1}^{N_a} (1 + λ_i(W_b) λ_i(Q)),   (90)
with equality if and only if W_b and Q have the same eigenvectors; note that this choice does not affect (71) since W*_w is isotropic. Hence, the eigenvectors of Q* are the same as the eigenvectors of W_b, which are the same as the right singular vectors of H_b.
Now, we form the following Lagrange dual problem:
L = log|I_{N_a} + (1/σ_b²) W_b Q| + λ(tr(Q) − P) − tr(MQ) + η( log|I_{N_a} + (1/σ_w²) W_w Q| − 2δ²/n ),   (91)
where λ, η ≥ 0 are the Lagrange multipliers that penalize violating the power and LPD constraints, respectively, and M ⪰ 0 penalizes the violation of the constraint Q ⪰ 0. The associated KKT conditions can be expressed as
λ, η ≥ 0,   λ(tr(Q) − P) = 0,   MQ = 0,   Q ⪰ 0,   M ⪰ 0,   tr(Q) ≤ P,   (92)
where the equality constraints in (92) are the complementary slackness conditions. Note that (34) is not a concave problem in general. Thus, the KKT conditions are not sufficient for optimality. Yet, since the constraint set is compact and convex and the objective function is continuous, the KKT conditions are necessary for optimality. Hence, we proceed by finding the stationary points of the gradient of the dual Lagrange problem in the direction of Q and obtain the stationary points that solve the KKT conditions. By inspecting the objective function at these points, the global optimum can be identified. To identify the stationary
points of the Lagrangian (91), we get its gradient with respect to Q as follows:
∇_Q L = −(I_{N_a} + (1/σ_b²) W_b Q)^{−1} W_b/σ_b² + λ I_{N_a} − M + (η γ_w/σ_w²)(I_{N_a} + (γ_w/σ_w²) Q)^{−1}
  = −(I_{N_a} + (1/σ_b²) W_b Q)^{−1} W_b/σ_b² + λ I_{N_a} − M + η( σ_w²/γ_w I_{N_a} + Q )^{−1}.   (93)
Assume, without loss of generality, that Q ≻ 0; then, from MQ = 0, it follows that M = 0. Now, from ∇_Q L = 0 we obtain
λ I_{N_a} = (σ_b² W_b^{−1} + Q)^{−1} − η( σ_w²/γ_w I_{N_a} + Q )^{−1}.   (94)
Since Q* and W_b have the same eigenvectors, the eigenvalues Λ_ii of Q* can be found from
λ = (σ_b² λ_i^{−1}(W_b) + Λ_ii)^{−1} − η( σ_w²/γ_w + Λ_ii )^{−1},   (95)
as required.
APPENDIX H
PROOF OF THEOREM 6
Achievability. We observe that
log|I_{N_a} + (1/σ_w²) W_w Q| ≤(a) tr{(1/σ_w²) W_w Q} = Σ_{i=1}^{M} γ_w Λ_ii/σ_w² ≤(b) P_th M γ_w / (N σ_w²),   (96)
where (a) follows from the inequality log|A| ≤ tr{A − I}, and (b) follows since the RHS is maximized for Λ_ii = P_th/N for all i. For Alice to ensure that the constraint in (34) is satisfied, she needs the RHS of (96) to be less than or equal to 2δ²/n. Thus, Alice needs
P_th ≤ 2N σ_w² δ² / (γ_w n M).   (97)
Now let Alice set Pth for (97) to be met with equality. Given that choice of Pth , the LPD constraint is met and the rest of
the problem is that of choosing the input distribution to maximize the achievable rate. The solution of the problem is then the
SVD precoding with conventional water filling [12], [17] as follows:
Λ_ii = (µ − σ_b² λ_i^{−1}(W_b))⁺ for 1 ≤ i ≤ N,   Λ_ii = 0 for N < i ≤ N_a,   (98)
where λ_i is the ith non-zero eigenvalue of W_b. Further, µ is a constant chosen to satisfy the power constraint tr{Λ} = P_th. Accordingly, the following rate is achievable over the Alice-to-Bob channel:
R_pd(δ) = Σ_{i=1}^{N} log( 1 + (µ − σ_b² λ_i^{−1}(W_b))⁺ λ_i(W_b)/σ_b² ) = Σ_{i=1}^{N} [ log( µ λ_i(W_b)/σ_b² ) ]⁺.   (99)
However, it is technically difficult to expand (99) to check the applicability of the square-root law. Therefore, we obtain an
achievable rate assuming that Alice splits Pth equally across active eigenmodes of her channel to Bob. Note that, this rate is
indeed achievable since it is less than or equal to (99). Also note that, when Alice to Bob channel is well conditioned, the
power allocation in (98) turns into equal power allocation. Then, the following rate is achievable:
R(δ) = Σ_{i=1}^{N} log( 1 + 2 σ_w² δ² λ_i(W_b) / (σ_b² γ_w n M) ).   (100)
Now suppose that Alice uses a code of rate R̂ ≤ R(δ), then, Bob can obtain
n R̂ ≤ n Σ_{i=1}^{N} log( 1 + 2 σ_w² δ² λ_i(W_b) / (σ_b² γ_w n M) ) ≤(a) Σ_{i=1}^{N} 2 σ_w² δ² λ_i(W_b) / (M σ_b² γ_w ln 2)   (101)
bits in n independent channel uses, where (a) follows from ln(1 + x) ≤ x; note that the inequality is met with equality for sufficiently large n. Now, assume that λ_i(W_b) = λ_b for all 1 ≤ i ≤ N, i.e., Bob's channel is well conditioned. Then, Bob can obtain
n R(δ) ≤ 2 N σ_w² δ² λ_b / (M σ_b² γ_w ln 2)   (102)
bits in n independent channel uses since the inequality is met with equality for sufficiently large n.
Converse. To show the converse, we assume the most favorable scenario for the Alice-to-Bob channel, namely that H_b is well conditioned. Then, Alice will split her power equally across the active eigenmodes of her channel to Bob. Now,
log|I_{N_a} + (1/σ_w²) W_w Q| ≥(a) tr{ I_{N_a} − (I_{N_a} + (1/σ_w²) W_w Q)^{−1} } = Σ_{i=1}^{N_a} [ 1 − 1/(1 + γ_w P_th/(N σ_w²)) ] = M γ_w P_th / (N σ_w² + γ_w P_th),   (103)
where (a) follows from the inequality log|A| ≥ tr{I − A^{−1}}. Following the same steps as in the achievability proof, we can ensure that
P_th ≤ 2 ξ N σ_w² δ² / (γ_w n M),   (104)
where ξ = nM/(nM − 2δ²) > 1; otherwise, Alice cannot meet the LPD constraint. Now let Alice set P_th equal to the RHS of (104). Then, we can verify that
C_pd(δ) ≤ Σ_{i=1}^{N} log( 1 + 2 ξ σ_w² δ² λ_b / (σ_b² γ_w n M) ).   (105)
Now suppose that Alice uses a code of rate R(δ) ≤ Cpd (δ), then, Bob can obtain
n R(δ) ≤ n N log( 1 + 2 ξ σ_w² δ² λ_b / (σ_b² γ_w n M) ) ≤ 2 N ξ σ_w² δ² λ_b / (M σ_b² γ_w ln 2)   (106)
bits in n independent channel uses since the inequality is met with equality for sufficiently large n.
Some HCI Priorities for
GDPR-Compliant Machine Learning
Michael Veale
University College London
London, United Kingdom
m.veale@ucl.ac.uk
Reuben Binns
Max Van Kleek
University of Oxford
Oxford, United Kingdom
reuben.binns@cs.ox.ac.uk
emax@cs.ox.ac.uk
Abstract
In this short paper, we consider the roles of HCI in enabling
the better governance of consequential machine learning
systems using the rights and obligations laid out in the
recent 2016 EU General Data Protection Regulation
(GDPR)—a law which involves heavy interaction with
people and systems. Focussing on those areas that relate
to algorithmic systems in society, we propose roles for HCI
in legal contexts in relation to fairness, bias and
discrimination; data protection by design; data protection
impact assessments; transparency and explanations; the
mitigation and understanding of automation bias; and the
communication of envisaged consequences of processing.
Introduction
The General Data Protection Regulation: An Opportunity for the CHI Community?
(CHI-GDPR 2018), Workshop at ACM CHI’18, 22 April 2018, Montréal, Canada
The 2016 EU General Data Protection Regulation (GDPR)
is making waves. With all personal data relating to EU
residents or processed by EU companies within scope, it
seeks to strengthen the rights of data subjects and the
obligations of data controllers (see definitions in the box
overleaf) in an increasingly data-laden society, newly
underpinned with an overarching obligation of data
controller accountability as well as hefty maximum fines. Its
articles introduce new provisions and formalise existing
rights clarified by the European Court of Justice (the Court),
such as the “right to be forgotten”, as well as strengthening
those already present in the 1995 Data Protection Directive
(DPD).
Data Subjects & Controllers
EU DP law applies whenever personal data is processed either in the Union, or outside the Union relating to an EU resident. Personal data is defined by how much it can render somebody identifiable—going beyond email, phone number, etc., to include dynamic IP addresses, browser fingerprints or smart meter readings. The individual the data relates to is called the data subject. The organisation(s) who determine 'the purposes and means of the processing of personal data' are data controllers. Data subjects have rights over personal data, such as rights of access, erasure, objection to processing, and portability of data elsewhere. Data controllers are subject to a range of obligations, such as ensuring confidentiality, notifying if data is breached, and undertaking risk assessments. Additionally, they must only process data where they have a legal ground—such as consent—to do so, for a specified and limited purpose, and a limited period of storage.
The GDPR has been turned to by scholars and activists as
a tool for “algorithmic accountability” in a society where
machine learning (ML) seems to be increasingly important.
Machine learning models—statistical systems which use
data to improve their performance on particular tasks—are
the approach of choice to generate value from the ‘data
exhaust’ of digitised human activities. Critics, however, have
framed ML as powerful, opaque, and with potential to
endanger privacy [2], equality [10] and autonomy [20].
While the GDPR is intended to govern personal data rather
than ML, there are a range of included rights and
obligations which might be useful to exert control over
algorithmic systems [14].
Given that GDPR rights involve both individual
data-subjects and data controllers (see sidebar) interfacing
with computers in a wide variety of contexts, it strongly
implicates another abbreviation readers will likely find
familiar: Human–Computer Interaction (HCI). In this short
paper, we outline, non-exhaustively of course, some of the
crossovers between the GDPR provisions, HCI and ML that
appear most salient and pressing given current legal, social,
and technical debates. We group these in two broad
categories: those which primarily concern the building and
training of models before deployment, and those which
primarily concern the post-deployment application of
models to data subjects in particular situations.
HCI, GDPR and Model Training
An increasing proportion of collected personal data1 is used
to train machine learning systems, which are in turn used to
1
Note that the GDPR defines personal data broadly—including things
like dynamic IP addresses and home energy data—as opposed to the predominantly American notion of personally identifiable information (PII) [25].
make or support decisions in a variety of fields. As model
training with personal data is considered data processing
(assuming data is not solidly ‘anonymised’), the GDPR does
govern it to a varying degree. In this section, we consider to
what extent HCI might play a role in promoting the
governance of model training under the GDPR.
Fairness, discrimination and ‘special category’ data
Interest in unfair and/or illegal data-driven discrimination
has concerned researchers, journalists, pundits and
policy-makers [17, 3], particularly as the ease of
transforming seemingly non-sensitive data into potentially
damaging, private insights has become clear [9]. Most focus
on how to govern data (both in Europe and elsewhere
broadly [19]) has been centred on data protection, which is
not an anti-discrimination law and does not feature
anti-discrimination as a core concept. Yet the GDPR does
contain provisions which concern particularly sensitive
attributes of data.
Several “special” types of data are given higher protection in
the GDPR. The 1995 Data Protection Directive (art 8)
prohibits processing of data revealing racial or ethnic
origin, political opinions, religious or philosophical
beliefs and trade-union membership, in addition to data
concerning health or sex life. The GDPR (art 9(1)) adds
genetic and biometric data (the latter for the purposes of
identification), as well as clarifying sex life includes
orientation, to create 8 ‘special categories’ of data. This list
is similar, but not identical, to the ‘protected characteristics’
in many international anti-discrimination laws. Compared to
the UK’s Equality Act 2010, the GDPR omits age, sex and
marital status but includes political opinions, trade union
membership, and health data more broadly.
The collection, inference and processing of special category
data triggers both specific provisions (e.g. arts 9, 22) and
specific responsibilities (e.g. Data Protection Impact
Assessments, art 35 and below), as well as generally
heightening the level of risk of processing and therefore the
general responsibilities of a controller (art 24). Perhaps the
most important difference is that data controllers cannot rely
on their own legitimate interests to justify the processing of
special category data, which usually will mean they will
have to seek explicit, specified consent for the type of
processing they intend—which they may not have done for
their original data, and may not be built into their legal data
collection model.
Given that inferred special category data is also
characterised as special category data [28], there are
important questions around how both controllers and
regulators recognise that such inference is or might be
happening. Naturally, if a data controller trains a supervised
model for the purpose of inferring a special category of data,
this is quite a simple task (as long as they are honest about
it). Yet when they are using latent characteristics, such as
through principal components analysis, or features that are
embedded within a machine learning model, this becomes
more challenging. In particular it has been repeatedly
demonstrated that biases connected to special category
data can appear in trained systems even where those
special categories are not present in the datasets being
used [9].
The difficulty of this task is heightened by how the controller
is unlikely to possess ‘ground truth’ special category data in
order to assess what it is they are picking up. HCI might
play an important role here in establishing what has been
described as ‘exploratory fairness analysis’ [27]. The task is
to understand potential patterns of discrimination, or to
identify certain unexpected but sensitive clusters, with only
partial additional information about the participants. A similar
proposal (and prototype of) a visual system, albeit one
assuming full information, has been proposed by
discrimination-aware data mining researchers concerned
that the formal statistical criteria for non-discrimination
established by researchers may not connect with ideas of
fairness in practice [4, 5]. If we do indeed also know
unfairness when we see it, exploratory visual analysis may
be a useful tool. A linked set of discussions have been
occurring in the information visualisation community around
desirable characteristics of feminist data visualisation, which
connects feminist principles around marginalisation and
dominance in the production of knowledge to information
design [11]. Finally, visual tools which help identify
misleading patterns in data, such as instances of Simpson’s
paradox (e.g. [24]), may prove useful in confirming apparent
disparities between groups. Building and testing interfaces
which help identify sensitive potential correlations and ask
critical questions around bias and discrimination in the data
is an important prerequisite to rigorously meeting
requirements in the GDPR.
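As a very small illustration of what such an exploratory analysis might look like in code (ours, with synthetic data and made-up column names), the sketch below compares outcome rates between two groups overall and within a stratifying variable—the kind of cross-tabulation that can surface disparities or Simpson's-paradox-style reversals before any model is trained.

import pandas as pd

# Synthetic toy data: group B looks better overall, but group A does at
# least as well within every stratum -- a Simpson's-paradox-style reversal.
df = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
    "stratum": ["x", "y", "y", "y", "y", "y", "x", "x", "x", "x", "y", "y"],
    "outcome": [ 1,   1,   0,   0,   0,   0,   1,   1,   1,   0,   0,   0 ],
})

print("overall positive rate by group:")
print(df.groupby("group")["outcome"].mean())
print()
print("positive rate by group within each stratum:")
print(df.groupby(["stratum", "group"])["outcome"].mean().unstack())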
Upstream provisions: Data Protection by Design (DPbD) and
Data Protection Impact Assessments (DPIAs)
The GDPR contains several provisions intended to move
considerations of risks to data subjects’ rights and freedoms
upstream into the hands of designers. Data Protection by
Design (DPbD), a close cousin of privacy by design, is a
requirement under the GDPR and means that controllers
should use organisational and technical measures to imbue
their products and processes with data protection
principles [8]. Data Protection Impact Assessments (DPIAs)
have a similar motivation [6]. Whenever controllers have
reason to believe that a processing activity brings high risks,
they must undertake continuous, documented analysis of
these, as well as any measures they are taking to mitigate.
The holistic nature of both DPbD and DPIAs is emphasised
in both the legal text and recent guidance. These are
creative processes mixing anticipation and foresight with
best practice and documentation. Some HCI research has
already addressed this in particular. Luger et al. [23] use
ideation cards to engage designers with regulation and
co-produce “data protection heuristics”.2 Whether DPIA
aides can be built into existing systems and software in a
user-centric way is an important area for future exploration.
Furthermore, many risks within data, such as bias, poor
representation, or the picking up of private features, may be
unknown to data controllers. Identifying these is the point of
a DPIA, but subtle issues are unlikely to leap out of the
page. In times like this, it has been suggested that a shared
knowledgebase [27] could be a useful resource, where
researchers and data controllers (or their modelling staff)
could log risks and issues in certain types of data from their
own experiences, creating a resource which might serve
useful in new situations. For example, such a system might
log found biases in public datasets (particularly when linked
to external data) or in whole genres of data, such as Wi-Fi
analytics or transport data. Such a system might be a useful
starting point for considering issues that may otherwise go
undetected, and for supporting low-capacity organisations
in their responsible use of analytics. From an HCI
perspective though, the design of such a system presents
significant challenges. How can often nuanced biases be
recorded and communicated both clearly and in such a way
that they generalise across applications? How might
individuals easily search a system for issues in their own
datasets, particularly when they might have a very large
number of variables in a combination the system has not
seen previously? Making this kind of knowledge accessible
to practitioners seems promising, but daunting.
2
The cards are downloadable at https://perma.cc/3VBQ-VVPQ.
HCI, GDPR and Model Application
The GDPR, and data protection law in general, was not
intended to significantly govern decision-making. Already a
strange law in the sense that it poses transparency
requirement that applies to the public and private sectors
alike, it is also a Frankenstein’s monster–style result
culminating from the melding of various European law and
global principles that preceded it [16].
Modes of Transparency
While transparency is generally spoken of as a virtue, the
causal link between it and better governance is rarely
simple or clear. A great deal of focus has been placed on
the so-called “right to an explanation”, where a short paper
at a machine learning conference workshop [18] gained
sudden notoriety, triggering reactions from lawyers and
technologists noting that the existence and applicability of
such a right was far from simple [29, 14]. Yet the
individualised transparency paradigm has rarely provided
much practical use for data subjects in their day-to-day lives
(consider the burden of ‘transparent’ privacy policies).
Consequently, HCI provides a useful place to start when
considering how to make the limited GDPR algorithmic
transparency provisions useful governance tools.
There are different places in which algorithmic transparency
rights can be found in the GDPR [29]. Each bring different
important HCI challenges.
Meaningful information about the logic of processing
Articles 13–14 oblige data controllers to provide information
at the time data is collected around the logics of certain
automated decision systems that might be applied to this
data. Current regulatory guidance [1] states that there is no
obligation to tailor this information to the specific situation of
a data subject (other than if they might be part of a
vulnerable group, like children, which might need further
support to make the information meaningful), although as
many provisions in data protection law, the Court may
interpret this more broadly or narrowly when challenged.
This points to an important HCI challenge in making (or
visualising) such general information, but with the potential
for specific relevance to individuals.
Right to be informed In addition, there is a so-called
‘right to be informed’ of automated decision-making [29]:
how might an interface seamlessly flag to users when a
potentially legally relevant automated decision is being
made? This is made more challenging by the potential for
adaptive interfaces or targeted advertising to meet the
criteria of a ‘decision’. In these cases, it is unclear at what
point the ‘decision’ is being made. Decisions might be seen
in the design process, or adaptive interfaces may be seen
as 'deciding' which information to provide or withhold [14].
Exercise of data protection rights is different in further ways
in ambient environments [13], as smart cities and ambient
computing may bring significant challenges, if, for example,
they are construed as part of decision-making
environments. Existing work in HCI has focussed on the
difficulties in identifying “moments of consent” in ubiquitous
computing [22, 21]. Not only is this relevant when consent is
the legal basis for an automated decision, but additional
consideration will be needed in relation to what equivalent
“moments” of objection might look like. Given that moments
to object likely outnumber moments to consent, this might
pose challenges.
A right to an explanation? An explicit “right to an
explanation” of specific decisions, after they have happened,
sits in a non-binding recital in the GDPR [29], and thus its
applicability and enforceability depends heavily on
regulators and the Court. However, there is support for a
parallel right in varying forms in certain other laws, such as
French administrative law or the Council of Europe
Convention 108 [15], and HCI researchers have already
been testing different explanation facilities proposed by
machine learning researchers in qualitative and quantitative
settings to see how they compare in relation to different
notions of procedural justice [7]. Further research on
explanation facilities in the wild would be very welcome,
given that most explanation facilities to date have focussed
on the user of a decision-support system rather than an
individual subject to an automated decision.
Mitigating Automation Bias
A key trigger condition for the automated decision-making
provisions in the GDPR (art 22) [14] centres on the degree
of automation of the process. Significant decisions “based
solely on automated processing” require at least consent, a
contract or a basis in member state law. Recent regulatory
guidance indicates that there must be “meaningful” human
input undertaken by somebody with “authority and
competence” who does not simply “routinely apply” the
outputs of the model in order to be able to avoid
contestation or challenge [28]. Automation bias has long
been of interest to scholars of human factors in
computing [26, 12] and the GDPR provides two core
questions for HCI in this vein.
Firstly, this setup implies that systems that are expected to
outperform humans must always be considered “solely”
automated [28]. If a decision-making system is expected to
legitimately outperform humans, then meaningful human input becomes
very difficult. Any routine disagreement would be at best
arbitrary and at worst, harmful. This serves as yet another
(legal) motivating factor to create systems where human
users can augment machine results. Even if this proves
difficult, when users contest an automated decision under
the GDPR, they have a right to human review. Interfaces
need to ensure that even where models may be complex
and high-dimensional, decision review systems are rigorous
and themselves have “meaningful” human input—or else
these reviewed decisions are equally open to contestation.
Secondly, how might a data controller or a regulator
understand whether systems have “meaningful” human
input or not, in order to either obey or enforce the law? How
might this input be justified and documented in a useful and
user-friendly way which could potentially be provided to the
subject of the decision? Recent French law does oblige this
in some cases: in the case of algorithmically-derived
administrative decisions, information should be provided to
decision-subjects on the “the degree and the mode of
contribution of the algorithmic processing to the
decision-making” [15]. Purpose-built interfaces and
increased knowledge from user studies both seem needed
for the aim of promoting meaningful, accountable input.
Communicating Envisaged Consequences
Where significant, automated decision-making using
machine learning is expected, the information rights in the
GDPR (arts 13–15) provide that a data subject should be
provided with the “envisaged consequences” of such
decision for her. What this means is far from clear. Recent
regulatory guidance provides only the example of giving
a data subject applying for insurance premiums an app to
demonstrate the consequences of dangerous driving [1].
Where users are consenting to complex online
personalisation which could potentially bring significant
effects to their life, such as content delivery which might
lead to echo chambers or “filter bubbles”, it is unclear how
complex “envisaged consequences” might be best
displayed in order to promote user autonomy and choice.
Concluding remarks
HCI is well-placed to help enable the regulatory
effectiveness of the GDPR in relation to algorithmic fairness
and accountability. Here we have touched on different
points where governance might come into play—model
training and model application—but also different modes of
governance. Firstly, HCI might play a role in enabling
creative, rigorous, problem solving practices within
organisations. Many mechanisms in the GDPR, such as
data protection by design and data protection impact
assessments, will depend heavily on the communities,
practices and technologies that develop around them in
different contexts. Secondly, HCI might play a role in
enabling controllers to do particular tasks better. Here, we
discussed the potential for exploratory data analysis tools,
such as detecting special category data even when it was
not explicitly collected. Finally, it might help data subjects
exercise their rights better. It appears especially important
to develop new modes and standards for transparency,
documentation of human input, and communication of tricky
notions such as “envisaged consequences”.
As the GDPR often defines data controllers’ obligations as a
function of “available technologies” and “technological
developments”, it is explicitly enabled and strengthened by
computational systems and practices designed with its
varied provisions in mind. Many parts of the HCI community
have already been building highly relevant technologies and
practices that could be applied in this way. Further
developing these with a regulatory focus might be
transformative in and of itself—and it is something we
believe should be promoted in this field and beyond.
REFERENCES
1. Article 29 Data Protection Working Party. 2018.
Guidelines on Automated individual decision-making
and Profiling for the purposes of Regulation 2016/679,
wp251rev.01.
2. Solon Barocas and Helen Nissenbaum. 2014. Big
Data’s End Run Around Procedural Privacy Protections.
Commun. ACM 57, 11 (2014), 31–33.
3. Solon Barocas and Andrew D Selbst. 2016. Big Data’s
Disparate Impact. California Law Review 104 (2016),
671–732.
4. Bettina Berendt and Sören Preibusch. 2012. Exploring
discrimination: A user-centric evaluation of
discrimination-aware data mining. In 12th IEEE
International Conference on Data Mining Workshops
(ICDMW). 344–351.
5. Bettina Berendt and Sören Preibusch. 2014. Better
decision support through exploratory
discrimination-aware data mining: Foundations and
empirical evidence. Artificial Intelligence and Law 22, 2
(2014), 175–209.
6. Reuben Binns. 2017. Data protection impact
assessments: A meta-regulatory approach.
International Data Privacy Law 7, 1 (2017), 22–35. DOI:
http://dx.doi.org/10.1093/idpl/ipw027
7. Reuben Binns, Max Van Kleek, Michael Veale, Ulrik
Lyngs, Jun Zhao, and Nigel Shadbolt. 2018. ‘It’s
Reducing a Human Being to a Percentage’; Perceptions
of Justice in Algorithmic Decisions. In CHI’18. DOI:
http://dx.doi.org/10.1145/3173574.3173951
8. Lee A Bygrave. 2017. Data Protection by Design and by
Default: Deciphering the EU’s Legislative Requirements.
Oslo Law Review 1, 02 (2017), 105–120.
9. Toon Calders and Indrė Žliobaitė. 2012. Why Unbiased
Computational Processes Can Lead to Discriminative
Decision Procedures. In Discrimination and Privacy in
the Information Society, Bart Custers, Toon Calders,
Bart Schermer, and Tal Zarsky (Eds.). Springer, Berlin,
Heidelberg, 43–59.
10. Bart Custers. 2012. Data Dilemmas in the Information
Society: Introduction and Overview. In Discrimination
and Privacy in the Information Society, Bart Custers,
Toon Calders, Bart Schermer, and Tal Zarsky (Eds.).
Springer, 3–25.
11. Catherine D’Ignazio and Lauren F Klein. 2016. Feminist
data visualization. In Workshop on Visualization for the
Digital Humanities (VIS4DH), IEEE VIS 2016.
12. Mary T Dzindolet, Scott A Peterson, Regina A
Pomranky, Linda G Pierce, and Hall P Beck. 2003. The
role of trust in automation reliance. International Journal
of Human-Computer Studies 58, 6 (2003), 697–718.
13. Lilian Edwards. 2016. Privacy, security and data
protection in smart cities: A critical EU law perspective.
Eur. Data Prot. L. Rev. 2 (2016), 28. DOI:
http://dx.doi.org/10.2139/ssrn.2711290
14. Lilian Edwards and Michael Veale. 2017. Slave to the
Algorithm? Why a ‘Right to an Explanation’ is Probably
Not The Remedy You Are Looking For. Duke Law &
Technology Review 16, 1 (2017), 18–84. DOI:
http://dx.doi.org/10.2139/ssrn.2972855
15. Lilian Edwards and Michael Veale. 2018. Enslaving the
algorithm: From a ‘right to an explanation’ to a ‘right to
better decisions’? IEEE Security & Privacy (2018). DOI:
http://dx.doi.org/10.2139/ssrn.3052831
16. Gloria González Fuster. 2014. The emergence of
personal data protection as a fundamental right of the
EU. Springer.
17. Oscar H Gandy. 2009. Coming to Terms with Chance: Engaging Rational Discrimination and Cumulative Disadvantage. Routledge, London.
18. Bryce Goodman and Seth Flaxman. 2016. European Union regulations on algorithmic decision-making and a “right to explanation”. In 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, NY.
19. Graham Greenleaf. 2018. ‘European’ Data Privacy Standards Implemented in Laws Outside Europe. Privacy Laws & Business International Report 149 (2018), 21–23.
20. Mireille Hildebrandt. 2008. Defining Profiling: A New Type of Knowledge? In Profiling the European Citizen: Cross-Disciplinary Perspectives, Mireille Hildebrandt and Serge Gutwirth (Eds.). Springer, 17–45.
21. Ewa Luger and Tom Rodden. 2013a. An Informed View on Consent for UbiComp. In UbiComp ’13. 529–538. DOI: http://dx.doi.org/10.1145/2493432.2493446
22. Ewa Luger and Tom Rodden. 2013b. Terms of agreement: Rethinking consent for pervasive computing. Interacting with Computers 25, 3 (2013), 229–241.
23. Ewa Luger, Lachlan Urquhart, Tom Rodden, and Michael Golembewski. 2015. Playing the legal card: Using ideation cards to raise data protection issues within the design process. 457–466. DOI: http://dx.doi.org/10.1145/2702123.2702142
24. Justin Matejka and George Fitzmaurice. 2017. Same stats, different graphs: generating datasets with varied appearance and identical statistics through simulated annealing. In CHI’17. 1290–1294. DOI: http://dx.doi.org/10.1145/3025453.3025912
25. Paul M Schwartz and Daniel J Solove. 2011. The PII problem: Privacy and a new concept of personally identifiable information. New York University Law Review 86 (2011), 1814.
26. Linda J Skitka, Kathleen L Mosier, and Mark Burdick. 1999. Does automation bias decision-making? International Journal of Human-Computer Studies 51 (1999), 991–1006. DOI: http://dx.doi.org/10.1006/ijhc.1999.0252
27. Michael Veale and Reuben Binns. 2017. Fairer
machine learning in the real world: Mitigating
discrimination without collecting sensitive data. Big
Data & Society 4, 2 (2017). DOI:
http://dx.doi.org/10.1177/2053951717743530
28. Michael Veale and Lilian Edwards. 2018. Clarity,
Surprises, and Further Questions in the Article 29
Working Party Draft Guidance on Automated
Decision-Making and Profiling. Computer Law &
Security Review 34, 2 (2018). DOI:
http://dx.doi.org/10.1016/j.clsr.2017.12.002
29. Sandra Wachter, Brent Mittelstadt, and Luciano Floridi.
2017. Why a right to explanation of automated
decision-making does not exist in the General Data
Protection Regulation. International Data Privacy Law
7, 2 (2017), 76–99.
Discrete Mathematics and Theoretical Computer Science
DMTCS vol. VOL:ISS, 2015, #NUM
arXiv:1603.03019v1 [] 8 Mar 2016
Reducing the generalised Sudoku problem to
the Hamiltonian cycle problem
Michael Haythorpe1
1
Flinders University, Australia
received 2016-03-09, revised xxxx-xx-xx, accepted yyyy-yy-yy.
The generalised Sudoku problem with N symbols is known to be NP-complete, and hence is equivalent to any other
NP-complete problem, even for the standard restricted version where N is a perfect square. In particular, generalised
Sudoku is equivalent to the, classical, Hamiltonian cycle problem. A constructive algorithm is given that reduces
generalised Sudoku to the Hamiltonian cycle problem, where the resultant instance of Hamiltonian cycle problem is
sparse, and has O(N 3 ) vertices. The Hamiltonian cycle problem instance so constructed is a directed graph, and so
a (known) conversion to undirected Hamiltonian cycle problem is also provided so that it can be submitted to the
best heuristics. A simple algorithm for obtaining the valid Sudoku solution from the Hamiltonian cycle is provided.
Techniques to reduce the size of the resultant graph are also discussed.
Keywords: Sudoku, NP-complete, Reduction, Hamiltonian cycle problem
1 Introduction
The generalised Sudoku problem is an NP-complete problem which, effectively, requests a Latin square
that satisfies some additional constraints. In addition to the standard requirement that each row and column
of the Latin square contains each symbol precisely once, Sudoku also demands block constraints. If there
are N symbols, the Latin square is of size N × N . If N is a perfect square, then the Latin square can be
divided into N regions of size √N × √N , called blocks. Then the block constraints demand that each of
these blocks also contain each of the symbols precisely once. Typically, the symbols in a Sudoku puzzle
are simply taken as the natural numbers 1 to N . In addition, Sudoku puzzles typically have fixed values
in some of the cells, which dramatically limits the number of valid solutions. If the fixed values are such
that only a unique solution remains, the Sudoku puzzle is said to be well-formed.
The standard version where N = 9 has, in recent years, become a common form of puzzle found
in newspapers and magazines the world over. Although variants of the problem have existed for over a
century, Sudoku in its current format is a fairly recent problem, first published in 1979 under the name
Number Place. The name Sudoku only came into existence in the 1980s. In 2003, the generalised Sudoku
problem was shown to be ASP-complete [12], which in turn implies that it is NP-complete. Hence, it is
theoretically as difficult as any problems in the set N P of decision problems for which a positive solution
can be certified in polynomial time. Note that although there are more general variants of Sudoku
(such as rectangular versions), the square variant described above where N is a perfect square suffices for
NP-completeness. Hence, for the remainder of this manuscript, it will be assumed that we are restricted
to considering the square variant.
Since being shown to be NP-complete, Sudoku has subsequently been converted to various NP-complete
problems, most notably constraint satisfaction [11], boolean satisfiability [8] and integer programming [2].
Another famous NP-complete problem is the Hamiltonian cycle problem (HCP), which is defined as follows. For a simple graph (that is, one containing no self-loops or multi-edges) containing vertex set V and
edge set E ⊆ V × V , determine whether any simple cycles containing all vertices in V exist in the graph.
Such cycles are called Hamiltonian cycles, and a graph containing at least one Hamiltonian cycle is called
Hamiltonian. Although HCP is defined for directed graphs, in practice most heuristics that actually solve
HCP are written for undirected graphs.
Since both Sudoku and HCP are NP-complete, it should be possible to reduce Sudoku to HCP. In this
manuscript, a constructive algorithm that constitutes such a reduction is given. The resultant instance of
HCP is a sparse graph of order O(N 3 ). If many values are fixed, it is likely that the resultant graph can be
made smaller by clever graph reduction heuristics; to this end, we apply a basic graph reduction heuristic
to two example Sudoku instances to investigate the improvement offered.
It should be noted that the reduction of NP-complete problems to HCP is an interesting but still largely
unexplored field of research. Being one of the classical NP-complete problems (indeed, one of the initial
21 NP-complete problems described by Karp [7]), HCP is widely studied and several very efficient algorithms for solving HCP exist. HCP is also an attractive target problem in many cases because the resultant
size of the instance is relatively small by comparison to other potential target problems. Indeed, the study
of which NP-complete problems provide the best target frameworks for reductions is an ongoing field of
research. For more on this topic, as well as examples of other reductions to HCP, the interested reader is
referred to [4, 3, 6, 5].
2 Conversion to HCP
At its core, a Sudoku problem with N symbols (which we will consider to be the natural numbers from
1 to N ) has three sets of constraints to be simultaneously satisfied.
1. Each of the N blocks must contain each number from 1 to N precisely once.
2. Each of the N rows must contain each number from 1 to N precisely once.
3. Each of the N columns must contain each number from 1 to N precisely once.
The variables of the problem are the N 2 cells, which can each be assigned any of the N possible values,
although some of the cells may have fixed values depending on the instance.
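To make the three constraint sets concrete, the following minimal Python sketch (not part of the original construction; the function name and grid representation are illustrative assumptions) checks whether a fully filled N × N grid satisfies them.

```python
from math import isqrt

def is_valid_sudoku(grid):
    """Check the block, row and column constraints for a filled N x N grid.

    `grid` is a list of N lists, each holding numbers from 1 to N.
    """
    n = len(grid)
    b = isqrt(n)                       # block side length; assumes N is a perfect square
    full = set(range(1, n + 1))
    rows = all(set(row) == full for row in grid)                              # constraint 2
    cols = all({grid[i][j] for i in range(n)} == full for j in range(n))      # constraint 3
    blocks = all(
        {grid[br * b + r][bc * b + c] for r in range(b) for c in range(b)} == full
        for br in range(b) for bc in range(b)
    )                                                                         # constraint 1
    return rows and cols and blocks
```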
In order to cast an instance of Sudoku as an instance of Hamiltonian cycle problem, we need to first
encode every possible variable choice as a subgraph. The idea will be that traversing the various subgraphs
in certain ways will correspond to particular choices for each of the variables. Then, we will link the
various subgraphs together in such a way that they can only be consecutively traversed if none of the
constraints are violated by the variable choices.
In the final instance of HCP that is produced, the vertex set V will comprise of the following, where a,
i, j and k all take values from 1 to N :
• A single starting vertex s and finishing vertex f
• Block vertices: N 2 vertices bak , corresponding to number k in block a
• Row vertices: N 2 vertices rik , corresponding to number k in row i
• End Row vertices: N vertices ti corresponding to row i
• Column vertices: N 2 vertices cjk corresponding to number k in column j
• End Column vertices: N vertices dj corresponding to column j
• Puzzle vertices: 3N 3 vertices xijkl corresponding to number k in position (i, j), for l = 1, 2, 3
• End Puzzle vertices: N 2 vertices vij corresponding to position (i, j)
• Duplicate Puzzle vertices: 3N 3 vertices yijkl corresponding to number k in position (i, j), for
l = 1, 2, 3
• End Duplicate Puzzle vertices: N 2 vertices wij corresponding to position (i, j)
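As a sanity check on this vertex inventory, the short Python sketch below (an illustrative addition; the dictionary keys are just mnemonic labels) tallies the classes listed above and compares the total against the count 6N³ + 5N² + 2N + 2 derived in Section 4.

```python
def vertex_count(N):
    """Tally the vertex classes listed above (names follow the paper's notation)."""
    counts = {
        "s and f":                    2,
        "block b_ak":                 N ** 2,
        "row r_ik":                   N ** 2,
        "end row t_i":                N,
        "column c_jk":                N ** 2,
        "end column d_j":             N,
        "puzzle x_ijkl":              3 * N ** 3,
        "end puzzle v_ij":            N ** 2,
        "duplicate puzzle y_ijkl":    3 * N ** 3,
        "end duplicate puzzle w_ij":  N ** 2,
    }
    total = sum(counts.values())
    assert total == 6 * N ** 3 + 5 * N ** 2 + 2 * N + 2   # the count given in Section 4
    return total

print(vertex_count(9))   # 4799 for the standard 9-symbol puzzle
```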
The graph will be linked together in such a way that any valid solution to the Sudoku puzzle will
correspond to a Hamiltonian cycle in the following manner.
1. The starting vertex s is visited first.
2. For each a and k, suppose number k is placed in position (i, j) in block a. Then, vertex bak is
visited, followed by all xijml for m ≠ k, followed by all yijml for m ≠ k. This process will ensure
constraint 1 is satisfied.
3. For each i and k, suppose number k is placed in position (i, j) in row i. Then, vertex rik is visited,
followed by xijk3 , xijk2 , xijk1 and then vij . If k = N (ie if i is about to be incremented or we are
finished step 3) then this is followed by ti . This process will ensure constraint 2 is satisfied.
4. For each j and k, suppose number k is placed in position (i, j) in column j. Then, vertex cjk is
visited, followed by yijk3 , yijk2 , yijk1 and then wij . If k = N (ie if j is about to be incremented or
we are finished step 4) then this is followed by dj . This process will ensure constraint 3 is satisfied.
5. The finishing vertex f is visited last and the Hamiltonian cycle returns to s.
What follows is a short description of how steps 1–5 are intended to work. A more detailed description
follows in the next section.
The idea of the above is that we effectively create two identical copies of the Sudoku puzzle. In step
2, we place numbers in the puzzles, which are linked together in such a way to ensure the numbers are
placed identically in both copies. Placing a number k into position (i, j), contained in block a, is achieved
by first visiting bak , and then proceeding to visit every puzzle vertex xijml except for when m = k,
effectively leaving the assigned number “open”, or unvisited. Immediately after visiting the appropriate
puzzle vertices, the exact same duplicate puzzle vertices yijml are visited as well, leaving the assigned
number unvisited in the second copy as well. Since each block vertex bak is only visited once, each number
is placed precisely once in each block, satisfying constraint 1. The hope is, after satisfying constraint 1,
that the row and column constraints have also been satisfied. If not, it will prove impossible to complete
steps 3 and 4 without needing to revisit a vertex that was visited in step 2.
In step 3, we traverse the row vertices one at a time. If number k was placed in position (i, j), then row
vertex rik is followed by the unvisited vertices xijk3 , xijk2 , xijk1 , and then by the end puzzle vertex vij .
Once all rik vertices have been traversed for a given i, we visit the end row vertex ti . Note that the three
x vertices visited for each i and k in step 3 are the three that were skipped in step 2. Therefore, every
puzzle vertex is visited by the time we finish traversing all the row vertices. However, if row i is missing
the number k, then there will be no available unvisited puzzle vertices to visit after rik , so this part of the
graph can only be traversed if all the row constraints are satisfied by the choices in step 2.
Step 4 evolves analogously to step 3, except for cjk instead of rik , yijkl instead of xijkl , wij instead of
vij and dj instead of ti . Hence, this part of the graph can only be traversed if all the column constraints
are also satisfied by the choices in step 2.
Assuming the graph must be traversed as described above, it is clear that all Hamiltonian cycles in the
resultant instance of HCP correspond to valid Sudoku solutions. In order to show this is the case, we first
describe the set of directed edges E in the graph. Note that in each of the following, if k + 1 or k + 2 are
bigger than N , they should be wrapped back around to a number between 1 and N by subtracting N . For
example, if k + 2 = N + 1 then it should be taken as 1 instead.
• (s , b11 ), (dN , f ) and (f , s)
• (bak , xi,j,(k+1),1 ) for all a, k, and (i, j) contained in block a
• (xijk1 , xijk2 ), (xijk2 , xijk1 ), (xijk2 , xijk3 ) and (xijk3 , xijk2 ) for all i, j, k
• (xijk3 , xi,j,(k+1),1 ) for all i, j, k
• (yijk1 , yijk2 ), (yijk2 , yijk1 ), (yijk2 , yijk3 ) and (yijk3 , yijk2 ) for all i, j, k
• (yijk3 , yi,j,(k+1),1 ) for all i, j, k
• (xijk3 , yi,j,(k+2),1 ) for all i, j, k
• (yijk3 , ba,k+2 ) for all i, j, and for k ≠ N − 1, where a is the block containing position (i, j)
• (yi,j,N −1,3 , ba+1,1 ) for all i, j except for the case where both i = N and j = N , where a is the
block containing position (i, j)
• (yN,N,N −1,3 , r11 )
• (rik , xijk3 ) for all i, j, k
• (xijk1 , vij ) for all i, j, k
• (vij , rik ) for all i, j, k
• (vij , ti ) for all i, j
• (ti , ri+1,1 ) for all i < N
• (tN , c11 )
• (cjk , yijk3 ) for all i, j, k
• (yijk1 , wij ) for all i, j, k
• (wij , cjk ) for all i, j, k
• (wij , dj ) for all i, j
• (dj , cj+1,1 ) for all j < N
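The following partial Python sketch (an illustrative assumption, not code from the paper) shows how a few of these edge families could be generated programmatically; wrap() implements the wrap-around rule for k + 1 and k + 2 described above, and block_of is a hypothetical helper returning the block containing a given position. Only the internal x-chains and the block-vertex edges are shown.

```python
def wrap(k, N):
    """Wrap k back into the range 1..N by subtracting N, as described above."""
    return k - N if k > N else k

def some_edges(N, block_of):
    """Generate a few of the directed edge families listed above (partial sketch).

    `block_of(i, j)` is assumed to return the block a containing position (i, j).
    Vertices are represented as tuples such as ('b', a, k) or ('x', i, j, k, l).
    """
    E = []
    rng = range(1, N + 1)
    for i in rng:
        for j in rng:
            a = block_of(i, j)
            for k in rng:
                # internal chain through x_ijk1, x_ijk2, x_ijk3 (both directions)
                E += [(('x', i, j, k, 1), ('x', i, j, k, 2)),
                      (('x', i, j, k, 2), ('x', i, j, k, 1)),
                      (('x', i, j, k, 2), ('x', i, j, k, 3)),
                      (('x', i, j, k, 3), ('x', i, j, k, 2))]
                # chain to the next number, and the block-vertex entry edges
                E.append((('x', i, j, k, 3), ('x', i, j, wrap(k + 1, N), 1)))
                E.append((('b', a, k), ('x', i, j, wrap(k + 1, N), 1)))
    return E
```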
3 Detailed explanation
We need to show that every valid Hamiltonian cycle corresponds to a valid Sudoku solution. Note that at
this stage, we have not handled any fixed cells, so any valid Sudoku solution will suffice. Fixed cells will
be taken care of in Section 5.
Theorem 3.1 Every Hamiltonian cycle in the graph constructed in the previous section corresponds to a
valid Sudoku solution, and every valid Sudoku solution has corresponding Hamiltonian cycles.
Proof: First of all, note that vertices xijk2 are degree 2 vertices, and so they ensure that if vertex xijk1
is visited before xijk3 , it must be followed by xijk2 and then xijk3 . Likewise, if vertex xijk3 is visited
before xijk1 , it must be followed by xijk2 and xijk1 . The same argument holds for vertices yijk2 . This
will ensure that the path any Hamiltonian cycle must take through the x and y vertices is tightly controlled.
Each of the block vertices bak links to xi,j,(k+1),1 for all (i, j) contained in block a. One of these
edges must be chosen. Suppose number k is to be placed in position (i, j), contained in block a. Then
the edge (bak , xi,j,(k+1),1 ) is traversed. From here, the cycle must continue through vertices xi,j,(k+1),2
and xi,j,(k+1),3 . It is then able to either exit to one of the y vertices, or continue visiting x vertices.
However, as will be seen later, if it exits to the y vertices at this stage, it will be impossible to complete
the Hamiltonian cycle. So instead it continues on to xi,j,(k+2),1 , and so on. Only once all of the xijml
vertices for m ≠ k have been visited (noting that i and j are fixed here) can it safely exit to the y vertices
– refer to this as Assumption 1 (we will investigate later what happens if Assumption 1 is violated for any
i, j, k). The exit to y vertices will occur immediately after visiting vertex xi,j,(k−1),3 , which is linked to
vertex yi,j,(k+1),1 . Note that by Assumption 1, vertices xijkl are unvisited for l = 1, 2, 3. Then, from
the y vertices, the same argument as above applies again, and eventually vertex yi,j,(k−1),3 is departed,
linking to vertex ba,k+1 if k < N , or to vertex ba+1,1 if k = N . Refer to the equivalent assumption on
visiting the y vertices as Assumption 2. This continues until all the block vertices have been traversed, at
which time vertex yN,N,N −1,3 links to r11 . Note that, other than by violating Assumptions 1 or 2, it is
not possible to have deviated from the above path. By the time we arrive at r11 , all the block vertices bak
have been visited. Also, every puzzle vertex xijkl and duplicate puzzle vertex yijkl has been visited other
than those corresponding to placing number k in position (i, j).
Next, each of the row vertices rik links to xijk3 for all i, j, k. For each i and k, one of these edges
must be chosen. However, by Assumption 1, all vertices xijk3 have already been visited except for those
corresponding to the number k being placed in position (i, j). If the choices in the previous step violate
the row constraints, then there will be a row i that does not contain a number k, and subsequently there
will be no valid edge emanating from vertex rik . Hence, if the choices made in step 2 violate the row
constraints, and Assumption 1 is correct, it is impossible to complete a Hamiltonian cycle. If the choices
in the previous step satisfy the row constraints, then there should always be precisely one valid edge to
choose here. Once vertex xijk3 is visited, vertices xijk2 and xijk1 must follow, at which point the only
remaining valid choice is to proceed to vertex vij . From here, any row vertex rim that has not yet been
visited can be visited. If all have been visited, then ti can be visited instead. Note that once ti is visited,
it is impossible to return to any rik vertices, so they must all be visited before ti is visited.
An analogous argument to above can be made for the column vertices cjk . Note that if Assumptions 1
and 2 are correct, then vertex yijkl will be unvisited at the start of step 4 if and only if xijkl was unvisited
at the start of step 3. Therefore, we see that if Assumptions 1 and 2 are correct, then it is only possible to
complete the Hamiltonian cycle if the choices made in step 2 correspond to a valid Sudoku solution.
Now consider the situation where Assumption 1 is violated, that is, after step 2 there exists unvisited
vertices xijkl and xijml for some i, j, and k ≠ m. Then during step 3, without loss of generality, suppose
vertex rik is visited before rim . As argued above, this will be followed by vertices xijk3 , xijk2 , xijk1 ,
at which point visiting vertex vij is the only available choice. Then later, rim is visited. It must visit
xijm3 , xijm2 , xijm1 and is then, again, forced to proceed to vertex vij . However, since vertex vij has
already been visited, this is impossible and the Hamiltonian cycle cannot be completed. If Assumption 2
is violated, and it is vertices yijkl and yijml that are unvisited after step 2, an analogous argument can be
made involving step 4. Hence, every Hamiltonian cycle in the graph must satisfy Assumptions 1 and 2.
This completes the proof.
2
Since any valid Sudoku solution has corresponding Hamiltonian cycles, the resulting instance of HCP
is equivalent to a blank Sudoku puzzle. In a later section, the method for removing edges based on fixed
numbers for a given Sudoku instance is described. Since the instance of HCP can be constructed, and
the relevant edges removed, in polynomial time as a function of N , the algorithm above constitutes a
reduction of Sudoku to the Hamiltonian cycle problem.
4 Size of “blank” instance
The instance of HCP that emerges from the above conversion consists of 6N 3 + 5N 2 + 2N + 2 vertices,
and 19N 3 +2N 2 +2N +2 directed edges. For the standard Sudoku puzzle where N = 9, this corresponds
to a directed graph with 4799 vertices and 14033 directed edges.
All of the best HCP heuristics currently available assume that the instance is undirected. There is a
well-known conversion of directed HCP to undirected HCP which can be performed as follows. First,
produce a new graph which has three times as many vertices as the directed graph. Then add edges to this
new graph by the following scheme, where n is the number of vertices in the directed graph:
1. Add edges (3i − 1, 3i − 2) and (3i − 1, 3i) for all i = 1, . . . , n.
2. For each directed edge (i, j) in the original graph, add edge (3i, 3j − 2).
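A direct transcription of these two steps into Python might look as follows (a sketch only; vertices are assumed to be numbered 1, …, n and edges given as pairs).

```python
def directed_to_undirected(n, directed_edges):
    """Convert a directed HCP instance on vertices 1..n to an undirected one.

    Each vertex i is expanded into the triple 3i-2, 3i-1, 3i, following the
    two steps described above. Returns the list of undirected edges.
    """
    undirected = []
    for i in range(1, n + 1):
        undirected.append((3 * i - 1, 3 * i - 2))   # step 1
        undirected.append((3 * i - 1, 3 * i))
    for (i, j) in directed_edges:
        undirected.append((3 * i, 3 * j - 2))       # step 2
    return undirected
```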
In the present case, this results in an undirected instance of HCP consisting of 18N 3 + 15N 2 + 6N + 6
vertices and 31N 3 + 12N 2 + 6N + 6 edges. This implies that the average degree in the graph grows
monotonically with N , but towards a limit of 31/9, so the resultant graph instance is sparse. For N = 4, the
average degree is just slightly above 3.1, and for N = 9 the average degree is just under 3.3.
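The counts and average degrees quoted here can be reproduced with a few lines of Python (an illustrative check, not part of the paper).

```python
def undirected_size(N):
    """Vertex count, edge count and average degree of the undirected instance."""
    V = 18 * N ** 3 + 15 * N ** 2 + 6 * N + 6
    E = 31 * N ** 3 + 12 * N ** 2 + 6 * N + 6
    return V, E, 2 * E / V

for N in (4, 9):
    print(N, undirected_size(N))
# N = 4 gives an average degree of about 3.10, N = 9 about 3.28,
# approaching the limit 2 * 31 / 18 = 31 / 9 as N grows.
```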
A trick can be employed to reduce the number of vertices in the undirected graph. Consider the vertices in the undirected graph corresponding to the x and y vertices. In particular, consider the set of 9
vertices corresponding to xijk1 , xijk2 and xijk3 . The nine vertices form an induced subgraph such as that
displayed at the top of Figure 1. There are incoming edges incident on the first and seventh vertices, and
outgoing edges incident on the third and ninth vertices. If the induced subgraph is entered via the first
vertex, it must be departed via the ninth vertex, or else a Hamiltonian cycle cannot be completed. Likewise, if the induced subgraph is entered via the seventh vertex, it must be departed via the third vertex.
It can be seen by inspecting all cases that if the fifth vertex is removed, and a new edge is introduced
between the fourth and sixth vertices, the induced subgraph retains these same properties. This alternative
choice is displayed at the bottom of Figure 1. Such a replacement can be made for each triplet xijkl or
yijkl . Hence, we can remove 2N 3 vertices and 2N 3 edges from the undirected graph for a final total of
16N 3 + 15N 2 + 6N + 6 vertices and 29N 3 + 12N 2 + 6N + 6 edges, although at the cost of raising the average
degree by a small amount (roughly between 0.1 and 0.15, depending on N ).
Fig. 1: The induced subgraph created after the conversion to an undirected graph, corresponding to vertices xijk1 ,
xijk2 and xijk3 , and an alternative subgraph with one vertex removed.
5 Handling fixed numbers
In reality, all meaningful instances of Sudoku have fixed values in some of the N 2 cells. Although this
could potentially be handled by removing vertices, it would then be necessary to redirect edges appropriately. Instead, it is simpler to remove edges that cannot be used while choosing these fixed values. Once
this is performed, a graph simplifying heuristic could then be employed to remove unnecessary vertices if
desired.
For each fixed value, 12N − 12 edges can be identified as redundant, and be removed. However, when
there are multiple fixed values, some edges may be identified as redundant multiple times, so 12N − 12 is
only an upper bound on the number of edges that can be removed per fixed value. For example, suppose
one cell has a fixed value of 1, and another cell within the same block has a fixed value of 2. From the first
fixed value, we know that all other entries in the block must not be 1. From the second fixed value, we
know that the second cell must have a value of 2, and hence not 1. Then the edge corresponding to placing
a value of 1 in the second cell would be identified as redundant twice. The exact number of redundant
edges identified depends on the precise orientation of the fixed values.
For each fixed value k in position (i, j), and block a containing position (i, j), the following sets of
edges are redundant and may be removed (an explanation for each set follows the list):
(1) (bak , xmnk1 ) for all choices of m and n such that block a contains (m, n), and also (m, n) ≠ (i, j)
(2) (bam , xijm1 ) for m ≠ k
(3) (xm,n,(k−1),3 , ym,n,(k+1),1 ) for all choices of m and n such that block a contains (m, n), and also (m, n) ≠ (i, j)
(4) (xi,j,(m−1),3 , yi,j,(m+1),1 ) for m ≠ k
(5a) If k < N : (ym,n,(k−1),3 , ba,k+1 ) for all choices of m and n such that block a contains (m, n), and also (m, n) ≠ (i, j)
(5b) If k = N and a < N : (ym,n,(k−1),3 , ba+1,1 ) for all choices of m and n such that block a contains (m, n), and also (m, n) ≠ (i, j)
(5c) If k = N and a = N : (ym,n,(k−1),3 , r11 ) for all choices of m and n such that block a contains (m, n), and also (m, n) ≠ (i, j)
(6a) (yi,j,(m−1),3 , ba,m+1 ) for m ≠ k and m ≠ N
(6b) If k < N and a < N : (yi,j,(N −1),3 , ba+1,1 )
(6c) If k < N and a = N : (yi,j,(N −1),3 , r11 )
(7) (rik , ximk3 ) for all m ≠ j
(8) (ximk1 , vim ) for all m ≠ j
(9) (rim , xijm3 ) for all m ≠ k
(10) (cjk , ymjk3 ) for all m ≠ i
(11) (ymjk1 , wmj ) for all m ≠ i
(12) (cjm , yijm3 ) for all m ≠ k
The edges in set (1) correspond to the option of placing a value of k elsewhere in block a. The edges
in set (2) correspond to the option of picking a value other than k in position (i, j). Those two sets of
incorrect choices would lead to the edges from sets (3) and (4) respectively being used to transfer from
the x vertices to the y vertices, and so those edges are also redundant.
The edges in (5a)–(5c) correspond to the edges that return from the y vertices to the next block vertex
after an incorrect choice is made (corresponding to the set (1)). If k = N then the next block vertex is
actually for the following block, rather than for the next number in the same block. If k = N and a = N
then all block vertices have been visited and the next vertex is actually the first row vertex.
Likewise, the edges in (6a)–(6c) correspond to the edges that return from the y vertices after an incorrect
choice is made (corresponding to the set (2)). Note that if k = N , there are N − 1 redundant edges in
(6a). If k < N there are N − 2 redundant edges in (6a) and then one additional redundant edge from
either (6b) or (6c).
The edges in set (7) correspond to the option of finding a value of k in row i at a position other than
(i, j), which is impossible. The edges in set (8) correspond to visiting the end puzzle vertex after making
an incorrect choice from (7). The edges in set (9) correspond to the option of finding a value other than k
in row i and position (i, j), which is also impossible. Analogous arguments can be made for the edges in
sets (10)–(12), except for columns instead of rows.
Sets (1)–(4) and (7)–(12) each identify N − 1 redundant edges. As argued above, the relevant
sets from (5a)–(5c) contribute N − 1 more redundant edges, as do the relevant sets from (6a)–(6c).
Hence, the maximum number of edges that can be removed per fixed value is 12N − 12.
6 Recovering the Sudoku solution from a Hamiltonian cycle
The constructive algorithm above produces an HCP instance for which each solution corresponds to a valid
Sudoku solution. Once such a solution is obtained, the following algorithm reconstructs the corresponding
Sudoku solution:
Denote by h the Hamiltonian cycle obtained. For each i = 1, . . . , N and j = 1, . . . , N , find vertex
vij in h. Precisely one of its adjacent vertices in h will be of the form xijk1 for some value of k. Then,
number k can be placed in the cell in the ith row and jth column in the Sudoku solution.
Suppose that the vertices are labelled in the order given in Section 2. That is, s is labelled as 1, f is
labelled as 2, the bak vertices are labelled 3, 4, . . . , N 2 + 2, and so on. Then, for each i and j, vertex vij
will be labelled 3N 3 +3N 2 +(i+1)N +(j +2), and vertex xijk1 will be labelled 3iN 2 +(3j −1)N +3k.
Of course, if the graph has been converted to an undirected instance, or if it has been reduced in size by a
graph reduction heuristic, these labels will need to be adjusted appropriately.
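A minimal Python sketch of this recovery step is given below, assuming (as an illustrative choice, not the paper's integer labelling) that the cycle is available as a list of vertex tuples in visiting order.

```python
def recover_solution(N, cycle):
    """Recover the Sudoku grid from a Hamiltonian cycle.

    `cycle` is assumed to be the list of vertex tuples in visiting order,
    e.g. ('v', i, j) for end puzzle vertices and ('x', i, j, k, l) for puzzle
    vertices (an illustrative representation, not the integer labelling above).
    """
    index = {v: idx for idx, v in enumerate(cycle)}
    grid = [[0] * N for _ in range(N)]
    for i in range(1, N + 1):
        for j in range(1, N + 1):
            idx = index[('v', i, j)]
            # exactly one cycle-neighbour of v_ij has the form x_ijk1
            for nb in (cycle[idx - 1], cycle[(idx + 1) % len(cycle)]):
                if nb[0] == 'x' and nb[1] == i and nb[2] == j and nb[4] == 1:
                    grid[i - 1][j - 1] = nb[3]       # number k goes in cell (i, j)
    return grid
```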
7 Reducing the size of the HCP instances
After constructing the HCP instances using the above method, graph reduction techniques can be applied.
Most meaningful instances of Sudoku will have many fixed values, which in turn leads to an abundance
of degree 2 vertices.
In order to test the effectiveness of such techniques, a very simple reduction algorithm was used. The algorithm iteratively checks the following two conditions until there are no applicable reductions remaining:
1. If two adjacent vertices are both degree 2, they can be contracted to a single vertex.
2. If a vertex has two degree 2 neighbours, all of its incident edges going to other vertices can be
removed.
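A rough Python sketch of this iterative procedure, using a plain adjacency-set representation (an assumed data structure; the paper describes the heuristic only informally), is shown below.

```python
def reduce_graph(adj):
    """Iteratively apply the two reduction rules above to an undirected graph.

    `adj` maps each vertex to the set of its neighbours and is modified in place.
    This is a simplified sketch of the informally described heuristic.
    """
    changed = True
    while changed:
        changed = False
        # Rule 2: a vertex with two degree-2 neighbours loses its edges to other vertices.
        for v in list(adj):
            deg2 = [u for u in adj[v] if len(adj[u]) == 2]
            if len(deg2) >= 2:
                for u in [u for u in adj[v] if u not in deg2[:2]]:
                    adj[v].discard(u)
                    adj[u].discard(v)
                    changed = True
        # Rule 1: contract two adjacent degree-2 vertices into a single vertex.
        for v in list(adj):
            if v in adj and len(adj[v]) == 2:
                u = next((w for w in adj[v] if len(adj[w]) == 2), None)
                if u is not None:
                    for w in adj[u] - {v}:      # rewire u's other neighbours to v
                        adj[w].discard(u)
                        adj[w].add(v)
                        adj[v].add(w)
                    adj[v].discard(u)
                    del adj[u]
                    changed = True
    return adj
```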
Note that the second condition above leads to three adjacent degree 2 vertices which will in turn be
contracted to a single vertex. The removal of edges when the second condition is satisfied often leads to
additional degree 2 vertices being formed which allows the algorithm to continue reducing.
Note also that this simple graph reduction heuristic is actually hampered by the graph reduction method
described in Section 4, since that method eliminates many degree 2 vertices. It is likely that a more
sophisticated graph reduction heuristic could be developed that incorporates both methods.
The above heuristic was applied to both a well-formed (that is, uniquely solvable) Sudoku instance with
35 fixed values, as well as one of the Sudoku instances from the repository of roughly 50000 instances
maintained by Royle [10]. The instances in that repository all contain precisely 17 fixed numbers, and
are all well-formed; it was recently proved via a clever exhaustive computer search that 17 is the minimal
Fig. 2: Two well-formed Sudoku instances with 35 fixed values and 17 fixed values respectively.
number of fixed values for a well-formed Sudoku problem with 9 symbols [9]. The two instances tested
are displayed in Figure 2.
After the simple reduction heuristic above was applied to the first Sudoku instance, it had been reduced
from an undirected instance with 14397 vertices and 22217 edges, to an equivalent instance with 8901
vertices and 14175 edges. Applying the above reduction algorithm to the second Sudoku instance from
Royle’s repository reduced it from an undirected instance with 14397 vertices and 22873 edges, to an
equivalent instance with 12036 vertices and 19301 edges. In both cases the reduction is significant,
although there are obviously greater opportunities for reduction when there are more fixed values.
Both instances were solved by Concorde [1] which is arguably the best algorithm for solving HCP
instances containing a large amount of structure, as its branch-and-cut method is very effective at identifying
sets of arcs that must be fixed all at once, or not at all, particularly in sparse graphs. Technically, Concorde
actually converts the HCP instance to an equivalent TSP instance but does so in an efficient way. The first
instance was solved during Concorde’s presolve phase, while the second instance required 20 iterations
of Concorde’s branch and cut algorithm(i) to discover a solution. This would seem to indicate that the
first Sudoku instance can be solved without requiring any amount of guessing. The two solutions were
then interpreted via the algorithm in Section 6 to provide solutions to the initial Sudoku instances; those
solutions are displayed in Figure 3.
References
[1] Applegate, D.L., Bixby, R.B., Chvátal, V., and Cook, W.J.: Concorde TSP Solver:
http://www.tsp.gatech.edu/concorde/index.html (2015). Accessed Jan 20, 2016.
[2] Bartlett, A., Chartier, T.P., Langville, A.V. and Rankin, T.D.: An integer programming model for the
Sudoku problem. Journal of Online Mathematics and its Applications, vol.8, Article ID 1798, 2008.
[3] Creignou, N.: The class of problems that are linearly equivalent to Satisfiability or a uniform method
for proving NP-completeness, Lect. Notes. Comput. Sc., 145:111-145, 1995.
(i)
It should be noted that Concorde does use a small amount of randomness in its execution. The random seed used in this experiment
was 1453347272.
Fig. 3: The solutions to the Sudoku instances in Figure 2, as interpreted from the Hamiltonian cycles of the converted
HCP instances.
[4] Dewdney, A.K.: Linear transformations between combinatorial problems, Int. J. Comput. Math.,
(11):91–110, 1982.
[5] Ejov, V., Haythorpe, M., and Rossomakhine, S.: A Linear-size Conversion of HCP to 3HCP. Australasian Journal of Combinatorics 62(1):45–58, 2015.
[6] J. A. Filar and M. Haythorpe, A Linearly-Growing Conversion from the Set Splitting Problem to the
Directed Hamiltonian Cycle Problem, in: Optimization and Control methods in Industrial Engineering and Construction, pp. 35–52, 2014.
[7] R. M. Karp, Reducibility among combinatorial problems, Springer, New York, 1972.
[8] Lynce, I. and Ouaknine, J.: Sudoku as a SAT Problem. In Proceedings of the 9th Symposium on
Artificial Intelligence and Mathematics, 2006.
[9] McGuire, G., Tugemann, B. and Civario, G.: There Is No 16-Clue Sudoku: Solving the Sudoku
Minimum Number of Clues Problem via Hitting Set Enumeration. Exp. Math. 23(2):190–217, 2014.
[10] Royle, G.: Minimum Sudoku. http://staffhome.ecm.uwa.edu.au/~00013890/sudokumin.php (2005).
Accessed Jan 20, 2016.
[11] Simonis, H.: Sudoku as a constraint problem. In CP Workshop of Modeling and Reformulating
Constraint Satisfaction Problems, pages 13–27, 2005.
[12] Yato, T. and Seta, T.: Complexity and completeness of finding another solution and its application
to puzzles. IEICE T. Fund. Electr., E86-A(5):1052–1060, 2003.
ON LENGTHS OF HZ-LOCALIZATION TOWERS
arXiv:1605.08198v2 [] 27 Jun 2016
SERGEI O. IVANOV AND ROMAN MIKHAILOV
Abstract. In this paper, the HZ-length of different groups is studied. By definition,
this is the length of HZ-localization tower or the length of transfinite lower central series
of HZ-localization. It is proved that, for a free noncyclic group, its HZ-length is ≥ ω + 2.
For a large class of Z[C]-modules M, where C is an infinite cyclic group, it is proved
that the HZ-length of the semi-direct product M ⋊ C is ≤ ω + 1 and its HZ-localization
can be described as a central extension of its pro-nilpotent completion. In particular,
this class covers modules M , such that M ⋊ C is finitely presented and H2 (M ⋊ C) is
finite.
MSC2010: 55P60, 19C09, 20J06
1. Introduction
Let R be either a subring of the rationals or a cyclic ring. In his fundamental work [5], A.K.
Bousfield introduced the concept of HR-localization. This is a functor in the category of
groups, closely related to the functor of homological localization of spaces. In this paper
we will study the case R = Z, that is, HZ-localization for different groups.
An HZ-map between two groups is a homomorphism which induces an isomorphism
on H1 and an epimorphism on H2 . A group Γ is HZ-local if any HZ-map G → H induces
a bijection Mor(H, Γ) ≃ Mor(G, Γ). Recall that ([5], Theorem 3.10) the class of HZ-local
groups is the smallest class which contains the trivial group and closed under inverse
limits and central extensions. Given a group G, the HZ-localization
η ∶ G → EG
can be uniquely characterized by the following two properties: η is an HZ-map and the
group EG is HZ-local. These two properties are given as a definition of HZ-localization
in [5]. It is shown in [5] that, for any G, the HZ-localization EG exists, is unique and is
transfinitely nilpotent.
For a group G, denote by {γτ (G)} the transfinite lower central series of G, defined
inductively as γτ +1 (G) ∶= [γτ (G), G] and γα (G) = ⋂τ <α γτ (G) for a limit ordinal α. For a group G,
we call the least ordinal τ such that γτ (EG) = 1, i.e. the length of the transfinite lower central
series of EG, the HZ-length of G and denote it by HZ-length(G).
Let C be an infinite cyclic group. A Z[C]-module M is tame if and only if M ⋊ C is
a finitely presented group [3]. If M is a tame C-module, then dimQ (M ⊗ Q) < ∞ and
there exists a generator t ∈ C such that the minimal polynomial of the linear map t ⊗ Q ∶
M ⊗ Q → M ⊗ Q is an integral monic polynomial, which is denoted by µM ∈ Z[x] (see [2,
Theorem C] and Lemma 4.7). We prove the following
Theorem. Let G be a metabelian group of the form G = M ⋊ C, where M is a tame Z[C]-module and µM = (x − λ1 )m1 . . . (x − λl )ml for some distinct complex numbers λ1 , . . . , λl
and mi ≥ 1.
(1) Assume that the equality λi λj = 1 holds only if λi = λj = 1. Then
HZ-length(G) ≤ ω.
(2) Assume that the equality λi λj = 1 holds only if either mi = mj = 1 or λi = λj = 1.
Then
HZ-length(G) ≤ ω + 1.
As a contrast, we give an example of a finitely presented metabelian group of the form
M ⋊C, where M is tame, whose HZ-length is greater than ω +1. In the following example,
the Z[C]-module M is tame but it does not satisfy the condition of Theorem 5.6. Let
G = ⟨a, b, t ∣ at = a−1 , bt = ab−1 , [a, b] = 1⟩ = Z2 ⋊ C,
(1.1)
where C acts on Z2 by the matrix whose columns are (−1, 0) and (1, −1) (the images of a and b under t). It is shown in Theorem 5.5 that the HZ-length of G is ≥ ω + 2.
Let M be a tame Z[C]-module and µM = (x − 1)m f, for some m ≥ 0 and f ∈ Z[x] such
that f (1) ≠ 0. Assume that f = f1m1 . . . flml where f1 , . . . , fl ∈ Z[x] are distinct irreducible
monic polynomials. If f (1) ∈ {−1, 1}, then HZ-length(M ⋊ C) < ω. (Corollary 4.17).
Conjecture. If f (1) ∉ {−1, 1}, then HZ-length(M ⋊ C) ≤ ω + n, where
n = max({mi ∣ fi (0) ∈ {−1, 1} ∧ fi (1) ∉ {−1, 1}} ∪ {0}).
In particular, for any tame Z[C]-module M, HZ-length(M ⋊ C) < 2ω.
It is easy to check that the above theorem together with Corollary 4.17 and Proposition
4.8 imply the conjecture for n = 0, 1.
For a group G, denote by Ĝ its pro-nilpotent completion:
Ĝ ∶= lim←n G/γn (G).
For a finitely generated group G, there is a natural isomorphism (Prop. 3.14 [5])
EG/γω (EG) = Ĝ.
Therefore, for finitely generated groups, HZ-localization gives a natural extension of the
pro-nilpotent completion.
The pro-nilpotent completion of a finitely generated group G is always HZ-local and
the map G → Ĝ induces an isomorphism on H1 . Therefore, for such a group, the following
conditions are equivalent:
1) the natural epimorphism EG ↠ Ĝ is an isomorphism;
2) HZ-length of G ≤ ω;
3) The natural map H2 (G) → H2 (Ĝ) is an epimorphism.
A simple example of a group with HZ-length ω is the following. Let
G = ⟨a, t ∣ at = a3 ⟩ = Z[1/3] ⋊ C.
Here C acts on Z[1/3] as the multiplication by 3. Then the pro-nilpotent completion
has the structure Ĝ = Z2 ⋊ C, where the cyclic group C = ⟨t⟩ acts on 2-adic integers
as the multiplication by 3. Looking at the homology spectral sequence for an extension
1 → Z2 → Ĝ → C → 1, we obtain H2 (Ĝ) = Λ2 (Z2 ) ⊗ Z/9 = 0. Therefore, EG = Ĝ. Since the
group G is not pre-nilpotent, HZ-length(G) = ω.
The above example is an exception. In most cases, the description of HZ-localization
as well as the computation of HZ-length for a given group is a difficult problem. It is
shown in [5] that HZ-length of the Klein bottle group GKl ∶= ⟨a, t ∣ at = a−1 ⟩ = Z ⋊ C is
greater than ω. As a corollary, it is concluded in [5] that HZ-length of any non-cyclic free
group also is greater than ω. Our Theorem 5.6 implies that HZ-length(GKl ) = ω + 1 and
that the HZ-localization EGKl lives in the central extension
1 → Λ2 (Z2 ) → EGKl → Z2 ⋊ ⟨t⟩ → 1,
where the action of t on 2-adic integers is negation. Moreover, we give a more explicit
description of EGKl in Proposition 9.2. The HZ-length of a free non-cyclic group remains
a mystery for us, however, we prove the following
Theorem. Let F be a free group of rank ≥ 2. Then HZ-length(F ) ≥ ω + 2.
Briefly recall the scheme of the proof. Consider an extension of the group (1.1) given by
presentation
Γ ∶= ⟨a, b, t ∣ [[a, b], a] = [[a, b], b] = 1, at = a−1 , bt = ab−1 ⟩
(1.2)
We follow the Bousfield scheme of comparison of pro-nilpotent completions for a free
group and the group Γ. Consider a free simplicial resolution of Γ with F0 = F . Group Γ
has finite second homology, therefore lim¹ of its Baer invariants is zero, and hence π0 of
the pro-nilpotent completion of the free simplicial resolution equals the pro-nilpotent
completion of Γ. The group Γ has HZ-length greater than ω + 1 and the result about
HZ-length of F follows from natural properties of the HZ-localization tower. Observe
that the same method does not work for the group (1.1) (as well as for all groups of the
type M ⋊ C for abelian M), since lim¹ of Baer invariants of G is huge (this follows from
proposition 5.8 and an analysis of the tower of Baer invariants for a metabelian group).
The paper is organized as follows. In section 2, we present the theory of relative central
extensions, which is a generalisation of the standard theory of central extensions. A nonlimit step in the construction of Bousfield’s tower can be viewed as the universal relative
extension. Section 2 is technical and introductory, it may be viewed just as a comment
to the section 3 of [5]. In section 3 we recall the exact sequences in homology from [9],
[10] for central stem-extensions. Observe that the universal relative extensions used for
the construction of HZ-tower are stem-extensions. Proposition 3.1 gives the main trick:
for a cyclic group C, Z[C]-module M, and a central stem-extension N ↪ G ↠ M ⋊ C, the
composite map H3 (M ⋊ C) → (M ⋊ C) ⊗ N ↠ N can be decomposed as H3 (M ⋊ C) →
H2 (M)C → H2 (M)C → N. This trick gives a possibility to analyze the homology of the
ω + 1-st term of the HZ-localization tower for groups of the type M ⋊ C.
Using the properties of tame modules we show in section 4 that the question about the
HZ-length of the group M ⋊ C with a tame Z[C]-module M, can be reduced to the same
question for the group (M/N) ⋊ C, where N is the largest nilpotent submodule of M. It
is shown in [14] that, for a finitely presented metabelian group G, the cokernel (denoted
by H2 (ηω )(G)) of the natural map H2 (G) → H2 (Ĝ) is divisible. Using this property we
conclude that, HZ-length of M ⋊ C is not greater than ω + 1 if and only if the composite
map
Λ2 (M̂ )C → Λ2 (M̂ )C → H2 (ηω )
is an epimorphism (see proposition 5.3). Theorem 5.6 is our main result of section 5. There
is a simple condition on a tame Z[C]-module M which implies that HZ-length(M ⋊ C) ≤
ω + 1. Theorem 5.6 provides a large class of groups for which one can describe HZ-localization explicitly. In particular, we show that, if the homology H2 (M ⋊ C) is finite,
then the module M satisfies the condition (ii) of Theorem 5.6 and therefore, for such M,
HZ-length(M ⋊ C) ≤ ω + 1.
In section 6 we recall the method of Bousfield from [5], which gives a possibility to
study the second homology of the pro-nilpotent completion of a free group. In section 7
we present our root examples (1.1) and (1.2) and prove that they have HZ-length greater
than ω + 1. Following the scheme described above, we get the same result for a free
non-cyclic group. In the last section of the paper we present an alternative approach
for proving that some groups have a long HZ-localization tower. Consider the wreath
product
Z ≀ Z = ⟨a, b ∣ [a, a^{b^i}] = 1, i ∈ Z⟩
Using functorial technique, we show in theorem 8.2 that HZ-length(Z ≀ Z) ≥ ω + 2. In the
last section, as an application of the theory developed in the paper, we give an explicit
construction of EGKl .
2. Relative central extensions and HZ-localization
Throughout this section G, H denote groups, f ∶ H → G a homomorphism and A an
abelian group.
2.1. (Co)homology of a homomorphism. Consider the continuous map between classifying spaces Bf ∶ BH → BG and its mapping cone Cone(Bf ). Following Bousfield [5,
2.14], we define homology and cohomology of f with coefficients in A as follows
Hn (f, A) = Hn (Cone(Bf ), A),
H n (f, A) = H n (Cone(Bf ), A).
Then there are long exact sequences
⋅ ⋅ ⋅ → H2 (H, A) → H2 (G, A) → H2 (f, A) → H1 (H, A) → H1 (G, A) → H1 (f, A) → 0,
0 → H 1 (f, A) → H 1 (G, A) → H 1 (H, A) → H 2 (f, A) → H 2 (G, A) → H 2 (H, A) → . . . .
In particular, H1 (f ) = Coker{Hab → Gab } and H1 (f, A) = H1 (f ) ⊗ A.
We denote by C̄ ● (G, A) the complex of normalized cochains of G with coefficients in A,
[18, 6.5.5] by ∂ n ∶ C̄ n (G, A) → C̄ n+1 (G, A) its differential and by Z̄ n (G, A) and B̄ n (G, A)
the groups of normalized cocycles and coboundaries. For a homomorphism f ∶ H → G
and an abelian group A we denote by Z̄ n (f, A) and B̄ n (f, A) the following subgroups of
Z̄ n (G, A) ⊕ C̄ n−1 (H, A)
Z̄ n (f, A) = {(c, α) ∣ f ∗ c = −∂α},
B̄ n (f, A) = {(−∂β, f ∗ β + ∂γ) ∣ β ∈ C̄ n−1 (G, A), γ ∈ C̄ n−2 (H, A)}.
Since the map ∂ ∶ C̄ 0 (H, A) → C̄ 1 (H, A) is trivial, we have
B̄ 2 (f, A) = {(−∂β, βf ) ∣ β ∈ C̄ 1 (G, A)}.
Lemma 2.1. For n ≥ 1 there is an isomorphism H n (f, A) ≅ Z̄ n (f, A)/B̄ n (f, A).
Proof. For a space X, we denote by C● (X) the complex of integral chains. Then
C● (X, A) = C● (X) ⊗ A and C ● (X, A) = Hom(C● (X), A). For a continuous map F ∶ X → Y
we denote by C● (F ) ∶ C● (X) → C● (Y ) the induced morphism of complexes. Then there
is a natural homotopy equivalence of complexes Cone(C● (F )) ≃ C● (Cone(F )). It follows
that there is a natural homotopy equivalence of complexes
C ● (Cone(F ), A) ≃ Cone(C ● (F, A))[−1].
(2.1)
Denote by C ● (G, A) the complex of (non-normalised) cochains of the group G. For
a homomorphism f ∶ H → G we denote by C ● (f, A) ∶ C ● (G, A) → C ● (H, A) the induced morphism of complexes. There is a natural homotopy equivalence C ● (G, A) ≃
C ● (BG, A). Moreover, there is a natural homotopy equivalence of complexes of normalised and non-normalised cochains C̄ ● (G, A) ≃ C ● (G, A) because they come from
two different functorial resolutions. It follows that there is a natural homotopy
equivalence Cone(C̄ ● (f, A)) ≃ Cone(C ● (Bf, A)). Combining this with (2.1) we get
∼
C ● (Cone(Bf ), A) Ð
→ Cone(C̄ ● (f, A))[−1]. The assertion follows.
2.2. Relative central extensions.
Definition 2.2. A relative central extension of G by A with respect to f is a couple
ι
π
ι
π
E = (A ↣ E ↠ G, f˜), where A ↣ E ↠ G is a central extension of G and f˜ ∶ H → E is a
homomorphism such that π f˜ = f.
(Diagram: the central extension 0 → A → E → G → 1 with maps ι and π, together with f˜ ∶ H → E satisfying π f˜ = f.)
Two relative central extensions (A ↣ E1 ↠ G, f˜1 ) and (A ↣ E2 ↠ G, f˜2 ), with structure maps ι1 , π1 and ι2 , π2 respectively, are said to be
equivalent if there exists an isomorphism θ ∶ E1 → E2 such that θι1 = ι2 , π2 θ = π1 and
θf˜1 = f˜2 .
Let (c, α) ∈ Z̄ 2 (f, A). Consider the central extension A ↣ Ec ↠ G corresponding to the
2-cocycle c. The underlying set of Ec is equal to A × G and the product is given by
(a1 , g1 )(a2 , g2 ) = (a1 + a2 + c(g1 , g2 ), g1 g2 ).
Denote by f˜α ∶ H → Ec the map given by f˜α (h) = (α(h), f (h)). Note that the equality
f ∗ c = −∂α implies the equality
c(f (h1 ), f (h2 )) = −α(h1 ) + α(h1 h2 ) − α(h2 )
for all h1 , h2 ∈ H. It follows that f˜α is a homomorphism. Indeed
f˜α (h1 )f˜α (h2 ) = (α(h1 ), f (h1 ))(α(h2 ), f (h2 )) =
(α(h1 ) + α(h2 ) + c(f (h1 ), f (h2 )), f (h1 )f (h2 )) =
(α(h1 h2 ), f (h1 h2 )) = f˜α (h1 h2 ).
Then we obtain a relative central extension
E(c, α) = (A ↣ Ec ↠ G, f˜α ).
ON LENGTHS OF HZ-LOCALIZATION TOWERS
6
Proposition 2.3. The map (c, α) ↦ E(c, α) induces a bijection between elements of
H 2 (f, A) and equivalence classes of relative central extensions of G by A with respect
to f .
Proof. Any central extension is equivalent to the extension A ↣ Ec ↠ G for a normalised
2-cocycle c. Hence, it is sufficient to consider only them. Consider a relative central
extension (A ↣ Ec ↠ G, f˜). Define α ∶ H → A so that f˜(h) = (α(h), f (h)). Since f˜(1) =
(0, 1), α is a normalised 1-cochain. Since f˜ is a homomorphism, we get
(α(h1 ) + α(h2 ) + c(f (h1 ), f (h2 )), f (h1 )f (h2 )) = (α(h1 h2 ), f (h1 h2 )).
Thus f ∗ c = −∂α. It follows that any relative central extension is isomorphic to the relative
central extension E(c, α) for some (c, α) ∈ Z 2 (f, A).
Consider two elements (c, α), (c′ , α′ ) ∈ Z 2 (f, A) such that (c′ , α′ ) − (c, α) = (−∂β, βf )
for some β ∈ C̄ 1 (G, A). It follows that
c′ (g1 , g2 ) + β(g1 ) + β(g2 ) = c(g1 , g2 ) + β(g1 g2 )
for any g1 , g2 ∈ G and α′ (h) = α(h) + β(f (h)) for any h ∈ H. Denote by θβ ∶ Ec → Ec′ the
map given by θβ (a, g) = (a + β(g), g). Then θβ is a homomorphism. Indeed,
θβ (a1 , g1 )θβ (a2 , g2 ) = (a1 + β(g1 ), g1 )(a2 + β(g2 ), g2 ) =
(a1 + a2 + β(g1 ) + β(g2 ) + c′ (g1 , g2 ), g1 g2 ) =
(a1 + a2 + β(g1 g2 ) + c(g1 , g2 ), g1 g2 ) = θβ ((a1 , g1 )(a2 , g2 )).
Moreover, θβ is an isomorphism, because θ−β is its inverse, and it is easy to see that it is
an equivalence of the relative central extensions. It follows that the map (c, α) ↦ E(c, α)
induces a surjective map from elements of H 2 (f, A) to equivalence classes of relative
central extensions.
Consider two elements (c, α), (c′ , α′ ) ∈ Z 2 (f, A) such that the relative central extensions
E(c, α) and E(c′ , α′ ) are equivalent. Then there is an equivalence θ ∶ Ec → Ec′ . Since θ
respects the i njections from A, ϕ(a, 1) = (a, 1) for any a ∈ A. Since θ respects the
projections on G, there exist a unique normalised 1-cochain β ∶ G → A such that θ(0, g) =
(β(g), g) for any g ∈ G. Using that c is a normalised 2-cocycle, we obtain
θ(a, g) = θ((a, 1)(0, g)) = (a, 1)(β(g), g) = (a + β(g), g).
Then the fact that θ is a homomorphism implies that
c′ (g1 , g2 ) + β(g1 ) + β(g2 ) = c(g1 , g2 ) + β(g1 g2 ),
and hence c′ − c = −∂β, and the equality θf˜α = f˜α′ implies α′ − α = βf. The assertion
follows.
2.3. Universal relative central extensions. Let A1 and A2 be abelian groups. Recall
ι1
π1
ι2
π2
that c morphism from a central extension A1 ↣ E1 ↠ G to a central extension A2 ↣ E2 ↠
G is a couple (ϕ, θ), where ϕ ∶ A1 → A2 and θ ∶ E1 → E2 are homomorphisms such that
θι1 = ι2 τ, π2 θ = π1 .
Lemma 2.4 (cf. [18, Lemma 6.9.6]). Let (ϕ, θ) and (ϕ′ , θ′ ) be morphisms from a central
extension A1 ↣ E1 ↠ G to a central extension A2 ↣ E2 ↠ G. Then the restrictions on
the commutator subgroup coincide θ∣[E1 ,E1 ] = θ′ ∣[E1 ,E1 ] .
ON LENGTHS OF HZ-LOCALIZATION TOWERS
7
Proof. For the sake of simplicity we identify A1 with the subgroup of E1 and A2 with the
subgroup of E2 . Consider the map ρ ∶ E1 → A2 given by ρ(x) = θ(x)θ′ (x)−1 . Since A2 is
central, we get
ρ(x)ρ(y) = θ(x)θ′ (x)−1 ρ(y) = θ(x)ρ(y)θ′ (x)−1 =
θ(x)θ(y)θ′ (y)−1 θ′ (x)−1 = θ(xy)θ′ (xy)−1 = ρ(xy).
Hence ρ is a homomorphism to an abelian group. Thus ρ∣[E1 ,E1 ] = 1.
Lemma 2.5. Let (ϕ, θ) be a morphism from a central extension A1 ↣ E1 ↠ G to a central
ι2
π2
extension A2 ↣ E2 ↠ G. If ϕ = 0, then A2 ↣ E2 ↠ G splits.
ι1
π1
Proof. ϕ = 0 implies θι1 = 0. Since G is a cokernel of ι1 , there exists s ∶ G → E2 such that
sπ1 = θ. Then π2 sπ1 = π2 θ = π1 . It follows that π2 s = id.
Definition 2.6. A morphism from a relative central extension (A1 ↣ E1 ↠ G, f˜1 ) to
a relative central extension (A2 ↣ E2 ↠ G, f˜2 ) is a morphism (ϕ, θ) from the central
extension A1 ↣ E1 ↠ G to the central extension A2 ↣ E2 ↠ G such that θf˜1 = f˜2 . So
relative central extensions of G with respect to f form a category. The initial object of
this category is called the universal relative central extension of G with respect to f.
Example 2.7. For any homomorphism ϕ ∶ A1 → A2 and any (c, α) ∈ Z 2 (f, A1 ) there is a
morphism of relative central extensions
E(ϕ) = (ϕ, ϕ × id).
E(ϕ) ∶ E(c, α) Ð→ E(ϕc, ϕα),
(2.2)
Definition 2.8. A homomorphism f ∶ H → G is said to be perfect if fab ∶ Hab → Gab is an
epimorphism. In other words, f is perfect if and only if H1 (f ) = 0.
Lemma 2.9 (cf. [18, Lemma 6.9.6]). Let (ϕ, θ) and (ϕ′ , θ′ ) be morphisms from a relative
central extension (A1 ↣ E1 ↠ G, f˜1 ) to a relative central extension (A2 ↣ E2 ↠ G, f˜2 ). If
f˜1 is perfect, then (ϕ, θ) = (ϕ′ , θ′ ).
Proof. Since f˜1 is perfect, we obtain Im(f˜1 )[E1 , E1 ] = E1 . Since θf˜1 = f˜2 = θ′ f˜1 , we have
θ∣Im(f˜1 ) = θ′ ∣Im(f˜1 ) . Lemma 2.4 implies θ∣[E1 ,E1] = θ′ ∣[E1 ,E1 ] . The assertion follows.
Proposition 2.10. The universal relative central extension of G with respect to f exists
if and only if f is perfect. Moreover, in this case it is unique (up to isomorphism) and
given by a short exact sequence
H
u
0
//
H2 (f )
//
U
❄❄
❄❄ f
❄❄
❄❄
//
G
//
1
where u ∶ H → U is a perfect homomorphism.
Proof of Proposition 2.10. Assume that there is a universal relative central extension
π
(A ↣ U ↠ G, u) of G with respect to f and prove that f is perfect. Set B =
Coker(fab ∶ Hab → Gab ) and consider the epimorphism τ ∶ U ↠ B given by the composition U ↠ G ↠ Gab ↠ B. Then we have two morphisms (0, (πτ )) and (0, (π0 )) from
ON LENGTHS OF HZ-LOCALIZATION TOWERS
8
(A ↣ U ↠ G, u) to the split extension (B ↣ B × G ↠ G, (f0)). The universal property
implies that they are equal, and hence B = 0. Thus f is perfect.
Assume that f is perfect. Since H1 (Cone(Bf )) = H1 (f ) = 0, the universal coefficient formula for the space Cone(Bf ) implies that there is an isomorphism H 2 (f, A) ≅
Hom(H2 (f ), A) natural by A. Chose an element (cu , αu ) ∈ Z 2 (f, H2 (f )) that represents
the element of H 2 (f, H2 (f )) corresponding to the identity map in Hom(H2 (f ), H2 (f )).
ιu
πu
Set U ∶= Ecu , u = f˜αu and Eu = E(cu , αu ). Then Eu = (H2 (f ) ↣ U ↠ G, u). Take a
homomorphism ϕ ∶ H2 (f ) → A and consider the commutative diagram
Z 2 (f, H2 (f ))
// //
H 2 (f, H2 (f )) oo
ϕ∗
Hom(H2 (f ), H2 (f ))
ϕ∗
Z 2 (f, A)
≅
ϕ○−
// //
H 2 (f, A) oo
≅
Hom(H2 (f ), A).
It shows that the isomorphism Hom(H2 (f ), A) ≅ H 2 (f, A) sends ϕ to the class of
(ϕcu , ϕαu ). Combining this with Proposition 2.3 we obtain that any relative central extension is isomorphic to the extension E(ϕcu , ϕαu ) for some ϕ ∶ H2 (f ) → A. For any relative
central extension E(ϕcu , ϕαu ) there exists a morphism E(ϕ) ∶ Eu → E(ϕcu , ϕαu ) from Example 2.7. Then we found a morphism from Eu to any other relative central extension.
In order to prove that Eu is the universal relative central extension, we have to prove that
such a morphism is unique. By Lemma 2.9 it is enough to prove that u ∶ H → U is perfect.
Prove that u ∶ H → U is perfect. In other words we prove that Im(u)[U, U] = U. Set
˜
˜
˜
E ∶= Im(u)[U, U], A = ι−1
u (E) and f ∶ H → E given by f (h) = u(h). Note that f is perfect.
Since f is perfect, πu (E) = Im(f )[G, G] = G. Consider the restriction π = πu ∣E and the
π
relative central extension E = (A ↣ E ↠ G, f˜) with the obvious embedding E ↪ Eu .
Consider the projection ϕ′ ∶ H2 (f ) → H2 (f )/A and take the composition
E(ϕ′ )
E ↪ Eu ÐÐÐ→ E(ϕ′ cu , ϕ′ αu ).
The composition is equal to (0, (ϕ′ × 1)∣E ). By Lemma 2.5 E(ϕ′ cu , ϕ′ αu ) splits. Thus
(ϕ′ cu , ϕ′ αu ) represents 0 in H 2(f, A) ≅ Hom(H2 (f ), A), and hence ϕ′ = 0. It follows that
A = H2 (f ). Then the extension A ↣ E ↠ G is embedded into the extension A ↣ U ↠ G.
It follows that E = U.
Remark 2.11. If fab ∶ Hab → Gab is an isomorphism, then H2 (f ) = Coker{H2 (H) →
H2 (G)}.
Remark 2.12. In the proof of Proposition 2.10 we show that, if f is perfect, then the
universal relative central extension corresponds to the identity map Hom(H2 (f ), H2 (f ))
with respect to the isomorphism H 2 (f, H2 (f )) ≅ Hom(H2 (f ), H2 (f )) that comes from
the universal coefficient theorem.
2.4. HZ-localization tower via relative central extensions. Here we give an approach to the HZ-localization tower [5] via relative central extensions.
Let G be a group and η ∶ G → EG be its HZ-localization. For an ordinal α we define
αth term of the HZ-localization tower by Tα G ∶= EG/γα (EG), where γα (EG) is the αth
term of the transfinite lower central series (see [5, Theorem 3.11]). By ηα ∶ G → Tα G we
denote the composition of η and the canonical projection, and by tα ∶ Tα+1 G ↠ Tα G we
ON LENGTHS OF HZ-LOCALIZATION TOWERS
9
denote the canonical projection. The main point of [5] is that Tα G can be constructed
inductively and EG = Tα G for big enough α. We threat the construction of Tα G for a
non-limit ordinal α via universal relative central extensions.
Proposition 2.13. Let G be a group and α > 1 be an ordinal. Then (ηα )ab ∶ Gab →
(Tα G)ab is an isomorphism, the universal central extension of Tα G with respect to ηα is
given by
G
0
//
●●
●●
● ηα
ηα+1 ●●●
●●
##
tα
// Tα+1 G
//
H2 (ηα )
Tα G
//
1,
and H2 (ηα ) = Coker{H2 (G) → H2 (Tα G)}.
Proof. It follows from [5, 3.2], [5, 3.4], Proposition 2.10 and Remarks 2.11, 2.12.
3. Homology of stem-extensions
Consider a central extension of groups
1→N →G→Q→1
(3.1)
It is shown in [10] that, there is a natural long exact sequence
H3 (G) → H3 (Q) → (Gab ⊗ N)/U → H2 (G) → H2 (Q) → N → H1 (G) → H1 (Q) → 0 (3.2)
β
where U is the image of the natural map
H4 K(N, 2) → Gab ⊗ N.
Here H4 K(N, 2) is the forth homology of the Eilenberg-MacLane space K(N, 2) which
can be described as the Whitehead quadratic functor
H4 K(N, 2) = Γ2 N.
A central extension (3.1) is called a stem-extension if N ⊆ [G, G]. For a stem extension
(3.1), the exact sequence (3.2) has the form (see [9], [10])
H3 (G) → H3 (Q) → Gab ⊗ N → H2 (G) → H2 (Q) → N → 0
δ
β
(3.3)
The map δ is given as follows. We present (3.1) in the form
1 → S/R → F /R → F /S → 1,
for a free group F and normal subgroups R, S with R ⊂ S, [F, S] ⊆ R. Then the map δ is
induced by the natural epimorphism Sab → S/R:
H3 (Q) = H1 (F /S, Sab ) → H1 (F /S, S/R) = Qab ⊗ N = Gab ⊗ N.
The isomorphism H1 (F /S, S/R) = Qab ⊗ N follows from the triviality of F /S-action on
S/R.
In this section we consider the class of metabelian groups of the form Q = M ⋊ C, where
C is an infinite cyclic group and M a Z[C]-module. It follows immediately from the
homology spectral sequence that, for any n ≥ 2, there is a short exact sequence
0 → H0 (C, Hn (M)) → Hn (Q) → H1 (C, Hn−1 (M)) → 0
ON LENGTHS OF HZ-LOCALIZATION TOWERS
10
which can be presented in terms of (co)invariants as
0 → Hn (M)C → Hn (Q) → Hn (M)C → 0.
Composing the last epimorphism with Hn−1 (M)C → Hn−1 (M)C ↪ Hn−1 (Q), we get a
natural (in the category of Z[C]-modules) map
αn ∶ Hn (Q) → Hn−1 (Q).
In the next proposition we will construct a composite map
α3′ ∶ H3 (Q) → H1 (C, Λ2 (M)) ↪ Λ2 (M) ↠ Λ2 (M)C ↪ H2 (Q)
(3.4)
using group-theoretical tools, without spectral sequence. Probably, α3′ coincides with α3
up to isomorphism, but we will not use this comparison later.
Proposition 3.1. For a stem extension (3.1) of a group Q = M ⋊ C, there exists a map
α3′ ∶ H3 (Q) → H2 (Q), given as a composition (3.4), such that the following diagram is
commutative
H3 (Q)
α′3
H2 (Q)
//
β
δ
Gab ⊗ N
//
N
where the lower horizontal map is the projection
Gab ⊗ N → C ⊗ N = N.
Proof. We choose a free group F with normal subgroups R ⊂ S ⊂ T such that
F /T = C, F /S = Q, F /R = G.
In the above notation, we get M = T /S, N = S/R. The proof follows from the direct
analysis of the following diagram, which corners are exactly the roots of the diagram
given in proposition:
α′3
H3 (Q)
H1 (F /S, Sab )
H1 (F /T, S∩T
S′ )
//
′
H1 (F /T, Sab )
//
S∩T
H1 (F /T, [S,T
])
′
(3.5)
H2 (T /S)
H2 (Q)
//
β
δ
H1 (F /S, S/R)
(F /S)ab ⊗ S/R
//
S/R
All arrows of this diagram are natural. We will make comments only about two maps
from the diagram, other maps are obviously defined. The map
S ∩ T′
H1 (F /T,
) → H1 (F /T, Sab )
S′
ON LENGTHS OF HZ-LOCALIZATION TOWERS
11
is an isomorphism since
S
) ↪ H1 (F /T, Tab ) = H3 (F /T ) = 0.
S ∩ T′
S∩T ′
The vertical map in the diagram H1 (F /T, [S,T
] ) = H1 (F /T, H2 (S/T )) ↪ H2 (S/T ) follows
from the identification of H1 of a cyclic group with invariants.
The commutativity of (3.5) follows from the commutativity of the natural square
H1 (F /T,
H1 (F /S, Sab )
// //
H1 (F /S, S/R)
H1 (F /T, Sab )
// //
H1 (F /T, S/R)
and identification of H1 (F /T, −) with invariants of the F /T -action.
4. Tame modules and completions
Throughout the section C denotes an infinite cyclic group. If R is a commutative ring,
R[C] denotes the group algebra over R. We use only R = Z, Q, C. The augmentation ideal
is denoted by I. If t is one of two generators of C, R[C] = R[t, t−1 ] and I = (t − 1).
4.1. Finite dimensional K[C]-modules. Let K be a field (we use only K = Q, C), V
be a right K[C]-module such that dimK V < ∞. If we fix a generator t ∈ C we obtain a
linear map ⋅t ∶ V → V that defines the module structure. We denote the linear map by
aV ∈ GL(V ). The characteristic and minimal polynomials of aV are denoted by χV and
µV respectively. These polynomials depend of the choice of t ∈ C. Note that for any such
modules V and U we have
χV ⊕U = χV χU ,
µV ⊕U = lcm(µV , µU ).
(4.1)
Lemma 4.1. Let V be a right K[C]-module such that dimK V < ∞ and t ∈ C be a
generator. Then there exist distinct irreducible monic polynomials f1 , . . . , fl ∈ K[x] and
an isomorphism
V ≅ V1 ⊕ ⋅ ⋅ ⋅ ⊕ Vl ,
where
mi,l
m
Vi = K[C]/(fi i,1 (t)) ⊕ ⋅ ⋅ ⋅ ⊕ K[C]/(fi i (t)),
and mi,1 ≥ mi,2 ≥ ⋅ ⋅ ⋅ ≥ mi,li ≥ 1. Moreover, if we set mi = ∑j mi,j , then χV = f1m1 . . . flml
m
m
and µV = f1 1,1 . . . fl l,1 .
Proof. Note that K[C] = K[t, t−1 ] is the polynomial ring K[t] localised at the element
t. Then it is a principal ideal domain. Then the isomorphism follows from the structure
theorem for finitely generated modules over a principal ideal domain. The statement
about χV and µV follows from the fact that both characteristic and minimal polynomials
m
m
of K[C]/(fi i,j (t)) equal to fi i,j and the formulas (4.1).
Let R be a commutative ring and t be a generator of C. For an R[C]-module M and
a polynomial f ∈ R[x] we set
M f = {m ∈ M ∣ m ⋅ f (t) = 0}.
ON LENGTHS OF HZ-LOCALIZATION TOWERS
12
Note that M C = M x−1 . It is easy to see that for any f, g ∈ R[x] we have
(M/M f )g = M f g /M f .
(4.2)
Corollary 4.2. Let V be a right K[C]-module such that dimK V < ∞ and t ∈ C be a
generator. Assume that µV = f1m1 . . . flml , where f1 , . . . , fl ∈ K[x] are distinct irreducible
monic polynomials. Consider the filtration
0 = F0 V ⊂ F1 V ⊂ ⋅ ⋅ ⋅ ⊂ Fl V = V
given by Fi V = V
m
m
f1 1 ...fi i
. Then
V = ⊕ Fi V /Fi−1 V,
i
and Fi V /Fj−1 V = (V /Fj−1V )fj
mj
mi
...fi
m
µFi V /Fj−1 = fj j . . . fimi
.
Proof. In the notation of Lemma 4.1 we obtain Fi V = V1 ⊕⋅ ⋅ ⋅⊕Vi . The assertion follows.
4.2. Tame Z[C]-modules. The rank of an abelian group A is dimQ (A ⊗ Q). The torsion
subgroup of A is denoted by tor(A). The following statement seems to be well known but
we can not find a reference, so we give it with a proof.
Lemma 4.3. Let A be a torsion free abelian group of finite rank and B be a finite abelian
group. Then A ⊗ B is finite.
Proof. It is sufficient to prove that A ⊗ Z/pk is finite for any prime p and k ≥ 1. Consider a
p-basic subgroup A′ of A (see [12, VI]). Then A′ ≅ Zr , where r is the rank of A and A/A′
is p-divisible. Thus (A/A′ ) ⊗ Z/pk = 0. Hence the map (Z/pk )r ≅ A′ ⊗ Z/pk ↠ A ⊗ Z/pk is
an epimorphism.
A finitely generated Z[C]-module M is said to be tame if the group M ⋊ C is finitely
presented.
Proposition 4.4 (Theorem C of [2]). Let M be a finitely generated Z[C]-module. Then
M is tame if and only if the following properties hold:
● tor(M) is finite;
● the rank of M is finite;
● there is a generator t of C such that χM ⊗Q is integral.
Lemma 4.5 (Lemma 3.4 of [2]). Let M and M ′ be tame Z[C]-modules. Then M ⊗ M ′
is a tame Z[C]-module (with the diagonal action).
Definition 4.6. Let M be a tame module. The generator t ∈ C such that such that χM ⊗Q
is integral is called an integral generator for M. When we consider a tame module, we
always denote by t an integral generator for M. We set aM ∶= t ⊗ Q ∶ M ⊗ Q → M ⊗ Q, and
denote by χM , µM the characteristic and the minimal polynomial of aM . In other words
χM = χM ⊗Q and µM = µM ⊗Q .
Lemma 4.7. µM is an integral monic polynomial for any tame Z[C]-module M.
Proof. Let χM = (x − λ1 )m1 . . . (x − λl )ml for some distinct λ1 , . . . , λl ∈ C and mi ≥ 1. Then
µM = (x − λ1 )k1 . . . (x − λl )kl , where 1 ≤ ki ≤ mi . Since χ is a monic integral polynomial,
λ1 , . . . , λl are algebraic integers. It follows that the coefficients of µM are algebraic integers
as well. Moreover, they are rational numbers, because aM is defined rationally. Using
that a rational number is an algebraic integer iff it is an integer number, we obtain
µM ∈ Z[x].
ON LENGTHS OF HZ-LOCALIZATION TOWERS
13
Proposition 4.8. Let M be a torsion free tame Z[C]-module and µM = f1m1 . . . flml where
f1 , . . . , fl are distinct irreducible integral monic polynomials and mi ≥ 1 for all i. Consider
the filtration
0 = F0 M ⊂ F1 M ⊂ ⋅ ⋅ ⋅ ⊂ Fl M = M
m1
mi
m
given by Fi M = M f1 ...fi . Then Fi M/Fj−1 M is torsion free and µFi M /Fj−1 M = fj j . . . fimi
for any i ≥ j. Moreover, the corresponding filtration on M ⊗ Q splits:
l
M ⊗ Q ≅ ⊕ (Fi M/Fi−1 M) ⊗ Q.
i=1
Proof. Prove that Fi M/Fj−1 M is torsion free. Let v + Fj−1 M ∈ Fi M/Fj−1 M and nv +
m
Fj−1 M = 0. Hence nv ⋅ f1m1 (t) . . . fj j (t) = 0 in M. Using that M is torsion free we get
m
v ⋅ f1m1 (t) . . . fj j (t) = 0, and hence v + Fj−1 M = 0. Thus Fi M/Fj−1 M is torsion free. Set
V = M ⊗Q, K = Q, apply Corollary 4.2 and note that Fi V /Fj−1 V = (Fi M/Fj−1 M)⊗Q. The
assertion follows. Here we use Gauss lemma about integral polynomials: an irreducible
polynomial in Z[x] is irreducible in Q[x].
Recall that a module N is said to be nilpotent if NI n = 0 for some n, where I is the
augmentation ideal. It is easy to see that a Z[C]-module N is nilpotent if and only if
n
N (x−1) = N for some n.
Definition 4.9. A Z[C]-module M is said to be invariant free if M C = 0.
Lemma 4.10. Let M be a torsion free tame Z[C]-module. Then the following equivalent.
(1)
(2)
(3)
(4)
(5)
(6)
M is invariant free;
M does not have non-trivial nilpotent submodules;
MC is finite;
aM − 1 is an automorphism;
χM (1) ≠ 0;
µM (1) ≠ 0.
Proof. (1) ⇔ (2) and (4) ⇔ (5) ⇔ (6) are obvious. The equality Ker(aM − 1) = M C ⊗ Q
implies (1) ⇔ (4). Since M is finitely generated Z[C]-module, MC is a finitely generated
abelian group. Then the equality Coker(aM − 1) = MC ⊗ Q implies (3) ⇔ (4).
Corollary 4.11. Let M be a torsion free tame Z[C]-module and µM = (x − 1)m f , where
f (1) ≠ 0. Then there exists the largest nilpotent submodule N ≤ M. Moreover, µN =
(x − 1)m , µM /N = f, M/N is torsion free and invariant free, and the short exact sequence
N ⊗ Q ↣ M ⊗ Q ↠ (M/N) ⊗ Q splits over Q[C].
Proof. If µM (1) ≠ 0, then M is already invariant free, N = 0 and there is nothing to
prove. If µM (1) = 0, then we can decompose µM = (x − 1)m1 f2m2 . . . flml into a product
of irreducible polynomials such that fi (1) ≠ 0 for i ≥ 2. Consider the filtration from
Proposition 4.8. Then N = F1 M.
Corollary 4.12. Let M be a tame Z[C]-module. Then there exists the largest nilpotent
submodule N ≤ M. Moreover, M/N is invariant free and (M/N)C is finite.
Recall that a module N is said to be prenilpotent if NI n = NI n+1 for n >> 1.
ON LENGTHS OF HZ-LOCALIZATION TOWERS
14
Corollary 4.13. Let M be a tame Z[C]-module and µM = (x − 1)m f , where f (1) ≠ 0.
Then there exists a prenilpotent submodule N ≤ M such that M/N is torsion free and
invariant free. Moreover, tor(N) = tor(M), µN = (x − 1)m , µM /N = f and the sequence
N ⊗ Q ↣ M ⊗ Q ↠ (M/N) ⊗ Q splits over Q[C].
Lemma 4.14. Let M be a tame torsion free Z[C]-module. If µM (0) ∈ {−1, 1}, then M is
finitely generated as an abelian group.
Proof. It follows from the fact that Z[t, t−1 ]/(µM (t)) is a finitely generated abelian group.
4.3. Completion of tame Z[C]-modules. If M is a finitely generated R[C]-module,
we set M̂ = lim M/MI i and we denote by
←Ð
ϕ = ϕM ∶ M Ð→ M̂
the natural map to the completion. Note that the functor M ↦ M̂ is exact [19, VIII] and
M̂ /M̂I i = M/MI i .
We set
Zn = lim Z/ni
←Ð
for any n ∈ Z. In particular, Zn = Z−n , Z0 = Z, Z1 = 0 and, if n ≥ 2, then Zn = ⊕ Zp , where
p runs over all prime divisors of n.
Lemma 4.15. Let M be a tame torsion free invariant free Z[C]-module and n = χM (1).
Then ni ⋅ M ⊆ MI i for any i ≥ 1 and there exists a unique epimorphism of Z[C]-modules
ϕ̂ ∶ M ⊗ Zn ↠ M̂ such that the diagrams
M ⊗ Zn
ϕ̂
// //
M ⊗ Z/ni
M̂
// //
M/MI i
are commutative.
Proof. We identify M with the subgroup of M ⊗ Q. Corollary 4.10 implies that n ≠ 0.
Set b = aM − 1. Then the characteristic polynomial of b is equal to χb (x) = χM (x + 1) and
if χb = ∑di=0 βi xi , then β0 = n. Thus nx = b(∑di=1 βi bi−1 (x)) for any x ∈ M. It follows that
nM ⊆ b(M). Hence ni M ⊆ bi (M) = MI i for any i ≥ 1 and we obtain homomorphisms
M ⊗ Z/ni → M/MI i . We define ϕ̂ as the composition M ⊗ Zn → lim(M ⊗ Z/ni ) → M̂.
←Ð
Since the rank M is finite, the abelian groups M ⊗ Z/ni are finite. Thus we get that
the homomorphism lim(M ⊗ Z/ni ) → M̂ is an epimorphism because lim1 of an inverse
←Ð
←Ð
sequence of finite groups is trivial. Then it is sufficient to prove that the homomorphism
M ⊗Zn → lim(M ⊗ Z/ni ) is an epimorphism. For this it is enough to prove that M ⊗Zp →
←Ð
lim(M ⊗ Z/pi ) is an epimorphism for any prime p. Consider a p-basic subgroup B of M
←Ð
(see [12, VI]). Since B ≅ Zl , we get B ⊗Zp = lim(B ⊗ Z/pi ). Using that B ⊗Z/pi ↠ M ⊗Z/pi
←Ð
are epimorphisms of finite groups, we obtain that lim(B ⊗ Z/pi ) → lim(M ⊗ Z/pi ) is an
←Ð
←Ð
ON LENGTHS OF HZ-LOCALIZATION TOWERS
15
epimorphism. Then analysing the diagram
B ⊗ Zp //
//
M ⊗ Zp
≅
lim(B
←Ð
⊗ Z/pi )
// //
lim(M ⊗ Z/pi )
←Ð
we obtain that the right vertical arrow is an epimorphism.
A Z[C]-module is said to be perfect if MI = M.
Corollary 4.16. Let M be a torsion free tame Z[C]-module. If µM (1) ∈ {−1, 1}, then M
is perfect.
Proof. By Lemma 4.10, M is invariant free. Then Lemma 4.15 implies M̂ = 0. Hence M
is perfect.
Corollary 4.17. Let M be a tame Z[C]-module. If µM = (x−1)m f, where f is an integral
polynomial such that f (1) ∈ {−1, 1}, then M is prenilpotent.
Proof. A finite module is always prenilpotent, so we can assume that M has no torsion.
Further, by Lemma 4.11, we can consider the largest nilpotent submodule N ≤ M such
that µN = (x − 1)m and µM /N = f. Corollary 4.16 implies that M/N is perfect. Then N
and M/N are prenilpotent, and hence, M is prenilpotent.
Proposition 4.18. Let M and M ′ be tame Z[C]-modules with the same integral generator
t ∈ C, λ1 , . . . , λl ∈ C are eigenvalues of aM and λ′1 , . . . , λ′l′ ∈ C are eigenvalues of aM ′ .
Assume that the equality λi λ′j = 1 holds only if λi = λ′j = 1. Then the homomorphism
(M ⊗ M ′ )C Ð→ (M̂ ⊗ M̂ ′ )C
is an epimorphism.
Proof. Note that if M1 ↣ M2 ↠ M3 is a short exact sequence of tame modules and
(M1 ⊗ M ′ )C → (M̂1 ⊗ M̂ ′ )C , (M3 ⊗ M ′ )C → (M̂3 ⊗ M̂ ′ )C are epimorphisms, then (M2 ⊗
M ′ )C → (M̂2 ⊗ M̂ ′ )C is an epimorphism. Indeed, since the functor of completion is exact,
we have the commutative diagram with exact rows
(M1 ⊗ M ′ )C
//
(M̂1 ⊗ M̂ ′ )C
(M2 ⊗ M ′ )C
// //
//
(M̂2 ⊗ M̂ ′ )C
(M3 ⊗ M ′ )C
// //
(M̂3 ⊗ M̂ ′ )C
that implies this. Then, using Corollary 4.13, we obtain that we can divide our prove
into two parts: (1) prove the statement for the case of torsion free invariant free modules
M, M ′ ; (2) prove the statement for the case of a prenilpotent module M and arbitrary
tame module M ′ . Throughout the proof we use that (M ⊗ M ′ )C ≅ M ⊗Z[C] Mσ′ , where
Mσ′ is the module with the same underling abelian group M ′ but with the twisted action
of C ∶ m ∗ t = mt−1 .
ON LENGTHS OF HZ-LOCALIZATION TOWERS
16
(1) Assume that M, M ′ are torsion free invariant free tame Z[C]-modules. Lemma
4.10 implies that λi ≠ 1 and λ′j ≠ 1 for all i, j. Then we have λi λ′j ≠ 1 for all i, j. Note
that the eigenvalues of aM ⊗ aM ′ equal to the products λi λj , and hence 1 is not an
eigenvalue of aM ⊗ aM ′ . It follows that det(aM ⊗ aM ′ − 1) ≠ 0. Consider the minimal
polynomial µ of the tensor square aM ⊗ aM ′ . Since the aM ⊗ aM ′ is defined over Q, the
coefficients of µ are rational (because they are invariant under the action of the absolute
Galois group). Moreover, µ = ∏(x − λi λ′j )ki,j for some ki,j , and hence, its coefficients are
algebraic integers. It follows that µ is a monic polynomial with integral coefficients. The
polynomial µ(x + 1) is the minimal polynomial for aM ⊗ aM ′ − 1. Let µ(x + 1) = ∑ki=0 ni xi .
Then n0 = det(aM ⊗aM ′ −1) ≠ 0 and n0 ⋅(M ⊗M ′ ) ⊆ (M ⊗M ′ )(t−1). Since the rank of M ⊗M ′
is finite, (M ⊗M ′ )/n0 (M ⊗M ′ ) is finite, and hence, (M ⊗M ′ )C = (M ⊗M ′ )/(M ⊗M ′ )(t−1)
is finite. By Lemma 4.15 we have epimorphisms M ⊗ Zn ↠ M̂ and M ′ ⊗ Zn′ ↠ M̂ ′ , where
n = det(aM − 1) and n′ = det(aM ′ − 1). It is easy to see that
((M ⊗ Zn ) ⊗ (M ′ ⊗ Zn′ ))C = (M ⊗ M ′ )C ⊗ (Zn ⊗ Zn′ ).
Since (M ⊗M ′ )C is finite, (M ⊗M ′ )C → (M ⊗M ′ )C ⊗(Zn ⊗Zn′ ) is an epimorphism. Then
(M ⊗ M ′ )C → (M̂ ⊗ M̂ ′ )C is an epimorphism.
(2) Assume that M is a prenilpotent Z[C]-module and M ′ is a tame Z[C]-module.
Then there exists i such that M̂ = M/MI i . Since (M̂ ⊗ M̂ ′ )C ≅ M̂ ⊗Z[C] M̂σ′ , we get
(M̂ ⊗ M̂ ′ )C ≅ (M/MI i ⊗ M̂ ′ )C ≅ (M/MI i ⊗ M̂ ′ /M̂ ′ I i )C ≅ (M/MI i ⊗ M ′ /M ′ I i )C .
It follows that (M ⊗ M ′ )C → (M̂ ⊗ M̂ ′ )C is an epimorphism.
Corollary 4.19. Let M be a tame Z[C]-module and µM = (x − 1)m f1m1 . . . flml for some
distinct monic irreducible polynomials f1 , . . . , fl ∈ Z[x] such that fi (1) ≠ 0 and fi (0) ∉
{1, −1} for all 1 ≤ i ≤ l. Then the the homomorphism
(M ⊗2 )C Ð→ (M̂ ⊗2 )C
is an epimorphism.
Proof. Let λ1 , . . . , λk be roots of µM . Assume that λi λj = 1. Then λi is an invertible
algebraic integer, and hence, the absolute term of its minimal polynomial equals to ±1.
Thus λi can not be a root of fm for 1 ≤ m ≤ l. It follows that it is a root of x − 1. Then
λi = λj = 1.
Proposition 4.20. Let M, M ′ be tame Z[C]-modules with the same integral generator
t ∈ C, µM = (x − λ1 )m1 . . . (x − λl )ml for some distinct λ1 , . . . , λl ∈ C and µM ′ = (x −
′
′
λ′1 )m1 . . . (x − λ′l′ )ml′ for some distinct λ′1 , . . . , λ′l′ ∈ C. Assume that the equality λi λ′j = 1
holds only if either mi = m′j = 1 or λi = λ′j = 1. Then the cokernel of the homomorphism
(M ⊗ M ′ )C ⊕ (M̂ ⊗ M̂ ′ )C Ð→ (M̂ ⊗ M̂ ′ )C
is finite.
Proof. Corollary 4.13 implies that the proof can be divided into proofs of the following
two statements: (1) the statement for torsion free invariant free modules M, M ′ ; (2)
if N ↣ M ↠ M0 is a short exact sequence of tame Z[C]-modules such that N ⊗ Q ↣
M ⊗Q ↠ M0 ⊗Q splits, N is prenilpotent, and the statement holds for the couple M0 , M ′ ,
then it holds for the couple M, M ′ .
ON LENGTHS OF HZ-LOCALIZATION TOWERS
17
(1) Here we prove that the cokernel of (M̂ ⊗ M̂ ′ )C Ð→ (M̂ ⊗ M̂ ′ )C is already finite. Set
n = χM (1) and n′ = χM ′ (1). Lemma 4.15 implies that there are epimorphisms M ⊗Zn ↠ M̂
and M ′ ⊗ Zn′ ↠ M̂ ′ . Using that − ⊗ (Zn ⊗ Zn′ ) is an exact functor, we obtain that there is
an epimorphism (M ⊗M ′ )C ⊗(Zn ⊗Zn′ ) ↠ (M̂ ⊗ M̂ ′ )C . Moreover, there is an epimorphism
Coker((M ⊗ M ′ )C → (M ⊗ M ′ )C ) ⊗ (Zn ⊗ Zn′ ) ↠ Coker((M̂ ⊗ M̂ ′ )C → (M̂ ⊗ M̂ ′ )C ).
It follows that it is enough to prove that Coker((M ⊗ M ′ )C → (M ⊗ M ′ )C ) is finite.
Lemma 4.5 implies that M ⊗ M ′ is finitely generated, and hence, (M ⊗ M ′ )C is a finitely
generated abelian group. It follows that it is enough to prove that (M ⊗ M ′ )C ⊗ C →
(M ⊗ M ′ )C ⊗ C is an epimorphism. Eigenvalues of aM ⊗ aM ′ are products λi λ′j . Assume
that λi λ′j = 1 for some i, j. Since M and M ′ are invariant free, λi ≠ 1 and λ′j ≠ 1. Then
mi = 1 = m′j . It follows that all Jordan blocks of aM ⊗ C corresponding to λi and all
Jordan blocks of aM ′ ⊗ C corresponding to λ′j are 1 × 1-matrices. It follows that all Jordan
blocks of aM ⊗ aM ′ ⊗ C corresponding to 1 are 1 × 1-matrices. Hence all Jordan blocks of
B ∶= aM ⊗ aM ′ ⊗ C − 1 corresponding to 0 are 1 × 1-matrices. It is easy to see that, if all
Jordan blocks of a complex linear map B ∶ V → V corresponding to 0 are 1 × 1-matrices,
then V = Ker(B)⊕Im(B). It follows that the map Ker(B) → Coker(B) is an isomorphism.
Then (M ⊗ M ′ )C ⊗ C → (M ⊗ M ′ )C ⊗ C is an isomorphism.
(2) Note that N̂ = N/NI i for some i >> 1. Since, (N̂ ⊗ M̂ ′ )C can be interpret as
N̂ ⊗Z[C] M̂σ′ , (tensor product over Z[C]), we obtain (N̂ ⊗ M̂ ′ )C = (N/NI i ⊗ M ′ /M ′ I i )C .
It follows that (N̂ ⊗ M̂ ′ )C is a finitely generated abelian group and the map (N ⊗ M ′ )C →
(N̂ ⊗ M̂ ′ )C is an epimorphism. Set
N ∶= N ⊗ M ′ , M ∶= M ⊗ M ′ ,
M0 ∶= M0 ⊗ M ′ , Ñ ∶= N̂ ⊗ M̂ ′ ,
M̃ ∶= M̂ ⊗ M̂ ′ ,
M̃0 ∶= M̂0 ⊗ M̂ ′ ,
L̃ ∶= Ker(M̃ ↠ M̃0 ).
Then ÑC is a finitely generated abelian group and the map NC → ÑC is an epimorphism.
Consider the exact sequence
0 → L̃C → M̃C → M̃C
0 → L̃C → M̃C → (M̃0 )C → 0.
Since M̃ ⊗ Q ↠ M̃0 ⊗ Q is a split epimorphism, the image of M̃C
0 → L̃C lies in the torsion
subgroup, which is finite because of the epimorphism ÑC ↠ L̃C . Then the cokernel of M̃C
→ M̃C
0 is finite. Set
Q = Coker(MC ⊕ M̃C → M̃C ),
Q0 = Coker((M0 )C ⊕ M̃C
0 → (M̃0 )C ).
ON LENGTHS OF HZ-LOCALIZATION TOWERS
18
Then we know that Q0 is finite and Coker(M̃C → M̃C
0 ) is finite, and we need to prove
that Q is finite. Consider the diagram with exact columns.
NC ⊕ Ñ C
//
MC ⊕ M̃C
ÑC
α
//
(M0 )C ⊕ M̃0
//
M̃C
C
//
(M̃0 )C
Q
//
Q0
Using the snake lemma, we obtain that Ker(Q → Q0 ) = Coker(α). Since Q0 is finite and
Coker(α) = Coker(M̃C → M̃C
0 ) is finite, we get that Q is finite.
Corollary 4.21. Let M be a tame Z[C]-module and µM = (x − 1)m f1m1 . . . flml for some
distinct monic irreducible polynomials f1 , . . . , fl ∈ Z[x] such that fi (1) ≠ 0. Assume that
for any 1 ≤ i ≤ l either fi (0) ∉ {−1, 1} or mi = 1. Then the cokernel of the homomorphism
(M ⊗2 )C ⊕ (M̂ ⊗2 )C Ð→ (M̂ ⊗2 )C
is finite.
Remark 4.22. We prove Propositions 4.18 and 4.20 for tensor products of some modules
and their completions. Further we need the same statements for exterior squares. Of
course, the statements for tensor products imply the statements for exterior squares, so
it is enough to prove for tensor products. Moreover, it is more convenient to prove such
statements for tensor products because they have two advantages.
The first obvious advantage is that we can change modules M and M ′ in the tensor
product M ⊗ M ′ independently doing some reductions to ‘simpler’ modules.
The second less obvious advantage is the following. Let A be an abelian group and
M, M ′ are Z[A]-modules. Then we can interpret coinvariants of the tensor product as
the tensor product over Z[A]
(M ⊗ M ′ )A = M ⊗Z[A] Mσ′ ,
where Mσ′ is the module M ′ with the twisted module structure m∗a = ma−1 . In particular,
there is an additional nontrivial structure of Z[A]-module on (M ⊗ M ′ )A . But there is no
such a structure on (Λ2 M)A . More precisely, the kernel of the epimorphism
(M ⊗ M)A ↠ (Λ2 M)A
is not always a Z[A]-submodule. For example, if A = C = ⟨t⟩, M = Z2 where t acts on M
via the matrix ( 10 11 ) , it is easy to check that the kernel is not a submodule.
In our article [14] there are two mistakes concerning this that can be fixed easily.
(1) On page 562 we define ∧2σ M as a quotient module of M ⊗Λ Mσ by the submodule
generated by the elements m ⊗ m. Then we prove Corollaries 3.4 and 3.5 for such
a module. In the proof of Proposition 7.2 we assume that ∧2σ M = (∧2 M)A which
is the first mistake. In order to fix this mistake we have define ∧2σ M as a quotient
ON LENGTHS OF HZ-LOCALIZATION TOWERS
19
of M ⊗Λ M by the abelian group generated by the elements m ⊗ m and prove
Corollaries 3.4 and 3.5 using this definition. The prove is the same. We just need
to change the meaning of the word ‘generated’ form ‘generated as a module’ to
’generated as an abelian group’.
(2) In the proof of Lemma 7.1 we assume that (∧2 M)A is an Z[A]-module. This is
the second mistake. In order to fix it, we have replace (∧2 M)A by (M ⊗ M)A in
the first sentence of the proof of Lemma 7.1.
5. HZ-localization of M ⋊ C
Now consider our group G = M ⋊ C and the maximal nilpotent submodule N ⊆ M, such
that (M/N)C is finite. We have a natural commutative diagram
N //
//
G
// //
G/N
ηω
ηω
N //
//
Ĝ
// //
̂
G/N
Observe that, for any Z[C]-submodule N ′ ⊆ N,
̂′ = Ĝ/N ′ .
G/N
Lemma 5.1. For any Z[C]-submodule N ′ of N, there is a natural isomorphism
H2 (ηω )(G) = H2 (ηω )(G/N ′ ).
Proof. We can present the submodule N ′ as a finite tower of central extensions. If we will
prove that
H2 (ηω )(G) = H2 (ηω )(G/N ′ )
for any N ′ ⊆ N such that N ′ (1−t) = 0, than we will be able to prove the general statement
by induction on class of nilpotence of N ′ .
The assumption that N ′ (1 − t) = 0 implies that the extensions
̂′ → 1
1 → N ′ → G → G/N ′ → 1, 1 → N ′ → Ĝ → G/N
are central. Consider the natural map between sequences (3.2) for these extensions:
(Gab ⊗ N ′ )/U
//
H2 (G)
H2 (G/N ′ )
//
//
N′
H1 (G)
//
(Ĝab ⊗ N ′ )/U
//
H2 (Ĝ)
//
H2 (ηω )(G)
̂′ )
H2 (G/N
//
N′
//
H1 (Ĝ)
//
H2 (ηω )(G/N ′ )
Elementary diagram chasing implies that the lower horizontal map is an isomorphism and
the needed statement follows.
Lemma 5.2. If E(G/N) = Tω+1 (G/N), then EG = Tω+1 (G).
ON LENGTHS OF HZ-LOCALIZATION TOWERS
20
Proof. First we observe that, for any N ′ ⊆ N, there is a natural isomorphism
Tω+1 (G/N ′ ) = Tω+1 (G)/N ′ .
Indeed, lemma 5.1 implies that there is a natural diagram
H2 (ηω )(G) //
N′
N′
Tω+1 (G)
//
// //
H2 (ηω )(G/N ′ ) //
Tω+1 (G/N ′ )
//
Ĝ
// //
̂′
G/N
Hence, we have a natural diagram
N ′ //
//
G
// //
G/N ′
ηω+1
ηω+1
N ′ //
//
Tω+1 (G)
// //
Tω+1 (G/N ′ )
Again, as in the proof of lemma 5.1, we will assume that N ′ is central and will prove that,
in this case, EG = Tω+1 (G) provided E(G/N ′ ) = Tω+1 (G/N ′ ).
This follows from comparison of sequences (3.2) applied to the above central extensions:
(Gab ⊗ N ′ )/U
(Ĝab
⊗ N ′ )/U
//
H2 (G)
//
//
H2 (Tω+1 (G))
//
N′
//
H2 (Tω+1 (G/N ′ ))
//
H2 (ηω+1 )(G)
H2 (G/N ′ )
//
N′
//
H1 (G)
H1 (Tω+1 (G))
//
H2 (ηω+1 )(G/N ′ )
Again, elementary diagram chasing shows that the lower horizontal map is an isomorphism
and the needed statement follows.
Proposition 5.3. For a tame Z[C]-module M, the following conditions are equivalent:
(i) HZ-length(M ⋊ C) ≤ ω + 1;
(ii) the composition
Λ2 (M̂ )C → Λ2 (M̂ )C → H2 (ηω )
is an epimorphism.
ON LENGTHS OF HZ-LOCALIZATION TOWERS
21
Proof. It follows from (3.3) and construction of Tω+1 (G) that we have a natural diagram:
H3 (Ĝ)
Gab ⊗ H2 (ηω )(G)
//
//
H2 (G)
H2 (G)
H2 (Tω+1 (G))
H2 (Ĝ)
//
// //
H2 (ηω )(G)
q
qqqq
q
q
qqq
qqq
H2 (ηω )(G)
(5.1)
This diagram implies that the condition (i) for G = M ⋊ C is equivalent to the surjectivity
of the map δ ∶ H3 (Ĝ) → Gab ⊗ H2 (ηω )(G). Proposition 3.1 implies the following natural
diagram
H3 (Ĝ)
//
Gab ⊗ H2 (ηω )(G)
Λ2 (M̂ )C
//
H2 (ηω )(G)
♣77
♣♣♣
♣
♣
♣♣
♣♣♣
❏❏
❏❏
❏❏
❏❏
❏%% %%
Λ2 (M̂ )C
and the implication (i) ⇒ (ii) follows.
Now assume that (ii) holds. Let N be the maximal nilpotent submodule of M such
that (M/N)C is finite. Denote H ∶= (M/N) ⋊ C. We have a natural diagram:
H3 (Ĝ)
//
H3 (Ĥ)
//
Λ2 (M̂ )C
Λ2 (̂
M/N )C
//
//
Λ2 (M̂ )C
Λ2 (̂
M/N )C
//
H2 (ηω )(G)
//
H2 (ηω )(H)
Lemma 5.1 implies that the right hand vertical map in this diagram is a natural isomorphism. Condition (ii) implies that the composition of three lower arrows in the last
diagram must be an epimorphism. Now observe that Hab ⊗ H2 (ηω )(H) = H2 (ηω )(H),
since the group H2 (ηω )(H) is divisible and (M/N)C is finite. Therefore, δ ∶ H3 (Ĥ) →
Hab ⊗ H2 (ηω )(H) is surjective. The diagram (5.1) with G replaced by H implies that
EH = Tω+1 (H). Now the statement (i) follows from lemma 5.2.
Now we will consider the key example of a tame Z[C]-module M, such that
HZ-length(M ⋊ C) > ω + 1. For the construction of such an example, recall first certain well-known properties of quadratic functors.
Let X1 , . . . , Xn , Y1 , . . . , Ym be abelian groups and X = ⊕ni=1 Xi , Y = ⊕m
j=1 Yj . An element
of a direct sum will be written as a column (x1 , . . . , xn )T ∈ X and a homomorphism
ON LENGTHS OF HZ-LOCALIZATION TOWERS
22
f ∶ X → Y will be written as a matrix f = (fji ), where fji ∶ Xi → Yj and
n
n
f ((x1 , . . . , xn ) ) = (∑ f1i (xi ), . . . , ∑ fmi (xi ))T .
T
i=1
i=1
For an abelian group X we denote by
and X ⊗2 its exterior, symmetric,
divided (the same as the Whitehead quadratic functor) and tensor squares respectively.
If X is torsion free, then there are short exact sequences
Λ2 X,
S2 X,
ι∧
πS
ιΓ
π∧
Γ2 X
0 Ð→ Λ2 X ÐÐ→ X ⊗2 ÐÐ→ S2 X Ð→ 0,
0 Ð→ Γ2 X ÐÐ→ X ⊗2 ÐÐ→ Λ2 X Ð→ 0,
where π∧ , πS are the canonical projections
ι∧ (x1 x2 ) = x1 ⊗ x2 − x1 ⊗ x2 ,
ιΓ (γ2 (x)) = x ⊗ x
(see [13, ch.1 Section 4.3]). We will identify Γ2 X with the kernel of π∧ for torsion free
groups.
Lemma 5.4. Let M = Z2 be the Z[C]-module with the action of C given by the matrix
1
2n = 4n M for any natural n and M̂ = (Z )2 with the action of
c = ( −1
2
0 −1 ) . Then M(t − 1)
C given by the same matrix. Moreover,
Λ2 M̂ = Λ2 Z2 ⊕ Λ2 Z2 ⊕ Z⊗2
2 ,
(Λ2 M̂ )C = Λ2 Z2 ⊕ 0 ⊕ Γ2 Z2 ,
(Λ2 M̂ )C = 0 ⊕ Λ2 Z2 ⊕ S2 Z2
and the cokernel of the natural map
(Λ2 M̂)C Ð→ (Λ2 M̂ )C
is isomorphic Λ2 Z2 .
1
n
n
n
2
Z2 /dn (Z2 ).
Proof. Set d ∶= c−1 = ( −2
0 −2 ) . Since MI = M(t−1) = d (Z ), we have M̂ = lim
←
Ð
Computations show that d2 = −4c, and hence d2n = (−4)n cn . Since c induces an automorphism on Z2 , we obtain d2n (Z2 ) = 4n Z2 . Thus the filtration dn (Z2 ) of Z2 is equivalent
to the filtration 2n Z2 . It follows that M̂ = (Z2 )2 with the action of C given by c. Then
Λ2 M̂ ≅ (Λ2 Z2 )2 ⊕ Z⊗2
2 . We identify the element
xe1 ∧ x′ e1 + ye2 ∧ y ′e2 + ze1 ∧ z ′ e2 ∈ ∧2 M̂
with the column
(x ∧ x′ , y ∧ y ′ , z ⊗ z ′ )T ∈ (∧2 Z2 )2 ⊕ (Z2 ⊗ Z2 ),
where e1 , e2 is the standard basis of (Z2 )2 over Z2 . Let us present the homomorphism
∧2 ĉ ∶ Λ2 M̂ → Λ2 M̂ that defines the action of C as a matrix. Since
∧2 ĉ(xe1 ∧ x′ e1 ) = xe1 ∧ x′ e1
∧2 ĉ(ye2 ∧ y ′e2 ) = ye1 ∧ y ′e1 + ye2 ∧ y ′ e2 − (ye1 ∧ y ′ e2 − y ′ e1 ∧ ye2 )
∧2 ĉ(ze1 ∧ z ′ e2 ) = −ze1 ∧ z ′ e1 + ze1 ∧ z ′ e2 ,
we obtain
1 1 −π∧
1
0 ),
0 −ι∧ 1
∧2 ĉ = ( 0
0 1 −π∧
0
0 ).
0 −ι∧ 0
∧2 ĉ − 1 = ( 0
ON LENGTHS OF HZ-LOCALIZATION TOWERS
23
Now it is easy to compute that
(Λ2 M̂ )C = Ker(∧2 ĉ − 1) = Λ2 Z2 ⊕ 0 ⊕ Γ2 Z2
and
Im(∧2 ĉ − 1) = Λ2 Z2 ⊕ 0 ⊕ ι∧ (Λ2 Z2 ).
It follows that (Λ2 M̂ )C = Coker(∧2 ĉ − 1) = 0 ⊕ Λ2 Z2 ⊕ S2 Z2 . In order to prove that
Coker((Λ2 M̂)C → (Λ2 M̂ )C ) = Λ2 Z2 ,
2
we have to prove that Coker(Γ2 Z2 → Z⊗2
2 → S Z2 ) = 0. In other words we have to prove
⊗2
that Z2 is generated by elements x ⊗ x and x ⊗ y − y ⊗ x for x, y ∈ Z2 . Fist note that any
2-divisible element is generated by elements
2x ⊗ y = (x ⊗ y − y ⊗ x) + (x + y) ⊗ (x + y) − x ⊗ x − y ⊗ y.
Since any element of Z2 is equal to 2x or 1+2x for some x ∈ Z2 , all other elements of Z2 ⊗Z2
can be presented as sums of elements (1 +2x)⊗(1 +2y) = 1 ⊗1 +2(x⊗1 +1 ⊗y +2x⊗y).
Theorem 5.5. Let M be the Z[C]-module from lemma 5.4 The HZ-length of the group
G ∶= M ⋊ C = ⟨a, b, t ∣ [a, b] = 1, at = a−1 , bt = ab−1 ⟩
is greater than ω + 1.
Proof. Consider the central sequence
1 → H2 (ηω )(G) → Tω+1 (G) → Ĝ → 1
We will show that the cokernel of the map δ ∶ H3 (Ĝ) → Gab ⊗ H2 (ηω )(G) from (3.1)
contains Λ2 (Z2 ). The theorem will immediately follow.
By proposition 3.1, the map
δ ∶ H3 (Ĝ) → Gab ⊗ H2 (ηω )(G) = H2 (ηω )(G)
factors as
H3 (Ĝ) → (Λ2 M̂ )C → (Λ2 M̂ )C ↠ H2 (ηω )(G)
The direct summand Λ2 Z2 from (Λ2 M̂)C maps isomorphically to a direct summand of
H2 (ηω )(G) = Λ2 Z2 ⊕ (S2 Z2 )/Z. By lemma 5.4, the summand Λ2 Z2 lies in the cokernel of
the map (Λ2 M̂ )C → (Λ2 M̂ )C , therefore, it lies also in the cokernel of the composite map
(Λ2 M̂ )C → H2 (ηω )(G) as well as of the map δ.
Theorem 5.6. Let G be a metabelian group of the form G = M ⋊ C, where M is a tame
C-module and µM = (x − λ1 )m1 . . . (x − λl )ml for some distinct complex numbers λ1 , . . . , λl
and mi ≥ 1.
(1) Assume that the equality λi λj = 1 holds only if λi = λj = 1. Then
HZ-length(G) ≤ ω.
(2) Assume that the equality λi λj = 1 holds only if either mi = mj = 1 or λi = λj = 1.
Then
HZ-length(G) ≤ ω + 1.
Proof. Follows from propositions 5.3, 4.18 and 4.20.
ON LENGTHS OF HZ-LOCALIZATION TOWERS
24
Lemma 5.7. Let M be a tame Z[C]-module such that H2 (M ⋊ C) is finite and
µM = (x − λ1 )m1 . . . (x − λl )ml ,
where λ1 , . . . , λl are distinct complex numbers and mi ≥ 1. Then λi λj = 1 implies mi =
mj = 1.
Proof. Set V ∶= M ⊗C. Then by Lemma 4.1 we get V = V1 ⊕⋅ ⋅ ⋅⊕Vl such that µVi = (x−λi )mi .
The short exact sequence
(Λ2 M)C ↣ H2 (M ⋊ C) ↠ M C
implies that (∧2 M)C is finite. Then (Λ2C V )C = (Λ2 M)C ⊗ C = 0. Assume the contrary,
that there exist i, j such that λi λj = 1 and mi ≥ 2. Consider two cases i = j and i ≠ j.
(1) Assume that i = j. Then λi = λj = −1. Since mi ≥ 2, at least one of Jordan blocks of
aM ⊗ C corresponding to −1 has size bigger than 1. It follows that there is a epimorphism
V ↠ U, where U = C2 and C acts on U by the matrix ( −10 −11 ) . The epimorphism induces a
epimorphism 0 = (Λ2C V )C ↠ (∧2C U)C . From the other hand, a simple computation shows
that (Λ2C U)C ≅ C. So we get a contradiction.
(2) Assume that i ≠ j. Then λi ≠ λj . Because of the isomorphism of C[C]-modules
Λ2C V = (⊕ Λ2C Vk ) ⊕ ( ⊕ Vk ⊗C Vk′ ),
k
k<k ′
we have an epimorphism of Z[C]-modules Λ2C V ↠ Vi ⊗C Vj . Since mi ≥ 2, there is an
epimorphism Vi ↠ Ui , where Ui = C2 and C acts on Ui by the matrix ( λ0i λ1i ) . Moreover,
there is an epimorphism Vj ↠ Uj , where Uj = C and C acts on Uj by the multiplication
on λj . It follows that C acts on Ui ⊗C Uj by the matrix ( 10 λ1j ) . Thus (Ui ⊗C Uj )C ≅ C.
From the other hand we have an epimorphism 0 = (Λ2C V )C ↠ (Ui ⊗ Uj )C . So we get a
contradiction.
Proposition 5.8. Let G be a metabelian finitely presented group of the form G = M ⋊ C
for some Z[C]-module M and H2 (G) is finite. Then
HZ-length(G) ≤ ω + 1.
Proof. It follows from Lemma 5.7 and Theorem 5.6.
6. Bousfield’s method
Let G be a finitely presented group given by presentation
⟨x1 , . . . , xm ∣ r1 , . . . , rk ⟩
Consider the free group F = F (x1 , . . . , xn ) and an epimorphism F → G with the kernel
normally generated by k elements ker F → G = ⟨r1 , . . . , rk ⟩F . Here we will study the
induced map
h ∶ H2 (F̂ ) → H2 (Ĝ).
(6.1)
ON LENGTHS OF HZ-LOCALIZATION TOWERS
25
We follow the scheme due to Bousfield from [5]. Let
Ð→
Ð→
d0 ,d1
Ð→
F∗ ∶
. . . Ð→ F1 Ð→ F0 (= F ) → G
←Ð
←Ð
←Ð
be a free simplicial resolution of G, where F1 a free group with m + k generators. The
structure of F1 is F (y1 , . . . , yk ) ∗ F , and the maps d0 , d1 are given as
id
d0 ∶ yi ↦ 1, F ↦ F
id
d 1 ∶ y i ↦ ri , F ↦ F
The following short exact sequence follows from Lemma 5.4 and Proposition 3.13 in [5]:
̂∗ ) → Ĝ → 1
1 → lim 1 π1 (F∗ /γk (F∗ )) → π0 (F
←Ð k
(6.2)
The first homotopy group π1 (F∗ /γk (F∗ )) is isomorphic to the kth Baer invariant known
also as k-nilpotent multiplicator of G (see [7], [11]). If G = F /R for a normal subgroup
R ⊲ F , the Baer invariant can be presented as the quotient
π1 (F∗ /γ∗(F∗ )) ≃
R ∩ γk (F )
.
[R, F, . . . , F ]
´¹¹ ¹ ¹ ¹ ¹ ¹ ¸¹ ¹ ¹ ¹ ¹ ¹ ¹¶
k−1
1
Now assume that, for a group G, the lim -term vanishes in (6.2), that is, there is a natural
←Ð
isomorphism
̂∗ ) ≃ Ĝ
π0 (F
There is the first quadrant spectral sequence (see [4], page 108)
1
̂∗ ))
= Hq (F̂p ) ⇒ Hp+q (W (F
Ep,q
r
r
→ Ep−r,q+r−1
dr ∶ Ep,q
As a result of convergence of this spectral sequence, we have the following diagram:
1
E0,1
= H2 (F̂ )
// //
∞
E0,1
(6.3)
̂∗ ))
H2 (W (F
// //
H2 (Ĝ)
∞
E1,0
∞ is a quotient of π ((F ) ) =
Since F1 is finitely generated, H1 (F̂1 ) ≃ H1 (F1 ), therefore, E1,0
1
∗ ab
H2 (G). Now the diagram (6.3) implies the following
ON LENGTHS OF HZ-LOCALIZATION TOWERS
26
Lemma 6.1. Assuming that lim 1 -term in (6.2) vanishes, the cokernel of the map (6.1)
←Ð
is isomorphic to a quotient of the homology group H2 (G). For a group G with H2 (G) = 0,
the map h ∶ H2 (F̂ ) → H2 (Ĝ) is an epimorphism.
7. Main construction
Lemma 7.1. Let K = ⟨a, b ∣ [[a, b], a] = [[a, b], b] = 1⟩ be the free 2-generated nilpotent
group of class 2 and C = ⟨t⟩ acts on K by the following automorphism
a ↦ a−1
b ↦ ab−1 .
Set K̃ ∶= lim K/γn (K ⋊ C). Then the following holds
←Ð
(1) the pronilpotent completion of K ⋊ C is equal to K̃ ⋊ C;
(2) Kab is isomorphic to the Z[C]-module from Lemma 5.4;
≅ ̂
(3) the obvious map (K̃)ab Ð
→ (K
ab ) is an isomorphism of Z[C]-modules;
(4) either [K̃, K̃] ≅ Z2 or [K̃, K̃] ≅ Z/2m for some m.
Proof. For the sake of simplicity we set Kn ∶= γn (K ⋊ C). (1) follows from the equality
(K ⋊ C)/Kn = (K/Kn ) ⋊ C
for n ≥ 2. (2) is obvious. Prove (3) and (4).
It is obvious that γ2 (K) = Z(K) = {[a, b]k ∣ k ∈ Z} ≅ Z and [ai , bj ] = [a, b]ij for
all i, j ∈ Z. Moreover, any element of K can be uniquely presented as ai bj [a, b]k for
i, j, k ∈ Z. Set M = Kab . We consider M with the additive notation as a module over
C. Then γn+1 (M ⋊ C) = M(t − 1)n . Lemma 5.4 implies that γ2n+1 (M ⋊ C) = 4n M. Since
the map K2n+1 ↠ γ2n+1 (K ⋊ C) is an epimorphism, there exist k, l ∈ Z (that depend of
n
n
2n
n
n
n) such that a4 [a, b]k , b4 [a, b]l ∈ K2n+1 . Since, [a, b]4 = [a4 [a, b]k , b4 [a, b]l ], we obtain
2n
m(2n+1)
[a, b]4 ∈ K2n+1 . Then K2n+1 ∩ γ2 (K) = ⟨[a, b]2
⟩ for some natural number m(2n + 1)
m(n)
2n
because 4 is divisible only on powers of 2. Hence Kn ∩ γ2 (K) = ⟨[a, b]2
⟩ for some
m(n). Then the short exact sequence γ2 (K) ↣ K ↠ M induces the short exact sequence
ιn
0 Ð→ Z/2m(n) ÐÐÐ→ K/Kn Ð→ M/M(t − 1)n Ð→ 1,
where ιn (1) = [a, b]. Since Z/2m(n) and M/M(t − 1)n are finite 2-groups, the order of
′
K/Kn is equal to 2m (n) for some m′ (n). We obtain a short exact sequence
0 Ð→ lim Z/2m(n) ÐÐ→ K̃ Ð→ M̂ Ð→ 1.
←Ð
If m(n) → ∞, then lim Z/2m(n) = Z2 , else m(n) stabilizes and lim Z/2m(n) = Z/2m .
←Ð
←Ð
Now it is sufficient to prove that [K̃, K̃] = Im(ι̂). Since M̂ is abelian, [K̃, K̃] ⊆ Im(ι̂).
Prove that [K̃, K̃] ⊇ Im(ι̂). Any element of K̃ can be presented as a sequence (xn )∞
n=1 ,
m(n)
where xn ∈ K/Kn such that xn ≡ xn+1 mod Kn . Any element of lim Z/2
can be
←Ð
∞
k
presented as an image of a 2-adic integer ∑k=0 αk 2 , where αk ∈ {0, 1}. Then an element
of Im(ι̂) can be presented as a sequence
ι̂
([a, b]∑k=0
m(n)
αk 2k ∞
)n=1
= ([a∑k=0
m(n)
αk 2k
, b])∞
n=1 .
ON LENGTHS OF HZ-LOCALIZATION TOWERS
27
2
Note that the element (a∑k=0 αk 2 )∞
n=1 is a well defined element of K̃ because a
It follows that
m(n)
m(n)
k
∑k=0 αk 2k ∞
)n=1 , (b)∞
([a, b]∑k=0 αk 2 )∞
n=1 ],
n=1 = [(a
m′ (n)
m′ (n)
k
and hence, [K̃, K̃] ⊇ Im(ι̂).
∈ Kn .
Theorem 7.2. Let F be a free group of rank ≥ 2. Then HZ-length(F ) ≥ ω + 2.
Proof. Consider the following group:
Γ ∶= ⟨a, b, t ∣ [[a, b], a] = [[b, a], b] = 1, at = a−1 , bt = ab−1 ⟩
Observe that Γ is the semidirect product K ⋊ C from the previous lemma.
Consider the natural diagram induced by the projection K → Kab = M, i.e. by Γ =
K ⋊ C ↠ G = M ⋊ C:
H3 (Γ̂)
δ
H3 (Ĝ)
δ
//
H1 (Γ) ⊗ H2 (ηω )(Γ)
(7.1)
◗ ◗
◗ ◗
◗ ◗
◗((
// H1 (G) ⊗ H2 (ηω )(G)
// // Coker(δ)(G)
It is shown in the proof of theorem 5.5 that Coker(δ)(G) contains Λ2 Z2 . This term
is an epimorphic image of one of the terms Λ2 Z2 in
Λ2 Z2
H2 (M̂ ) = Λ2 M̂ = Λ2 Z2 ⊕ Λ2 Z2 ⊕ Z2 ⊗ Z2
(see lemma 5.4). By lemma 7.1, there is a short exact sequence
H2 (K̃) → H2 (M̂ ) ↠ [K̃, K̃]
Now, by lemma 7.1, [K̃, K̃] is either Z2 or a finite group, therefore, the image of both
terms Λ2 Z2 in [K̃, K̃] are zero (2-adic integers do not contain divisible subgroups). In
particular, the term Λ2 Z2 which maps isomorphically onto a subgroup of Coker(δ)(G)
lies in the image of H2 (K̃). The natural square
H2 (K̃)
// //
H2 (M̂ )
H2 (ηω )(Γ)
// //
H2 (ηω )(G)
is commutative and we conclude that the diagonal arrow in (7.1) maps onto the subgroup
Λ2 Z2 of Coker(δ)(G). Hence, Coker(δ)(Γ) maps epimorphically onto Λ2 Z2 .
Now lets prove that the second homology H2 (Γ) is finite. Since the group Γ is the
semi-direct product K ⋊ C, its homology is given as
0 → H2 (K)C → H2 (Γ) → H1 (C, Kab ) → 0
The right hand term is zero: H1 (C, Kab ) = H1 (C, M) = M C = 0. It follows immediately
that H2 (K)C = (γ3 (F (a, b))/γ4 (F (a, b)))C = Z/4.
ON LENGTHS OF HZ-LOCALIZATION TOWERS
28
Observe that the group Γ can be defined with two generators only. Let F be a free
group of rank ≥ 2. Consider a free simplicial resolution of Γ with F0 = F : F∗ → Γ. Since
H2 (Γ) is finite, all Baer invariants of Γ are finite (see, for example, [11])
lim 1 π1 (F∗ /γk (F∗ )) = 0
←Ð
and, by lemma 6.1, the cokernel of the natural map H2 (F̂ ) → H2 (Γ̂) is finite.
We have a natural commutative diagram:
H3 (F̂ )
δ
H1 (F ) ⊗ H2 (ηω )(F )
//
H3 (Γ̂)
// //
Coker(δ)(F )
δ
//
H1 (Γ) ⊗ H2 (ηω )(Γ)
(7.2)
// //
Coker(δ)(Γ)
with H2 (ηω )(F ) = H2 (F̂ ). Since the cokernel of H2 (F̂ ) → H2 (Γ̂) is finite,
Coker{Coker(δ)(F ) → Coker(δ)(Γ)}
also is finite. However, Coker(δ)(Γ) maps onto Λ2 Z2 , as we saw before, hence Coker(δ)(F )
is uncountable. Therefore, by (3.3), H2 (Tω+1 (F )) ≠ 0 and the statement is proved.
8. Alternative approaches
In general, given a group, the description of its pro-nilpotent completion is a difficult
problem. If a group is not pre-nilpotent, its pro-nilpotent is uncountable and may contain
complicated subgroups. In this paper, as well as in [14] we essentially used the explicit
structure of pro-nilpotent completion for metabelian groups. Now we observe that, there
is a trick which gives a way to show that some groups have HZ-length greater than ω
without explicit description of their pro-nilpotent completion. In a sense, the Bousfield
scheme described above also gives such a method, however in that way one must compare
the considered group with a group with clear completion. The trick given bellow is
different.
We put Φi H2 (G) = Ker{H2 (G) → H2 (G/γi+1 (G))}. Then Φi H2 (G) is the Dwyer filtration on H2 (G) (see [8]).
Proposition 8.1. Let G be a group with the following properties:
(i) γω (G) ≠ γω+1 (G)
(ii) ∩i Φi H2 (G/γω (G)) = 0.
Then the HZ-length of G is greater than ω.
ON LENGTHS OF HZ-LOCALIZATION TOWERS
29
Proof. All ingredients of the proof are in the following diagram with exact horizontal and
vertical sequences:
H2 (G)
▼▼▼
▼▼▼
▼▼▼
▼▼&&
//
// H2 (G/γω (G))
ker //
∩i Φi H2 (G/γω (G))
H2 (Ĝ)
γω (G)/γω+1 (G)
The fact that the kernel lies in ∩i Φi H2 (G/γω (G)) follows from the standard property of
Dwyer’s filtration: the map G/γω (G) → Ĝ induces isomorphisms on H2 /Φi for all i (see
[8]). The vertical exact sequence is the part of 5-term sequence in homology. Now assume
that the map H2 (G) → H2 (Ĝ) is an epimorphism. Then H2 (G/γω (G)) → H2 (Ĝ) is an
epimorphism as well. However, condition (ii) implies that the last map is a monomorphism
as well. Hence, it is an isomorphism and surjectivity of H2 (G) → H2 (Ĝ) contradicts the
property (i).
An example of a group with satisfies both conditions (i) and (ii) from proposition, is
the group
⟨a, b ∣ [a, b3 ] = [[a, b], a]2 = 1⟩
(see [16], examples 1.70 and 1.85). However, in this example, G/γω (G) is metabelian and
one can use explicit description of its pro-nilpotent completion to show the same result.
The above proposition can be used for more complicated groups.
Now we consider another example of finitely generated metabelian group of the type
M ⋊ C with HZ-length greater than ω + 1. In our example, M = Z[C] with a regular
action of C as multiplication. This is not a tame Z[C]-module and the group is not
finitely presented. The proof of the bellow result uses functorial arguments.
Theorem 8.2. Let
G = Z[C] ⋊ C = Z ≀ Z = ⟨a, b ∣ [a, ab ] = 1, i ∈ Z⟩,
i
then HZ-length(G) ≥ ω + 2.
Proof. We define a functor from the category fAb of finitely generated free abelian groups
to the category of groups as follows:
F ∶ A ↦ (A ⊗ M) ⋊ C, A ∈ fAb,
where the action of C on A is trivial and M = Z[C]. Now our main example can be
written as G = F (Z). Since the action of C on A is trivial, the pro-nilpotent completion
of F (A) can be described as follows:
̂
F
(A) = (A ⊗ M̂) ⋊ C.
One can easily see that
H1 (C, Hi (A ⊗ M̂ )) = 0, i ≥ 1.
ON LENGTHS OF HZ-LOCALIZATION TOWERS
30
Since the abelian group A ⊗ M̂ is torsion-free,
̂
Hi (F
(A)) = Λi (A ⊗ M̂ )C , i ≥ 1.
Now we have
H2 (ηω )(F (A)) = Coker{Λ2 (A ⊗ M)C → Λ2 (A ⊗ M̂ )C }.
Now consider the functor from the category of finitely generated free abelian groups to
all abelian groups
G ∶ A ↦ H2 (ηω )(F (A))
It follows immediately that G is a quadratic functor. Indeed, it is a proper quotient of
the functor A ↦ A ⊗ A ⊗ B for a fixed abelian group B.
The exact sequence (3.3) applied to the stem-extension
̂
1 → H2 (ηω )(F (A)) → Tω+1 (F (A)) → F
(A) → 1
can be viewed as an exact sequence of functors in the category sAb. Consider the map δ
̂
as a natural transformation in sAb. The functor H3 (F
(A)) = Λ3 (A ⊗ M̂)C is a quotient
of the cubic functor
A⊗A⊗A⊗D
for some fixed abelian group D (which equals to the tensor cube of M̂ ). Recall that any
natural transformation of the form
A ⊗ A ⊗ A ⊗ D → quadratic functor
is zero. This follows from the simple observation that A⊗A⊗A⊗D is a natural epimorphic
image of its triple cross-effect. See, for example, [17] for generalizations and detailed
discussion of this observation.
Therefore, for any non-zero A, the map δ in (3.3) is zero and H2 (Tω+1 (F (A))) contains a subgroup H2 (ηω )(F (A)) which is uncountable. In particular, H2 (Tω+1 (G)) =
H2 (Tω+1 (F (Z)) is uncountable and hence H2 (G) → H2 (Tω+1 (G)) is not an epimorphism.
Remark 8.3. If we change the group Z ≀ Z by lamplighter group by adding one more
relation, we will obtain another wreath product (for n ≥ 2)
Z/n ≀ Z = ⟨a, b ∣ an = 1, [a, ab ] = 1, i ∈ Z⟩
i
One can use the scheme of this paper to prove that HZ-length(Z/n≀Z) > ω+1. Essentially it
follows from the triviality of Λ2 (M̂ )C as in Theorem 8.2. As it was shown by G. Baumslag
[1], there are different ways to embed this group into a finitely-presented metabelian group.
For example, Z/n ≀ Z is a subgroup generated by a, b in
G[n] ∶= ⟨a, b, c ∣ an = [a, ab ] = [a, ab ] = [b, c] = 1, ac = aab a−b ⟩.
2
2
Now we have a decomposition G[n] = (Z/n)⊕∞ ⋊ (C × C) and it follows immediately from
[14] that HZ-length(G[n] ) = ω, since H2 (ηω )(G[n] ) is divisible.
ON LENGTHS OF HZ-LOCALIZATION TOWERS
31
9. Concluding remarks
Lemma 9.1. For any prime p and n ≥ 2 the embedding Zp ↪ Qp induces an isomorphism
Λn Zp ≅ Λn Qp .
In particular, Λn Zp is a Q-vector space of countable dimension.
⊗n
Proof. Since Zp and Qp are torsion free, the map Z⊗n
p → Qp is a monomorphism. Since
for torsion free groups the exterior power is embedded into the tensor power, we obtain
that the map Λn Zp → Λn Qp is a monomorphism. So we can identify Λn Zp with its image
in Λn Qp . Since Qp = ⋃ p1i ⋅ Zp , we obtain Λn Qp = ⋃ p1i ⋅ Λn Zp . Then it is enough to prove
that Λn Zp is p-divisible. For any a ∈ Zp we consider the decomposition a = a(0) + a(1) ,
where a(0) ∈ {0, . . . , p − 1} and a(1) = p ⋅ b for some b ∈ Zp . Then for any a1 , . . . , an ∈ Zp
(i )
(i )
we have a1 ∧ ⋅ ⋅ ⋅ ∧ an = ∑ a1 1 ∧ ⋅ ⋅ ⋅ ∧ an n , where (i1 , . . . , in ) runs over {0, 1}n . For any
(i )
(i )
sequence (i1 , . . . , in ) except (0, . . . , 0) the element a1 1 ∧ ⋅ ⋅ ⋅ ∧ an n is p divisible because
(1)
(0)
(0)
ai is p-divisible. Since Λn Z = 0, we get a1 ∧ ⋅ ⋅ ⋅ ∧ an = 0. It follows that a1 ∧ ⋅ ⋅ ⋅ ∧ an is
p-divisible. Thus Λn Zp is p-divisible.
Remind that the Klein bottle group is given by GKl = Z ⋊ C, where C acts on Z by the
multiplication on −1. Its pronilpotent completion is equal to Z2 ⋊ C. Consider the map
w ∶ Z2 × Z2 → Λ2 Z2 given by w(a, b) = 21 a ∧ b. Here we use that Λ2 Zp is a Q-vector space.
It is easy to see that w is a 2-cocycle and we get the corresponding central extension
Λ2 Z2 ↣ Nw ↠ Z2 ,
whose underlying set is (Λ2 Z2 ) × Z2 and the product is given by
(α, a)(β, b) = (α + β +
1
⋅ a ∧ b, a + b) .
2
We define the action of C on Nw by the map (α, a) ↦ (α, −a).
Proposition 9.2. There is an isomorphism
EGKl = Nw ⋊ C.
In other words, EGKl can be described as the set (Λ2 Z2 ) × Z2 × C with the multiplication
given by
(α, a, ti )(β, b, tj ) = (α + β +
(−1)i
⋅ a ∧ b, a + (−1)i b, ti+j ) .
2
Proof. Set G ∶= GKl . Then Ĝ = Z2 ⋊ C. The Lyndon-Hochschild-Serre spectral sequences
≅
imply that H2 (G) = 0 and the map Λ2 Z2 = H2 (Z2 ) Ð
→ H2 (Ĝ) is an isomorphism. Theorem 5.6 implies that Tω+1 G = EG. Then EG is the universal relative central extension
(H2 (ηω ) ↣ EG ↠ Ĝ, ηω+1 ). Since H2 (ηω ) = Coker{H2 (G) → H2 (Ĝ)} and H2 (G) = 0,
≅
≅
we obtain that H2 (Z2 ) Ð
→ H2 (Ĝ) Ð
→ H2 (ηω ) are isomorphisms. The continious maps
BZ2 → BĜ → Cone(Bηω) give the commutative diagram for any abelian group A:

    H^2(ηω, A)  --≅-->  Hom(H2(ηω), A)
        |                    | ≅
        v                    v
    H^2(Ĝ, A)   ------>  Hom(H2(Ĝ), A)          (9.1)
        |                    | ≅
        v                    v
    H^2(Z2, A)  ------>  Hom(H2(Z2), A)
If A is a divisible abelian group, then Ext(H1 (Ĝ), A) = 0 = Ext(Z2 , A), and all morphisms
in the diagram (9.1) are isomorphisms. In particular, if A = H2 (ηω ) = H2 (Ĝ) = Λ2 Z2 , then
the morphisms induce isomorphisms
H 2 (ηω , Λ2 Z2 ) ≅ H 2 (Ĝ, Λ2 Z2 ) ≅ H 2 (Z2 , Λ2 Z2 ) ≅ Hom(Λ2 Z2 , Λ2 Z2 )
and the extension Λ2 Z2 ↣ EG ↠ Ĝ corresponds to the identity map in Hom(Λ2 Z2 , Λ2 Z2 ).
Therefore, it is sufficient to prove that the extension Λ2 Z2 ↣ Nw ⋊ C ↠ Ĝ goes to the
identity map via the composition
H 2 (Ĝ, Λ2 Z2 ) → H 2 (Z2 , Λ2 Z2 ) → Hom(Λ2 Z2 , Λ2 Z2 ).
The map H 2 (Ĝ, Λ2 Z2 ) → H 2 (Z2 , Λ2 Z2 ) on the level of extensions is given by the pullback.
It follows that the extension Λ2 Z2 ↣ Nw ⋊ C ↠ Ĝ goes to the extension Λ2 Z2 ↣ Nw ↠ Z2 .
Then we need to prove that w goes to the identity map under the map H 2 (Z2 , Λ2 Z2 ) →
Hom(Λ2 Z2 , Λ2 Z2 ).
For any group H and an abelian group A the map H 2 (H, A) → Hom(H2 (H), A) on
the level of cocycles is induced by the evaluation map C 2 (H, A) → Hom(C2 (H), A) given
by u ↦ ((h, h′ ) ↦ u(h, h′ )). For an abelian group H the map Λ2 H → H2 (H) given by
h ∧ h′ ↦ (h, h′ ) − (h′ , h) + B2 (H) is an isomorphism Λ2 H ≅ H2 (H) (see [6, Ch.V §5-6]).
Then the map H 2 (H, A) → Hom(Λ2 H, A) is given by u ↦ (h ∧ h′ ↦ u(h, h′ ) − u(h′ , h)).
Since w(a, b) − w(b, a) = (1/2)(a ∧ b − b ∧ a) = a ∧ b, w goes to the identity map.
Remark 9.3. If we identify EGKl with the set (Λ2 Z2 ) × Z2 × C, it is easy to check that
γ2 (EGKl ) = (Λ2 Z2 ) × 2Z2 × 1 and [γ2 (EGKl ), γ2 (EGKl )] = (Λ2 Z2 ) × 0 × 1. It follows that
EGKl is a solvable group of class 3 but it is not metabelian. In particular, the class of
metabelian groups is not closed under the HZ-localization.
To finish the paper, we mention some possible generalizations of the obtained results. We conjecture that the tame Z[C]-module M defined by the k × k matrix (k ≥ 2)

    ⎛ −1   1   0   ⋯    0 ⎞
    ⎜  0  −1   1   0    ⋯ ⎟
    ⎜  ⋮            ⋱     ⎟
    ⎝  0   ⋯   0   0   −1 ⎠
defines the group M ⋊ C with HZ-length ω + l(k), where l(k) ∈ N and l(k) → ∞ as k → ∞.
With the help of this group, one can try to use the same scheme as in this paper to prove
that HZ-length of a free non-cyclic group is ≥ 2ω.
Acknowledgement
The research is supported by the Russian Science Foundation grant N 16-11-10073.
References
[1] G. Baumslag: Subgroups of finitely presented metabelian groups, J. Austral. Math. Soc. 16 (1973),
98–110, Collection of articles dedicated to the memory of Hanna Neumann.
[2] R. Bieri and K. Strebel: Almost finitely presented soluble groups, Comm. Math. Helv. 53 (1978),
258–278.
[3] R. Bieri and R. Strebel: A geometric invariant for modules over an abelian group, J. Reine Angew.
Math. 322 (1981), 170–189.
[4] A. K. Bousfield and D. Kan: Homotopy limits, completions and localizations, Lecture Notes in
Mathematics 304, (1972).
[5] A. K. Bousfield: Homological localization towers for groups and π-modules, Mem. Amer. Math. Soc,
no. 186, 1977.
[6] K. Brown, Cohomology of Groups, Springer-Verlag GTM 87, 1982.
[7] J. Burns and G. Ellis: On the nilpotent multipliers of a group, Math. Z. 226 (1997), 405–428.
[8] W. Dwyer: Homology, Massey products and maps between groups, J. Pure Appl. Algebra, 6 (1975),
177–190.
[9] B. Eckmann, P. Hilton and U. Stammbach: On the homology theory of central group extensions: I
- the commutator map and stem extensions, Comm. Math. Helv. 47, (1972), 102–122.
[10] B. Eckmann, P. Hilton and U. Stammbach: On the homology theory of central group extensions: II
- the exact sequence in the general case, Comm. Math. Helv. 47, (1972), 171–178.
[11] G. Ellis: A Magnus-Witt type isomorphism for non-free groups, Georgian Math. J. 9 (2002), 703–708.
[12] L. Fuchs: Infinite abelian groups, Academic Press, New York and London.
[13] L. Illusie: Complexe Cotangent et Déformation I, Lecture Notes in Mathematics, vol. 239, Springer,
Berlin, 1971.
[14] S. O. Ivanov and R. Mikhailov: On a problem of Bousfield for metabelian groups: Advances in Math.
290 (2016), 552–589.
[15] F. Kasch: Modules and Rings, Acad. Press, London, New York, 1982.
[16] R. Mikhailov and I. B. S. Passi: Lower Central and Dimension Series of Groups, LNM Vol. 1952,
Springer 2009.
[17] R. Mikhailov: Polynomial functors and homotopy theory, Progress in Math, 311 (2016), arXiv:
1202.0586.
[18] C. Weibel: An Introduction to Homological Algebra, Cambridge Univ. Press, 1994.
[19] O. Zariski and P. Samuel, Commutative Algebra, Vol. 2, Van Nostrand, Princeton 1960.
Chebyshev Laboratory, St. Petersburg State University, 14th Line, 29b, Saint Petersburg, 199178 Russia
E-mail address: ivanov.s.o.1986@gmail.com
Chebyshev Laboratory, St. Petersburg State University, 14th Line, 29b, Saint Petersburg, 199178 Russia and St. Petersburg Department of Steklov Mathematical Institute
E-mail address: rmikhailov@mail.ru
Technical Report: Graph-Structured Sparse
Optimization for Connected Subgraph Detection

arXiv:1609.09864v1 [] 30 Sep 2016

Feng Chen
Computer Science Department
University at Albany – SUNY
Albany, USA
fchen5@albany.edu

Baojian Zhou
Computer Science Department
University at Albany – SUNY
Albany, USA
bzhou6@albany.edu
Abstract—Structured sparse optimization is an important and challenging problem for analyzing high-dimensional data in a variety of applications such as bioinformatics, medical imaging, social networks, and astronomy. Although a number of structured sparsity models have been explored, such as trees, groups, clusters, and paths, connected subgraphs have rarely been addressed in the current literature. One of the main technical challenges is that there is no structured sparsity-inducing norm that can directly model the space of connected subgraphs, and there is no exact implementation of a projection oracle for connected subgraphs due to its NP-hardness. In this paper, we explore efficient approximate projection oracles for connected subgraphs and propose two new efficient algorithms, namely GRAPH-IHT and GRAPH-GHTP, to optimize a generic nonlinear objective function subject to a connectivity constraint on the support of the variables. Our proposed algorithms enjoy strong guarantees analogous to several current methods for sparsity-constrained optimization, such as Projected Gradient Descent (PGD), Approximate Model Iterative Hard Thresholding (AM-IHT), and Gradient Hard Thresholding Pursuit (GHTP), with respect to convergence rate and approximation accuracy. As a case study, we apply our proposed algorithms to optimize several well-known graph scan statistics in a number of applications of connected subgraph detection, and the experimental results demonstrate that our proposed algorithms outperform state-of-the-art methods.
I. INTRODUCTION
In recent years, structured sparse methods have attracted
much attention in many domains such as bioinformatics,
medical imaging, social networks, and astronomy [2], [4],
[14], [16], [17]. Structured sparse methods have been shown
effective to identify latent patterns in high-dimensional data
via the integration of prior knowledge about the structure
of the patterns of interest, and at the same time remain
a mathematically tractable concept. A number of structured
sparsity models have been well explored, such as the sparsity
models defined through trees [14], groups [17], clusters [16],
and paths [2]. The generic optimization problem based on a
structured sparsity model has the form
min_{x∈R^n} f(x)   s.t.   supp(x) ∈ M                         (1)
where f : Rn → R is a differentiable cost function, the
sparsity model M is defined as a family of structured supports: M = {S1 , S2 , · · · , SL }, where Si ⊆ [n] satisfies a
certain structure property (e.g., trees, groups, clusters), [n] =
{1, 2, · · · , n}, and the support set supp(x) refers to the set
of indexes of non-zero entries in x. For example, the popular
k-sparsity model is defined as M = {S ⊆ [n] | |S| ≤ k}.
Existing structured sparse methods fall into two main categories: 1) Sparsity-inducing norms based. The methods in
this category explore structured sparsity models (e.g., trees,
groups, clusters, and paths) [4] that can be encoded as structured sparsity-inducing norms, and reformulate Problem (1) as
a convex (or non-convex) optimization problem
min_{x∈R^n} f(x) + λ · Ω(x)                                   (2)
where Ω(x) is a structured sparsity-inducing norm of M that
is typically non-smooth and non-Euclidean and λ is a trade-off
parameter. 2) Model-projection based. The methods in this
category rely on a projection oracle of M:
P(b) = arg min_{x∈R^n} ‖b − x‖_2^2   s.t.   supp(x) ∈ M,      (3)
and decompose the problem into two sub-problems, namely unconstrained minimization of f(x) and the projection problem P(b). Most of the methods in this category assume that the projection problem P(b) can be solved exactly, including the forward-backward algorithm [42], the gradient descent algorithm [38], the gradient hard-thresholding algorithms [6], [18], [40], the projected iterative hard thresholding [5], [7], and the Newton greedy pursuit algorithm [41]. However, when an exact solver of P(b) is unavailable and we have to apply approximate projections, the theoretical guarantees of these methods no longer hold. We note that there is one recent approach, named GRAPH-COSAMP, that admits inexact projections by assuming "head" and "tail" oracles for the projections, but it is only applicable to compressive sensing or linear regression problems [15].
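For contrast, for the plain k-sparsity model M = {S ⊆ [n] : |S| ≤ k}, the projection oracle (3) has a simple exact solution: hard thresholding, which keeps the k largest-magnitude entries. The following minimal sketch (Python/NumPy; my own illustration, not part of the paper) shows this exact special case; for M(G, k) no such closed form exists.

import numpy as np

def project_k_sparse(b, k):
    # Exact solution of P(b) for the plain k-sparsity model:
    # keep the k entries of b with largest absolute value, zero out the rest.
    x = np.zeros_like(b)
    if k <= 0:
        return x
    idx = np.argsort(np.abs(b))[-k:]   # indices of the k largest |b_i|
    x[idx] = b[idx]
    return x

b = np.array([0.3, -2.0, 0.1, 1.5, -0.2])
print(project_k_sparse(b, 2))          # keeps -2.0 and 1.5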
We consider an underlying graph G = (V, E) defined on
the coefficients of the unknown vector x, where V = [n] and
E ⊆ V × V. We focus on the sparsity model of connected
subgraphs that is defined as
M(G, k) = {S ⊆ V | |S| ≤ k, S is connected},
(4)
where k refers to the allowed maximum subgraph size. There
are a wide array of applications that involve the search of
interesting or anomalous connected subgraphs in networks.
The connectivity constraint ensures that subgraphs reflect
changes due to localized in-network processes. We describe
a few applications below.
• Detection in sensor networks, e.g., detection of traffic bottlenecks in road networks or airway networks [1]; crime hot spots in geographic networks [22]; and pollution in water distribution networks [27].
• Detection in digital signals and images, e.g., detection of objects in images [15].
• Disease outbreak detection, e.g., early detection of disease outbreaks from information networks incorporating data from hospital emergency visits, ambulance dispatch calls, and pharmacy sales of over-the-counter drugs [36].
• Virus detection in a computer network, e.g., detection of viruses or worms spreading from host to host in a computer network [24].
• Detection in genome-scale interaction networks, e.g., detection of significantly mutated subnetworks [21].
• Detection in social media networks, e.g., detection and forecasting of societal events [8], [9].
To the best of our knowledge, there is no existing approach to Problem (1) for M(G, k) that is computationally tractable and provides a performance bound. First, there is no known structured sparsity-inducing norm for M(G, k). The most relevant norm is the fused lasso norm [39]: Ω(x) = Σ_{(i,j)∈E} |x_i − x_j|, where x_i is the i-th entry in x. This norm is able to enforce smoothness between neighboring entries in x, but has limited capability to recover all the possible connected subsets described by M(G, k) (see further discussion in Section V, Experiments). Second, there is no exact solver for the projection oracle of M(G, k):

P(b) = arg min_{x∈R^n} ‖b − x‖_2^2   s.t.   supp(x) ∈ M(G, k),        (5)

as this projection problem is NP-hard due to a reduction from the classical Steiner tree problem [19]. As most existing model-projection based methods require an exact solution to the projection oracle, these methods are inapplicable to the problem studied here. To the best of our knowledge, there is only one recent approach, named GRAPH-COSAMP, that admits inexact projections for M(G, k) by assuming "head" and "tail" oracles for the projections, but it is only applicable to compressive sensing and linear regression problems [15]. The main contributions of our study are summarized as follows:
• Design of efficient approximation algorithms. Two new algorithms, namely GRAPH-IHT and GRAPH-GHTP, are developed to approximately solve Problem (1) with a differentiable cost function and the sparsity model of connected subgraphs M(G, k). GRAPH-GHTP is required to minimize f(x) over a projected subspace as an intermediate step, which could be too costly in some applications, and GRAPH-IHT can be considered a fast variant of GRAPH-GHTP.
• Theoretical guarantees and connections. The convergence rate and accuracy of our proposed algorithms are analyzed under a smoothness condition on f(x) that is more general than popular conditions such as Restricted Strong Convexity/Smoothness (RSC/RSS) and Stable Mode Restricted Hessian (SMRH). We prove that under mild conditions our proposed GRAPH-IHT and GRAPH-GHTP enjoy rigorous theoretical guarantees.
• Comprehensive experiments to validate the effectiveness and efficiency of the proposed techniques. Both GRAPH-IHT and GRAPH-GHTP are applied to optimize a variety of graph scan statistic models for the connected subgraph detection task. Extensive experiments on a number of benchmark datasets demonstrate that GRAPH-IHT and GRAPH-GHTP perform better than state-of-the-art methods that are designed specifically for this task, in terms of both subgraph quality and running time.
Reproducibility: The implementation of our algorithms and the data sets are open-sourced via the link [11].
The remaining parts of this paper are organized as follows. Section II introduces the sparsity model of connected subgraphs and the statement of the problem. Section III presents two efficient algorithms and their theoretical analysis. Section IV discusses applications of our proposed algorithms to graph scan statistic models. Experiments on several real-world benchmark datasets are presented in Section V. Section VI discusses related work, and Section VII concludes the paper and describes future work.
II. PROBLEM FORMULATION
We consider an underlying graph G = (V, E) defined on the coefficients of the unknown vector x, where V = [n], E ⊆ V × V, and n is typically large (e.g., n > 10,000). The sparsity model of connected subgraphs in G is defined in (4), and its projection oracle P(b) is defined in (5). As this projection oracle is NP-hard to solve, we first introduce efficient approximation algorithms for P(b) and then present the statement of the problem that will be studied in the paper.
A. Approximation algorithms for the projection oracle P(b)
There are two nearly-linear time approximation algorithms [15] for P(b) that have the following properties:
• Tail approximation (T(x)): Find a set S ⊆ V such that

  ‖x − x_S‖_2 ≤ c_T · min_{S′ ∈ M(G, k_T)} ‖x − x_{S′}‖_2,             (6)

  where c_T = √7, k_T = 5k, and x_S is the restriction of x to the indices in S: we have (x_S)_i = x_i for i ∈ S and (x_S)_i = 0 otherwise.
• Head approximation (H(x)): Find a set S ⊆ V such that

  ‖x_S‖_2 ≥ c_H · max_{S′ ∈ M(G, k_H)} ‖x_{S′}‖_2,                     (7)

  where c_H = √(1/14) and k_H = 2k.
It can be readily proved that if c_T = c_H = 1, then T(x) = H(x) = P(x); the approximate nature of T(x) and H(x) stems from the fact that c_T > 1 and c_H < 1.
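To make the role of these oracles concrete, the following toy sketch (Python; my own illustration, with all names hypothetical) computes the exact projection P(b) for M(G, k) by brute-force enumeration of connected subsets. This is only feasible for very small graphs, but it can serve as a reference when testing approximate head/tail oracles.

import numpy as np
from itertools import combinations

def is_connected(nodes, adj):
    # Iterative graph search restricted to the given node set.
    nodes = set(nodes)
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v in nodes and v not in seen:
                seen.add(v)
                stack.append(v)
    return seen == nodes

def exact_projection(b, adj, k):
    # P(b): the connected support S (|S| <= k) minimizing ||b - b_S||_2,
    # equivalently maximizing ||b_S||_2 over connected S.
    n = len(b)
    best_S, best_val = (), -1.0
    for size in range(1, k + 1):
        for S in combinations(range(n), size):
            if is_connected(S, adj):
                val = np.linalg.norm(b[list(S)])
                if val > best_val:
                    best_val, best_S = val, S
    x = np.zeros_like(b)
    x[list(best_S)] = b[list(best_S)]
    return x, set(best_S)

# Toy example: a path graph 0-1-2-3-4.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
b = np.array([3.0, 0.1, 0.2, 2.5, 2.4])
print(exact_projection(b, adj, 2))   # keeps the connected pair {3, 4}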
B. Problem statement
Given a predefined differentiable cost function f(x), the input graph G, and the sparsity model of connected subgraphs M(G, k), the problem to be studied is formulated as:

min_{x∈R^n} f(x)   s.t.   supp(x) ∈ M(G, k).                          (8)

Problem (8) is difficult to solve as it involves decision variables from a nonconvex set that is composed of many disjoint subsets. In this paper, we develop nearly-linear time algorithms to approximately solve Problem (8). The key idea is to decompose this problem into sub-problems that are easier to solve. These sub-problems include an optimization sub-problem for f(x) that is independent of M(G, k) and the projection approximations for M(G, k), namely T(x) and H(x). We design efficient algorithms that couple these sub-problems to obtain global solutions to Problem (8) with a good trade-off between running time and accuracy.

Fig. 1. Illustration of GRAPH-IHT and GRAPH-GHTP on the main steps of each iteration. In this example, the gray scale of each node i encodes the weight of this node w_i ∈ R, f(x) = −w^T x + (1/2)‖x‖_2^2, and the maximum size of subgraphs is set to k = 6, where w = [w_1, · · · , w_n]^T. The resulting problem tends to find a connected subgraph with the largest overall weight (see the discussion of the effect of (1/2)‖x‖_2^2 in Section IV). In each iteration, S^i is the connected subset of nodes and its induced subgraph is denoted as G_{S^i}. The sequence of intermediate vectors and subgraphs is: (x^0, G_{S^1}), · · · , (x^i, G_{S^i}).
III. ALGORITHMS
This section first presents two efficient algorithms, namely GRAPH-IHT and GRAPH-GHTP, and then analyzes their time complexities and performance bounds.
A. Algorithm GRAPH-IHT
The proposed GRAPH-IHT algorithm generalizes the traditional projected gradient descent algorithm [5], [7], which requires an exact solver of the projection oracle P(b). The high-level summary of GRAPH-IHT is shown in Algorithm 1
and illustrated in Figure 1 (b). The procedure generates a
sequence of intermediate vectors x0 , x1 , · · · from an initial
approximation x0 . At the i-th iteration, the first step (Line
5) first calculates the gradient “∇f (xi )”, and then identifies a
subset of nodes via head approximation that returns a support
set with the head value at least a constant factor of the
optimal head value: “Ω ← H(∇f (xi ))”. The support set Ω
can be interpreted as the subspace where the nonconvex set
“{x | supp(x) ∈ M(G, k)}” is located, and the projected gradient in this subspace is: “∇Ω f (xi )”. The second step (Line
6) calculates the projected gradient descent at the point xi with
step-size η: “b ← xi − η · ∇Ω f (xi )”. The third step (Line 7)
identifies a subset of nodes via tail approximation that returns a
support set with tail value at most a constant times larger than
the optimal tail value: “S i+1 ← T(b)”. The last step (Line 8)
calculates the intermediate solution xi+1 : xi+1 = bS i+1 . The
previous two steps can be interpreted as the projection of b to
the nonconvex set “{x | supp(x) ∈ M(G, k)}” using the tail
approximation.
Algorithm 1 GRAPH-IHT
1: Input: Input graph G, maximum subgraph size k, and step size η (1 by default).
2: Output: The estimated vector x̂ and the corresponding connected subgraph S.
3: i ← 0; x^i ← 0; S^i ← ∅;
4: repeat
5:   Ω ← H(∇f(x^i));
6:   b ← x^i − η · ∇_Ω f(x^i);
7:   S^{i+1} ← T(b);
8:   x^{i+1} ← b_{S^{i+1}};
9:   i ← i + 1;
10: until halting condition holds
11: return x̂ = x^i and S = G_{S^i};
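For readers who prefer code, the following minimal Python sketch mirrors Algorithm 1. It is not the authors' released implementation [11]; the head/tail oracles and the gradient are supplied by the caller, and the halting rule (a simple change-based test) is my own assumption, since the paper leaves it unspecified.

import numpy as np

def graph_iht(grad_f, head, tail, n, eta=1.0, max_iter=50, tol=1e-6):
    # grad_f(x) -> gradient of f at x (length-n array)
    # head(g)   -> head-approximation support of g (set of indices)
    # tail(b)   -> tail-approximation support of b (set of indices)
    x = np.zeros(n)
    support = set()
    for _ in range(max_iter):
        g = grad_f(x)
        omega = list(head(g))                    # line 5: head approximation of the gradient
        g_omega = np.zeros(n)
        g_omega[omega] = g[omega]
        b = x - eta * g_omega                    # line 6: projected gradient step
        support = tail(b)                        # line 7: tail approximation
        x_new = np.zeros(n)
        x_new[list(support)] = b[list(support)]  # line 8: restrict b to the tail support
        if np.linalg.norm(x_new - x) <= tol:     # assumed halting condition
            x = x_new
            break
        x = x_new
    return x, support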
B. Algorithm GRAPH-GHTP
The proposed GRAPH-GHTP algorithm generalizes the traditional Gradient Hard Thresholding Pursuit (GHTP) algorithm, which is designed specifically for the k-sparsity model M = {S ⊆ [n] | |S| ≤ k} [40]. The high-level summary of GRAPH-GHTP is shown in Algorithm 2 and illustrated in Figure 1 (c). The first two steps (Line 5 and Line 6) of each iteration are the same as the first two steps (Line 5 and Line 6) of GRAPH-IHT, except that we return the support of the projected gradient descent step, "Ψ ← supp(x^i − η · ∇_Ω f(x^i))", on which pursuing the minimization will be most effective. Over the support set Ψ, the function f is minimized to produce an intermediate estimate at the third step (Line 7):
Algorithm 2 GRAPH-GHTP
1: Input: Input graph G, maximum subgraph size k, and step size η (1 by default).
2: Output: The estimated vector x̂ and the corresponding connected subgraph S.
3: i ← 0; x^i ← 0; S^i ← ∅;
4: repeat
5:   Ω ← H(∇f(x^i));
6:   Ψ ← supp(x^i − η · ∇_Ω f(x^i));
7:   b ← arg min_{x∈R^n} f(x)  s.t.  supp(x) ⊆ Ψ;
8:   S^{i+1} ← T(b);
9:   x^{i+1} ← b_{S^{i+1}};
10:  i ← i + 1;
11: until halting condition holds
12: return x̂ = x^i and S = G_{S^i};
"b ← arg min_{x∈R^n} f(x)  s.t.  supp(x) ⊆ Ψ". The fourth and fifth steps (Line 8 and Line 9) are the same as the last two steps (Line 7 and Line 8) of GRAPH-IHT in each iteration.
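A minimal sketch of the extra step that distinguishes GRAPH-GHTP from GRAPH-IHT is given below (Python; my own illustration, not the released code [11]). It carries out the restricted minimization of Line 7 by optimizing only the coordinates in Ψ with an off-the-shelf solver; the authors may solve this sub-problem differently for specific cost functions.

import numpy as np
from scipy.optimize import minimize

def restricted_argmin(f, grad_f, psi, n):
    # Solve b = arg min f(x) subject to supp(x) ⊆ Ψ by optimizing
    # only the free coordinates indexed by Ψ (all others stay zero).
    idx = np.array(sorted(psi), dtype=int)

    def embed(z):
        x = np.zeros(n)
        x[idx] = z
        return x

    def fun(z):
        return f(embed(z))

    def jac(z):
        return grad_f(embed(z))[idx]

    z0 = np.zeros(len(idx))
    res = minimize(fun, z0, jac=jac, method="L-BFGS-B")
    return embed(res.x)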
C. Relations between GRAPH-IHT and GRAPH-GHTP
These two algorithms are both variants of gradient descent. Overall, GRAPH-GHTP converges faster than GRAPH-IHT, as it identifies a better intermediate solution in each iteration by minimizing f(x) over the projected subspace {x | supp(x) ⊆ Ψ}. If the cost function f(x) is linear or has some special structure, this intermediate step can be carried out in nearly-linear time. However, when this step is too costly in some applications, GRAPH-IHT is preferred.
D. Theoretical Analysis of GRAPH-IHT
In order to demonstrate the accuracy of estimates obtained with Algorithm 1, we require that the cost function f(x) satisfies the Weak Restricted Strong Convexity (WRSC) condition, defined as follows:
Definition III.1 (Weak Restricted Strong Convexity Property (WRSC)). A function f(x) satisfies the (ξ, δ, M)-model-WRSC condition if ∀x, y ∈ R^n and ∀S ∈ M with supp(x) ∪ supp(y) ⊆ S, the following inequality holds for some ξ > 0 and 0 < δ < 1:

‖x − y − ξ∇_S f(x) + ξ∇_S f(y)‖_2 ≤ δ‖x − y‖_2.                       (9)

The WRSC condition is weaker than the popular Restricted Strong Convexity/Smoothness (RSC/RSS) conditions that are used in the theoretical analysis of convex optimization algorithms [40]. The RSC condition basically characterizes cost functions that have quadratic bounds on the derivative of the objective function when restricted to model-sparse vectors. The RSC/RSS conditions imply the WRSC condition, which indicates that WRSC is no stronger than RSC/RSS [40]. In the special case where f(x) = ‖y − Ax‖_2^2 and ξ = 1, the (ξ, δ, M)-model-WRSC condition reduces to the well-known Restricted Isometry Property (RIP) condition in compressive sensing.
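To make condition (9) concrete, the following sketch (Python; my own illustration, assuming the scaling f(x) = (1/2)‖y − Ax‖_2^2 so that ∇f(x) = A^T(Ax − y)) empirically estimates the smallest δ for which (9) holds with ξ = 1 over random supports of a fixed size. On each support S the left-hand side of (9) equals ‖(I − A_S^T A_S)(x − y)‖_2, so the worst-case δ is the largest spectral norm of I − A_S^T A_S, which is exactly the RIP-type constant of A.

import numpy as np

rng = np.random.default_rng(0)
m, n, s = 80, 200, 5                      # measurements, dimension, support size
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix

# Empirical WRSC/RIP constant: max over sampled supports S of ||I - A_S^T A_S||_2.
delta = 0.0
for _ in range(200):
    S = rng.choice(n, size=s, replace=False)
    AS = A[:, S]
    M = np.eye(s) - AS.T @ AS
    delta = max(delta, np.linalg.norm(M, 2))

print("empirical delta over sampled supports:", delta)
# If delta < 1, inequality (9) holds with xi = 1 for all x, y supported on the sampled S.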
Theorem III.1. Consider the sparsity model of connected subgraphs M(G, k) for some k ∈ N and a cost function f : R^n → R that satisfies the (ξ, δ, M(G, 5k))-model-WRSC condition. If η = c_H(1 − δ) − δ, then for any x ∈ R^n such that supp(x) ∈ M(G, k), with η > 0, the iterates of Algorithm 1 obey

‖x^{i+1} − x‖_2 ≤ α‖x^i − x‖_2 + β‖∇_I f(x)‖_2                        (10)

where
α_0 = c_H(1 − δ) − δ,   β_0 = δ(1 + c_H),
α = (√2(1 + c_T)/(1 − δ)) · [ √(1 − α_0^2) + (2 − η/ξ)δ + 1 − η/ξ ],
β = ((1 + c_T)/(1 − δ)) · [ (1 + 2√2)ξ + (2 − 2√2)η + √2 β_0/α_0 + √2 α_0 β_0/√(1 − α_0^2) ],
and I = arg max_{S∈M(G,8k)} ‖∇_S f(x)‖_2.
Before we prove this result, we give the following two
lemmas III.2 and III.3.
Lemma III.2. [40] Assume that f is a differentiable function. If f satisfies the (ξ, δ, M)-WRSC condition, then ∀x, y ∈ R^n with supp(x) ∪ supp(y) ⊆ S ∈ M, the following two inequalities hold:

((1 − δ)/ξ)‖x − y‖_2 ≤ ‖∇_S f(x) − ∇_S f(y)‖_2 ≤ ((1 + δ)/ξ)‖x − y‖_2,
f(x) ≤ f(y) + ⟨∇f(y), x − y⟩ + ((1 + δ)/(2ξ))‖x − y‖_2^2.
Lemma III.3. Let α_0 = c_H(1 − δ) − δ, β_0 = ξ(1 + c_H), r^i = x^i − x, and Γ = H(∇f(x^i)). Then

‖r^i_{Γ^c}‖_2 ≤ √(1 − α_0^2) ‖r^i‖_2 + [ β_0/α_0 + α_0 β_0/√(1 − α_0^2) ] ‖∇_I f(x)‖_2,

where I = arg max_{S∈M(G,8k)} ‖∇_S f(x)‖_2. We assume that c_H and δ are such that α_0 > 0.
Proof: Denote Φ = supp(x) ∈ M(G, k), Γ = H(∇f(x^i)) ∈ M(G, 2k), r^i = x^i − x, and Λ = supp(r^i) ∈ M(G, 6k). The component ‖∇_Γ f(x^i)‖_2 can be lower bounded as

‖∇_Γ f(x^i)‖_2 ≥ c_H ‖∇_Φ f(x^i)‖_2
             ≥ c_H ( ‖∇_Φ f(x^i) − ∇_Φ f(x)‖_2 − ‖∇_Φ f(x)‖_2 )
             ≥ (c_H(1 − δ)/ξ) ‖r^i‖_2 − c_H ‖∇_I f(x)‖_2,

where the first inequality follows from the definition of head approximation and the last inequality follows from Lemma III.2. The component ‖∇_Γ f(x^i)‖_2 can also be upper bounded as

‖∇_Γ f(x^i)‖_2 ≤ (1/ξ) ‖ξ∇_Γ f(x^i) − ξ∇_Γ f(x)‖_2 + ‖∇_Γ f(x)‖_2
             ≤ (1/ξ) ‖ξ∇_Γ f(x^i) − ξ∇_Γ f(x) − r^i_Γ + r^i_Γ‖_2 + ‖∇_Γ f(x)‖_2
             ≤ (1/ξ) ‖ξ∇_{Γ∪Λ} f(x^i) − ξ∇_{Γ∪Λ} f(x) − r^i_{Γ∪Λ}‖_2 + (1/ξ) ‖r^i_Γ‖_2 + ‖∇_Γ f(x)‖_2
             ≤ (δ/ξ) ‖r^i‖_2 + (1/ξ) ‖r^i_Γ‖_2 + ‖∇_I f(x)‖_2,

where the fourth inequality follows from the (ξ, δ, M(G, 8k))-WRSC condition and the fact that r^i_{Γ∪Λ} = r^i. Combining the two bounds and grouping terms, we obtain the inequality ‖r^i_Γ‖_2 ≥ α_0 ‖r^i‖_2 − ξ(1 + c_H) ‖∇_I f(x)‖_2, that is, ‖r^i_Γ‖_2 ≥ α_0 ‖r^i‖_2 − β_0 ‖∇_I f(x)‖_2. After a number of algebraic manipulations, we obtain the inequality

‖r^i_{Γ^c}‖_2 ≤ √(1 − α_0^2) ‖r^i‖_2 + [ β_0/α_0 + α_0 β_0/√(1 − α_0^2) ] ‖∇_I f(x)‖_2,

which proves the lemma.
We now give the formal proof of Theorem III.1.
Proof: From the triangle inequality, we have

‖r^{i+1}‖_2 = ‖x^{i+1} − x‖_2 = ‖b_{S^{i+1}} − x‖_2
           ≤ ‖b − x‖_2 + ‖b − b_{S^{i+1}}‖_2
           ≤ (1 + c_T) ‖b − x‖_2
           = (1 + c_T) ‖x^i − η∇_Ω f(x^i) − x‖_2
           = (1 + c_T) ‖r^i − η∇_Ω f(x^i)‖_2,

where ∇_Ω f(x^i) is the projected vector of ∇f(x^i) in which the entries outside Ω are set to zero and the entries in Ω are unchanged. The term ‖r^i − η∇_Ω f(x^i)‖_2 satisfies the inequalities

‖r^i − η∇_Ω f(x^i)‖_2 = ‖r^i_{Ω^c} + r^i_Ω − η∇_Ω f(x^i)‖_2
  ≤ ‖r^i_{Ω^c}‖_2 + ‖r^i_Ω − η∇_Ω f(x^i) + η∇_Ω f(x) − η∇_Ω f(x)‖_2
  ≤ ‖r^i_{Ω^c}‖_2 + ‖r^i_Ω − η∇_Ω f(x^i) + η∇_Ω f(x)‖_2 + ‖η∇_Ω f(x)‖_2
  ≤ ‖r^i_{Ω^c}‖_2 + ‖r^i_Ω − ξ∇_Ω f(x^i) + ξ∇_Ω f(x)‖_2 + (ξ − η)‖∇_Ω f(x^i) − ∇_Ω f(x)‖_2 + ‖η∇_Ω f(x)‖_2
  ≤ ‖r^i_{Ω^c}‖_2 + (1 − η/ξ + (2 − η/ξ)δ) ‖r^i‖_2 + η‖∇_I f(x)‖_2,

where the last inequality follows from the (ξ, δ, M)-WRSC condition and Lemma III.2. From Lemma III.3, we have

‖r^i_{Ω^c}‖_2 ≤ √(1 − α_0^2) ‖r^i‖_2 + [ β_0/α_0 + α_0 β_0/√(1 − α_0^2) ] ‖∇_I f(x)‖_2.

Combining the above inequalities, we prove the theorem.
Our proposed GRAPH-IHT generalizes several existing sparsity-constrained optimization algorithms. 1) Projected Gradient Descent (PGD) [30]: if we redefine H(b) = supp(b) and T(b) = supp(P(b)), where P(b) is the projection oracle defined in Equation (5), then GRAPH-IHT reduces to the PGD method. 2) Approximate Model-IHT (AM-IHT) [13]: if the cost function f(x) is the least squares cost f(x) = ‖y − Ax‖_2^2, then ∇f(x) has the specific form −A^T(y − Ax) and GRAPH-IHT reduces to the AM-IHT algorithm, the state-of-the-art variant of IHT for compressive sensing and linear regression problems. In particular, let e = y − Ax. The component ‖∇f(x)‖_2 = ‖A^T e‖_2 is upper bounded by √(1 + δ)‖e‖_2 [13]. Assume that ξ = 1 and η = 1. The (ξ, δ, M)-WRSC condition then reduces to the RIP condition in compressive sensing, and the convergence inequality (10) reduces to

‖x^{i+1} − x‖_2 ≤ α′‖x^i − x‖_2 + β′‖e‖_2,                            (11)

where α′ = (1 + c_T)[ δ + √(1 − α_0^2) ] and

β′ = (1 + c_T) [ (α_0 + β_0)√(1 + δ)/α_0 + α_0 β_0 √(1 + δ)/√(1 − α_0^2) ].

Surprisingly, the above convergence inequality is identical to the convergence inequality of AM-IHT derived in [13] based on the RIP condition, which indicates that GRAPH-IHT has the same convergence rate and approximation error as AM-IHT, although we did not make any attempt to exploit the special properties of the RIP condition. We note that the convergence properties of GRAPH-IHT hold in fairly general setups beyond compressive sensing and linear regression. As we consider GRAPH-IHT a fast variant of GRAPH-GHTP, due to space limits we omit the discussion of the convergence condition of GRAPH-IHT. The theoretical analysis of GRAPH-GHTP presented in the next subsection can be readily adapted to the theoretical analysis of GRAPH-IHT.
E. Theoretical Analysis of GRAPH-GHTP
Theorem III.4. Consider the sparsity model of connected subgraphs M(G, k) for some k ∈ N and a cost function f : R^n → R that satisfies the (ξ, δ, M(G, 5k))-model-WRSC condition. If η = c_H(1 − δ) − δ, then for any x ∈ R^n such that supp(x) ∈ M(G, k), with η > 0, the iterates of Algorithm 2 obey

‖x^{i+1} − x‖_2 ≤ α‖x^i − x‖_2 + β‖∇_I f(x)‖_2                        (12)

where
α_0 = c_H(1 − δ) − δ,   β_0 = δ(1 + c_H),
α = (√2(1 + c_T)/(1 − δ)) · [ √(1 − α_0^2) + (2 − η/ξ)δ + 1 − η/ξ ],
β = ((1 + c_T)/(1 − δ)) · [ (1 + 2√2)ξ + (2 − 2√2)η + √2 β_0/α_0 + √2 α_0 β_0/√(1 − α_0^2) ],
and I = arg max_{S∈M(G,8k)} ‖∇_S f(x)‖_2.
Proof: Denote Ω = H(∇f(x^i)) and Ψ = supp(x^i − η · ∇_Ω f(x^i)). Let r^{i+1} = x^{i+1} − x. The term ‖r^{i+1}‖_2 is bounded as

‖r^{i+1}‖_2 = ‖x^{i+1} − x‖_2 ≤ ‖x^{i+1} − b‖_2 + ‖x − b‖_2 ≤ c_T ‖x − b‖_2 + ‖x − b‖_2 ≤ (1 + c_T) ‖x − b‖_2,   (13)

where the second inequality follows from the definition of tail approximation. The component ‖(x − b)_Ψ‖_2^2 is bounded as

‖(x − b)_Ψ‖_2^2 = ⟨b − x, (b − x)_Ψ⟩
  = ⟨b − x − ξ∇_Ψ f(b) + ξ∇_Ψ f(x), (b − x)_Ψ⟩ − ⟨ξ∇_Ψ f(x), (b − x)_Ψ⟩
  ≤ δ‖b − x‖_2 ‖(b − x)_Ψ‖_2 + ξ‖∇_Ψ f(x)‖_2 ‖(b − x)_Ψ‖_2,

where the second equality follows from the fact that ∇_Ψ f(b) = 0, since b is the solution to the problem in the third step (Line 7) of GRAPH-GHTP, and the last inequality can be derived from the (ξ, δ, M(G, 8k))-WRSC condition. After simplification, we have

‖(x − b)_Ψ‖_2 ≤ δ‖b − x‖_2 + ξ‖∇_Ψ f(x)‖_2.

It follows that

‖x − b‖_2 ≤ ‖(x − b)_Ψ‖_2 + ‖(x − b)_{Ψ^c}‖_2 ≤ δ‖b − x‖_2 + ξ‖∇_Ψ f(x)‖_2 + ‖(x − b)_{Ψ^c}‖_2.

After rearrangement we obtain

‖b − x‖_2 ≤ ‖(b − x)_{Ψ^c}‖_2/(1 − δ) + ξ‖∇_Ψ f(x)‖_2/(1 − δ),        (14)

where we use the fact that supp(b) ⊆ Ψ. Let Φ = supp(x) ∈ M(G, k). Then

‖(x^i − η∇_Ω f(x^i))_Φ‖_2 ≤ ‖(x^i − η∇_Ω f(x^i))_Ψ‖_2,

as Ψ = supp(x^i − η · ∇_Ω f(x^i)). By eliminating the contribution on Φ ∩ Ψ, we derive

‖(x^i − η∇_Ω f(x^i))_{Φ\Ψ}‖_2 ≤ ‖(x^i − η∇_Ω f(x^i))_{Ψ\Φ}‖_2.

For the right-hand side, we have

‖(x^i − η∇_Ω f(x^i))_{Ψ\Φ}‖_2 ≤ ‖(x^i − x − η∇_Ω f(x^i) + η∇_Ω f(x))_{Ψ\Φ}‖_2 + η‖∇_{Ω∪Ψ} f(x)‖_2,

where the inequality follows from the fact that Φ = supp(x). For the left-hand side, we have

‖(x^i − η∇_Ω f(x^i))_{Φ\Ψ}‖_2 ≥ −η‖∇_{Ω∪Φ} f(x)‖_2 + ‖(x^i − x − η∇_Ω f(x^i) + η∇_Ω f(x))_{Φ\Ψ} + (x − b)_{Ψ^c}‖_2,

where the inequality follows from the facts that b_{Ψ^c} = 0, x_{Φ\Ψ} = x_{Ψ^c}, and −x_{Φ\Ψ} + (x − b)_{Ψ^c} = 0. Let Φ∆Ψ be the symmetric difference of the sets Φ and Ψ. It follows that

‖(b − x)_{Ψ^c}‖_2
  ≤ √2 ‖(x^i − x − η∇_Ω f(x^i) + η∇_Ω f(x))_{Φ∆Ψ}‖_2 + 2η‖∇_I f(x)‖_2
  ≤ √2 ‖(x^i − x − ξ∇_Ω f(x^i) + ξ∇_Ω f(x))_{Φ∆Ψ}‖_2 + √2(ξ − η)‖(∇_Ω f(x^i) − ∇_Ω f(x))_{Φ∆Ψ}‖_2 + 2η‖∇_I f(x)‖_2
  ≤ √2 ‖(r^i_{Ω^c} + r^i_Ω − ξ∇_Ω f(x^i) + ξ∇_Ω f(x))_{Φ∆Ψ}‖_2 + √2(ξ − η)‖(∇_Ω f(x^i) − ∇_Ω f(x))_{Φ∆Ψ}‖_2 + 2η‖∇_I f(x)‖_2
  ≤ √2 ‖r^i_{Ω^c}‖_2 + √2 ‖(r^i_Ω − ξ∇_Ω f(x^i) + ξ∇_Ω f(x))_{Φ∆Ψ}‖_2 + √2(ξ − η)‖(∇_Ω f(x^i) − ∇_Ω f(x))_{Φ∆Ψ}‖_2 + 2η‖∇_I f(x)‖_2
  ≤ √2 ‖r^i_{Ω^c}‖_2 + √2 ‖r^i − ξ∇_{Ω∪Ψ∪Φ} f(x^i) + ξ∇_{Ω∪Ψ∪Φ} f(x)‖_2 + √2(ξ − η)‖∇_{Ω∪Ψ∪Φ} f(x^i) − ∇_{Ω∪Ψ∪Φ} f(x)‖_2 + 2η‖∇_I f(x)‖_2
  ≤ √2 ‖r^i_{Ω^c}‖_2 + √2 [ (2 − η/ξ)δ + 1 − η/ξ ] ‖r^i‖_2 + 2(√2 ξ + (1 − √2)η) ‖∇_I f(x)‖_2,

where the first inequality follows from the fact that η‖∇_{Ω∪Φ} f(x)‖_2 + η‖∇_{Ψ∪Φ∪Ω} f(x)‖_2 ≤ 2η‖∇_I f(x)‖_2, the third inequality follows as x^i − x = r^i = r^i_{Ω^c} + r^i_Ω, the fourth inequality follows from the fact that ‖(r^i_{Ω^c})_{Φ∆Ψ}‖_2 ≤ ‖r^i_{Ω^c}‖_2, the fifth inequality follows as supp(r^i) ⊆ Ω ∪ Ψ ∪ Φ, and the last inequality follows from the (ξ, δ, M(G, 8k))-WRSC condition and Lemma III.2. From Lemma III.3, we have

‖r^i_{Ω^c}‖_2 ≤ √(1 − α_0^2) ‖r^i‖_2 + [ β_0/α_0 + α_0 β_0/√(1 − α_0^2) ] ‖∇_I f(x)‖_2.

Combining (14) and the above inequalities, we prove the theorem.
Theorem III.4 shows that the estimation error of GRAPH-GHTP is determined by a multiple of ‖∇_I f(x)‖_2, and that the convergence rate is geometric. Specifically, if x is an unconstrained minimizer of f(x), then ∇f(x) = 0, which means that GRAPH-GHTP is guaranteed to recover the true x to arbitrary precision. The estimation error is negligible when x is sufficiently close to an unconstrained minimizer of f(x), as ‖∇_I f(x)‖_2 is then small. The parameter

α = (√2(1 + c_T)/(1 − δ)) · [ √(1 − α_0^2) + (2 − η/ξ)δ + 1 − η/ξ ] < 1

controls the convergence rate of GRAPH-GHTP. Our algorithm allows an exact recovery if α < 1. As δ is an arbitrary constant parameter, it can be an arbitrarily small positive value. Letting η = ξ and δ be an arbitrarily small positive value, the parameters c_H and c_T must satisfy the inequality

c_H^2 > 1 − 1/(1 + c_T)^2.                                            (15)

It is noted that the head and tail approximation algorithms described in [15] do not meet inequality (15). Nonetheless, the approximation factor c_H of any given head approximation algorithm can be boosted to any arbitrary constant c_H′ < 1, which leads to the satisfaction of the above condition as shown in [15]. Boosting the head-approximation algorithm, though strongly suggested by [13], is not empirically necessary.
recently proposed algorithm named as Gradient Hard Thresholding Pursuit (GHTP) [40] that is designed specifically for
the k-sparsity model: M = {S ⊆ [n] | |S| ≤ k}. In particular,
if we redefine H(b) = supp(b) and T(b) = supp(P(b)),
where P(b) is the projection oracle defined in Equation (5),
and assume that there is an algorithm that solves the projection
oracle exactly, in which the sparsity model does not require
to be the k-sparsity model. It then follows that the upper
bound of kriΩc k2 stated in Lemma III.2 in Appendix is updated
as kriΩc k2 ≤ 0, since supp(ri ) = Ω and riΩc = 0. In
addition, the multiplier (1 + cT ) is replaced as 1 as the first
inequality (13) in the proof of Theorem III.4 in Appendix is
updated as kri+1 k2 ≤ kx−bk2 , instead of the original version
kri+1 k2 ≤ (1 + cT )kx − bk2 . After these two changes, the
shrinkage rate α is updated as
√
2
η
η
(2 − )δ + 1 −
,
(16)
α=
1−δ
ξ
ξ
which is the same as the shrinkage rate of G RAPH -GHTP
as derived in [40] specifically for the k-sparsity model. The
above shrinkage rate α (16) should satisfy the condition α < 1
to ensure the geometric convergence of G RAPH -GHTP, which
implies that
√
√
√
√
(17)
η > ((2 2 + 1)δ + 2 − 1)ξ/( 2 + 2δ).
√
It follows that if δ < 1/( 2 + 1), a step-size η < ξ can
always be found to satisfy the above inequality. This constant
condition of δ is analogous to the constant condition of stateof-the-art compressive sensing methods that consider noisy
measurements [23] under the assumption of the RIP condition.
We derive the analogous constant using the WRSC condition
that weaker than the RIP condition.
As discussed above, our proposed G RAPH -GHTP has
connections to GHTP on the shrinkage rate of geometric
convergence. We note that the shrinkage rate of our proposed
G RAPH -GHTP stated in Theorem III.4 is derived based on
head and tail approximations of the sparsity model of connected subgraphs M(G, k), instead of the k-sparsity model
that has an exact projection oracle solver. Our convergence
properties hold in fairly general setups beyond k-sparsity
model, as a number of popular structured sparsity models such
as the “standard” k-sparsity, block sparsity, cluster sparsity,
and tree sparsity can be encoded as special cases of M(G, k).
Theorem III.5. Let x ∈ R^n be such that supp(x) ∈ M(G, k), and let f : R^n → R be a cost function that satisfies the (ξ, δ, M(G, 8k))-WRSC condition. Assuming that α < 1, GRAPH-GHTP (or GRAPH-IHT) returns an x̂ such that supp(x̂) ∈ M(G, 5k) and ‖x − x̂‖_2 ≤ c‖∇_I f(x)‖_2, where c = 1 + β/(1 − α) is a fixed constant. Moreover, GRAPH-GHTP runs in time

O( (T + |E| log^3 n) · log(‖x‖_2 / ‖∇_I f(x)‖_2) ),                   (18)

where T is the time complexity of one execution of the sub-problem in Step 6 of GRAPH-GHTP (or Step 5 of GRAPH-IHT). In particular, if T scales linearly with n, then GRAPH-GHTP (or GRAPH-IHT) scales nearly linearly with n.
Proof: The i-th iterate of GRAPH-GHTP (or GRAPH-IHT) satisfies

‖x − x^i‖_2 ≤ α^i ‖x‖_2 + (β/(1 − α)) ‖∇_I f(x)‖_2.                   (19)

After t = ⌈ log(‖x‖_2 / ‖∇_I f(x)‖_2) / log(1/α) ⌉ iterations, GRAPH-GHTP (or GRAPH-IHT) returns an estimate x̂ satisfying ‖x − x̂‖_2 ≤ (1 + β/(1 − α)) ‖∇_I f(x)‖_2. The time complexities of both the head approximation and the tail approximation are O(|E| log^3 n). The time complexity of one iteration of GRAPH-GHTP (or GRAPH-IHT) is O(T + |E| log^3 n), and the total number of iterations is ⌈ log(‖x‖_2 / ‖∇_I f(x)‖_2) / log(1/α) ⌉, so the overall running time follows. GRAPH-GHTP and GRAPH-IHT differ only in the definitions of α and β in this theorem.
As shown in Theorem III.5, the time complexity of GRAPH-GHTP depends on the total number of iterations and the time cost T to solve the sub-problem in Step 6. In comparison, the time complexity of GRAPH-IHT depends on the total number of iterations and the time cost T to calculate the gradient ∇f(x^i) in Step 5. This implies that, although GRAPH-GHTP converges faster than GRAPH-IHT, the time cost to solve the sub-problem in Step 6 is often much higher than the time cost to calculate a gradient ∇f(x^i), and hence GRAPH-IHT runs faster than GRAPH-GHTP in practice.
IV. APPLICATIONS ON GRAPH SCAN STATISTICS
In this section, we specialize GRAPH-IHT and GRAPH-GHTP to optimize a number of well-known graph scan statistics for the task of connected subgraph detection, including the elevated mean scan (EMS) statistic [29], Kulldorff's scan statistic [25], and the expectation-based Poisson (EBP) scan statistic [26]. Each graph scan statistic is defined as the generalized likelihood ratio test (GLRT) statistic of a specific hypothesis test about the distributions of the features of normal and abnormal nodes. The EMS statistic corresponds to the following GLRT test. Given a graph G = (V, E), where V = [n] and E ⊆ V × V, each node i is associated with a random variable c_i:

c_i = µ · 1(i ∈ S) + ε_i,   i ∈ V,                                     (20)

where |µ| represents the signal strength and ε_i ∼ N(0, 1). S is some unknown anomalous cluster that forms a connected subgraph. The task is to decide between the null hypothesis (H_0): c_i ∼ N(0, 1), ∀i ∈ V, and the alternative (H_1(S)): c_i ∼ N(µ, 1), ∀i ∈ S and c_i ∼ N(0, 1), ∀i ∉ S. The EMS statistic is defined as the GLRT function under this hypothesis test:

F(S) = Prob(Data | H_1(S)) / Prob(Data | H_0) = (1/√|S|) Σ_{i∈S} c_i.  (21)
The problem of connected subgraph detection based on the EMS statistic is then formulated as

min_{S⊆V}  −(Σ_{i∈S} c_i)^2 / |S|   s.t.  S ∈ M(G, k),                 (22)

where the square of the EMS scan statistic is considered in order to make the function smooth; this transformation does not affect the optimal solution. Let x ∈ {0, 1}^n be the indicator vector of S, so that supp(x) = S. Problem (22) can be reformulated as

min_{x∈{0,1}^n}  −(c^T x)^2/(1^T x)   s.t.  supp(x) ∈ M(G, k),         (23)

where c = [c_1, · · · , c_n]^T. To apply our proposed algorithms, we relax the input domain of x and minimize the strongly convex function [3]:

min_{x∈R^n}  −(c^T x)^2/(1^T x) + (1/2) x^T x   s.t.  supp(x) ∈ M(G, k).   (24)
The connected subset of nodes can then be found as the subset of indices of positive entries in x̂, where x̂ refers to the solution of Problem (24). Assume that c is normalized so that c_i ≤ 1, ∀i, and let ĉ = max{c_1, · · · , c_n}. The Hessian matrix of the above objective function satisfies

(1 − ĉ^2) · I  ⪯  I − (c − (c^T x/1^T x)·1)(c − (c^T x/1^T x)·1)^T  ⪯  1 · I.    (25)

According to Lemma 1 (b) in [40], the objective function f(x) satisfies the (ξ, δ, M(G, 8k))-WRSC condition with

δ = √(1 − 2ξ(1 − ĉ^2) + ξ^2),

for any ξ such that ξ < 2(1 − ĉ^2). The geometric convergence of GRAPH-GHTP as shown in Theorem III.4 is thus guaranteed.
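As an illustration of how the relaxed EMS objective (24) plugs into GRAPH-IHT/GRAPH-GHTP, the following sketch (Python; my own illustration, not the released code [11]) evaluates f(x) = −(c^T x)^2/(1^T x) + (1/2) x^T x and its gradient, which is all the algorithms need besides the head/tail oracles.

import numpy as np

def ems_objective(x, c):
    # Relaxed (negated, squared) EMS statistic plus the strongly convex term (Eq. 24).
    s = np.sum(x)            # 1^T x; assumed nonzero at the evaluation point
    ctx = np.dot(c, x)       # c^T x
    return -(ctx ** 2) / s + 0.5 * np.dot(x, x)

def ems_gradient(x, c):
    # Gradient of the objective above:
    #   d/dx [ -(c^T x)^2 / (1^T x) ] = -2 (c^T x)/(1^T x) * c + (c^T x)^2/(1^T x)^2 * 1
    s = np.sum(x)
    ctx = np.dot(c, x)
    return -2.0 * (ctx / s) * c + (ctx ** 2) / (s ** 2) * np.ones_like(x) + x

# Quick finite-difference sanity check at a point with positive entries.
rng = np.random.default_rng(1)
c = rng.normal(size=6)
x = rng.uniform(0.1, 1.0, size=6)
eps = 1e-6
num = np.array([(ems_objective(x + eps * e, c) - ems_objective(x - eps * e, c)) / (2 * eps)
                for e in np.eye(6)])
print(np.max(np.abs(num - ems_gradient(x, c))))   # should be on the order of 1e-6 or smaller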
Unlike the EMS statistic, which is defined for numerical features based on the Gaussian distribution, Kulldorff's scan statistic and the expectation-based Poisson (EBP) statistic are defined for count features based on the Poisson distribution. In particular, each node i is associated with a feature c_i, the count of events (e.g., crimes, flu infections) observed at the current time, and a feature b_i, the expected count (or 'baseline') of events obtained from historical data. Let c = [c_1, · · · , c_n]^T and b = [b_1, · · · , b_n]^T. Kulldorff's scan statistic and the EBP scan statistic are described in Table I. We note that these two scan statistics do not satisfy the WRSC condition, but as demonstrated in our experiments, our proposed algorithms perform well empirically for all three scan statistics; in particular, our proposed GRAPH-GHTP converged in fewer than 10 iterations in all settings.
V. EXPERIMENTS
This section evaluates the performance of our proposed
methods using four public benchmark data sets for connected
subgraph detection. The experimental code and data sets are
available from the Link [11] for reproducibility.
TABLE I
The three typical graph scan statistics that are tested in our experiments (the vectors x, c, and b are defined in Section IV).

Kulldorff's Scan Statistic [25]
  Definition: c^T x · log(c^T x / b^T x) + (1^T c − c^T x) · log((1^T c − c^T x)/(1^T b − b^T x)) − 1^T c · log(1^T c / 1^T b)
  Applications: anomalous pattern detection in graphs with count features, such as detection of traffic bottlenecks in sensor networks [1], [22], detection of anomalous regions in digital signals and images [10], detection of attacks in computer networks [24], disease outbreak detection [36], and various others.

Expectation-based Poisson Statistic (EBP) [26]
  Definition: c^T x · log(c^T x / b^T x) + b^T x − c^T x
  Applications: used for the same applications as above [1], [12], [25], [36], [37], but with different assumptions on the data distribution [25].

Elevated Mean Scan Statistic (EMS) [29]
  Definition: c^T x / √(1^T x)
  Applications: anomalous pattern detection in graphs with numerical features, such as event detection in social networks, network surveillance, disease outbreak detection, and biomedical imaging [29], [35].
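For concreteness, the three statistics of Table I can be evaluated as follows (Python sketch of my own; here x is a binary indicator vector, c the observed counts/values, and b the baselines, and the involved sums are assumed positive so that the logarithms are defined).

import numpy as np

def kulldorff(x, c, b):
    # Kulldorff's scan statistic (Table I), Poisson likelihood-ratio form.
    C, B = np.dot(c, x), np.dot(b, x)            # counts inside the subgraph
    C_all, B_all = np.sum(c), np.sum(b)          # totals over all nodes
    return (C * np.log(C / B)
            + (C_all - C) * np.log((C_all - C) / (B_all - B))
            - C_all * np.log(C_all / B_all))

def ebp(x, c, b):
    # Expectation-based Poisson statistic (Table I).
    C, B = np.dot(c, x), np.dot(b, x)
    return C * np.log(C / B) + B - C

def ems(x, c):
    # Elevated mean scan statistic (Table I): c^T x / sqrt(1^T x).
    return np.dot(c, x) / np.sqrt(np.sum(x))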
TABLE II
Summary of dataset settings in the experiments. (For the network of each dataset, we only use its maximal connected component if it is not fully connected.)

BWSN: Detection of contaminated nodes. 12,527 nodes, 14,323 edges; snapshots: hourly, 3 × 6. Time periods: training on Hours 3 to 5 with 0% noise; testing on Hours 3 to 5 with 2%, 4%, · · · , 10% noise. Observed value at node v (c_v): sensor value (0 or 1). Baseline value at node v (b_v): average sensor value for EBP; constant '1' for Kulldorff.

CitHepPh: Detection of emerging research areas. 11,895 nodes, 75,873 edges; snapshots: yearly, 1 × 11. Time periods: testing on 1999 to 2002. Observed value (c_v): count of citations. Baseline value (b_v): average count of citations for EBP; maximum count of citations for Kulldorff.

Traffic: Detection of most congested subgraphs. 1,723 nodes, 5,301 edges; snapshots: per 15 min, 68 × 304. Time periods: testing on Mar. 2014, 5AM to 10PM. Observed value (c_v): −log(p_t(v)/µ) (see footnote). Baseline value (b_v): none (EBP and Kulldorff are not applicable).

ChicagoCrime: Detection of crime hot spots. 46,357 nodes, 168,020 edges; snapshots: yearly, 1 × 15. Time periods: testing on the year 2015. Observed value (c_v): count of burglaries. Baseline value (b_v): average count of burglaries for EBP; maximum count of burglaries for Kulldorff.

Footnote: p_t(v) refers to the statistical p-value of node v at time t, calculated via empirical calibration based on historical speed values of v from June 1, 2013 to Feb. 29, 2014; µ is a significance level threshold and is set to 0.15. The larger the value −log(p_t(v)/µ), the more congested the region near v.
A. Experiment Design
Datasets: 1) BWSN Dataset. A real-world water network is provided by the Battle of the Water Sensor Networks (BWSN) [28]; it has 12,527 nodes and 14,323 edges. In order to simulate a contaminated sub-area, 4 nodes with chemical contaminant plumes, distributed in this sub-area, were generated. We used the water network simulator EPANET [31], which was employed in BWSN, for a period of 3 hours to simulate the spread of contaminant plumes on this graph. If a node is polluted by the chemical, its sensor reports 1 in that hour, and 0 otherwise. To test the noise tolerance of our methods, K ∈ {2, 4, 6, 8, 10} percent of the vertices were selected randomly, and their sensor reports were flipped (set to 0 if their original reports were 1, and vice versa). Each hour yields a graph snapshot. The snapshots corresponding to the 3 hours with 0% noise are used for training, and the snapshots with 2%, · · · , 10% noisy reports for testing. The goal is to detect a connected subgraph corresponding to the contaminated sub-area. 2) CitHepPh Dataset. We downloaded the high energy physics phenomenology citation data (CitHepPh) from the Stanford Network Analysis Project (SNAP) [20]. This citation graph contains 11,897 papers, corresponding to graph vertices, and 75,873 edges. An undirected edge between two vertices (papers) exists if one paper is cited by the other. These papers were published between January 1992 and April 2002. Each vertex has two attributes for each specific year (t = 1992, · · · , t = 2002): the first attribute is the number of citations in that year, and the second attribute is the average number of citations of all papers in that year. The goal is to detect a connected subgraph in which the citation counts of the vertices (papers) are abnormally high compared with the citations of the vertices outside the subgraph. Such a connected subgraph is considered a potential emerging research area. Since training data is required by some baseline methods, the data before 1999 is used as training data and the remainder, from 1999 to 2002, as testing data. 3) Traffic Dataset. Road traffic speed data from June 1, 2013 to Mar. 31, 2014 on the arterial road network of the Washington D.C. region was collected from the INRIX database (http://inrix.com/publicsector.asp), with 1,723 nodes and 5,301 edges. The database provides the traffic speed of each link at a 15-minute rate. For each 15-minute interval, each method identifies a connected subgraph as the most congested region. 4) ChicagoCrime Dataset. We collected crime data from the City of Chicago [https://data.cityofchicago.org/Public-Safety/Crimes-2001-to-present/ijzp-q8t2] from Jan. 2001 to Dec. 2015. There are 46,357 nodes (census blocks) and 168,020 edges (two census blocks are connected if they are neighbors). Specifically, we collected all records of burglaries from 2001 to 2015; the data covers burglaries in the period from Jan. 2001 to Dec. 2015. Each vertex has an attribute denoting the number of burglaries in a specific year and the average number of burglaries over 10 years. We aim to detect connected census areas which have anomalously high numbers of burglaries. The data before 2010 is used as training data, and the data from 2011 to 2015 as testing data.
Graph Scan Statistics: As shown in Table I, three graph
scan statistics were considered as the scoring functions of
connected subgraphs, including Kulldorff’s scan statistic [25],
expectation-based Poisson (EBP) scan statistic [26], and elevated mean scan (EMS) statistic [29]. The first two statistic
functions require that each vertex v has a count cv representing
the count of events observed at that vertex, and an expected
count (‘baseline‘) bv . For EMS statistic, only cv is used. We
need to normalize cv for EMS as it is defined based on the
assumptions of standard normal distribution for normal values
and shifted-mean normal distribution for abnormal values.
Table II provides details about the calculations of cv and bv
for each data set.
Comparison Methods: We compared our proposed
methods with four state-of-the-art baseline methods that
are designed specifically for connected subgraph detection, namely, GraphLaplacian
[33], EventTree
[32], DepthFirstGraphScan [36] and NPHGS [8].
DepthFirstGraphScan is an exact search algorithm
based on depth-first search and takes weeks to run on graphs
that have more than 1000 nodes. We imposed a maximum limit
on the depth of the search to 10 to reduce its time complexity.
The basic ideas of these baseline methods are summarized
as follows: NPHGS starts from random seeds (nodes) as
initial candidate clusters and gradually expends each candidate
cluster by including its neighboring nodes that could help
improve its BJ statistic score until no new nodes can be added.
The candidate cluster with the largest BJ statistic score is
returned. DepthFirstGraphScan adopts a similar strategy
to NPHGS but expands the initial clusters based on depth-first
search. GraphLaplacian uses a graph Laplacian penalty
function to replace the connectivity constraint and converts
the problem to a convex optimization problem. EventTree
reformulates the connected subgraph detection problem as a
prize-collecting steiner tree (PCST) problem [19] and apply
the Goemans-Williamson (G-W) algorithm for PCST [19]
to detect anomalous subgraphs. We also implemented the
generalized fused lasso model (GenFusedLasso) for graph
scan statistics using the framework of alternating direction
method of multipliers (ADMM). GenFusedLasso method
solves the following minimization problem
X
minn −f (x) + λ
|xi − xj |,
(26)
x∈R
(i,j)∈E
where f (x) is a predefined graph scan statistic and the
trade-off parameter λ controls the degree of smoothness of
neighboring entries in x. We applied the heuristic rounding
step proposed in [29] to x to generate connected subgraphs.
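For reference, the graph total-variation penalty in (26) is straightforward to evaluate; the sketch below (Python; my own illustration) computes it from an edge list.

import numpy as np

def fused_lasso_penalty(x, edges):
    # Sum of |x_i - x_j| over all edges (i, j) of the graph, as in Eq. (26).
    return float(sum(abs(x[i] - x[j]) for i, j in edges))

x = np.array([0.0, 1.0, 1.2, 0.1])
edges = [(0, 1), (1, 2), (2, 3)]
print(fused_lasso_penalty(x, edges))   # |0-1| + |1-1.2| + |1.2-0.1| = 2.3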
Parameter Tuning: We strictly followed the strategies recommended by the authors in their original papers to tune the
related model parameters. Specifically, for EventTree, we
tested the set of λ values: {0.02, 0.04, · · · , 2.0, 3.0, · · · , 20}.
For Graph-Laplacian, we tested the set of λ values:
{0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 1} and returned the best result. For GenFusedLasso, we tested the set of λ values:
{0.02, 0.04, · · · , 2.0, 3.0, · · · , 20}. For NPHGS, we set the
suggested parameters by the authors: αmax = 0.15 and K =
5. Our proposed methods G RAPH -IHT and G RAPH -GHTP
have a single parameter k, an upper bound of the subgraph
size. We tested the set of k values: {50, 100, · · · , 1000}. As
the BWSN dataset has the ground truth about the contaminated
nodes, we identified the best parameter for each method that
has the largest F-measure in the training data. For the other
data sets, as we do not have ground truth about the true
subgraphs, for each specific scan statistic, we identified the
best parameter for each method that was able to identify the
connected subgraph with the largest statistic score.
Performance Metrics: 1) Optimization Power. The overall scores of the three graph scan statistic functions of the connected subgraphs returned by the comparison methods are compared and analyzed. The objective is to identify the methods that find connected subgraphs with the largest graph scan statistic scores. 2) Precision, Recall, and F-Measure. For the BWSN dataset, as the true anomalous subgraphs are known, we use the F-measure, which combines precision and recall, to evaluate the quality of the subgraphs detected by the different methods. 3) Run Time. The running times of the different methods are compared.
B. Evolving Curves of Graph Scan Statistics
Figure 3 compares our methods GRAPH-IHT and GRAPH-GHTP with GenFusedLasso on the scores of two graph scan statistics (Kulldorff's scan statistic and the elevated mean scan statistic (EMS)), based on the best connected subgraphs identified by the methods in different iterations. Note that a heuristic rounding process, as proposed in [29], is applied to the continuous vector x^i estimated by GenFusedLasso at each iteration i in order to identify the best connected subgraph at the current iteration. As the setting of the parameter λ influences the quality of the detected connected subgraph, the results under different λ values are also shown in Figure 2. The results indicate that our method GRAPH-GHTP converges in fewer than 10 iterations, while GRAPH-IHT needs more iterations. The quality (scan statistic scores) of the connected subgraphs identified at different iterations by our two methods is consistently higher than that of the subgraphs returned by GenFusedLasso.
C. Optimization Power
The comparisons between our method and the other baseline methods are shown in Table III and Table IV. The scores of the three graph scan statistics based on the connected subgraphs returned by these methods are reported in these two tables. The results indicate that our method outperformed all the baseline methods on the three graph scan statistics, except that EventTree achieved the highest Kulldorff score (16738.43) on the CitHepPh dataset, which is only 2.71% larger than the score returned by our method GRAPH-GHTP. We note that EventTree is a heuristic algorithm and does not provide a theoretical guarantee on the quality of the returned connected subgraph, as measured by the scan statistic scores.
D. Water Pollution Detection
Figure 4 shows the precision, recall, and F-measure of all the comparison methods on the detection of polluted nodes in the water distribution network in BWSN with respect to different noise ratios. The results indicate that our proposed method GRAPH-GHTP and DepthFirstGraphScan were the best methods on all three measures for most of the settings. However, DepthFirstGraphScan spent 5929 seconds to finish, while GRAPH-GHTP spent only 166 seconds, 35.8 times faster than DepthFirstGraphScan. EventTree achieved high recall but low precision consistently across settings. In contrast, GraphLaplacian and NPHGS achieved high precision but low recall in most settings.
Fig. 2. Scalability of GRAPH-GHTP with respect to k (the upper bound of the subgraph size).
Fig. 3. Evolving curves of graph scan statistic scores for our methods (GRAPH-IHT and GRAPH-GHTP) and GenFusedLasso in different iterations.
Fig. 4. Precision, recall, and F-measure curves for water pollution detection in BWSN with respect to different noise ratios: (a) BWSN (precision); (b) BWSN (recall); (c) BWSN (F-measure).
TABLE III
Comparison of the scores of the three graph scan statistics based on connected subgraphs returned by the comparison methods. EMS and EBP are the Elevated Mean Scan Statistic and the Expectation-Based Poisson Statistic, respectively.

                      |              BWSN                    |             CitHepPh
Method                | Kulldorff  EMS    EBP    Run Time    | Kulldorff  EMS     EBP      Run Time
GRAPH-GHTP            | 1097.15    21.56  79.71  165.86      | 16296.40   337.90  9342.94  155.74
GraphLaplacian        | 474.96     14.89  49.91  55315.94    | 2585.44    202.38  2305.05  22424.24
EventTree             | 834.59     20.25  32.13  441.74      | 16738.43   335.34  9061.56  124.28
DepthFirstGraphScan   | 735.85     20.41  79.30  5929.00     | 9531.19    260.06  5561.66  12183.88
NPHGS                 | 541.13     16.90  58.59  256.91      | 11965.14   326.23  9098.22  175.08

E. Scalability Analysis
Table III and Table IV also show the comparison between our proposed method GRAPH-GHTP and the other baseline methods on running time. The results indicate that our proposed method GRAPH-GHTP ran faster than all the baseline methods in most of the settings, except for EventTree. EventTree was the fastest method but was unable to detect subgraphs of high quality. As our method has a parameter for the upper bound (k) of the returned subgraph, we also evaluated the scalability of our method with respect to different values of k, as shown in Figure 2. The results indicate that the running time of our algorithm is insensitive to the setting of k, which is consistent with the time complexity analysis of GRAPH-GHTP discussed in Theorem III.5.
VI. RELATED WORK
A. Structured sparse optimization. The methods in this category have been briefly reviewed in the introduction. The most relevant work is by Hegde et al. [15]. The authors present GRAPH-COSAMP, a variant of COSAMP [23], for compressive sensing and linear regression problems based on head and tail approximations of M(G, k).
B. Connected subgraph detection. Existing methods in this category fall into three major categories: 1) Exact algorithms. The most recent method is a branch-and-bound algorithm, DepthFirstGraphScan [36], that runs in exponential time in the worst case; 2) Heuristic algorithms. The most recent methods in this category include EventTree [32], NPHGS [8], AdditiveScan [37],
TABLE IV
Comparison of the scores of the three graph scan statistics based on connected subgraphs returned by the comparison methods. EMS and EBP are the Elevated Mean Scan Statistic and the Expectation-Based Poisson Statistic, respectively. GraphLaplacian failed to run on ChicagoCrime due to an out-of-memory error.

                      |     Traffic        |              ChicagoCrime
Method                | EMS    Run Time    | Kulldorff  EMS   EBP      Run Time
GRAPH-GHTP            | 20.45  22.25       | 6386.08    5.45  5172.54  3177.60
GraphLaplacian        | 5.40   291.75      | n/a        n/a   n/a      n/a
EventTree             | 12.40  5.02        | 4388.42    4.91  3965.96  226.50
DepthFirstGraphScan   | 8.13   47.73       | 1123.49    2.56  1094.21  12133.50
NPHGS                 | 6.28   0.22        | 966.70     2.43  948.23   701.40
GraphLaplacian
[33], and EdgeLasso
[34]; 3)
Approximation algorithms that provide performance bounds.
The most recent method is presented by Qian et al.. The
authors reformulate the connectivity constraint as linear matrix
inequalities (LMI) and present a semi-definite programming
algorithm based on convex relaxation of the LMI [18, 19] with
a performance bound. However, this method is not scalable
to large graphs (≥ 1000 nodes). Most of the above methods
are considered as baseline methods in our experiments and
are briefly summarized in Section V-A.
VII. CONCLUSION
This paper presents GRAPH-IHT and GRAPH-GHTP, two efficient algorithms to optimize a general nonlinear objective subject to a connectivity constraint on the support of the variables. Extensive experiments demonstrate the effectiveness and efficiency of our algorithms. For future work, we plan to explore graph-structured constraints other than the connectivity constraint and to extend our proposed methods so that good theoretical properties can also be established for cost functions that do not satisfy the WRSC condition.
R EFERENCES
[1] B. Anbaroğlu, T. Cheng, and B. Heydecker. Non-recurrent traffic
congestion detection on heterogeneous urban road networks. TRANSPORTMETRICA, 11(9):754–771, 2015.
[2] M. Asteris, A. Kyrillidis, et al. Stay on path: PCA along graph paths. In ICML, pages 1728–1736, 2015.
[3] F. Bach. Learning with submodular functions: A convex optimization
perspective. arXiv:1111.6453, 2011.
[4] F. Bach, R. Jenatton, J. Mairal, G. Obozinski, et al. Structured sparsity
through convex optimization. Stat Sci, 27(4):450–468, 2012.
[5] S. Bahmani, P. T. Boufounos, and B. Raj. Learning model-based sparsity
via projected gradient descent. IT, 2016.
[6] S. Bahmani, B. Raj, and P. T. Boufounos. Greedy sparsity-constrained
optimization. JMLR, 14(1):807–841, 2013.
[7] T. Blumensath. Compressed sensing with nonlinear observations and
related nonlinear optimization problems. IT, 59(6):3466–3474, 2013.
[8] F. Chen and D. B. Neill. Non-parametric scan statistics for event
detection and forecasting in heterogeneous social media graphs. In ACM
SIGKDD, pages 1166–1175, 2014.
[9] F. Chen and D. B. Neill. Human rights event detection from heterogeneous social media graphs. Big Data, 3(1):34–40, 2015.
[10] J. W. Coulston and K. H. Riitters. Geographic analysis of forest health
indicators using spatial scan statistics. EM, 31(6):764–773, 2003.
[11] Dataset and code. https://github.com/baojianzhou/Graph-GHTP.
[12] W. L. Gorr and Y. Lee. Early warning system for temporary crime hot
spots. Journal of Quantitative Criminology, 31(1):25–47, 2015.
[13] C. Hegde, P. Indyk, and L. Schmidt. Approximation-tolerant modelbased compressive sensing. In SODA, pages 1544–1561. SIAM, 2014.
[14] C. Hegde, P. Indyk, and L. Schmidt. A fast approximation algorithm
for tree-sparse recovery. In ISIT, pages 1842–1846. IEEE, 2014.
[15] C. Hegde, P. Indyk, and L. Schmidt. A nearly-linear time framework
for graph-structured sparsity. In ICML, pages 928–937, 2015.
[16] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity.
JMLR, 12:3371–3412, 2011.
[17] L. Jacob, G. Obozinski, and J.-P. Vert. Group lasso with overlap and
graph lasso. In ICML, pages 433–440. ACM, 2009.
[18] P. Jain, A. Tewari, and P. Kar. On iterative hard thresholding methods
for high-dimensional m-estimation. In NIPS, pages 685–693, 2014.
[19] D. S. Johnson, M. Minkoff, and S. Phillips. The prize collecting steiner
tree problem: theory and practice. In SODA, pages 760–769, 2000.
[20] J. e. a. Leskovec. Graphs over time: densification laws, shrinking
diameters and possible explanations. In KDD, pages 177–187, 2005.
[21] J. Mairal and B. Yu. Supervised feature selection in graphs with path
coding penalties and network flows. JMLR, 14(1):2449–2485, 2013.
[22] R. Modarres and G. Patil. Hotspot detection with bivariate data. Journal
of Statistical planning and inference, 137(11):3643–3654, 2007.
[23] D. Needell and J. A. Tropp. Cosamp: Iterative signal recovery from
incomplete and inaccurate samples. ACHA, 26(3):301–321, 2009.
[24] J. Neil and H. et al. Scan statistics for the online detection of locally
anomalous subgraphs. Technometrics, 55(4):403–414, 2013.
[25] D. B. Neill. An empirical comparison of spatial scan statistics for
outbreak detection. IJHG, 8(1):1, 2009.
[26] D. B. Neill. Fast subset scan for spatial pattern detection. JRSS: Series
B (Statistical Methodology), 74(2):337–360, 2012.
[27] D. Oliveira and D. P. et al. Detection of patterns in water distribution
pipe breakage using spatial scan statistics for point events in a physical
network. JCCE, 25(1):21–30, 2010.
[28] A. Ostfeld and J. G. e. a. Uber. The battle of the water sensor networks
(bwsn): A design challenge for engineers and algorithms. JWRPM,
134(6):556–568, 2008.
[29] J. Qian, V. Saligrama, and Y. Chen. Connected sub-graph detection. In
AISTATS, 2014.
[30] R. T. Rockafellar. Monotone operators and the proximal point algorithm.
SIAM journal on control and optimization, 14(5):877–898, 1976.
[31] L. A. Rossman and W. Supply. Epanet 2 users manual, 2000.
[32] P. Rozenshtein, A. Anagnostopoulos, A. Gionis, and N. Tatti. Event
detection in activity networks. In SIGKDD, pages 1176–1185, 2014.
[33] J. Sharpnack, A. Rinaldo, and A. Singh. Changepoint detection over
graphs with the spectral scan statistic. arXiv:1206.0773, 2012.
[34] J. Sharpnack, A. Rinaldo, and A. Singh. Sparsistency of the edge lasso
over graphs. In AISTATS, pages 1028–1036, 2012.
[35] J. L. Sharpnack, A. Krishnamurthy, and A. Singh. Near-optimal anomaly
detection in graphs using lovász extended scan statistic. In NIPS, pages
1959–1967, 2013.
[36] S. Speakman, E. McFowland Iii, and D. B. Neill. Scalable detection of
anomalous patterns with connectivity constraints. JCGS, 0(ja):00–00, 0.
[37] S. Speakman, Y. Zhang, and D. B. Neill. Dynamic pattern detection
with temporal consistency and connectivity constraints. In ICDM, pages
697–706. IEEE, 2013.
[38] A. Tewari and R. et al. Greedy algorithms for structurally constrained
high dimensional problems. In NIPS, pages 882–890, 2011.
[39] B. Xin and K. et al. Efficient generalized fused lasso and its application
to the diagnosis of alzheimers disease. In AAAI, pages 2163–2169, 2014.
[40] X. Yuan, P. Li, and T. Zhang. Gradient hard thresholding pursuit for
sparsity-constrained optimization. In ICML, pages 127–135, 2014.
[41] X.-T. Yuan and Q. Liu. Newton greedy pursuit: A quadratic approximation method for sparsity-constrained optimization. In CVPR, pages
4122–4129. IEEE, 2014.
[42] T. Zhang. Adaptive forward-backward greedy algorithm for sparse
learning with linear models. In NIPS, pages 1921–1928, 2009.
| 8 |
A Linear Algorithm for Finding the Sink of
Unique Sink Orientations on Grids
arXiv:1709.08436v1 [] 25 Sep 2017
Xiaoming Sun, Jialin Zhang, Zhijie Zhang
September 26, 2017
Abstract
An orientation of a grid is called unique sink orientation (USO)
if each of its nonempty subgrids has a unique sink. Particularly, the
original grid itself has a unique global sink. In this work we investigate
the problem of how to find the global sink using the minimum number of queries to an oracle. There are two different oracle models: the vertex query model, where the orientations of all edges incident to the queried vertex are provided, and the edge query model, where the orientation of the queried edge is provided. In the 2-dimensional case, we design an optimal linear deterministic algorithm for the vertex query model and an almost linear deterministic algorithm for the edge query model; previously, the best known algorithms ran in O(N log N) time for the vertex query model and O(N^{1.404}) time for the edge query model.
1 Introduction
In this paper we consider a special type of d-dimensional grid, which is the
Cartesian product of d complete graphs. Each pair of vertices of the grid has
an edge if and only if they are distinct in exactly one coordinate. A subgrid
is the Cartesian product of cliques of the original complete graphs. Recall
that in an oriented graph a vertex with zero outdegree is called a sink. A
unique sink orientation (USO) of a grid is an orientation of its edges such
that each of its nonempty subgrids (including the original grid) has a unique
sink. Traditionally, an oriented grid with the above property is called a grid
USO.
The computational problem now is to find the unique global sink of a grid
USO. Two different oracle models were introduced in the literature [18, 7],
namely the vertex query model and the edge query model. A vertex query
Figure 1: (a) a 2-dimensional grid USO. (b) a 3-cube USO with a cycle.
reveals the orientation of all incident edges of the queried vertex, whereas
an edge query returns the orientation of the queried edge. We count for the
time overhead only the number of queries to the oracle.
In this paper, we restrict our main attention to the sink-finding problem on a 2-dimensional grid USO; see Figure 1 (a) for an instance. There are two reasons. As is well known, a planar grid USO must be acyclic [7]. In contrast, a d-dimensional grid USO with d > 2 may contain cycles. Figure
1 (b) depicts a cyclic cube (a grid with each of its dimensions having size
two). The acyclicity of a planar grid USO allows us to design algorithms
that enhance the “rank” of queried vertices step by step and finally reach
the unique sink. Besides, a fast algorithm running in the lower dimensional
case may improve the upper bound in the general case, due to the inherited
grid USO introduced by Gärtner et al. [7].
However, the vertex query model may seem unreasonable when the dimension is fixed: in practice it takes linear (or polynomial) time to implement a vertex query, while the total number of vertices is polynomial, so the time for a vertex query is never negligible compared with the number of queries. There are nevertheless several reasons to justify the vertex query model. First,
the vertex query model is theoretically simpler than the edge query model
in that it is easy to formally capture all the information coming from a
single vertex query. Second, the number of vertex queries is a good measure
of complexity for a general grid USO of d dimension and luckily, due to
the inherited grid structure, algorithms running on a fixed-dimensional grid
USO may be adapted to a general d-dimensional grid USO. Third, it turns
out that our algorithm under the vertex query model serves as a black box
when addressing the more natural and practical edge query model. In a
word, though being mostly of theoretical interest for the grid USO of fixed
dimension, the study of the vertex query model still has potential practical
applications.
The sink-finding problem on a planar grid USO has an intuitive interpretation. Assume we have a matrix as input. Only numbers in the same
row or in the same column are allowed to be compared. Each submatrix has
exactly one minimum number. How many comparisons do we need to find
the global minimum number?
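To make the two oracle models concrete, here is a small illustrative sketch (ours, not part of the paper): it builds the grid USO induced by a matrix with pairwise distinct entries, which is exactly the orientation described in the interpretation above, and exposes both a vertex query and an edge query. The class and identifier names are our own assumptions.

```python
# Illustrative sketch: the planar grid USO induced by a matrix with distinct
# entries, where every edge points from the larger entry to the smaller one.
import random

class MatrixGridUSO:
    def __init__(self, matrix):
        self.a = matrix
        self.m, self.n = len(matrix), len(matrix[0])
        self.queries = 0                                   # oracle calls made so far

    def edge_query(self, u, v):
        # u, v must lie in the same row or column; returns the endpoint the edge
        # points to (the smaller entry).
        (i, j), (k, l) = u, v
        assert i == k or j == l
        self.queries += 1
        return u if self.a[i][j] < self.a[k][l] else v

    def vertex_query(self, v):
        # Returns the set of out-neighbours of v (the vertices v points to).
        i, j = v
        self.queries += 1
        out = {(i, l) for l in range(self.n) if l != j and self.a[i][l] < self.a[i][j]}
        out |= {(k, j) for k in range(self.m) if k != i and self.a[k][j] < self.a[i][j]}
        return out

mat = [[random.random() for _ in range(5)] for _ in range(4)]
uso = MatrixGridUSO(mat)
sink = min(((i, j) for i in range(4) for j in range(5)), key=lambda p: mat[p[0]][p[1]])
assert uso.vertex_query(sink) == set()     # the position of the overall minimum is the global sink
```

In this matrix-induced orientation every submatrix has a unique sink, namely its minimum entry, so it is a valid grid USO to experiment with.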
The grid USO of dimension two serves as a simple combinatorial framework for many well-studied optimization problems. The first example is the
one line and N points problem, first studied in [9]. Suppose there are N
points in general position in the plane and one vertical line to which none of
those N points belongs. There is a line segment between a pair of points if
and only if this segment intersects with the vertical line. The problem asks
to find the unique segment that has the lowest intersection with the vertical
line. This problem can be recast to the sink-finding problem of an implicit
planar grid USO in the following way [4]. Each line segment defines a grid
vertex. Two line segments are adjacent if and only if they share exactly one
endpoint. The higher one has an oriented edge to the lower one. The total
orientation turns out to be an USO. The lowest segment corresponds to the
global sink.
Another optimization problem that can be considered a special case of the sink-finding problem on a planar grid USO is linear programming on (N − 2)-polytopes with N facets. Felsner et al. [4] showed that the vertex-edge graph of such a polytope, oriented by the linear objective function, is
isomorphic to a planar grid USO. The global sink, again, corresponds to the
optimal solution of this special type of linear programs. It is of interest to
take a look at how the algorithms devised for finding the sink of the planar
grid USO perform on the vertex-edge graph of the above polytope. We provide
such an example in Section 3.
We have seen some relationship between the grid USO (of dimension
two) and linear programs. Indeed, one main motivation of the study of the
(general) grid USO is that it is closely related to linear programs. It is
generally known that although there are polynomial algorithms for solving
linear programs, such as the ellipsoid algorithm by Khachiyan [13] and the
interior-point algorithm by Karmarkar [12], neither of them is strongly polynomial. It remains open whether such an algorithm exists, and it is also unknown whether there exists a pivoting rule under which the simplex method runs in polynomial time. Several well-known (randomized) pivoting rules, such as Random-Edge, Random-Facet [11, 15] and Random-Bland [2], have failed to reach such a bound [14, 16, 5, 6]. It turns out that the
unique sink orientation may help devise an outperforming algorithm to solve
linear programs. Holt and Klee [10] showed that an orientation of a polytope
is an acyclic unique sink orientation (AUSO) with the Holt-Klee property if
it is induced from an LP instance. The Holt-Klee property states that the
number of vertex-disjoint directed paths from the source to the sink equals the number of neighbours of the source (or, equivalently, of the sink) in every
subgrid. Furthermore, Gärtner and Schurr [8] proved that any LP instance
in d nonnegative variables defines a d-dimensional cube USO. The sink of
this cube corresponds naturally to an optimal solution to the LP. The vertex
query oracle comes down to Gaussian elimination. A polynomial sink-finding
algorithm would yield a corresponding algorithm to solve linear programs.
Besides linear programs, the underlying combinatorial structures of many
other optimization problems are actually a grid USO. An important example
is the generalized linear complementarity problem over a P-matrix (PGLCP),
first introduced by Cottle and Dantzig [3]. Gärtner et al. [7] showed that
this problem can be recast to the sink-finding problem of an implicit grid
USO.
Previous work. The sink-finding problem of a grid USO was first put
forward formally and studied by Szabó and Welzl [18], where they restricted
their attention to a d-dimensional cube. They designed the first nontrivial
deterministic and randomized algorithms which use O(cd ) vertex queries for
some constant c < 2. Later Gärtner et al. [7] extended the two oracle
models to a d-dimensional grid USO. In that paper they investigated several
properties of a grid USO and introduced randomized algorithms for both
oracle models. However, no nontrivial deterministic algorithm was found
for a d-dimensional grid USO at that time. Attempting to find such an
algorithm, Barba et al. [1] first paid their attention to the planar case.
Here we state some known results in the planar case, given a 2-dimensional
grid USO with m × n vertices. Assume that N = m + n. In the randomized
setting, Gärtner et al. [7] prove an upper bound of O(log m · log n) for
the vertex query model, against a lower bound of Ω(log m + log n) claimed
by Barba et al. [1]. It is necessary to mention the performance of the most
natural Random-Edge algorithm on the planar grid USO, which chooses the
next queried vertex randomly from the out-going neighbours of the current
one. Gärtner et al. [9] proved that it runs in Θ(log² N) queries under the Holt-Klee condition, and Milatz [17] extended this result to the general planar grid
USO. In the edge query model, the unique sink of a 2-dimensional grid USO
can be obtained with Θ(N ) queries in expectation [7]. In the deterministic
setting, Barba et al. [1] exhibit an algorithm using O(N log N ) vertex queries
to find the sink and another algorithm using O(N^{log_4 7}) edge queries. In
particular, they introduced an O(N ) algorithm for the vertex query model
under the Holt-Klee condition [10].
Our contributions. The main contribution of our paper is Lemma 4, which states that, after querying a suitable linear number of vertices, we can exclude a certain row and a certain column from containing the global sink. Based on it, we prove that m + n − 1 vertex queries suffice to determine the global sink in the worst case, which coincides exactly with the lower bound for the vertex query model. Using this as a black box, we exhibit an O(N · 2^{2√(log N)}) deterministic algorithm for the edge query model. We note that for a d-dimensional grid USO our algorithm yields an upper bound of O(N̂^{⌈d/2⌉}) for the vertex query model, where N̂ denotes the sum of the sizes of the dimensions.
Paper organization. In Section 2 we establish some notations for a
planar grid USO and introduce some known properties. In Section 3 we
handle the vertex query model and give an optimal algorithm. We address
the edge query model based on the algorithm for the vertex query model in
Section 4. And at last we conclude the paper in Section 5 with some open
problems.
2
Preliminaries
First we provide some definitions and notations for the planar case. Denote
by Kn the complete graph with n vertices. An (m, n)-grid is the Cartesian
product Km × Kn . Its vertex set is defined to be the Cartesian product
[m] × [n], where [n] := {1, 2, . . . , n}. Elements in [n] are called coordinates,
and there are N = m + n coordinates. Throughout this paper, we identify
[m] × [n] with Km × Kn . A subgrid is then the Cartesian product I × J,
where I ⊆ [m] and J ⊆ [n]. For the sake of convenience, we say all vertices
with the same first-coordinate form a row. A column is defined analogously.
Hence two vertices are adjacent if and only if they are in the same row or in
the same column. Denote by u_{ij} the vertex at the intersection of the i-th row and the j-th column.
Let Tv (m, n) be the number of vertex queries needed in the worst case
to find the sink of an (m, n)-grid USO deterministically and Tv (n) for short
when m = n. Similarly, Te (m, n) and Te (n) are defined for the edge query
model. Following the tradition of the previous works [18, 1], in the vertex
query model the global sink must be queried even if we have already known
its position before it is queried. Thus, for instance, Tv (1) = 1 and Tv (2) = 3,
instead of Tv (1) = 0 and Tv (2) = 2. Readers will find the benefit of this
tradition shortly.
Now we introduce some known properties about the (m, n)-grid USO.
Lemma 1 ([7]). Every (m, n)-grid USO is acyclic.
Suppose G is an (m, n)-grid USO. This lemma allows us to define a partial
order on the vertex set [m] × [n] of G. For any two vertices u, v ∈ G, define u ⪰ v if and only if either u = v or there exists a directed path from u to v. In other words, we say that u is larger than v. The unique sink
corresponds to the unique minimum vertex.
Barba et al. claimed a lower bound of m + n − 1 for the vertex query
model without a proof [1]. For the completeness we give a simple adversary
argument of this lower bound.
Lemma 2. Tv (m, n) ≥ m + n − 1.
Proof. Here is the answering strategy of the adversary. Let the first queried
vertex be the sink of the first row. Make all vertices in this row point out
to their adjacent vertices in other rows. Thus this vertex query eliminates
exactly the first row and any query of the other vertices in this row gives no
more information. By induction the i-th queried vertex eliminates exactly
the i-th row and therefore m − 1 vertex queries are necessary to eliminate all
m rows but the last row. At this time, we need n vertex queries instead of
n − 1 to find the sink of the last row, recalling the definition of Tv (m, n).
Induced grid USO. Barba et al. [1] discovered a simple construction of an induced grid USO from an (m, n)-grid USO, which greatly helped their design of algorithms for both oracle models. It is worth describing the construction in detail.
Assume G is an (m, n)-grid USO. Let P = {P1 , . . . , Pk } be a partition of
[m] and Q = {Q1 , . . . , Ql } be a partition of [n]. Each Pi × Qj is a subgrid of
G. Let H be a (k, l)-grid with vertex set {Pi × Qj | i = 1, . . . , k, j = 1, . . . , l}.
As before, two distinct vertices x = Pi × Qj and y = Pi′ × Qj′ are adjacent in H if and only if they are in the same row or in the same column, i.e. i = i′ or j = j′. Suppose that x and y are adjacent; it remains to determine the orientation of the edge xy in H. Recall that x and y are subgrids of G, and therefore by the USO property x and y have unique sinks ux and uy in G, respectively. If ux has an outgoing edge to some vertex of y in G, then we make x point to y in H; otherwise we make y point to x. This orientation is well-defined due to the acyclicity of an (m, n)-grid USO (see Figure 2). Such a grid H is called an induced grid of G. Moreover, it was proved that the induced grid H also satisfies the USO property [1]:
Figure 2: (a) let P1 = {1, 2}, P2 = {3}, Q1 = {1} and Q2 = {2, 3}. The
(3, 3)-grid USO is divided into four subgrids accordingly. (b) The induced
grid USO defined according to the partition.
Lemma 3. Let G be a 2-dimensional grid USO and H be an induced grid
constructed from G. Then H is also a 2-dimensional grid USO and the sink
of H is the subgrid of G which contains the sink of G.
3 Vertex Query Model
This section is devoted to the vertex query model. We first explore an important combinatorial property of the 2-dimensional grid USO. Then we exploit this property to obtain an optimal deterministic algorithm using m + n − 1 vertex queries in the worst case. Next we provide an intuitive interpretation of this algorithm in connection with linear programs on (N − 2)-polytopes with N facets. To end this section, we adapt our algorithm to the higher-dimensional case.
Let G be an (m, n)-grid USO. For a queried vertex v = u_{ij}, let Iv ⊆ [m] be the collection of first coordinates such that u ⪰ v for every u ∈ Iv × {j}. Note that v itself is included in Iv × {j}. The set Jv ⊆ [n] is defined analogously. Clearly v is the unique sink of the subgrid Iv × Jv. Hence, if v is not the global sink of G, every vertex in the subgrid Iv × Jv is excluded from being the global sink, since otherwise there would be two sinks in Iv × Jv. Thus, if v is not the global sink, the query of v eliminates exactly the corresponding subgrid Iv × Jv.
Assume that m = n. First query arbitrary n vertices {v1 , . . . , vn } in
distinct rows and distinct columns. Suppose w.l.o.g. that none of them is
the sink of G, then the subgrids Ivi × Jvi are eliminated, for i = 1, . . . , n.
Figure 3: Induction on a (6, 6)-grid USO. White vertices are eliminated. All
vertices in I × {6} and {6} × J are larger than v6 .
Now there are a lot of eliminated vertices. The lemma below answers in a
way how many such vertices there are.
Lemma 4. Let G be an (n, n)-grid USO. After querying arbitrary n vertices
of G in distinct rows and distinct columns and eliminating corresponding
subgrids, at least one row and one column are eliminated.
Proof. We adopt induction on the scale n. There is nothing to prove in
the trivial case n = 1. Assume the lemma holds for smaller values of n.
Also we assume that none of the n vertices has ever been the global sink,
since otherwise the lemma naturally holds. Note that shuffling rows and
columns does not change the underlying structure of G. So by rearranging
the coordinates, we may assume that all queried vertices {v1 , . . . , vn } lie in
the diagonal, i.e. vi = uii, and further assume that for all 1 ≤ i < j ≤ n either vi ⪰ vj or vi and vj are incomparable. Consider the (2, 2)-subgrid
Hi,j spanned by vi and vj . By the assumption there is no path from vj to vi
in Hi,j . Hence by the USO property vj cannot be the source of Hi,j , which
implies in Hi,j there is at least one incoming edge of vj . This simple fact was
first observed by Barba et al. [1].
In particular, vn has at least one incoming edge in each subgrid Hi,n ,
which means either i ∈ Ivn or i ∈ Jvn , for 1 ≤ i ≤ n − 1. Let I = Ivn ∩ [n − 1]
and J = [n − 1]\I. Clearly J ⊆ Jvn ∩ [n − 1], since J ∩ Ivn = ∅. Accordingly,
we divide the (n − 1, n − 1)-grid spanned by {v1 , . . . , vn−1 } into subgrids
I × I, J × J, I × J and J × I. The subgrid J × I is of no relevance in our
discussion. The subgrid I × J ⊆ Ivn × Jvn and therefore all of its vertices are
eliminated. Square subgrids I × I and J × J both contain queried vertices
in their diagonals, respectively. So by the inductive assumption both of
them contain one eliminated row and one eliminated column, respectively.
Let i ∈ I be the coordinate such that {i} × I is the eliminated row in
I × I. The subgrid {i} × J ⊆ I × J and therefore is eliminated. The
vertex {i} × {n} ∈ Ivn × {n} and is also eliminated. To sum up, the row
{i} × [n] = {i} × (I ∪ J ∪ {n}) is eliminated. Similarly, let j ∈ J be the
coordinate such that J × {j} is the eliminated column in J × J. The column
[n] × {j} turns out to be an eliminated column of the original grid by the
same argument. The proof now is completed. See Figure 3 for an intuitive
example.
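As a quick sanity check (ours, not the authors'), the sketch below instantiates the lemma on matrix-induced grid USOs, where an edge points from the larger entry to the smaller one: it queries the diagonal vertices, eliminates each subgrid I_v × J_v, and asserts that a full row and a full column are eliminated. The function name and the use of random matrices are our own illustrative assumptions.

```python
# Empirical check of Lemma 4 on matrix-induced (n, n)-grid USOs.
import random

def check_lemma4(n=6, trials=200):
    checked = 0
    while checked < trials:
        a = [[random.random() for _ in range(n)] for _ in range(n)]
        sink = min(((i, j) for i in range(n) for j in range(n)),
                   key=lambda p: a[p[0]][p[1]])
        if sink[0] == sink[1]:
            continue            # a diagonal query would hit the global sink; skip, as in the proof
        eliminated = set()
        for d in range(n):      # query the diagonal vertices v = u_{dd}
            i_v = [i for i in range(n) if a[i][d] >= a[d][d]]   # rows whose vertex in column d is larger
            j_v = [j for j in range(n) if a[d][j] >= a[d][d]]   # columns whose vertex in row d is larger
            eliminated |= {(i, j) for i in i_v for j in j_v}    # I_v x J_v cannot contain the sink
        assert any(all((i, j) in eliminated for j in range(n)) for i in range(n))  # a full row is gone
        assert any(all((i, j) in eliminated for i in range(n)) for j in range(n))  # a full column is gone
        assert sink not in eliminated
        checked += 1
    return True

print(check_lemma4())
```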
Lemma 4 makes full use of the information from the queried vertices and
naturally leads to the following algorithm depicted as Algorithm 1.
Algorithm 1: Diagonal Algorithm
Input: An (n, n)-grid USO G
Output: The unique sink of G
1  query arbitrary n vertices in distinct rows and distinct columns;
2  while the sink has not been queried do
3      eliminate one row and one column, say the i-th row and the j-th column;
4      if two distinct queried vertices u_{ij′} and u_{i′j} are eliminated then
5          query vertex u_{i′j′};
6      end
7  end
We first query arbitrary n vertices in distinct rows and distinct columns (line 1). By Lemma 4, one row and one column are excluded from containing the global sink, say the i-th row and the j-th column (line 3). Now we need to handle a subgrid USO of scale n − 1. Note that at most 2 queried vertices are eliminated, since each row (or column) contains exactly one queried vertex. If two distinct queried vertices u_{ij′} and u_{i′j} are eliminated, one more query of the vertex u_{i′j′} makes the subgrid contain n − 1 queried vertices in distinct rows and distinct columns (lines 4-6). Otherwise, the subgrid already contains n − 1 such queried vertices. In either case, we can apply Lemma 4 again, and at most another n − 1 queries suffice to find the global sink. The argument above leads to the theorem below.
Theorem 1. There exists a deterministic algorithm using 2n − 1 vertex
queries in the worst case to find the unique sink of an (n, n)-grid USO.
We can easily extend Algorithm 1 to an arbitrary (m, n)-grid USO. Assume that m < n. First query m vertices in distinct rows and distinct columns. Then the m columns containing queried vertices form an (m, m)-subgrid USO. By Lemma 4, one column is eliminated, and there remain m − 1 queried vertices. Next, one more appropriate vertex query again excludes one column. Repeat this procedure until exactly m columns remain, i.e. an (m, m)-subgrid USO, which contains m − 1 queried vertices. Now we can eliminate one row and one column at the same time after every vertex query, and another m queries suffice to determine the global sink. To conclude, we have
Theorem 2. There exists a deterministic algorithm using m + n − 1 vertex
queries in the worst case to find the unique sink of an (m, n)-grid USO.
Combined with Lemma 2, this theorem implies that Tv (m, n) = m+n−1,
so Algorithm 1 is optimal.
As mentioned before, the vertex-edge graph of an (N − 2)-polytope with N facets is isomorphic to an (m, n)-grid USO G for some m, n with N = m + n. There are N coordinates in G in total, and each coordinate represents a facet of the original polytope. Note that a vertex v = u_{ij} of G is the intersection of some N − 2 facets; the coordinates i and j mean that v does not lie in the corresponding two facets. Two vertices are adjacent in the vertex-edge graph if and only if they belong to exactly the same N − 3 facets. If in some way (e.g. by Lemma 4) the i-th row (or column) is excluded from containing the global sink, we can deduce that the global sink must lie in the corresponding facet. The procedure of Algorithm 1 then becomes rather clear. At each step, Algorithm 1 arbitrarily queries a new vertex that is not adjacent to the previously queried vertices (line 1), in order to involve as many facets as possible. Once every vertex is adjacent to at least one queried vertex, it is guaranteed that the global sink belongs to a certain facet (or intersection of several facets) (line 3). Later queries are then restricted to that facet (lines 4-6). From the viewpoint of linear programs, such an algorithm is rather interesting.
Though less practical than the edge query model, the vertex query model still has potential applications. One of them is to reach a better upper bound for the general d-dimensional grid USO, combined with the inherited grid USO structure. Roughly speaking, fix the coordinates of two dimensions of sizes n1 and n2; the vertices sharing the two fixed coordinates form a subgrid of the original grid. The vertex set of the inherited grid is the collection of these n1 × n2 subgrids. The definition and the orientation of the edges are the same as those in the induced grid USO. Let N̂ be the sum of the sizes of the dimensions and T̂v(d) be the worst-case number of vertex queries for a d-dimensional grid USO. Running Algorithm 1 on the inherited grid USO yields the recurrence
T̂v(d) ≤ (n1 + n2 − 1) · T̂v(d − 2).
Solving it, we have the following corollary.
Corollary 1. There exists a deterministic algorithm using O(N̂^{⌈d/2⌉}) vertex queries in the worst case to find the unique sink of a d-dimensional grid USO.
4 Edge Query Model
Though optimal, Algorithm 1 may not be a good choice for solving optimization problems like one line and N points, since implementing a vertex query actually takes linear time. However, there are potential applications of this result. An immediate one is a fast algorithm under the edge query model.
Throughout this section we assume that m = n, since one can always add rows or columns to make the grid square without changing the structure of the original grid or the position of the global sink. We extend the divide-and-conquer strategy in [1] to an almost linear algorithm using Algorithm 1 as a black box. The formal description is depicted as Algorithm 2.
Algorithm 2: Divide-and-Conquer
Input: An (n, n)-grid USO G
Output: The unique sink of G
1 construct an induced (k, k)-grid USO H from G;
2 run Algorithm 1 on H under the vertex query model;
Let G be an (n, n)-grid USO, and let H be an induced (k, k)-grid USO constructed from G. The construction of H amounts to choosing two partitions of [n] and takes constant time (line 1). As shown in Algorithm 2, the main
idea is to run Algorithm 1 on H under the vertex query model (line 2). By
Theorem 1, 2k − 1 vertex queries suffice to determine the sink of H. Recall
that each vertex in H corresponds to a subgrid in G, and that the sink of
H is the subgrid of G which contains the sink of G. Note that a vertex
query returns the orientation of all the incident edges. Hence according to
the definition of the orientation of edges in H, to implement a vertex query
in H, we need to (i) find the sink of the corresponding subgrid in G and (ii)
query all edges incident to this local sink in G. In a word, a vertex query
in H is equivalent to at most Te(n/k) + 2n − 2 edge queries in G. The above argument implies the recurrence below:
Te(n) ≤ (2k − 1)(Te(n/k) + 2n − 2).
Note that our careful definition of Tv (m, n), the number of vertex queries
in the worst case, pays off here — the sink of H has already been queried,
which means that the sink of the corresponding subgrid in G, i.e. the sink
of G, has been found.
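To make the reduction concrete, here is a small illustrative sketch (ours, not the authors' code) that simulates a single vertex query of the induced grid H by edge queries on a matrix-induced USO: step (i) finds the sink of the queried block by walking to smaller neighbours (in the analysis this is the recursive call costing Te(n/k)), and step (ii) orients the incident H-edges by querying the edges incident to that local sink. All identifiers and the choice of a matrix oracle are our own assumptions.

```python
# Simulating one vertex query of the induced grid H via edge queries on G.
import random

def make_edge_query(a):
    def edge_query(u, v):                 # returns the endpoint the edge points to
        return u if a[u[0]][u[1]] < a[v[0]][v[1]] else v
    return edge_query

def block_sink(edge_query, R, C):
    v = (R[0], C[0])
    while True:
        nbrs = [(i, v[1]) for i in R if i != v[0]] + [(v[0], j) for j in C if j != v[1]]
        out = [u for u in nbrs if edge_query(v, u) == u]   # neighbours v points to
        if not out:
            return v                      # v is the sink of the subgrid R x C
        v = out[0]

def induced_vertex_query(edge_query, row_blocks, col_blocks, a_idx, b_idx):
    R, C = row_blocks[a_idx], col_blocks[b_idx]
    s = block_sink(edge_query, R, C)      # step (i): find the local sink of the block
    out = set()
    for b2, C2 in enumerate(col_blocks):  # step (ii): orient H-edges along the block-row
        if b2 != b_idx and any(edge_query(s, (s[0], j)) != s for j in C2):
            out.add((a_idx, b2))
    for a2, R2 in enumerate(row_blocks):  # ... and along the block-column
        if a2 != a_idx and any(edge_query(s, (i, s[1])) != s for i in R2):
            out.add((a2, b_idx))
    return out

n, k = 9, 3
mat = [[random.random() for _ in range(n)] for _ in range(n)]
blocks = [list(range(i, i + n // k)) for i in range(0, n, n // k)]
print(induced_vertex_query(make_edge_query(mat), blocks, blocks, 0, 0))
```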
Solving the recurrence gives Te(n) = O(n^{log_k(2k−1)}) if we set k = O(1), which coincides with the result in [1] when k = 4. Furthermore, we set k = 2^{2√(log n)}, where log denotes the logarithm to base 2. Assume that Te(n) ≤ cn · 2^{2√(log n)}, where c > 4, for smaller values of n. Then we have
Te(n) ≤ 2kc(n/k) · 2^{2√(log n − log k)} + 4kn = 2cn · 2^{2√(log n − 2√(log n))} + 4n · 2^{2√(log n)}.
To make Te(n) ≤ cn · 2^{2√(log n)}, we only need to ensure that
2^{2(√(log n) − √(log n − 2√(log n)))} ≥ 2c/(c − 4).
The left side is monotone decreasing and its limit is 4; hence the inequality holds once c ≥ 8. Therefore Te(n) = O(n · 2^{2√(log n)}).
Notice that n · 2^{2√(log n)} = o(n^{1+ε}) for any ε > 0, so Algorithm 2 is only mildly superlinear. We conclude this section with the following theorem.
Theorem 3. There exists an O(N · 2^{2√(log N)}) deterministic algorithm to find the sink of an (m, n)-grid USO under the edge query model, where N = m + n.
5 Conclusion
In this paper, we have discovered a new combinatorial property (Lemma 4) of the 2-dimensional grid USO and developed deterministic algorithms for both oracle models based on it, one optimal, the other nearly optimal. In the randomized setting, however, all known randomized algorithms only reach an upper bound of O(log² N), against the lower bound of Ω(log N). By further exploiting Lemma 4, one may hope to devise an optimal algorithm to close the gap. For the general d-dimensional grid USO, it would be interesting to know whether a similar combinatorial property exists.
References
[1] Luis Barba, Malte Milatz, Jerri Nummenpalo, and Antonis Thomas. Deterministic algorithms for unique sink orientations of grids. Computing
and Combinatorics Conference, pages 357–369, 2016.
[2] Robert G Bland. New finite pivoting rules for the simplex method.
Mathematics of Operations Research, 2(2):103–107, 1977.
[3] Richard W Cottle and George B Dantzig. A generalization of the linear
complementarity problem. Journal of Combinatorial Theory, 8(1):79–
90, 1970.
[4] Stefan Felsner, Bernd Gärtner, and Falk Tschirschnitz. Grid orientations, (d,d + 2)-polytopes, and arrangements of pseudolines. Discrete
and Computational Geometry, 34(3):411–437, 2005.
[5] Oliver Friedmann, Thomas Dueholm Hansen, and Uri Zwick. Subexponential lower bounds for randomized pivoting rules for the simplex
algorithm. In Proceedings of the forty-third annual ACM symposium on
Theory of computing, pages 283–292. ACM, 2011.
[6] Oliver Friedmann, Thomas Dueholm Hansen, and Uri Zwick. Randomfacet and random-bland require subexponential time even for shortest
paths. arXiv preprint arXiv:1410.7530, 2014.
[7] Bernd Gärtner, Walter Morris, and Leo Rüst. Unique sink orientations
of grids. Algorithmica, 51(2):200–235, 2008.
[8] Bernd Gärtner and Ingo Schurr. Linear programming and unique sink
orientations. Symposium on Discrete Algorithms, pages 749–757, 2006.
[9] Bernd Gärtner, Falk Tschirschnitz, Emo Welzl, József Solymosi, and
Pavel Valtr. One line and n points. Random Structures and Algorithms,
23(4):453–471, 2003.
[10] Fred Holt and Victor Klee. A proof of the strict monotone 4-step conjecture. Contemporary Mathematics, 223:201–216, 1999.
[11] Gil Kalai. A subexponential randomized simplex algorithm. In Proceedings of the twenty-fourth annual ACM symposium on Theory of computing, pages 475–482. ACM, 1992.
[12] Narendra Karmarkar. A new polynomial-time algorithm for linear programming. In Proceedings of the sixteenth annual ACM symposium on
Theory of computing, pages 302–311. ACM, 1984.
[13] Leonid G Khachiyan. Polynomial algorithms in linear programming.
USSR Computational Mathematics and Mathematical Physics, 20(1):53–
72, 1980.
[14] Jiřı́ Matoušek. Lower bounds for a subexponential optimization algorithm. Random Structures and Algorithms, 5(4):591–607, 1994.
[15] Jiřı́ Matoušek, Micha Sharir, and Emo Welzl. A subexponential bound
for linear programming. Algorithmica, 16(4-5):498–516, 1996.
[16] Jiřı́ Matoušek and Tibor Szabó. Random edge can be exponential on
abstract cubes. Advances in Mathematics, 204(1):262–277, 2006.
[17] Malte Milatz. Directed random walks on polytopes with few facets.
CoRR, abs/1705.10243, 2017.
[18] Tibor Szabó and Emo Welzl. Unique sink orientations of cubes. In Foundations of Computer Science, 2001. Proceedings. 42nd IEEE Symposium
on, pages 547–555. IEEE, 2001.
| 8 |
arXiv:1702.03150v1 [] 10 Feb 2017
Autocommuting probability of a finite
group relative to its subgroups
Parama Dutta and Rajat Kanti Nath∗
Department of Mathematical Sciences, Tezpur University,
Napaam-784028, Sonitpur, Assam, India.
Emails: parama@gonitsora.com and rajatkantinath@yahoo.com
Abstract
Let H ⊆ K be two subgroups of a finite group G and Aut(K) the
automorphism group of K. The autocommuting probability of G relative
to its subgroups H and K, denoted by Pr(H, Aut(K)), is the probability
that the autocommutator of a randomly chosen pair of elements, one from
H and the other from Aut(K), is equal to the identity element of G. In
this paper, we study Pr(H, Aut(K)) through a generalization.
Key words: Automorphism group, Autocommuting probability, Autoisoclinism.
2010 Mathematics Subject Classification: 20D60, 20P05, 20F28.
1
Introduction
Let G be a finite group acting on a set Ω. Let Pr(G, Ω) denote the probability that
a randomly chosen element of Ω fixes a randomly chosen element of G. In 1975,
Sherman [11] initiated the study of Pr(G, Ω) considering G to be an abelian group and
Ω = Aut(G), the automorphism group of G. Note that
Pr(G, Aut(G)) = |{(x, α) ∈ G × Aut(G) : [x, α] = 1}| / (|G| |Aut(G)|),
where [x, α] is the autocommutator of x and α, defined as x^{-1}α(x). The ratio
Pr(G, Aut(G)) is called autocommuting probability of G.
Let H and K be two subgroups of a finite group G such that H ⊆ K. Motivated
by the works in [2, 6], we define
Prg(H, Aut(K)) = |{(x, α) ∈ H × Aut(K) : [x, α] = g}| / (|H| |Aut(K)|)    (1.1)
where g ∈ K. That is, Prg (H, Aut(K)) is the probability that the autocommutator
of a randomly chosen pair of elements, one from H and the other from Aut(K), is
∗ Corresponding author
equal to a given element g ∈ K. The ratio Prg (H, Aut(K)) is called generalized
autocommuting probability of G relative to its subgroups H and K. Clearly, if H = G
and g = 1 then Prg (H, Aut(K)) = Pr(G, Aut(G)). We would like to mention here that
the case when H = G is considered in [3]. In this paper, we study Prg (H, Aut(K))
extensively. In particular, we obtain some computing formulae, various bounds and a
few characterizations of G through a subgroup. We conclude the paper describing an
invariance property of Prg (H, Aut(K)).
We write S(H, Aut(K)) to denote the set {[x, α] : x ∈ H and α ∈ Aut(K)} and
[H, Aut(K)] := ⟨S(H, Aut(K))⟩. We also write L(H, Aut(K)) := {x ∈ H : [x, α] =
1 for all α ∈ Aut(K)} and L(G) := L(G, Aut(G)), the absolute center of G (see [5]).
Note that L(H, Aut(K)) is a normal subgroup of H contained in H ∩ Z(K). Further,
L(H, Aut(K)) = ∩_{α∈Aut(K)} CH(α), where CH(α) = {x ∈ H : [x, α] = 1} is a subgroup
of H. Let CAut(K) (x) := {α ∈ Aut(K) : α(x) = x} for x ∈ H and CAut(K) (H) = {α ∈
Aut(K) : α(x) = x for all x ∈ H}. Then CAut(K) (x) is a subgroup of Aut(K) and
CAut(K)(H) = ∩_{x∈H} CAut(K)(x).
Clearly, Prg (H, Aut(K)) = 1 if and only if [H, Aut(K)] = {1} and g = 1
if and only if H = L(H, Aut(K)) and g = 1. Also, Prg (H, Aut(K)) = 0 if
and only if g ∉ S(H, Aut(K)). Therefore, we consider H ≠ L(H, Aut(K)) and
g ∈ S(H, Aut(K)) throughout the paper.
2 Some computing formulae
For any x ∈ H, let us define the set Tx,g (H, K) = {α ∈ Aut(K) : [x, α] = g},
where g is a fixed element of K. Note that Tx,1 (H, K) = CAut(K) (x). The
following two lemmas play a crucial role in obtaining a computing formula for Prg(H, Aut(K)).
Lemma 2.1. Let H and K be two subgroups of a finite group G such that H ⊆
K. If Tx,g(H, K) ≠ ∅ then Tx,g(H, K) = σCAut(K)(x) for some σ ∈ Tx,g(H, K), and hence |Tx,g(H, K)| = |CAut(K)(x)|.
Proof. Let σ ∈ Tx,g (H, K) and β ∈ σCAut(K) (x). Then β = σα for some
α ∈ CAut(K) (x). We have
[x, β] = [x, σα] = x−1 σ(α(x)) = [x, σ] = g.
Therefore, β ∈ Tx,g (H, K) and so σCAut(K) (x) ⊆ Tx,g (H, K). Again, let γ ∈
Tx,g (H, K) then γ(x) = xg. We have σ −1 γ(x) = σ −1 (xg) = x and so σ −1 γ ∈
CAut(K) (x). Therefore, γ ∈ σCAut(K) (x) which gives Tx,g (H, K) ⊆ σCAut(K) (x).
Hence, the result follows.
Consider the action of Aut(K) on K given by (α, x) 7→ α(x) where α ∈
Aut(K) and x ∈ K. Let orbK (x) := {α(x) : α ∈ Aut(K)} be the orbit of
x ∈ K. Then, by the orbit-stabilizer theorem, we have
| orbK(x)| = |Aut(K)| / |CAut(K)(x)|.    (2.1)
Lemma 2.2. Let H and K be two subgroups of a finite group G such that
H ⊆ K. Then Tx,g(H, K) ≠ ∅ if and only if xg ∈ orbK(x).
Proof. The result follows from the fact that α ∈ Tx,g (H, K) if and only if xg ∈
orbK (x).
The following theorem gives two computing formulae for Prg (H, Aut(K)).
Theorem 2.3. Let H and K be two subgroups of a finite group G such that
H ⊆ K. If g ∈ K then
Prg(H, Aut(K)) = (1 / (|H| |Aut(K)|)) Σ_{x∈H, xg∈orbK(x)} |CAut(K)(x)|
             = (1 / |H|) Σ_{x∈H, xg∈orbK(x)} 1 / | orbK(x)|.
Proof. We have {(x, α) ∈ H × Aut(K) : [x, α] = g} = ⊔_{x∈H} ({x} × Tx,g(H, K)), where ⊔ denotes a union of disjoint sets. Therefore, by (1.1), we have
|H| |Aut(K)| Prg(H, Aut(K)) = | ⊔_{x∈H} ({x} × Tx,g(H, K))| = Σ_{x∈H} |Tx,g(H, K)|.
Hence, the result follows from Lemma 2.1, Lemma 2.2 and (2.1).
Considering g = 1 in Theorem 2.3, we get the following computing formulae
for Pr(H, Aut(K)).
Corollary 2.4. Let H and K be two subgroups of a finite group G such that
H ⊆ K. Then
Pr(H, Aut(K)) = (1 / (|H| |Aut(K)|)) Σ_{x∈H} |CAut(K)(x)| = | orbK(H)| / |H|,
where orbK (H) = {orbK (x) : x ∈ H}.
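As a quick sanity check (ours, not part of the paper), the following sketch verifies the g = 1 case of these formulae for the symmetric group S3 with H = K = G, computing Pr(G, Aut(G)) both from definition (1.1) and as the number of Aut(G)-orbits divided by |G|. The permutation representation of S3 and all identifiers are our own illustrative assumptions.

```python
# Brute-force check of the computing formulae on S3 (H = K = G, g = 1).
from itertools import permutations

elems = list(permutations(range(3)))                       # the 6 elements of S3
compose = lambda p, q: tuple(p[q[i]] for i in range(3))    # (p o q)(i) = p(q(i))
inverse = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

# All bijections phi: G -> G with phi(xy) = phi(x)phi(y), i.e. Aut(S3).
autos = []
for image in permutations(elems):
    phi = dict(zip(elems, image))
    if all(phi[compose(x, y)] == compose(phi[x], phi[y]) for x in elems for y in elems):
        autos.append(phi)

identity = (0, 1, 2)
# Definition (1.1): fraction of pairs (x, alpha) with x^{-1} alpha(x) = 1.
by_def = sum(compose(inverse(x), phi[x]) == identity
             for x in elems for phi in autos) / (len(elems) * len(autos))

# Corollary 2.4: Pr(H, Aut(K)) = |orb_K(H)| / |H|, the number of Aut-orbits over |H|.
orbits = {frozenset(phi[x] for phi in autos) for x in elems}
by_orbits = len(orbits) / len(elems)

print(by_def, by_orbits)       # both equal 0.5 for S3
assert abs(by_def - by_orbits) < 1e-12
```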
Corollary 2.5. Let H and K be two subgroups of a finite group G such that
H ⊆ K. If CAut(K) (x) = {I} for all x ∈ H \ {1}, where I is the identity element
of Aut(K), then
Pr(H, Aut(K)) = 1/|H| + 1/|Aut(K)| − 1/(|H| |Aut(K)|).
Proof. By Corollary 2.4, we have
|H| |Aut(K)| Pr(H, Aut(K)) = Σ_{x∈H} |CAut(K)(x)| = |Aut(K)| + |H| − 1.
Hence, the result follows.
We also have |{(x, α) ∈ H × Aut(K) : [x, α] = 1}| = Σ_{α∈Aut(K)} |CH(α)|, and hence
Pr(H, Aut(K)) = (1 / (|H| |Aut(K)|)) Σ_{α∈Aut(K)} |CH(α)|.    (2.2)
We conclude this section with the following two results.
Proposition 2.6. Let H and K be two subgroups of a finite group G such that
H ⊆ K. If g ∈ K then
Prg−1 (H, Aut(K)) = Prg (H, Aut(K)).
Proof. Let
A = {(x, α) ∈ H × Aut(K) : [x, α] = g} and
B = {(y, β) ∈ H × Aut(K) : [y, β] = g −1 }.
Then (x, α) 7→ (α(x), α−1 ) gives a bijection between A and B. Therefore |A| =
|B|. Hence, the result follows from (1.1).
Proposition 2.7. Let G1 and G2 be two finite groups. Let H1 , K1 and H2 , K2
be subgroups of G1 and G2 respectively such that H1 ⊆ K1 , H2 ⊆ K2 and
gcd(|K1 |, |K2 |) = 1. If (g1 , g2 ) ∈ K1 × K2 then
Pr(g1 ,g2 ) (H1 × H2 , Aut(K1 × K2 )) = Prg1 (H1 , Aut(K1 ))Prg2 (H2 , Aut(K2 )).
Proof. Let
X = {((x, y), αK1 ×K2 ) ∈ (H1 × H2 ) × Aut(K1 × K2 ) :
[(x, y), αK1 ×K2 ] = (g1 , g2 )},
Y = {(x, αK1 ) ∈ H1 × Aut(K1 ) : [x, αK1 ] = g1 } and
Z = {(y, αK2 ) ∈ H2 × Aut(K2 ) : [y, αK2 ] = g2 }.
Since gcd(|K1 |, |K2 |) = 1, by Lemma 2.1 of [1], we have Aut(K1 × K2 ) =
Aut(K1 ) × Aut(K2 ). Therefore, for every αK1 ×K2 ∈ Aut(K1 × K2 ) there exist
unique αK1 ∈ Aut(K1 ) and αK2 ∈ Aut(K2 ) such that αK1 ×K2 = αK1 × αK2 ,
where αK1 ×αK2 ((x, y)) = (αK1 (x), αK2 (y)) for all (x, y) ∈ H1 ×H2 . Also, for all
(x, y) ∈ H1 × H2 , we have [(x, y), αK1 ×K2 ] = (g1 , g2 ) if and only if [x, αK1 ] = g1
and [y, αK2 ] = g2 . These leads to show that X = Y × Z. Therefore
|Y|
|Z|
|X |
=
·
.
|H1 × H2 || Aut(K1 × K2 )|
|H1 || Aut(K1 )| |H2 || Aut(K2 )|
Hence, the result follows from (1.1).
3 Various bounds
In this section, we obtain various bounds for Prg (H, Aut(K)). We begin with
the following lower bounds.
Proposition 3.1. Let H and K be two subgroups of a finite group G such that
H ⊆ K. Then, for g ∈ K, we have
(a) Prg(H, Aut(K)) ≥ |CAut(K)(H)|(|H| − |L(H, Aut(K))|) / (|H| |Aut(K)|) + |L(H, Aut(K))| / |H|   if g = 1.
(b) Prg(H, Aut(K)) ≥ |L(H, Aut(K))| |CAut(K)(H)| / (|H| |Aut(K)|)   if g ≠ 1.
Proof. Let C denote the set {(x, α) ∈ H × Aut(K) : [x, α] = g}.
(a) We have (L(H, Aut(K))×Aut(K))∪(H×CAut(K) (H)) is a subset of C and
|(L(H, Aut(K)) × Aut(K)) ∪ (H × CAut(K) (H))| = |L(H, Aut(K))|| Aut(K)| +
|CAut(K) (H)||H| − |L(H, Aut(K))||CAut(K) (H)|. Hence, the result follows from
(1.1).
(b) Since g ∈ S(H, Aut(K)), the set C is non-empty. Let (y, β) ∈ C; then (y, β) ∉ L(H, Aut(K)) × CAut(K)(H), otherwise [y, β] = 1. It is easy to see
that the coset (y, β)(L(H, Aut(K)) × CAut(K) (H)) is a subset of C having order
|L(H, Aut(K))||CAut(K) (H)|. Hence, the result follows from (1.1).
Proposition 3.2. Let H and K be two subgroups of a finite group G such that
H ⊆ K. If g ∈ K then
Prg (H, Aut(K)) ≤ Pr(H, Aut(K)).
The equality holds if and only if g = 1.
Proof. By Theorem 2.3, we have
Prg(H, Aut(K)) = (1 / (|H| |Aut(K)|)) Σ_{x∈H, xg∈orbK(x)} |CAut(K)(x)|
              ≤ (1 / (|H| |Aut(K)|)) Σ_{x∈H} |CAut(K)(x)| = Pr(H, Aut(K)).
The equality holds if and only if xg ∈ orbK (x) for all x ∈ H if and only if
g = 1.
Proposition 3.3. Let H and K be two subgroups of a finite group G such that
H ⊆ K. Let g ∈ K and let p be the smallest prime dividing |Aut(K)|. If g ≠ 1 then
Prg(H, Aut(K)) ≤ (|H| − |L(H, Aut(K))|) / (p|H|) < 1/p.
5
Proof. By Theorem 2.3, we have
Prg(H, Aut(K)) = (1/|H|) Σ_{x∈H\L(H,Aut(K)), xg∈orbK(x)} 1/| orbK(x)|,    (3.1)
noting that for x ∈ L(H, Aut(K)) we have xg ∉ orbK(x). Also, for x ∈ H \
L(H, Aut(K)) and xg ∈ orbK (x) we have | orbK (x)| > 1. Since | orbK (x)| is
a divisor of | Aut(K)| we have | orbK (x)| ≥ p. Hence, the result follows from
(3.1).
Proposition 3.4. Let H1 , H2 and K be subgroups of a finite group G such that
H1 ⊆ H2 ⊆ K. Then
Prg (H1 , Aut(K)) ≤ |H2 : H1 |Prg (H2 , Aut(K)).
The equality holds if and only if xg ∉ orbK(x) for all x ∈ H2 \ H1.
Proof. By Theorem 2.3, we have
|H1| |Aut(K)| Prg(H1, Aut(K)) = Σ_{x∈H1, xg∈orbK(x)} |CAut(K)(x)| ≤ Σ_{x∈H2, xg∈orbK(x)} |CAut(K)(x)| = |H2| |Aut(K)| Prg(H2, Aut(K)).
Hence, the result follows.
Proposition 3.5. Let H and K be two subgroups of a finite group G such that
H ⊆ K. If g ∈ K then
Prg (H, Aut(K)) ≤ |K : H| Pr(K, Aut(K))
with equality if and only if g = 1 and H = K.
Proof. By Proposition 3.2, we have
Prg(H, Aut(K)) ≤ Pr(H, Aut(K)) = (1 / (|H| |Aut(K)|)) Σ_{x∈H} |CAut(K)(x)| ≤ (1 / (|H| |Aut(K)|)) Σ_{x∈K} |CAut(K)(x)|.
Hence, the result follows from Corollary 2.4.
Note that if we replace Aut(K) by Inn(K), the inner automorphism group of K, in (1.1), then Prg(H, Inn(K)) = Prg(H, K), where
Prg(H, K) = |{(x, y) ∈ H × K : x^{-1}y^{-1}xy = g}| / (|H| |K|).
A detailed study on Prg (H, K) can be found in [2]. The following proposition
gives a relation between Prg (H, Aut(K)) and Prg (H, K) for g = 1.
Proposition 3.6. Let H and K be two subgroups of a finite group G such that
H ⊆ K. If g = 1 then
Prg (H, Aut(K)) ≤ Prg (H, K).
Proof. If g = 1 then by [2, Theorem 2.3], we have
Prg(H, K) = (1/|H|) Σ_{x∈H} 1/| clK(x)|,    (3.2)
where clK (x) = {α(x) : α ∈ Inn(K)}. Since clK (x) ⊆ orbK (x) for all x ∈ H,
the result follows from (3.2) and Theorem 2.3.
Theorem 3.7. Let H and K be two subgroups of a finite group G such that
H ⊆ K and p the smallest prime dividing | Aut(K)|. Then
Pr(H, Aut(K)) ≥ |L(H, Aut(K))| / |H| + (p(|H| − |XH| − |L(H, Aut(K))|) + |XH|) / (|H| |Aut(K)|)
and
Pr(H, Aut(K)) ≤ ((p − 1)|L(H, Aut(K))| + |H|) / (p|H|) − |XH|(|Aut(K)| − p) / (p|H| |Aut(K)|),
p|H|| Aut(K)|
where XH = {x ∈ H : CAut(K) (x) = {I}}.
Proof. We have XH ∩ L(H, Aut(K)) = φ. Therefore
X
|CAut(K) (x)| = |XH | + | Aut(K)||L(H, Aut(K))|
x∈H
X
+
|CAut(K) (x)|.
x∈H\(XH ∪L(H,Aut(K)))
For x ∈ H \ (XH ∪ L(H, Aut(K))) we have {I} =
6 CAut(K) (x) 6= Aut(K)
| Aut(K)|
which implies p ≤ |CAut(K) (x)| ≤
. Therefore
p
X
|CAut(K) (x)| ≥|XH | + | Aut(K)||L(H, Aut(K))|
x∈H
+ p(|H| − |XH | − |L(H, Aut(K))|)
7
(3.3)
and
X
|CAut(K) (x)| ≤|XH | + | Aut(K)||L(H, Aut(K))|
x∈H
+
| Aut(K)|(|H| − |XH | − |L(H, Aut(K))|)
.
p
(3.4)
Hence, the result follows from Corollary 2.4, (3.3) and (3.4).
We have the following two corollaries.
Corollary 3.8. Let H and K be two subgroups of a finite group G such that H ⊆
K. If p and q are the smallest primes dividing | Aut(K)| and |H| respectively
then
p+q−1
.
Pr(H, Aut(K)) ≤
pq
In particular, if p = q then Pr(H, Aut(K)) ≤
2p−1
p2
≤ 43 .
Proof. Since H 6= L(H, Aut(K)) we have |H : L(H, Aut(K))| ≥ q. Therefore,
by Theorem 3.7, we have
1
p−1
p+q−1
Pr(H, Aut(K)) ≤
+1 ≤
.
p |H : L(H, Aut(K))|
pq
Corollary 3.9. Let H and K be two subgroups of a finite group G such that
H ⊆ K and p, q be the smallest primes dividing | Aut(K)| and |H| respectively.
If H is non-abelian then
Pr(H, Aut(K)) ≤
q2 + p − 1
.
pq 2
In particular, if p = q then Pr(H, Aut(K)) ≤
p2 +p−1
p3
≤ 58 .
Proof. Since H is non-abelian we have |H : L(H, Aut(K))| ≥ q 2 . Therefore, by
Theorem 3.7, we have
1
p−1
q2 + p − 1
Pr(H, Aut(K)) ≤
.
+1 ≤
p |H : L(H, Aut(K))|
pq 2
Now we obtain two lower bounds analogous to the lower bounds obtained in
[9, Theorem A] and [8, Theorem 1].
Theorem 3.10. Let H and K be two subgroups of a finite group G such that
H ⊆ K. Then
1
|S(H, Aut(K))| − 1
Pr(H, Aut(K)) ≥
1+
.
|S(H, Aut(K))|
|H : L(H, Aut(K))|
The equality holds if and only if orbK (x) = xS(H, Aut(K)) for all x ∈ H \
L(H, Aut(K)).
8
Proof. For all x ∈ H \ L(H, Aut(K)) we have α(x) = x[x, α] ∈ xS(H, Aut(K)).
Therefore orbK (x) ⊆ xS(H, Aut(K)) and so | orbK (x)| ≤ |S(H, Aut(K))| for
all x ∈ H \ L(H, Aut(K)). Now, by Corollary 2.4, we have
X
X
1
1
1
Pr(H, Aut(K)) =
+
|H|
| orbK (x)|
| orbK (x)|
x∈L(H,Aut(K))
x∈H\L(H,Aut(K))
1
|L(H, Aut(K))|
+
≥
|H|
|H|
X
x∈H\L(H,Aut(K))
1
.
|S(H, Aut(K))|
Hence, the result follows.
Lemma 3.11. Let H and K be two subgroups of a finite group G such that
H ⊆ K. Then, for any two integers m ≥ n, we have
n−1
1
m−1
1
1+
≥
1+
.
n
|H : L(H, Aut(K))|
m
|H : L(H, Aut(K))|
If L(H, Aut(K)) 6= H then equality holds if and only if m = n.
Proof. The proof is an easy exercise.
Corollary 3.12. Let H and K be two subgroups of a finite group G such that
H ⊆ K. Then
1
|[H, Aut(K)]| − 1
Pr(H, Aut(K)) ≥
1+
.
|[H, Aut(K)]|
|H : L(H, Aut(K))|
If H 6= L(H, Aut(K)) then the equality holds if and only if [H, Aut(K)] =
S(H, Aut(K)) and orbK (x) = x[H, Aut(K)] for all x ∈ H \ L(H, Aut(K)).
Proof. Since |[H, Aut(K)]| ≥ |S(H, Aut(K))|, the result follows from Theorem
3.10 and Lemma 3.11.
Note that the equality holds if and only if equality holds in Theorem 3.10
and Lemma 3.11.
It is worth mentioning that Theorem 3.10 gives better lower bound than the
lower bound given by Corollary 3.12. Also
1
|[H, Aut(K)]| − 1
|L(H, Aut(K))|
1+
≥
|[H, Aut(K)]|
|H : L(H, Aut(K))|
|H|
p(|H| − |L(H, Aut(K))|)
.
+
|H|| Aut(K)|
Hence, Theorem 3.10 gives better lower bound than the lower bound given by
Theorem 3.7.
9
4
A few Characterizations
In this section, we obtain some characterizations of a subgroup H of G if equality
holds in Corollary 3.8 and Corollary 3.9. We begin with the following result.
Theorem 4.1. Let H and K be two subgroups of a finite group G such that
H ⊆ K. If Pr(H, Aut(K)) = p+q−1
for some primes p and q. Then pq divides
pq
|H|| Aut(K)|. Further, if p and q are the smallest primes dividing | Aut(K)|
and |H| respectively, then
H
∼
= Zq .
L(H, Aut(K))
In particular, if H and Aut(K) are of even order and Pr(H, Aut(K)) =
H
∼
L(H,Aut(K)) = Z2 .
3
4
then
Proof. By (1.1), we have (p + q − 1)|H|| Aut(K)| = pq|{(x, α) ∈ H × Aut(K) :
[x, α] = 1}|. Therefore, pq divides |H|| Aut(K)|.
If p and q are the smallest primes dividing | Aut(K)| and |H| respectively
then, by Theorem 3.7, we have
p−1
1
p+q−1
≤
+1
pq
p |H : L(H, Aut(K))|
which gives |H : L(H, Aut(K))| ≤ q. Hence,
H
L(H,Aut(K))
∼
= Zq .
Theorem 4.2. Let H ⊆ K be two subgroups of a finite group G such that H
2
is non-abelian and Pr(H, Aut(K)) = q +p−1
for some primes p and q. Then
pq2
pq divides |H|| Aut(K)|. Further, if p and q are the smallest primes dividing
| Aut(K)| and |H| respectively then
H
∼
= Zq × Zq .
L(H, Aut(K))
In particular, if H and Aut(K) are of even order and Pr(H, Aut(K)) =
H
∼
L(H,Aut(K)) = Z2 × Z2 .
5
8
then
Proof. By (1.1), we have (q 2 + p − 1)|H|| Aut(K)| = pq 2 |{(x, α) ∈ H × Aut(K) :
[x, α] = 1}|. Therefore, pq divides |H|| Aut(K)|.
If p and q are the smallest primes dividing | Aut(K)| and |H| respectively
then, by Theorem 3.7, we have
q2 + p − 1
p−1
1
≤
+
1
pq 2
p |H : L(H, Aut(K))|
which gives |H : L(H, Aut(K))| ≤ q 2 . Since H is non-abelian we have |H :
H
∼
L(H, Aut(K))| 6= 1, q. Hence, L(H,Aut(K))
= Zq × Zq .
10
The following two results give partial converses of Theorem 4.1 and 4.2
respectively.
Proposition 4.3. Let H and K be two subgroups of a finite group G such that
H ⊆ K. Let p, q be the smallest prime divisors of | Aut(K)|, |H| respectively
and | Aut(K) : CAut(K) (x)| = p for all x ∈ H \ L(H, Aut(K)).
(a) If
H
L(H,Aut(K))
∼
= Zq then Pr(H, Aut(K)) =
(b) If
H
L(H,Aut(K))
∼
= Zq × Zq then Pr(H, Aut(K)) =
p+q−1
pq .
q2 +p−1
pq2 .
Proof. Since | Aut(K) : CAut(K) (x)| = p for all x ∈ H \ L(H, Aut(K)) we have
|CAut(K) (x)| = | Aut(K)|
for all x ∈ H \ L(H, Aut(K)). Therefore, by Corollary
p
2.4, we have
Pr(H, Aut(K)) =
=
|L(H, Aut(K))|
1
+
|H|
|H|| Aut(K)|
Pr(H, Aut(K)) =
(b) If
5
H
L(H,Aut(K))
H
L(H,Aut(K))
|CAut(K) (x)|
x∈H\L(H,Aut(K))
|L(H, Aut(K))| |H| − |L(H, Aut(K))|
+
.
|H|
p|H|
Thus
(a) If
X
1
p
p−1
+1 .
|H : L(H, Aut(K))|
(4.1)
∼
= Zq then (4.1) gives Pr(H, Aut(K)) = p+q−1
pq .
2
∼
= Zq × Zq then (4.1) gives Pr(H, Aut(K)) = q +p−1
pq2 .
Autoisoclinic pairs
In the year 1940, Hall [4] introduced the concept of isoclinism between two
groups. Following Hall, Moghaddam et al. [7] have defined autoisoclinism
between two groups, in the year 2013. Recall that two groups G1 and G2
G2
G1
→ L(G
,
are said to be autoisoclinic if there exist isomorphisms ψ : L(G
1)
2)
β : [G1 , Aut(G1 )] → [G2 , Aut(G2 )] and γ : Aut(G1 ) → Aut(G2 ) such that the
following diagram commutes
G1
L(G1 )
ψ×γ
× Aut(G1 ) −−−−→
a(G ,Aut(G ))
1
y 1
[G1 , Aut(G1 )]
β
−−−−→
G2
L(G2 )
× Aut(G2 )
a(G ,Aut(G ))
2
y 2
[G2 , Aut(G2 )]
Gi
× Aut(Gi ) → [Gi , Aut(Gi )], for i = 1, 2,
where the maps a(Gi ,Aut(Gi )) : L(G
i)
are given by
a(Gi ,Aut(Gi )) (xi L(Gi ), αi ) = [xi , αi ].
Such a pair (ψ × γ, β) is called an autoisoclinism between the groups G1 and
G2 . We generalize the notion of autoisoclinism in the following way:
11
Let H1 , K1 and H2 , K2 be subgroups of the groups G1 and G2 respectively.
The pairs of subgroups (H1 , K1 ) and (H2 , K2 ) such that H1 ⊆ K1 and H2 ⊆ K2
H1
are said to be autoisoclinic if there exist isomorphisms ψ : L(H1 ,Aut
K1 ) →
H2
L(H2 ,Aut(K2 )) ,
β : [H1 , Aut(K1 )] → [H2 , Aut(K2 )] and γ : Aut(K1 ) → Aut(K2 )
such that the following diagram commutes
H1
L(H1 ,Aut(K1 ))
ψ×γ
× Aut(K1 ) −−−−→
a(H ,Aut(K ))
1
y 1
[H1 , Aut(K1 )]
where the maps a(Hi ,Aut(Ki )) :
i = 1, 2, are given by
H2
L(H2 ,Aut(K2 ))
β
−−−−→
Hi
L(Hi ,Aut(Ki ))
× Aut(K2 )
a(H ,Aut(K ))
2
y 2
[H2 , Aut(K2 )]
× Aut(Ki ) → (Hi , Aut(Ki )), for
a(Hi ,Aut(Ki )) (xi L(Hi , Aut(Ki )), αi ) = [xi , αi ].
Such a pair (ψ × γ, β) is said to be an autoisoclinism between the pairs of groups
(H1 , K1 ) and (H2 , K2 ). We conclude this paper with the following generalization
of [3, Theorem 5.1] and [10, Lemma 2.5].
Theorem 5.1. Let G1 and G2 be two finite groups with subgroups H1 , K1 and
H2 , K2 respectively such that H1 ⊆ K1 and H2 ⊆ K2 . If (ψ × γ, β) is an
autoisoclinism between the pairs (H1 , K1 ) and (H2 , K2 ) then, for g ∈ K1 ,
Prg (H1 , Aut(K1 )) = Prβ(g) (H2 , Aut(K2 )).
H1
×
Proof. Let us consider the sets Sg = {(x1 L(H1 , Aut(K1 )), α1 ) ∈ L(H1 ,Aut(K
1 ))
Aut(K1 ) : [x1 L(H1 , Aut(K1 )), α1 ] = g} and Tβ(g) = {(x2 L(H2 , Aut(K2 )), α2 ) ∈
H2
L(H2 ,Aut(K2 )) × Aut(K2 ) : [x2 L(H2 , Aut(K2 )), α2 ] = β(g)}. Since (H1 , K1 ) is
autoisoclinic to (H2 , K2 ) we have |Sg | = |Tβ(g) |. Again, it is clear that
|{(x1 , α1 ) ∈ H1 × Aut(K1 ) : [x1 , α1 ] = g}| = |L(H1 , Aut(K1 ))||Sg |
(5.1)
and
|{(x2 , α2 ) ∈ H2 × Aut(K2 ) : [x2 , α2 ] = β(g)}| = |L(H2 , Aut(K2 ))||Tβ(g) |. (5.2)
Hence, the result follows from (1.1), (5.1) and (5.2).
References
[1] C. J. Hillar and D. L. Rhea, Automorphism of finite abelian groups, Amer.
Math. Monthly, 114(10), 917–923 (2007).
[2] A. K. Das and R. K. Nath, On generalized relative commutativity degree of
a finite group, Int. Electron. J. Algebra, 7, 140–151 (2010).
12
[3] P. Dutta and R. K. Nath, Autocommuting probabilty of a finite group,
preprint.
[4] P. Hall, The classification of prime power groups, J. Reine Angew. Math.,
182, 130–141 (1940).
[5] P. V. Hegarty, The absolute centre of a group, J. Algebra, 169(3), 929–935
(1994).
[6] M. R. R. Moghaddam, F. Saeedi and E. Khamseh, The probability of an
automorphism fixing a subgroup element of a finite group, Asian-Eur. J.
Math. 4(2), 301308 (2011).
[7] M. R. R. Moghaddam, M. J. Sadeghifard and M. Eshrati, Some properties
of autoisoclinism of groups, Fifth International group theory conference,
Islamic Azad University, Mashhad, Iran, 13-15 March 2013.
[8] R. K. Nath and A. K. Das, On a lower bound of commutativity degree, Rend.
Circ. Mat. Palermo, 59(1), 137–142 (2010).
[9] R. K. Nath and M. K. Yadav, Some results on relative commutativity degree,
Rend. Circ. Mat. Palermo, 64(2), 229–239 (2015).
[10] M. R. Rismanchian and Z. Sepehrizadeh, Autoisoclinism classes and autocommutativity degrees of finite groups, Hacet. J. Math. Stat. 44(4), 893–899
(2015).
[11] G. J. Sherman, What is the probability an automorphism fixes a group
element?, Amer. Math. Monthly, 82, 261–264 (1975).
13
| 4 |
arXiv:cs/0601038v1 [cs.LO] 10 Jan 2006
Under consideration for publication in Theory and Practice of Logic Programming
Constraint-based Automatic Verification of
Abstract Models of Multithreaded Programs
GIORGIO DELZANNO
Dipartimento di Informatica e Scienze dell’Informazione, Università di Genova
via Dodecaneso 35, 16146 Genova - Italy
(e-mail: giorgio@disi.unige.it)
submitted 17 December 2003; revised 13 April 2005; accepted 15 January 2006
Abstract
We present a technique for the automated verification of abstract models of multithreaded
programs providing fresh name generation, name mobility, and unbounded control.
As a high-level specification language we adopt here an extension of communicating finite-state machines with local variables ranging over an infinite name domain, called TDL programs. Communicating machines have proved very effective for representing communication protocols as well as abstractions of multithreaded software.
The verification method that we propose is based on the encoding of TDL programs
into a low level language based on multiset rewriting and constraints that can be viewed as
an extension of Petri Nets. By means of this encoding, the symbolic verification procedure
developed for the low level language in our previous work can now be applied to TDL
programs. Furthermore, the encoding allows us to isolate a decidable class of verification
problems for TDL programs that still provide fresh name generation, name mobility, and
unbounded control. Our syntactic restrictions are in fact defined on the internal structure
of threads: In order to obtain a complete and terminating method, threads are only allowed
to have at most one local variable (ranging over an infinite domain of names).
KEYWORDS: Constraints, Multithreaded Programs, Verification.
1 Introduction
Andrew Gordon (Gordon 2001) defines a nominal calculus to be a computational
formalism that includes a set of pure names and allows the dynamic generation of
fresh, unguessable names. A name is pure whenever it is only useful for comparing
for identity with other names. The use of pure names is ubiquitous in programming
languages. Some important examples are memory pointers in imperative languages,
identifiers in concurrent programming languages, and nonces in security protocols.
In addition to pure names, a nominal process calculus should provide mechanisms
for concurrency and inter-process communication. A computational model that provides all these features is an adequate abstract formalism for the analysis of multithreaded and distributed software.
The Problem Automated verification of specifications in a nominal process calculus
becomes particularly challenging in the presence of the following three features: the possibility of generating fresh names (name generation); the possibility of transmitting
names (name mobility); the possibility of dynamically adding new threads of control
(unbounded control). In fact, a calculus that provides all the previous features can
be used to specify systems with a state space that is infinite in several dimensions. This feature makes it difficult (if not impossible) to apply finite-state verification techniques, or techniques based on abstractions of process specifications into Petri Nets or CCS-like models. In recent years there have been several attempts to extend automated verification methods from finite-state to infinite-state systems
(Abdulla and Nylén 2000; Kesten et al. 2001). In this paper we are interested in investigating the possible application of the methods we proposed in (Delzanno 2001)
to verification problems of interest for nominal process calculi.
Constraint-based Symbolic Model Checking In (Delzanno 2001) we introduced a
specification language, called MSR(C), for the analysis of communication protocols
whose specifications are parametric in several dimensions (e.g. number of servers,
clients, and tickets as in the model of the ticket mutual exclusion algorithm shown
in (Bozzano and Delzanno 2002)). MSR(C) combines multiset rewriting over first
order atomic formulas (Cervesato et al. 1999) with constraints programming. More
specifically, multiset rewriting is used to specify the control part of a concurrent
system, whereas constraints are used to symbolically specify the relations over local data. The verification method proposed in (Delzanno 2005) allows us to symbolically reason on the behavior of MSR(C) specifications. To this aim, following
(Abdulla et al. 1996; Abdulla and Nylén 2000) we introduced a symbolic representation of infinite collections of global configurations based on the combination of
multisets of atomic formulas and constraints, called constrained configurations.1
The verification procedure performs a symbolic backward reachability analysis by
means of a symbolic pre-image operator that works over constrained configurations
(Delzanno 2005). The main feature of this method is the possibility of automatically
handling systems with an arbitrary number of components. Furthermore, since we
use a symbolic and finite representation of possibly infinite sets of configurations,
the analysis is carried out without loss of precision.
A natural question for our research is whether and how these techniques can be
used for verification of abstract models of multithreaded programs.
Our Contribution In this paper we propose a sound, and fully automatic verification
method for abstract models of multithreaded programs that provide name generation, name mobility, and unbounded control. As a high level specification language
we adopt here an extension with value-passing of the formalism of (Ball et al. 2001)
1
Notice that in (Abdulla et al. 1996; Abdulla and Nylén 2000) a constraint denotes a symbolic
state whereas we use the word constraint to denote a symbolic representation of the relation of
data variables (e.g. a linear arithmetic formula) used as part of the symbolic representation of
sets of states (a constrained configuration).
based on families of state machines used to specify abstractions of multithreaded
software libraries. The resulting language is called Thread Definition Language
(TDL). This formalism allows us to keep separate the finite control component of a
thread definition from the management of local variables (which in our setting range over an infinite set of names), and to treat in isolation the operations to generate
fresh names, to transmit names, and to create new threads. In the present paper we
will show that the extension of the model of (Ball et al. 2001) with value-passing
makes the model Turing equivalent.
The verification methodology is based on the encoding of TDL programs into a
specification in the instance MSRNC of the language scheme MSR(C) of (Delzanno 2001).
MSRN C is obtained by taking as constraint system a subclass of linear arithmetics
with only = and > relations between variables, called name constraints (N C). The
low level specification language MSRN C is not just instrumental for the encoding
of TDL programs. Indeed, it has been applied to model consistency and mutual
exclusion protocols in (Bozzano and Delzanno 2002; Delzanno 2005). Via this encoding, the verification method based on symbolic backward reachability obtained
by instantiating the general method for MSR(C) to NC-constraints can now be applied to abstract models of multithreaded programs. Although termination is not
guaranteed in general, the resulting verification method can succeed on practical
examples such as the Challenge-Response TDL program defined over binary predicates that we will illustrate in the present paper. Furthermore, by propagating the sufficient
conditions for termination defined in (Bozzano and Delzanno 2002; Delzanno 2005)
back to TDL programs, we obtain an interesting class of decidable problems for abstract models of multithreaded programs still providing name generation, name
mobility, and unbounded control.
Plan of the Paper In Section 2 we present the Thread Definition Language (TDL)
with examples of multithreaded programs. Furthermore, we discuss the expressiveness of TDL programs showing that they can simulate Two Counter Machines. In
Section 3, after introducing the MSRN C formalism, we show that TDL programs
can be simulated by MSRN C specifications. In Section 4 we show how to transfer
the verification methods developed for MSR(C) to TDL programs. Furthermore, we
show that safety properties can be decided for the special class of monadic TDL
programs. In Section 5 we address some conclusions and discuss related work.
2 Thread Definition Language (TDL)
In this section we will define TDL programs. This formalism is a natural extension
with value-passing of the communicating machines used by (Ball et al. 2001) to
specify abstractions of multithreaded software libraries.
Terminology Let N be a denumerable set of names equipped with the relations =
and ≠ and a special element ⊥ such that n ≠ ⊥ for any n ∈ N . Furthermore, let
V be a denumerable set of variables, C = {c1 , . . . , cm } a finite set of constants, and
L a finite set of internal action labels. For a fixed V ⊆ V, the set of expressions is
defined as E = V ∪ C ∪ {⊥} (when necessary we will use E(V ) to explicit the set of
variables V upon which expressions are defined). The set of channel expressions is
defined as Ech = V ∪ C. Channel expressions will be used as synchronization labels
so as to establish communication links only at execution time.
A guard over V is a conjunction γ1, . . . , γs, where γi is either true, x = e or x ≠ e with x ∈ V and e ∈ E for i : 1, . . . , s. An assignment α from V to W is a conjunction of formulas xi := ei where xi ∈ W, ei ∈ E(V) for i : 1, . . . , k and xr ≠ xs for r ≠ s. A message template m over V is a tuple m = ⟨x1, . . . , xu⟩ of variables in V.
Definition 1
A TDL program is a set T = {P1, . . . , Pt} of thread definitions (with distinct names for local variables and control locations). A thread definition P is a tuple ⟨Q, s0, V, R⟩,
where Q is a finite set of control locations, s0 ∈ Q is the initial location, V ⊆ V is
a finite set of local variables, and R is a set of rules. Given s, s′ ∈ Q, and a ∈ L, a
rule has one of the following forms2 :
• Internal move: s --a--> s′ [γ, α], where γ is a guard over V, and α is an assignment from V to V;
• Name generation: s --a--> s′ [x := new], where x ∈ V, and the expression new denotes a fresh name;
• Thread creation: s --a--> s′ [run P′ with α], where P′ = ⟨Q′, t, W, R′⟩ ∈ T, and α is an assignment from V to W that specifies the initialization of the local variables of the new thread;
• Message sending: s --e!m--> s′ [γ, α], where e is a channel expression, m is a message template over V that specifies which names to pass, γ is a guard over V, and α is an assignment from V to V;
• Message reception: s --e?m--> s′ [γ, α], where e is a channel expression, m is a message template over a new set of variables V′ (V′ ∩ V = ∅) that specifies the names to receive, γ is a guard over V ∪ V′ and α is an assignment from V ∪ V′ to V.
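To make the shape of thread definitions concrete, the following sketch shows one possible in-memory representation (plain Python written for this presentation; the class and field names are ours, not part of the TDL formalism), together with the responder thread of the example discussed in the next subsection.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    Guard = List[Tuple[str, str, str]]   # atoms (x, '=', e) or (x, '!=', e)
    Assign = Dict[str, str]              # target variable -> expression

    @dataclass
    class Rule:
        src: str                         # source control location s
        dst: str                         # target control location s'
        kind: str                        # 'internal' | 'new' | 'run' | 'send' | 'recv'
        label: str = ''                  # action label or channel expression e
        msg: Tuple[str, ...] = ()        # message template (send/recv only)
        guard: Guard = field(default_factory=list)
        assign: Assign = field(default_factory=dict)
        spawn: str = ''                  # spawned thread definition (run only)

    @dataclass
    class ThreadDef:
        name: str
        init: str                        # initial control location s0
        local_vars: List[str]            # the finite set V
        rules: List[Rule]

    # The responder thread of Example 1 below, in this representation:
    Resp = ThreadDef(
        name='Resp', init='initB', local_vars=['idB', 'nB', 'mB'],
        rules=[Rule('initB', 'genB', 'recv', label='c', msg=('x',), assign={'nB': 'x'}),
               Rule('genB', 'readyB', 'new', assign={'mB': 'new'}),
               Rule('readyB', 'stopB', 'send', label='nB', msg=('mB',))])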
Before giving an example, we will formally introduce the operational semantics of
TDL programs.
2.1 Operational Semantics
In the following we will use N to indicate the subset of used names of N . Every
constant c ∈ C is mapped to a distinct name nc 6= ⊥ ∈ N , and ⊥ is mapped to ⊥.
Let P = hQ, s, V, Ri and V = {x1 , . . . , xk }. A local configuration is a tuple p =
hs′ , n1 , . . . , nk i where s′ ∈ Q and ni ∈ N is the current value of the variable xi ∈ V
for i : 1, . . . , k.
A global configuration G = hN, p1 , . . . , pm i is such that N ⊆ N and p1 , . . . , pm are
local configurations defined over N and over the thread definitions in T . Note that
2
In this paper we keep assignments, name generation, and thread creation separate in order to
simplify the presentation of the encoding into MSR.
there is no relation between indexes in a global configuration in G and in T ; G is
a pool of active threads, and several active threads can be instances of the same
thread definition.
Given a local configuration p = hs′ , n1 , . . . , nk i, we define the valuation ρp as
ρp (xi ) = ni if xi ∈ V , ρp (c) = nc if c ∈ C, and ρp (⊥) = ⊥. Furthermore, we
say that ρp satisfies the guard γ if ρp (γ) ≡ true, where ρp is extended to constraints in the natural way (ρp (ϕ1 ∧ ϕ2 ) = ρp (ϕ1 ) ∧ ρp (ϕ2 ), etc.).
The execution of x := e has the effect of updating the local variable x of a thread
with the current value of e (a name taken from the set of used values N ). On the contrary, the execution of x := new associates a fresh unused name to x. The formula
run P with α has the effect of adding a new thread (in its initial control location)
to the current global configuration. The initial values of the local variables of the
generated thread are determined by the execution of α whose source variables are
the local variables of the parent thread. The channel names used in a rendez-vous
are determined by evaluating the channel expressions tagging sender and receiver
rules. Value passing is achieved by extending the evaluation associated to the current configuration of the receiver so as to associate the output message of the sender
to the variables in the input message template. The operational semantics is given
via a binary relation ⇒ defined as follows.
Definition 2
Let G = hN, . . . , p, . . .i, and p = hs, n1 , . . . , nk i be a local configuration for P =
hQ, s, V, Ri, V = {x1 , . . . , xk }, then:
• If there exists a rule s --a--> s′ [γ, α] in R such that ρp satisfies γ, then G ⇒ ⟨N, . . . , p′, . . .⟩ (meaning that only p changes) where p′ = ⟨s′, n′1, . . . , n′k⟩, n′i = ρp(ei) if xi := ei is in α, and n′i = ni otherwise, for i : 1, . . . , k.
• If there exists a rule s --a--> s′ [xi := new] in R, then G ⇒ ⟨N′, . . . , p′, . . .⟩ where p′ = ⟨s′, n′1, . . . , n′k⟩, n′i is an unused name, i.e., n′i ∈ N \ N, n′j = nj for every j ≠ i, and N′ = N ∪ {n′i};
• If there exists a rule s --a--> s′ [run P′ with α] in R with P′ = ⟨Q′, t0, W, R′⟩, W = {y1, . . . , yu}, and α is defined as y1 := e1, . . . , yu := eu, then G ⇒ ⟨N, . . . , p′, . . . , q⟩ (we add a new thread whose initial local configuration is q) where p′ = ⟨s′, n1, . . . , nk⟩, and q = ⟨t0, ρp(e1), . . . , ρp(eu)⟩.
• Let q = ⟨t, m1, . . . , mr⟩ (distinct from p) be a local configuration in G associated with P′ = ⟨Q′, t0, W, R′⟩. Let s --e!m--> s′ [γ, α] in R and t --e′?m′--> t′ [γ′, α′] in R′ be two rules such that m = ⟨x1, . . . , xu⟩, m′ = ⟨y1, . . . , yv⟩ and u = v (message templates match). We define σ as the value passing evaluation σ(yi) = ρp(xi) for i : 1, . . . , u, and σ(z) = ρq(z) for z ∈ W. Now if ρp(e) = ρq(e′) (channel names match), ρp satisfies γ, and σ satisfies γ′, then ⟨N, . . . , p, . . . , q, . . .⟩ ⇒ ⟨N, . . . , p′, . . . , q′, . . .⟩ where p′ = ⟨s′, n′1, . . . , n′k⟩, n′i = ρp(v) if xi := v is in α, n′i = ni otherwise for i : 1, . . . , k; q′ = ⟨t′, m′1, . . . , m′r⟩, m′i = σ(v) if ui := v is in α′, m′i = mi otherwise for i : 1, . . . , r.
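A minimal executable reading of the first two cases of the definition (internal move and name generation) can be written as follows; this is an illustrative sketch in Python, with a data layout and helper names of our own choosing, not an implementation used in the paper.

    BOT = 'bot'   # the special element ⊥

    def evaluate(e, env, consts):
        """The valuation rho_p of Section 2.1: local variables via env, constants via consts."""
        if e == BOT:
            return BOT
        return env[e] if e in env else consts[e]

    def satisfies(guard, env, consts):
        """guard: list of (x, op, e) with op in {'=', '!='}."""
        for x, op, e in guard:
            equal = evaluate(x, env, consts) == evaluate(e, env, consts)
            if equal != (op == '='):
                return False
        return True

    def internal_move(loc, env, rule, consts):
        """Case 1 of Definition 2: guarded assignment over the local variables."""
        if loc != rule['src'] or not satisfies(rule['guard'], env, consts):
            return None
        env2 = dict(env)
        for x, e in rule['assign'].items():
            env2[x] = evaluate(e, env, consts)
        return rule['dst'], env2

    def name_generation(loc, env, rule, used):
        """Case 2 of Definition 2: x := new picks a name outside the used set N."""
        if loc != rule['src']:
            return None
        n = max((u for u in used if isinstance(u, int)), default=0) + 1
        env2 = dict(env, **{rule['var']: n})
        return (rule['dst'], env2), used | {n}

    # Example: the fresh step of thread Init in Example 1 (values are illustrative)
    (loc, env), used = name_generation('initA', {'idA': 7, 'nA': BOT, 'mA': BOT},
                                       {'src': 'initA', 'dst': 'genA', 'var': 'nA'},
                                       used={BOT, 7})
    assert env['nA'] not in (BOT, 7)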
Definition 3
An initial global configuration G0 has an arbitrary (but finite) number of threads
with local variables all set to ⊥. A run is a sequence G0 G1 . . . such that Gi ⇒ Gi+1
for i ≥ 0. A global configuration G is reachable from G0 if there exists a run from
G0 to G.
Example 1
Let us consider a challenge and response protocol in which the goal of two agents
Alice and Bob is to exchange a pair of new names hnA , nB i, the first one created
by Alice and the second one created by Bob, so as to build a composed secret key.
We can specify the protocol by using new names to dynamically establish private
channel names between instances of the initiator and of the responder. The TDL
program in Figure 1 follows this idea. The thread Init specifies the behavior of the
initiator. He first creates a new name using the internal action f resh, and stores
it in the local variable nA . Then, he sends nA on channel c (a constant), waits for
a name y on a channel with the same name as the value of the local variable nA
(the channel is specified by variable nA ) and then stores y in the local variable
mA . The thread Resp specifies the behavior of the responder. Upon reception of
a name x on channel c, he stores it in the local variable nB , then creates a new
name stored in local variable mB and finally sends the value in mB on channel with
the same name as the value of nB. The thread Main non-deterministically creates new thread instances of type Init and Resp. The local variable x is used to store new names to be used for the creation of a new thread instance. Initially, all local variables of threads Init/Resp are set to ⊥. In order to allow process instances to participate in several sessions (potentially with different principals), we could also
add the following rule
  stopA --restart--> initA   [nA := ⊥, mA := ⊥]
In this rule we require that roles and identities do not change from session to
session.3 Starting from G0 = hN0 , hinit, ⊥ii, and running the Main thread we can
generate any number of copies of the threads Init and Resp each one with a unique
identifier. Thus, we obtain global configurations like
hN, hinitM , ⊥i,
hinitA , i1 , ⊥, ⊥i, . . . , hinitA , iK , ⊥, ⊥i,
hinitB , iK+1 , ⊥, ⊥i, . . . , hinitB , iK+L , ⊥, ⊥i i
where N = {⊥, i1, . . . , iK , iK+1 , . . . , iK+L } for K, L ≥ 0. The threads of type Init
and Resp can start parallel sessions whenever created. For K = 1 and L = 1 one
possible session is as follows.
Starting from
h{⊥, i1 , i2 }, hinitM , ⊥i, hinitA , i1 , ⊥, ⊥i, hinitB , i2 , ⊥, ⊥ii
3 By means of thread and fresh name creation it is also possible to specify a restart rule in which a given process takes a potentially different role or identity.
Thread Init(local idA, nA, mA);
  initA  --fresh-->    genA     [nA := new]
  genA   --c!⟨nA⟩-->   waitA    [true]
  waitA  --nA?⟨y⟩-->   stopA    [mA := y]

Thread Resp(local idB, nB, mB);
  initB  --c?⟨x⟩-->    genB     [nB := x]
  genB   --fresh-->    readyB   [mB := new]
  readyB --nB!⟨mB⟩-->  stopB    [true]

Thread Main(local x);
  initM  --id-->       create   [x := new]
  create --newA-->     initM    [run Init with idA := x, nA := ⊥, mA := ⊥, x := ⊥]
  create --newB-->     initM    [run Resp with idB := x, nB := ⊥, mB := ⊥, x := ⊥]

Fig. 1. Example of thread definitions.
if we apply the first rule of thread Init to hinitA , i1 , ⊥, ⊥i we obtain
h{⊥, i1 , i2 , a1 }, hinitM , ⊥i, hgenA , i1 , a1 , ⊥i, hinitB , i2 , ⊥, ⊥ii
where a1 is the generated name (a1 is distinct from ⊥, i1 , and i2 ). Now if we apply
the second rule of thread Init and the first rule of thread Resp (synchronization
on channel c) we obtain
h{⊥, i1 , i2 , a1 }, hinitM , ⊥i, hwaitA , i1 , a1 , ⊥i, hgenB , i2 , a1 , ⊥ii
If we apply the second rule of thread Resp we obtain
h{⊥, i1 , i2 , a1 , a2 }, hinitM , ⊥i, hwaitA , i1 , a1 , ⊥i, hreadyB , i2 , a1 , a2 ii
Finally, if we apply the last rule of thread Init and Resp (synchronization on
channel a1 ) we obtain
h{⊥, i1 , i2 , a1 , a2 }, hinitM , ⊥i, hstopA , i1 , a1 , a2 i, hstopB , i2 , a1 , a2 ii
Thus, at the end of the session the thread instances i1 and i2 have both a local
copy of the fresh names a1 and a2 . Note that a copy of the main thread hinitM , ⊥i
is always active in any reachable configuration, and, at any time, it may introduce
new threads (either of type Init or Resp) with fresh identifiers. Generation of fresh
names is also used by the threads of type Init and Resp to create nonces. Furthermore, threads can restart their life cycle (without changing identifiers). Thus,
in this example the set of possible reachable configurations is infinite and contains
configurations with arbitrarily many threads and fresh names. Since names are
stored in the local variables of active threads, the local data also range over an
infinite domain.
✷
2.2 Expressive Power of TDL
To study the expressive power of the TDL language, we will compare it with the
Turing-equivalent formalism called Two Counter Machines. A Two Counter Machine configuration is a tuple ⟨ℓ, c1 = n1, c2 = n2⟩ where ℓ is a control location taken from a finite set Q, and n1 and n2 are natural numbers that represent the values
of the counters c1 and c2 . Each counter can be incremented or decremented (if
greater than zero) by one. Transitions combine operations on individual counters
with changes of control locations. Specifically, the instructions for counter ci are as
follows
Inc: ℓ1 : ci := ci + 1; goto ℓ2 ;
Dec: ℓ1 : if ci > 0 then ci := ci − 1; goto ℓ2 ; else goto ℓ3 ;
A Two Counter Machine consists then of a list of instructions and of the initial
state hℓ0 , c1 = 0, c2 = 0i. The operational semantics is defined according to the intuitive semantics of the instructions. Problems like control state reachability are
undecidable for this computational model.
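For reference, the operational semantics of Two Counter Machines is small enough to be stated as code; the sketch below (our own Python, with an ad hoc instruction encoding) is included only to make the simulated model concrete.

    def step(conf, program):
        """conf = (location, c1, c2); program maps a location to one instruction.
        Instructions: ('inc', i, l2) or ('dec', i, l2, l3) for counter i in {1, 2}."""
        loc, c1, c2 = conf
        counters = [c1, c2]
        instr = program[loc]
        if instr[0] == 'inc':
            _, i, l2 = instr
            counters[i - 1] += 1
            return (l2, counters[0], counters[1])
        _, i, l2, l3 = instr
        if counters[i - 1] > 0:
            counters[i - 1] -= 1
            return (l2, counters[0], counters[1])
        return (l3, counters[0], counters[1])

    # A loop that moves the content of c1 into c2:
    prog = {'l0': ('dec', 1, 'l1', 'halt'), 'l1': ('inc', 2, 'l0')}
    conf = ('l0', 3, 0)
    while conf[0] != 'halt':
        conf = step(conf, prog)
    assert conf == ('halt', 0, 3)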
The following property then holds.
Theorem 1
TDL programs can simulate Two Counter Machines.
Proof
In order to define a TDL program that simulates a Two Counter Machine we
proceed as follows. Every counter is represented via a doubly linked list implemented
via a collection of threads of type Cell and with a unique thread of type Last
pointing to the head of the list. The i-th counter having value zero is represented
as the empty list Cell(i, v, v), Last(i, v, w) for some names v and w (we will explain later the use of w). The i-th counter having value k is represented as
  Cell(i, v0, v0), Cell(i, v0, v1), . . . , Cell(i, vk−1, vk), Last(i, vk, w)
for distinct names v0 , v1 , . . . , vk . The instructions on a counter are simulated by
sending messages to the corresponding Last thread. The messages are sent on
channel Zero (zero test), Dec (decrement), and Inc (increment). In reply to each
of these messages, the thread Last sends an acknowledgment, namely Yes/No for the zero test, DAck for the decrement, and IAck for the increment operation. Last interacts with the Cell threads via the messages tstC, decC, incC, acknowledged by the messages z/nz, dack, iack. The interactions between a Last thread and the Cell threads are as follows.
Zero Test Upon reception of a message hxi on channel Zero, the Last thread with
local variables id, last, aux checks that its identifier id matches x - see transition
from Idle to Busy - sends a message hid, lasti on channel tstC directed to the cell
pointed to by last (transition from Busy to W ait), and then waits for an answer. If
the answer is sent on channel nz, standing for non-zero, (resp. z standing for zero)
- see transition from W ait to AckN Z (resp. AckZ) - then it sends its identifier on
Thread Last(local id, last, aux);
(Zero test)
  Idle   --Zero?⟨x⟩-->        Busy     [id = x]
  Busy   --tstC!⟨id,last⟩-->  Wait
  Wait   --nz?⟨x⟩-->          AckNZ    [id = x]
  Wait   --z?⟨x⟩-->           AckZ     [id = x]
  AckZ   --Yes!⟨id⟩-->        Idle
  AckNZ  --No!⟨id⟩-->         Idle
(Decrement)
  Idle   --Dec?⟨x⟩-->         DBusy    [id = x]
  DBusy  --decC!⟨id,last⟩-->  DWait
  DWait  --dack?⟨x,u⟩-->      DAck     [id = x, last := u]
  DAck   --DAck!⟨id⟩-->       Idle
(Increment)
  Idle   --Inc?⟨x⟩-->         INew     [id = x]
  INew   --new-->             IRun     [aux := new]
  IRun   --run-->             IAck     [run Cell with idc := id; prev := last; next := aux]
  IAck   --IAck!⟨id⟩-->       Idle     [last := aux]

Fig. 2. The process defining the last cell of the linked list associated to a counter
channel No (resp. Yes) as an acknowledgment to the first message - see the transition from AckNZ (resp. AckZ) to Idle. As shown in Fig. 3, the thread Cell with local variables idc, prev, and next that receives the message tstC, i.e., the cell pointed to by a thread Last with the same identifier as idc, sends an acknowledgment on channel z (zero) if prev = next, and on channel nz (non-zero) if prev ≠ next.
Decrement Upon reception of a message hxi on channel Dec, the Last thread with
local variables id, last, aux checks that its identifier id matches x (transition from
Idle to Dbusy), sends a message hid, lasti on channel decC directed to the cell
pointed to by last (transition from Busy to W ait), and then waits for an answer.
Thread Cell(local idc, prev, next);
(Zero test)
  idle   --tstC?⟨x,u⟩-->       ackZ    [x = idc, u = next, prev = next]
  idle   --tstC?⟨x,u⟩-->       ackNZ   [x = idc, u = next, prev ≠ next]
  ackZ   --z!⟨idc⟩-->          idle
  ackNZ  --nz!⟨idc⟩-->         idle
(Decrement)
  idle   --decC?⟨x,u⟩-->       dec     [x = idc, u = next, prev ≠ next]
  dec    --dack!⟨idc,prev⟩-->  idle

Fig. 3. The process defining a cell of the linked list associated to a counter
If the answer is sent on channel dack (transition from DW ait to DAck) then it
updates the local variable last with the pointer u sent by the thread Cell, namely
the prev pointer of the cell pointed to by the current value of last, and then sends
its identifier on channel DAck to acknowledge the first message (transition from
DAck to Idle).
As shown in Fig. 3, a thread Cell with local variables idc, prev, and next that
receives the message decC and such that next = last sends as an acknowledgment
on channel dack the value prev.
Increment To simulate the increment operation, Last does not have to interact with
existing Cell threads. Indeed, it only has to link a new Cell thread to the head of
the list (this is why the Cell thread has no operations to handle the increment operation). As shown in Fig. 2 this can be done by creating a new name stored in the local variable aux (transition from INew to IRun) and spawning a new Cell
thread (transition from IRun to IAck) with prev pointer equal to last, and next
pointer equal to aux. Finally, it acknowledges the increment request by sending its
identifier on channel IAck and updates variable last with the current value of aux.
Two Counter Machine Instructions We are now ready to use the operations provided by the thread Last to simulate the instructions of a Two Counter Machine.
As shown in Fig. 4, we use a thread CM with two local variables id1 , id2 to represent the list of instructions of a 2CM with counters c1 , c2 . Control locations of the
the Two Counter Machine are used as local states of the thread CM. The initial local state of the CM thread is the initial control location. The increment instruction on counter ci at control location ℓ1 is simulated by a handshake with the Last thread with identifier idi: we first send the message Inc!⟨idi⟩, wait for the acknowledgment on channel IAck, and then move to state ℓ2. Similarly, for the decrement
Thread CM(local id1, id2);
  ...
(Instruction: ℓ1 : ci := ci + 1; goto ℓ2;)
  ℓ1      --Inc!⟨idi⟩-->    waitℓ1
  waitℓ1  --IAck?⟨x⟩-->     ℓ2       [x = idi]
  ...
(Instruction: ℓ1 : if ci > 0 then ci := ci − 1; goto ℓ2; else goto ℓ3;)
  ℓ1      --Zero!⟨idi⟩-->   waitℓ1
  waitℓ1  --NZAck?⟨x⟩-->    decℓ1    [x = idi]
  decℓ1   --Dec!⟨idi⟩-->    wdecℓ1
  wdecℓ1  --DAck?⟨y⟩-->     ℓ2       [y = idi]
  waitℓ1  --ZAck?⟨x⟩-->     ℓ3       [x = idi]
  ...

Fig. 4. The thread associated to a 2CM.
Thread Init(local nid1, p1, nid2, p2);
  init   --freshId-->  init1   [nid1 := new]
  init1  --freshP-->   init2   [p1 := new]
  init2  --runC-->     init3   [run Cell with idc := nid1; prev := p1; next := p1]
  init3  --runL-->     init4   [run Last with id := nid1; last := p1; aux := ⊥]
  init4  --freshId-->  init5   [nid2 := new]
  init5  --freshP-->   init6   [p2 := new]
  init6  --runC-->     init7   [run Cell with idc := nid2; prev := p2; next := p2]
  init7  --runL-->     init8   [run Last with id := nid2; last := p2; aux := ⊥]
  init8  --runCM-->    init9   [run CM with id1 := nid1; id2 := nid2]

Fig. 5. The initialization thread.
instruction on counter ci at control location ℓ1 we first send the message Zero!hidi i.
If we receive an acknowledgment on channel N ZAck we send a Dec request, wait
for completion and then move to ℓ2 . If we receive an acknowledgment on channel
ZAck we directly move to ℓ3 .
Initialization The last step of the encoding is the definition of the initial state of the
system. For this purpose, we use the thread Init of Fig. 5. The first four rules of Init
initialize the first counter: they create two new names nid1 (an identifier for counter
c1 ) and p1 , and then spawn the new threads Cell(nid1 , p1 , p1 ), Last(nid1 , p1 , ⊥).
The following four rules spawn the new threads Cell(nid2, p2, p2), Last(nid2, p2, ⊥).
After this stage, we create a thread of type 2CM to start the simulation of the instructions of the Two Counter Machines. The initial configuration of the whole
system is G0 = hinit, ⊥, ⊥i. By construction we have that an execution step from
hℓ1 , c1 = n1 , c2 = n2 i to hℓ2 , c1 = m1 , c2 = m2 i is simulated by an execution run going from a global configuration in which the local state of thread CM is hℓ1 , id1 , id2 i
and in which we have ni occurrences of thread Cell with the same identifier idi
for i : 1, 2, to a global configuration in which the local state of thread CM is
hℓ2 , id1 , id2 i and in which we have mi occurrences of thread Cell with the same
identifier idi for i : 1, 2. Thus, every execution of a 2CM M corresponds to an execution of the corresponding TDL program that starts from the initial configuration
G0 = hinit, ⊥, ⊥i.
As a consequence of the previous theorem, we have the following corollary.
Corollary 1
Given a TDL program, a global configuration G0, and a control location ℓ, deciding if there exists a run going from G0 to a global configuration that contains ℓ (control
state reachability) is an undecidable problem.
3 From TDL to MSRN C
As mentioned in the introduction, our verification methodology is based on a translation of TDL programs into low level specifications given in MSRN C . Our goal is
to extend the connection between CCS and Petri Nets (German and Sistla 1992)
to TDL and MSR so as to be able to apply the verification methods defined in
(Delzanno 2005) to multithreaded programs. In the next section we will summarize
the main features of the language MSRN C introduced in (Delzanno 2001).
3.1 Preliminaries on MSRN C
N C-constraints are linear arithmetic constraints in which conjuncts have one of
the following forms: true, x = y, x > y, x = c, or x > c, with x and y two variables
from a denumerable set V that range over the rationals, and c being an integer.
The solutions Sol of a constraint ϕ are defined as all evaluations (from V to Q)
that satisfy ϕ. A constraint ϕ is satisfiable whenever Sol(ϕ) ≠ ∅. Furthermore, ψ
entails ϕ whenever Sol(ψ) ⊆ Sol(ϕ). N C-constraints are closed under elimination
of existentially quantified variables.
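Satisfiability of a conjunction of NC-constraints can be decided by collapsing equalities and inspecting the strict-order graph; the following sketch (our own Python, assuming variables are strings and constants are integers) illustrates one way to do this over the dense order of the rationals.

    import itertools

    class NC:
        """A conjunction of NC-constraints: x = y, x > y, x = c, x > c (c an integer).
        A sketch written for this presentation, not the paper's implementation."""
        def __init__(self, eqs=(), gts=()):
            self.eqs = list(eqs)   # pairs (s, t) meaning s = t
            self.gts = list(gts)   # pairs (s, t) meaning s > t

        def satisfiable(self):
            parent = {}
            def find(a):                          # union-find to collapse equalities
                parent.setdefault(a, a)
                while parent[a] != a:
                    parent[a] = parent[parent[a]]
                    a = parent[a]
                return a
            for s, t in self.eqs:
                parent[find(s)] = find(t)
            pin = {}                              # classes pinned to an integer constant
            for t in {u for c in self.eqs + self.gts for u in c}:
                if isinstance(t, int):
                    if pin.setdefault(find(t), t) != t:
                        return False              # two different constants equated
            succ = {}                             # the strict-order graph on classes
            for s, t in self.gts:
                a, b = find(s), find(t)
                if a == b:
                    return False                  # x > x is unsatisfiable
                succ.setdefault(a, set()).add(b)
            def reach(a, b, seen=None):
                seen = set() if seen is None else seen
                if a == b:
                    return True
                seen.add(a)
                return any(reach(n, b, seen) for n in succ.get(a, ()) if n not in seen)
            for a, b in itertools.permutations(pin, 2):
                if reach(a, b) and not pin[a] > pin[b]:
                    return False                  # a chain of > from a non-larger constant
            # over the dense rationals any remaining acyclic order can be realised
            return all(not reach(b, a) for a in succ for b in succ[a])

    # x = y, x > z, z > 3 is satisfiable; x = y, x > z, z > x is not
    assert NC(eqs=[('x', 'y')], gts=[('x', 'z'), ('z', 3)]).satisfiable()
    assert not NC(eqs=[('x', 'y')], gts=[('x', 'z'), ('z', 'x')]).satisfiable()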
Let P be a set of predicate symbols. An atomic formula p(x1 , . . . , xn ) is such that
p ∈ P, and x1 , . . . , xn are distinct variables in V. A multiset of atomic formulas is
indicated as A1 | . . . | Ak , where Ai and Aj have distinct variables (we use variable
renaming if necessary), and | is the multiset constructor.
In the rest of the paper we will use M, N , . . . to denote multisets of atomic formulas,
ǫ to denote the empty multiset, ⊕ to denote multiset union and ⊖ to denote multiset
difference. An MSRN C configuration is a multiset of ground atomic formulas, i.e.,
atomic formulas like p(d1 , . . . , dn ) where di is a rational for i : 1, . . . , n.
An MSRN C rule has the form M −→ M′ : ϕ, where M and M′ are two (possibly
empty) multisets of atomic formulas with distinct variables built on predicates in
P, and ϕ is an N C-constraint. The ground instances of an MSRN C rule are defined
as
Inst(M −→ M′ : ϕ) = {σ(M) −→ σ(M′ ) | σ ∈ Sol(ϕ)}
where σ is extended in the natural way to multisets, i.e., σ(M) and σ(M′ ) are
MSRN C configurations.
An MSRN C specification S is a tuple hP, I, Ri, where P is a finite set of predicate
symbols, I is a finite set of (initial) MSRNC configurations, and R is a finite set of
MSRN C rules over P.
The operational semantics describes the update from a configuration M to one of its
possible successor configurations M′ . M′ is obtained from M by rewriting (modulo
associativity and commutativity) the left-hand side of an instance of a rule into the
corresponding right-hand side. In order to be fireable, the left-hand side must be
included in M. Since instances and rules are selected in a non deterministic way, in
general a configuration can have a (possibly infinite) set of (one-step) successors.
Formally, a rule H −→ B : ϕ from R is enabled at M via the ground substitution
σ ∈ Sol(ϕ) if and only if σ(H) ≼ M (multiset inclusion). Firing a rule enabled at M via σ yields the
new configuration
M′ = σ(B) ⊕ (M ⊖ σ(H))
We use M ⇒MSR M′ to denote the firing of a rule at M yielding M′ .
A run is a sequence of configurations M0 M1 . . . Mk with M0 ∈ I such that
Mi ⇒MSR Mi+1 for i ≥ 0. A configuration M is reachable if there exists M0 ∈ I such that M0 ⇒*MSR M, where ⇒*MSR is the transitive closure of ⇒MSR. Finally, the successor and predecessor operators Post and Pre are defined on a set
of configurations S as P ost(S) = {M′ |M ⇒MSR M′ , M ∈ S} and P re(S) =
{M|M ⇒MSR M′ , M′ ∈ S}, respectively. P re∗ and P ost∗ denote their transitive
closure.
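On ground instances, the firing relation is plain multiset rewriting; a small illustrative sketch in Python follows (helper names are ours, the constraint part is assumed to have been solved when the ground instance was produced, and the sample values are loosely patterned after the run shown later in Fig. 7).

    from collections import Counter

    def enabled(head, conf):
        """sigma(H) is included in M, with multiset multiplicities respected."""
        return all(conf[a] >= n for a, n in head.items())

    def fire(head, body, conf):
        """M' = sigma(B) + (M - sigma(H)); returns None when the rule is not enabled."""
        if not enabled(head, conf):
            return None
        out = Counter(conf)
        out.subtract(head)
        out.update(body)
        return +out                      # drop atoms whose count reached zero

    def post(rules, conf):
        """One-step successors of a ground configuration (Post restricted to ground rules)."""
        return [n for h, b in rules if (n := fire(h, b, conf)) is not None]

    # A ground instance of the synchronisation rule of Fig. 6:
    M = Counter({('fresh', 8.0): 1, ('initM', 0.0): 1,
                 ('waitA', 2.0, 6.0, 0.0): 1, ('readyB', 3.0, 6.0, 7.0): 1})
    H = Counter({('waitA', 2.0, 6.0, 0.0): 1, ('readyB', 3.0, 6.0, 7.0): 1})
    B = Counter({('stopA', 2.0, 6.0, 7.0): 1, ('stopB', 3.0, 6.0, 7.0): 1})
    assert ('stopA', 2.0, 6.0, 7.0) in fire(H, B, M)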
As shown in (Delzanno 2001; Bozzano and Delzanno 2002), Petri Nets represent a
natural abstraction of MSRNC (and, more in general, of MSR rules with constraints)
specifications. They can be encoded, in fact, in propositional MSR specifications
(e.g. abstracting away arguments from atomic formulas).
3.2 Translation from TDL to MSRN C
The first thing to do is to find an adequate representation of names. Since all we
need is a way to distinguish old and new names, we just need an infinite domain
in which the = and ≠ relations are supported. Thus, we can interpret names in N
either as integers or as rational numbers. Since operations like variable elimination are computationally less expensive over the rationals than over the integers, we choose to view names
as non-negative rationals. Thus, a local (TDL) configuration p = hs, n1 , . . . , nk i
is encoded as the atomic formula p• = s(n1 , . . . , nk ), where ni is a non-negative
rational. Furthermore, a global (TDL) configuration G = hN, p1 , . . . , pm i is encoded
as an MSRN C configuration G•
p•1 | . . . | p•m | f resh(n)
where the value n in the auxiliary atomic formula fresh(n) is a rational number
strictly greater than all values occurring in p•1 , . . . , p•m . The predicate f resh will
allow us to generate unused names every time needed.
The translation of constants C = {c1 , . . . , cm }, and variables is defined as follows:
x• = x for x ∈ V, ⊥• = 0, c•i = i for i : 1, . . . , m. We extend ·• in the natural way
on a guard γ, by decomposing every formula x ≠ e into x < e• and x > e•. We will call γ• the resulting set of NC-constraints. (As an example, if γ is the constraint x = 1, x ≠ z, then γ• consists of the two constraints x = 1, x > z and x = 1, z > x.)
Given V = {x1 , . . . , xk }, we define V ′ as the set of new variables {x′1 , . . . , x′k }.
Now, let us consider the assignment α defined as x1 := e1 , . . . , xk := ek (we add
assignments like xi := xi if some variable does not occur as target of α). Then, α•
is the N C-constraint x′1 = e•1 , . . . , x′k = e•k .
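The two translations γ• and α• are purely syntactic; the following sketch (our own Python, with a guard represented as a list of (x, op, e) triples) reproduces the splitting of ≠ into two >-constraints, including the example recalled in the parenthesis above.

    from itertools import product

    BOT = 0                                  # ⊥ is mapped to 0
    def tr(e, consts):                       # the map e -> e• on expressions
        if e == 'bot':
            return BOT
        return consts[e] if e in consts else e   # constants c_i -> i, variables unchanged

    def guard_star(guard, consts):
        """gamma -> gamma•: each x != e yields the two alternatives x > e• and e• > x."""
        choices = []
        for x, op, e in guard:
            x, e = tr(x, consts), tr(e, consts)
            choices.append([(x, '=', e)] if op == '=' else [(x, '>', e), (e, '>', x)])
        return [list(combo) for combo in product(*choices)]

    def assign_star(assign, variables, consts):
        """alpha -> alpha•: the NC-constraint x'_i = e_i•, completed with x'_i = x_i."""
        return [(x + "'", '=', tr(assign.get(x, x), consts)) for x in variables]

    # gamma = (x = 1, x != z) with constant c1 interpreted as 1:
    gs = guard_star([('x', '=', 'c1'), ('x', '!=', 'z')], consts={'c1': 1})
    assert gs == [[('x', '=', 1), ('x', '>', 'z')], [('x', '=', 1), ('z', '>', 'x')]]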
The translation of thread definitions is defined below (where we will often refer to
Example 1).
Initial Global Configuration Given an initial global configuration consisting of the
local configurations hsi , ni1 , . . . , niki i with nij = ⊥ for i : 1, . . . , u, we define the
following MSRN C rule
init → s1 (x11 , . . . , x1k1 ) | . . . | su (xu1 , . . . , xuku ) | f resh(x) :
x > C, x11 = 0, . . . , xuku = 0
here C is the largest rational used to interpret the constants in C.
For each thread definition P = hQ, s0 , V, Ri in T with V = {x1 , . . . , xk } we translate
the rules in R as described below.
Internal Moves For every internal move s --a--> s′ [γ, α], and every ν ∈ γ• we define
  s(x1, . . . , xk) → s′(x′1, . . . , x′k) : ν, α•
Name Generation For every name generation s --a--> s′ [xi := new], we define
  s(x1, . . . , xk) | fresh(x) → s′(x′1, . . . , x′k) | fresh(y) : y > x′i, x′i > x, and x′j = xj for j ≠ i.
For instance, the name generation initA --fresh--> genA [nA := new] is mapped into the MSRNC rule initA(id, x, y) | fresh(u) --> genA(id′, x′, y′) | fresh(u′) : ϕ, where ϕ is the constraint u′ > x′, x′ > u, y′ = y, id′ = id. The constraint x′ > u represents the fact that the new name associated to the local variable nA (the second argument of the atoms representing the thread) is fresh, whereas u′ > x′ updates the current value of fresh to ensure that the next generated names will be picked from unused values.
Thread Creation Let P = ⟨Q′, t0, V′, R′⟩ and V′ = {y1, . . . , yu}. Then, for every thread creation s --a--> s′ [run P with α], we define
  s(x1, . . . , xk) → s′(x′1, . . . , x′k) | t(y′1, . . . , y′u) : x′1 = x1, . . . , x′k = xk, α•.
E.g., consider the rule create --newA--> initM [run Init with id := x, . . .] of Example 1. Its encoding yields the MSRNC rule create(x) --> initM(x′) | initA(id′, n′, m′) : ψ, where ψ represents the initialization of the local variables of the new thread: x′ = x, id′ = x, n′ = 0, m′ = 0.
Rendez-vous The encoding of rendez-vous communication is based on the use of
constraint operations like variable elimination. Let P and P ′ be a pair of thread
definitions, with local variables V = {x1 , . . . , xk } and V ′ = {y1 , . . . , yl } with V ∩
V′ = ∅. We first select all rules s --e!m--> s′ [γ, α] in R and t --e′?m′--> t′ [γ′, α′] in R′,
such that m = hw1 , . . . , wu i, m′ = hw1′ , . . . , wv′ i and u = v. Then, we define the new
MSRN C rule
s(x1 , . . . , xk ) | t(y1 , . . . , yl ) → s′ (x′1 , . . . , x′k ) | t′ (y1′ , . . . , yl′ ) : ϕ
for every ν ∈ γ • and ν ′ ∈ γ ′• such that the NC-constraint ϕ obtained by eliminating w1′ , . . . , wv′ from the constraint ν ∧ ν ′ ∧ α• ∧ α′• ∧ w1 = w1′ ∧ . . . ∧ wv = wv′
is satisfiable. For instance, consider the rules waitA --nA?⟨y⟩--> stopA [mA := y] and readyB --nB!⟨mB⟩--> stopB [true]. We first build up a new constraint by conjoining the
NC-constraints y = mB (matching of message templates), and nA = nB , m′A =
y, n′A = nA , m′B = mB , n′B = nB , id′1 = id1 , id′2 = id2 (guards and actions of
sender and receiver). After eliminating y we obtain the constraint ϕ defined as
nB = nA , m′A = mB , n′A = nA , m′B = mB , n′B = nB , id′1 = id1 , id′2 = id2 defined
over the variables of the two considered threads. This step allows us to symbolically
represent the passing of names. After this step, we can represent the synchronization of the two threads by using a rule that simultaneously rewrite all instances
that satisfy the constraints on the local data expressed by ϕ, i.e., we obtain the
rule
waitA (id1 , nA , mA )| readyB (id2 , nB , mB ) −→
stopA (id′1 , n′A , m′A ) | stopB (id′2 , n′B , m′B ) : ϕ
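Since the variables eliminated in this step are bound by equalities, the elimination can be carried out by plain substitution; the sketch below (our own Python, not the constraint solver used by the author) replays the waitA/readyB example.

    def eliminate_by_substitution(conjuncts, eliminate):
        """conjuncts: list of (lhs, op, rhs) with op in {'=', '>'}.
        eliminate: variables bound by an equality (here the received message template).
        Each eliminated variable is replaced everywhere by one term it is equated with."""
        subst = {}
        for l, op, r in conjuncts:
            if op == '=':
                if l in eliminate and r not in eliminate:
                    subst.setdefault(l, r)
                elif r in eliminate and l not in eliminate:
                    subst.setdefault(r, l)
        out = []
        for l, op, r in conjuncts:
            l, r = subst.get(l, l), subst.get(r, r)
            if not (op == '=' and l == r):       # drop the now trivial equalities
                out.append((l, op, r))
        return out

    # waitA --nA?<y>--> stopA [mA := y]   and   readyB --nB!<mB>--> stopB [true]
    phi = eliminate_by_substitution(
        [('y', '=', 'mB'),                       # message templates match
         ('nA', '=', 'nB'),                      # channel names match
         ("mA'", '=', 'y'), ("nA'", '=', 'nA'),
         ("mB'", '=', 'mB'), ("nB'", '=', 'nB'),
         ("id1'", '=', 'id1'), ("id2'", '=', 'id2')],
        eliminate={'y'})
    assert all('y' not in (l, r) for l, _, r in phi)   # y no longer occurs in phi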
The complete translation of Example 1 is shown in Fig. 6 (for simplicity we have
applied a renaming of variables in the resulting rules). An example of run in the
resulting MSRN C specification is shown in Figure 7. Note that, a fresh name is
selected between all values strictly greater than the current value of f resh (e.g. in
the second step 6 > 4), and then f resh is updated to a value strictly greater than
all newly generated names (e.g. 8 > 6 > 4).
init −−→ f resh(x) | initM (y) : x > 0, y = 0.
f resh(x) | initM (y) −−→ f resh(x′ ) | create(y ′ ) : x′ > y ′ , y ′ > x.
create(x) −−→ initM (x′ ) | initA (id′ , n′ , m′ ) : x′ = x, id′ = x, n′ = 0, m′ = 0.
create(x) −−→ initM (x′ ) | initB (id′ , n′ , m′ ) : x′ = x, id′ = x, n′ = 0, m′ = 0.
initA (id, n, m)| f resh(u) −−→ genA (id, n′ , m) | f resh(u′ ) : u′ > n′ , n′ > u.
genA (id1 , n, m)| initB (id2 , u, v) −−→ waitA (id1 , n, m) | genB (id′2 , u′ , v ′ ) : u′ = n, v ′ = v
genB (id, n, m)| f resh(u) −−→ readyB (id, n, m′ ) | f resh(u′ ) : u′ > m′ , m′ > u.
waitA (id1 , n, m)| readyB (id2 , u, v) −−→ stopA (id1 , n, m′ ) | stopB (id2 , u, v) : n = u, m′ = v.
stopA (id, n, m) −−→ initA (id′ , n′ , m′ ) : n′ = 0, m′ = 0, id′ = id.
stopB (id, n, m) −−→ initB (id′ , n′ , m′ ) : n′ = 0, m′ = 0, id′ = id.
Fig. 6. Encoding of Example 1: for simplicity we embed constraints like x = x′ into
the MSR formulas.
init ⇒ . . . ⇒ f resh(4) | initM (0) | initA (2, 0, 0) | initB (3, 0, 0)
⇒ f resh(8) | initM (0) | genA (2, 6, 0) | initB (3, 0, 0)
⇒ f resh(8) | initM (0) | waitA (2, 6, 0) | genB (3, 6, 0)
⇒ . . . ⇒ f resh(16) | initM (0) | waitA (2, 6, 0) | genB (3, 6, 0) | initA (11, 0, 0)
Fig. 7. A run in the encoded program.
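The invariant informally used above (the argument of fresh strictly dominates every value stored in the thread atoms, so new picks are always unused) can be checked mechanically on the prefix of the run of Fig. 7; a small sketch in Python, with configurations encoded as lists of ground atoms of our own making:

    def fresh_invariant(conf):
        """The argument of fresh is strictly greater than every other value in conf."""
        fresh_val = next(args[0] for pred, *args in conf if pred == 'fresh')
        return all(v < fresh_val for pred, *args in conf if pred != 'fresh' for v in args)

    run = [
        [('fresh', 4), ('initM', 0), ('initA', 2, 0, 0), ('initB', 3, 0, 0)],
        [('fresh', 8), ('initM', 0), ('genA', 2, 6, 0), ('initB', 3, 0, 0)],
        [('fresh', 8), ('initM', 0), ('waitA', 2, 6, 0), ('genB', 3, 6, 0)],
    ]
    assert all(fresh_invariant(c) for c in run)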
Let T = hP1 , . . . , Pt i be a collection of thread definitions and G0 be an initial
global state. Let S be the MSRN C specification that results from the translation
described in the previous section.
Let G = hN, p1 , . . . , pn i be a global configuration with pi = hsi , vi1 , . . . , viki i, and
let h : N ❀ Q+ be an injective mapping. Then, we define G• (h) as the MSRN C
configuration
s1 (h(v11 ), . . . , h(v1k1 )) | . . . | sn (h(vn1 ), . . . , h(vnkn )) | f resh(v)
where v is a value strictly greater than all values in the range of h. Given
an MSRN C configuration M defined as s1 (v11 , . . . , v1k1 ) | . . . | sn (vn1 , . . . , vnkn )
with vij ∈ Q+, let V(M) ⊆ Q+ be the set of values occurring in M. Then, given a
bijective mapping f : V (M) ❀ N ⊆ N , we define M• (f ) as the global configuration hN, p1 , . . . , pn i where pi = hsi , f (vi1 ), . . . , f (viki )i.
Based on the previous definitions, the following property then holds.
Theorem 2
For every run G0 G1 . . . in T with corresponding set of names N0 N1 . . ., there exist
sets D0 D1 . . . and bijective mappings h0 h1 . . . with hi : Ni ❀ Di ⊆ Q+ for i ≥ 0,
such that init G•0 (h0 )G•1 (h1 ) . . . is a run of S. Vice versa, if init M0 M1 . . . is a
run of S, then there exist sets N0 N1 . . . in N and bijective mappings f0 f1 . . . with
fi : V (Mi ) ❀ Ni for i ≥ 0, such that M•0 (f0 )M•1 (f1 ) . . . is a run in T .
Proof
We first prove that every run in T is simulated by a run in S.
Let G0 . . . Gl be a run in T , i.e., a sequence of global states (with associated set
of names N0 . . . Nl ) such that Gi ⇒ Gi+1 and Ni ⊆ Ni+1 for i ≥ 0.
We prove that it can be simulated in S by induction on its length l.
Specifically, suppose that there exist sets of non-negative rationals D0 . . . Dl and bijective mappings h0 . . . hl with hi : Ni ❀ Di for 0 ≤ i ≤ l, such that
  init G•0(h0) . . . G•l(hl)
is a run of S. Furthermore, suppose Gl ⇒ Gl+1 .
We prove the thesis by a case-analysis on the type of rule applied in the last step
of the run.
Let Gl = ⟨Nl, p1, . . . , pr⟩ and let pj = ⟨s, n1, . . . , nk⟩ be a local configuration for the thread definition P = ⟨Q, s, V, R⟩ with V = {x1, . . . , xk} and ni ∈ Nl for i : 1, . . . , k.
Assignment Suppose there exists a rule s --a--> s′ [γ, α] in R such that ρpj satisfies γ, Gl = ⟨Nl, . . . , pj, . . .⟩ ⇒ ⟨Nl+1, . . . , p′j, . . .⟩ = Gl+1, Nl+1 = Nl, p′j = ⟨s′, n′1, . . . , n′k⟩,
and if xi := yi occurs in α, then n′i = ρpj (yi ), otherwise n′i = ni for i : 1, . . . , k.
The encoding of the rule returns one MSRNC rule having the form
  s(x1, . . . , xk) → s′(x′1, . . . , x′k) : γ′, α•
for every γ′ ∈ γ•.
By inductive hypothesis, G•l(hl) is a multiset of atomic formulas that contains the
formula s(hl (n1 ), . . . , hl (nk )).
Now let us define hl+1 as the mapping from Nl to Dl such that hl+1(n′i) = hl(nj) if xi := xj is in α and hl+1(n′i) = 0 if xi := ⊥ is in α. Furthermore, let us define the evaluation
  σ = ⟨x1 ↦ hl(n1), . . . , xk ↦ hl(nk), x′1 ↦ hl+1(n′1), . . . , x′k ↦ hl+1(n′k)⟩
Then, by construction of the set of constraints γ• and of the constraint α•, it follows that σ is a solution for γ′, α• for some γ′ ∈ γ•. As a consequence, we have that
  s(n1, . . . , nk) → s′(n′1, . . . , n′k)
is a ground instance of one of the considered MSRNC rules.
Thus, starting from the MSRNC configuration G•l(hl), if we apply a rewriting step we obtain a new configuration in which s(n1, . . . , nk) is replaced by s′(n′1, . . . , n′k), and all the other atomic formulas in G•l+1(hl+1) are the same as in G•l(hl). The resulting MSRNC configuration then coincides with the definition of G•l+1(hl+1).
Creation of new names Let us now consider the case of fresh name generation. Suppose there exists a rule s --a--> s′ [xi := new] in R, let n ∉ Nl, and suppose ⟨Nl, . . . , pj, . . .⟩ ⇒ ⟨Nl+1, . . . , p′j, . . .⟩ where Nl+1 = Nl ∪ {n}, p′j = ⟨s′, n′1, . . . , n′k⟩ with n′i = n, and n′j = nj for j ≠ i.
We note then that the encoding of the previous rule returns the MSRNC rule
  s(x1, . . . , xk) | fresh(x) → s′(x′1, . . . , x′k) | fresh(x′) : ϕ
where ϕ consists of the constraints x′ > x′i, x′i > x and x′j = xj for j ≠ i. By inductive hypothesis, G•l(hl) is a multiset of atomic formulas that contains the
formulas s(hl (n1 ), . . . , hl (nk )) and f resh(v) where hl is a mapping into Dl , and v
is the first non-negative rational strictly greater than all values occurring in the
formulas denoting processes.
Let v be a non negative rational strictly greater than all values in Dl . Furthermore,
let us define v ′ = v + 1 and Dl+1 = Dl ∪ {v, v ′ }.
Furthermore, we define hl+1 as follows hl+1 (n) = hl (n) for n ∈ Nl , and hl+1 (n′i ) =
hl+1 (n) = v ′ . Furthermore, we define the following evaluation
σ
= h x 7→ v, x1 7→ hl (n1 ), . . . , xk 7→ hl (nk ),
x′ 7→ v ′ , x′1 7→ hl+1 (n′1 ), . . . , x′k 7→ hl+1 (n′k ) i
Then, by construction of σ and ϕ, it follows that σ is a solution for ϕ. Thus,
  s(n1, . . . , nk) | fresh(v) → s′(n′1, . . . , n′k) | fresh(v′)
is a ground instance of the considered MSRNC rule.
Starting from the MSRNC configuration G•l(hl), if we apply a rewriting step we obtain a new configuration in which s(n1, . . . , nk) and fresh(v) are substituted by s′(n′1, . . . , n′k) and fresh(v′), and all the other atomic formulas in G•l+1(hl+1) are the same as in G•l(hl). We conclude by noting that this formula coincides with the definition of G•l+1(hl+1).
For the sake of brevity we omit the case of thread creation, whose only difference from the previous cases is the creation of several new atoms (with values obtained by evaluating the action) instead of only one.
Rendez-vous Let pi = ⟨s, n1, . . . , nk⟩ and pj = ⟨t, m1, . . . , mu⟩ be two local configurations for threads P ≠ P′, with ni ∈ Nl for i : 1, . . . , k and mi ∈ Nl for i : 1, . . . , u. Suppose s --c!m--> s′ [γ, α] in R and t --c?m′--> t′ [γ′, α′] in R′, where m = ⟨xi1, . . . , xiv⟩ and m′ = ⟨y1, . . . , yv⟩ (all defined over distinct variables), are the two selected rules.
Furthermore, suppose that ρpi satisfies γ, and that ρ′ (see definition of the operational semantics) satisfies γ ′ , and suppose that Gl = hNl , . . . , pi , . . . , pj , . . .i ⇒
hNl+1 , . . . , p′i , . . . , p′j , . . .i = Gl+1 , where Nl+1 = Nl , p′i = hs′ , n′1 , . . . , n′k i, p′j =
ht′ , m′1 , . . . , m′u i, and if xi := e occurs in α, then n′i = ρpi (e), otherwise n′i = ni
for i : 1, . . . , k; if ui := e occurs in α′ , then m′i = ρ′ (e), otherwise m′i = mi for
i : 1, . . . , u.
By inductive hypothesis, G•l(hl) is a multiset of atomic formulas that contains the
formulas s(hl (n1 ), . . . , hl (nk )) and t(hl (m1 ), . . . , hl (mu )).
Now, let us define hl+1 as the mapping from Nl to Dl such that hl+1 (n′i ) = hl (nj )
if xi := xj is in α, hl+1 (m′i ) = hl (mj ) if ui := uj is in α′ , hl+1 (n′i ) = 0 if xi := ⊥ is
in α, hl+1 (m′i ) = 0 if ui := ⊥ is in α′ .
Now, let us define σ as the evaluation from Nl to Dl such that
σ = σ1 ∪ σ2
σ1 = hx1 7→ hl (n1 ), . . . , xk 7→ hl (nk ), u1 7→ hl (m1 ), . . . , uu 7→ hl (mu )i
σ2 = hx′1 7→ hl+1 (n′1 ), . . . , x′k 7→ hl+1 (n′k ), u′1 7→ hl+1 (m′1 ), . . . , u′u 7→ hl+1 (m′u )i.
Then, by construction of the sets of constraints γ•, γ′•, α• and α′•, it follows that σ is a solution for the constraint ∃w′1 . . . . ∃w′p . θ ∧ θ′ ∧ α• ∧ α′• ∧ w1 = w′1 ∧ . . . ∧ wp = w′p for some θ ∈ γ• and θ′ ∈ γ′•. Note in fact that the equalities wi = w′i express the
passing of values defined via the evaluation ρ′ in the operational semantics.
As a consequence,
  s(n1, . . . , nk) | t(m1, . . . , mu) → s′(n′1, . . . , n′k) | t′(m′1, . . . , m′u)
is a ground instance of one of the considered MSRNC rules.
Thus, starting from the MSRNC configuration G•l(hl), if we apply a rewriting step we obtain a new configuration in which s(n1, . . . , nk) has been replaced by s′(n′1, . . . , n′k), t(m1, . . . , mu) has been replaced by t′(m′1, . . . , m′u), and all the other atomic formulas are as in G•l(hl). This formula coincides with the definition of G•l+1(hl+1).
The proof of completeness is by induction on the length of an MSR run, and by
case-analysis on the application of the rules. The structure of the case analysis is
similar to the previous one and it is omitted for brevity.
4 Verification of TDL Programs
Safety and invariant properties are probably the most important class of correctness
specifications for the validation of a concurrent system. For instance, in Example
1 we could be interested in proving that every time a session terminates, two instances of thread Init and Resp have exchanged the two names generated during
the session. To prove the protocol correct independently from the number of names
and threads generated during an execution, we have to show that from the initial configuration G0 it is not possible to reach a configuration that violates the
aforementioned property. The configurations that violate the property are those in
which two instances of Init and Resp conclude the execution of the protocol exchanging only the first nonce. These configurations can be represented by looking
at only two threads and at the relationship among their local data. Thus, we can
reduce the verification problem of this safety property to the following problem:
Given an initial configuration G0 we would like to decide if a global configuration that contains at least two local configurations having the form ⟨stopA, i, n, m⟩ and ⟨stopB, i′, n′, m′⟩ with n′ = n and m ≠ m′, for some i, i′, n, n′, m, m′, is reachable. This problem can be viewed as an extension of the control state reachability
problem defined in (Abdulla and Nylén 2000) in which we consider both control locations and local variables. Although control state reachability is undecidable (see
Corollary 1), the encoding of TDL into MSRN C can be used to define a sound and
automatic verification methods for TDL programs. For this purpose, we will exploit
a verification method introduced for MSR(C) in (Delzanno 2001; Delzanno 2005).
In the rest of this section we will briefly summarize how to adapt the main results
in (Delzanno 2001; Delzanno 2005) to the specific case of MSRN C .
Let us first reformulate the control state reachability problem of Example 1 for
the aforementioned safety property on the low level encoding into MSRN C . Given
the MSRN C initial configuration init we would like to check that no configuration
in P ost∗ ({init}) has the following form
{stopA (a1 , v1 , w1 ), stopB (a2 , v2 , w2 )} ⊕ M
for ai, vi, wi ∈ Q, i : 1, 2, and an arbitrary multiset of ground atoms M. Let us call U the set of bad MSRNC configurations having the aforementioned shape. Notice that U is upward closed with respect to multiset inclusion, i.e., if M ∈ U and M ≼ M′, then M′ ∈ U. Furthermore, if U is upward closed, so is Pre(U).
On the basis of this property, we can try to apply the methodology proposed in
(Abdulla and Nylén 2000) to develop a procedure to compute a finite representation
R of Pre∗(U). For this purpose, we need the following ingredients:
1. a symbolic representation of upward closed sets of configurations (e.g. a set
of assertions S whose denotation [[S]] is U );
2. a computable symbolic predecessor operator SP re working on sets of formulas
such that [[SP re(S)]] = P re([[S]]);
3. a (decidable) entailment relation Ent to compare the denotations of symbolic
representations, i.e., such that Ent(N, M ) implies [[N ]] ⊆ [[M ]]. If such a relation Ent exists, then it can be naturally extended to sets of formulas as
follows: EntS (S, S ′ ) if and only if for all N ∈ S there exists M ∈ S ′ such that
Ent(N, M ) holds (clearly, if Ent is an entailment, then EntS (S, S ′ ) implies
[[S]] ⊆ [[S ′ ]]).
The combination of these three ingredients can be used to define a verification
methods based on backward reasoning as explained next.
Symbolic Backward Reachability Suppose that M1 , . . . , Mn are the formulas of our
assertional language representing the infinite set U consisting of all bad configurations. The symbolic backward reachability procedure (SBR) procedure computes a
chain {Ii }i≥0 of sets of assertions such that
I0 = {M1 , . . . , Mn }
Ii+1 = Ii ∪ SP re(Ii ) for i ≥ 0
The procedure SBR stops when SP re produces only redundant information, i.e.,
EntS (Ii+1 , Ii ). Notice that EntS (Ii , Ii+1 ) always holds since Ii ⊆ Ii+1 .
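The loop is independent of the concrete assertional language; a generic worklist formulation can be sketched as follows (our own Python, parameterised by the two operations spre and entails, with a toy one-counter instance as usage example).

    def backward_reachability(unsafe, spre, entails):
        """unsafe: initial assertions M1, ..., Mn (denoting the bad, upward-closed set).
        spre(a): finite list of assertions with [[spre(a)]] = Pre([[a]]).
        entails(a, b): True only if [[a]] is included in [[b]].
        Returns a finite set of assertions whose denotation contains Pre*([[unsafe]])."""
        visited = list(unsafe)
        frontier = list(unsafe)
        while frontier:
            new = []
            for a in frontier:
                for b in spre(a):
                    if not any(entails(b, v) for v in visited):
                        visited.append(b)
                        new.append(b)
            frontier = new               # EntS(I_{i+1}, I_i) holds when new == []
        return visited

    # Toy instance: assertion n denotes the upward-closed set {m | m >= n}, and the
    # system only increments a counter, so Pre({m | m >= n}) = {m | m >= n-1}.
    fixpoint = backward_reachability(
        unsafe=[10],
        spre=lambda n: [max(n - 1, 0)],
        entails=lambda a, b: a >= b)
    assert 0 in fixpoint                 # the initial state 0 can reach the unsafe set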
Symbolic Representation In order to find an adequate representation of infinite sets of
MSRN C configurations we can resort to the notion of constrained configuration introduced in (Delzanno 2001) for the language scheme MSR(C) defined for a generic
constraint system C. We can instantiate this notion with N C constraints as follows.
A constrained configuration over P is a formula
p1 (x11 , . . . , x1k1 ) | . . . | pn (xn1 , . . . , xnkn ) : ϕ
where p1 , . . . , pn ∈ P, xi1 , . . . , xiki ∈ V for any i : 1, . . . n and ϕ is an N C-constraint.
The denotation of a constrained configuration M ≐ (M : ϕ) is defined by taking the upward closure with respect to multiset inclusion of the set of ground instances, namely
  [[M ]] = {M′ | σ(M) ≼ M′, σ ∈ Sol(ϕ)}
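Membership of a ground configuration in such a denotation amounts to finding a matching of the atoms of M inside the ground multiset whose induced substitution solves ϕ; a brute-force sketch follows (our own Python, meant for small configurations only).

    from itertools import permutations

    def in_denotation(constrained, ground):
        """constrained = (atoms, phi): atoms is a list of (pred, vars...), phi a list of
        conjuncts (l, op, r) with op in {'=', '>'}, over variables (strings) and numbers.
        ground is a list of ground atoms (pred, values...).  Returns True iff some
        instance sigma(atoms) is a sub-multiset of ground with sigma in Sol(phi)."""
        atoms, phi = constrained

        def value(t, sigma):
            return sigma[t] if isinstance(t, str) else t

        def holds(phi, sigma):
            return all((value(l, sigma) == value(r, sigma)) if op == '=' else
                       (value(l, sigma) > value(r, sigma)) for l, op, r in phi)

        for chosen in permutations(range(len(ground)), len(atoms)):
            sigma, ok = {}, True
            for (pred, *vars_), g in zip(atoms, (ground[i] for i in chosen)):
                if g[0] != pred or len(g) != len(vars_) + 1:
                    ok = False
                    break
                for x, v in zip(vars_, g[1:]):
                    if sigma.setdefault(x, v) != v:
                        ok = False
                        break
                if not ok:
                    break
            if ok and holds(phi, sigma):
                return True
        return False

    # The first constrained configuration of S_U below, against illustrative ground data:
    SU1 = ([('stopA', 'i1', 'n1', 'm1'), ('stopB', 'i2', 'n2', 'm2')],
           [('n1', '=', 'n2'), ('m1', '>', 'm2')])
    assert in_denotation(SU1, [('stopB', 1, 2, 6), ('stopA', 4, 2, 7), ('waitA', 2, 7, 3)])
    assert not in_denotation(SU1, [('stopB', 1, 2, 6), ('stopA', 4, 2, 6)])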
This definition can be extended to sets of MSRN C constrained configurations with
disjoint variables (we use variable renaming to avoid variable name clashing) in the
natural way.
In our example the following set SU of MSRN C constrained configurations (with
distinct variables) can be used to finitely represent all possible violations U to the
considered safety property
SU = { stopA (i1 , n1 , m1 ) | stopB (i2 , n2 , m2 ) : n1 = n2 , m1 > m2
stopA (i1 , n1 , m1 ) | stopB (i2 , n2 , m2 ) : n1 = n2 , m2 > m1 }
Notice that we need two formulas to represent m1 ≠ m2 using a disjunction
of > constraints. The MSRN C configurations stopB (1, 2, 6) | stopA (4, 2, 5), and
stopB (1, 2, 6) | stopA (4, 2, 5) | waitA (2, 7, 3) are both contained in the denotation
of SU . Actually, we have that [[SU ]] = U . This symbolic representation allows us to
reason on infinite sets of MSRN C configurations, and thus on global configurations
of a TDL program, forgetting the actual number or threads of a given run.
To manipulate constrained configurations, we can instantiate to N C-constraints
the symbolic predecessor operator SP re defined for a generic constraint system in
(Delzanno 2005). Its definition is also given in Appendix A.
From the general properties proved in (Delzanno 2005), we have that when applied
to a finite set of MSRN C constrained configurations S, SP reN C returns a finite set
of constrained configuration such that [[SP reN C (S)]] = P re([[S]]), i.e., SP reN C (S)
is a symbolic representation of the immediate predecessors of the configurations in
the denotation (an upward closed set) of S. Similarly we can instantiate the generic
entailment operator defined in (Delzanno 2005) to MSRNC constrained configurations so as to obtain a relation Ent such that EntNC(N, M) implies [[N]] ⊆ [[M]].
Based on these properties, we have the following result.
Proposition 1
Let T be a TDL program with initial global configuration G0. Furthermore, let S be the corresponding MSRNC encoding and SU be the set of MSRNC constrained configurations denoting a given set of bad TDL configurations. Then, init ∉ SPre∗NC(SU) if and only if there is no finite run G0 . . . Gn and mappings h0, . . . , hn from the names occurring in the Gi to non-negative rationals such that init G•0(h0) . . . G•n(hn) is a run in S and G•n(hn) ∈ [[SU]].
Proof
Suppose init ∉ SPre∗NC(SU). Since [[SPreNC(S)]] = Pre([[S]]) for any S, it follows that there cannot exist runs init M0 . . . Mn in S such that Mn ∈ [[SU]]. The thesis then follows from Theorem 2.
As discussed in (Bozzano and Delzanno 2002), we have implemented our verification procedure based on MSR and linear constraints using a CLP system with linear arithmetic. By the translation presented in this paper, we can now reduce the verification of safety properties of multithreaded programs to a fixpoint computation built on constraint operations. As an example, we have applied our CLP prototype
to automatically verify the specification of Fig. 6. The unsafe states are those described in Section 4. Symbolic backward reachability terminates after 18 iterations
and returns a symbolic representation of the fixpoint with 2590 constrained configurations. The initial state init is not part of the resulting set. This proves our
original thread definitions correct with respect to the considered safety property.
4.1 An Interesting Class of TDL Programs
The proof of Theorem 1 shows that verification of safety properties is undecidable for TDL specifications in which threads have several local variables (they
are used to create linked lists). As mentioned in the introduction, we can apply the sufficient conditions for the termination of the procedure SBR given in
(Bozzano and Delzanno 2002; Delzanno 2005) to identify the following interesting
subclass of TDL programs.
Definition 4
A monadic TDL thread definition P = ⟨Q, s, V, R⟩ is such that V is at most a
singleton, and every message template in R has at most one variable.
A monadic thread definition can be encoded into the monadic fragment of MSRN C
studied in (Delzanno 2005). Monadic MSRN C specifications are defined over atomic
formulas of the form p or p(x), where p is a predicate symbol and x is a variable, and
on atomic constraints of the form x = y, and x > y. To encode a monadic TDL
thread definition into a monadic MSRN C specification, we first need the following
observation. Since in our encoding we only use the constant 0, we first notice that
we can restrict our attention to MSRN C specifications in which constraints have no
constants at all. Specifically, to encode the generation of fresh names we only have
to add an auxiliary atomic formula zero(z), and refer to it every time we need to
express the constant 0. As an example, we could write rules like
init −−→ fresh(x) | initM (y) | zero(z) : x > z, y = z
for initialization, and
create(x) | zero(z) −−→ initM (x′ ) | initA (id′ , n′ , m′ ) | zero(z) :
x′ = x, id′ = x, n′ = z, m′ = z, z ′ = z
for all assignments involving the constant 0. By using this trick and by following the
encoding of Section 3, the translation of a collection of monadic thread definitions
directly returns a monadic MSRN C specification. By exploiting this property, we
obtain the following result.
Theorem 3
The verification of safety properties whose violations can be represented via an
upward closed set U of global configurations is decidable for a collection T of
monadic TDL definitions.
Proof
Let S be the MSRN C encoding of T and SU be the set of constrained configurations
such that [[SU ]] = U . The proof is based on the following properties. First of all, the
MSRN C specification S is monadic. Furthermore, as shown in (Delzanno 2005),
the class of monadic MSRN C constrained configurations is closed under application
of the operator SP reN C . Finally, as shown in (Delzanno 2005), there exists an
entailment relation CEnt for monadic constrained configurations that ensures the
termination of the SBR procedure applied to a monadic MSRN C specification.
Thus, for the monadic MSRN C specification S, the chain defined as I0 = SU ,
Ii+1 = Ii ∪ SP re(Ii ) always reaches a point k ≥ 1 in which CEntS (Ik+1 , Ik ), i.e.
[[Ik ]] is a fixpoint for P re. Finally, we note that we can always check for membership
of init in the resulting set Ik .
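For intuition only, the chain computation used in this proof can be sketched as a generic fixpoint loop; spre, entails and member below are placeholders standing in for SP reN C , the entailment test and the membership check, not an actual implementation.

def backward_reachability(S_U, init, spre, entails, member):
    """Generic symbolic backward reachability loop (illustrative sketch).

    S_U     : finite set of constrained configurations denoting the violations
    init    : the initial configuration
    spre    : symbolic predecessor operator (plays the role of SPre_NC)
    entails : entails(new, old) is True when [[new]] is contained in [[old]]
    member  : member(init, m) is True when init belongs to [[m]]

    Termination is guaranteed only for fragments such as monadic specifications.
    Returns True when init cannot reach a violation (the system is safe).
    """
    reached = list(S_U)
    frontier = list(S_U)
    while frontier:
        new = [m for m in spre(frontier)
               if not any(entails(m, old) for old in reached)]
        if not new:                 # the chain I_{k+1} adds nothing: fixpoint reached
            break
        reached.extend(new)
        frontier = new
    return not any(member(init, m) for m in reached)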
As shown in (Schnoebelen 2002), the complexity of verification methods based on
symbolic backward reachability relying on the general results in (Abdulla and Nylén 2000;
Finkel and Schnoebelen 2001) is non primitive recursive.
5 Conclusions and Related Work
In this paper we have defined the theoretical grounds for the possible application
of constraint-based symbolic model checking for the automated analysis of abstract
models of multithreaded concurrent systems providing name generation, name mobility, and unbounded control. Our verification approach is based on an encoding
into a low level formalism based on the combination of multiset rewriting and
constraints that allows us to naturally implement name generation, value passing,
and dynamic creation of threads. Our verification method makes use of symbolic
representations of infinite sets of system states and of symbolic backward reachability. For this reason, it can be viewed as a conservative extension of traditional
finite-state model checking methods. The use of symbolic state analysis is strictly
related to the analysis methods based on abstract interpretation. A deeper study
of the connections with abstract interpretation is an interesting direction for future
research.
Related Work The high level syntax we used to present the abstract models of
multithreaded programs is an extension of the communicating finite state machines
used in protocol verification (Bochmann 1978), and used for representing abstractions of multithreaded software programs (Ball et al. 2001). In our setting we enrich
the formalism with local variables, name generation and mobility, and unbounded
control. Our verification approach is inspired by the recent work of Abdulla and
Jonsson. In (Abdulla and Jonsson 2003), Abdulla and Jonsson proposed an assertional language for Timed Networks in which they use dedicated data structures
to symbolically represent configurations parametric in the number of tokens and
in the age (a real number) associated to tokens. In (Abdulla and Nylén 2000), Abdulla and Nylén formulate a symbolic algorithm using existential zones to represent the state-space of Timed Petri Nets. Our approach generalizes the ideas of
(Abdulla and Jonsson 2003; Abdulla and Nylén 2000) to systems specified via multiset rewriting and with more general classes of constraints. In (Abdulla and Jonsson 2001),
the authors apply similar ideas to (unbounded) channel systems in which messages
can vary over an infinite name domain and can be stored in a finite (and fixed a
priori) number of data variables. However, they do not relate these results to multithreaded programs. Multiset rewriting over first order atomic formulas has been proposed for specifying security protocols by Cervesato et al. in (Cervesato et al. 1999).
The relationships between this framework and concurrent languages based on process algebra have been recently studied in (Bistarelli et al. 2005). Apart from approaches based on Petri Net-like models (as in (German and Sistla 1992; Ball et al. 2001)),
networks of finite-state processes can also be verified by means of automata theoretic techniques as in (Bouajjani et al. 2000). In this setting the set of possible
local states of individual processes are abstracted into a finite alphabet. Sets of
global states are represented then as regular languages, and transitions as relations
on languages. Differently from the automata theoretic approach, in our setting
we handle parameterized systems in which individual components have local variables that range over unbounded values. The use of constraints for the verification
of concurrent systems is related to previous works connecting Constraint Logic
Programming and verification, see e.g. (Delzanno and Podelski 1999). In this setting transition systems are encoded via CLP programs used to encode the global
state of a system and its updates. In the approach proposed in (Delzanno 2001;
Bozzano and Delzanno 2002), we refine this idea by using multiset rewriting and
constraints to locally specify updates to the global state. In (Delzanno 2001), we
defined the general framework of multiset rewriting with constraints and the corresponding symbolic analysis technique. The language proposed in (Delzanno 2001) is
given for a generic constraint system C (taking inspiration from CLP the language
is called M SR(C)). In (Bozzano and Delzanno 2002), we applied this formalism to
verify properties of mutual exclusion protocols (variations of the ticket algorithm)
for systems with an arbitrary number of processes. In the same paper we also formulated sufficient conditions for the termination of the backward analysis. The
present paper is the first attempt of relating the low level language proposed in
(Delzanno 2001) to a high level language with explicit management of names and
threads.
Acknowledgments The author would like to thank Ahmed Bouajjani, Andrew Gordon, Fabio Martinelli, Catuscia Palamidessi, Luca Paolini, and Sriram Rajamani
and the anonymous reviewers for several fruitful comments and suggestions.
References
Abdulla, P. A., Cerāns, K., Jonsson, B., and Tsay, Y.-K. 1996. General Decidability Theorems for Infinite-State Systems. In Proceedings 11th Annual International
Symposium on Logic in Computer Science (LICS’96). IEEE Computer Society Press,
New Brunswick, New Jersey, 313–321.
Abdulla, P. A. and Jonsson, B. 2001. Ensuring Completeness of Symbolic Verification
Methods for Infinite-State Systems. Theoretical Computer Science 256, 1-2, 145–167.
Abdulla, P. A. and Jonsson, B. 2003. Model checking of systems with many identical
timed processes. Theoretical Computer Science 290, 1, 241–264.
Abdulla, P. A. and Nylén, A. 2000. Better is Better than Well: On Efficient Verification
of Infinite-State Systems. In Proceedings 15th Annual International Symposium on
Logic in Computer Science (LICS’00). IEEE Computer Society Press, Santa Barbara,
California, 132–140.
Ball, T., Chaki, S., and Rajamani, S. K. 2001. Parameterized Verification of Multithreaded Software Libraries. In 7th International Conference on Tools and Algorithms
for Construction and Analysis of Systems (TACAS 2001), Genova, Italy, April 2-6,.
LNCS, vol. 2031. Springer-Verlag, 158–173.
Bistarelli, S., Cervesato, I., Lenzini, G., and Martinelli, F. 2005. Relating multiset
rewriting and process algebras for security protocol analysis. Journal of Computer
Security 13, 1, 3–47.
Bochmann, G. V. 1978. Finite state descriptions of communicating protocols. Computer
Networks 2, 46–57.
Bouajjani, A., Jonsson, B., Nilsson, M., and Touili, T. 2000. Regular Model Checking. In Proceedings 12th International Conference on Computer Aided Verification
(CAV’00), E. A. Emerson and A. P. Sistla, Eds. LNCS, vol. 1855. Springer-Verlag,
Chicago, Illinois, 403–418.
Bozzano, M. and Delzanno, G. 2002. Algorithmic verification of invalidation-based
protocols. In 14th International Conference on Computer Aided Verification, CAV ’02.
Lecture Notes in Computer Science, vol. 2404. Springer.
Cervesato, I., Durgin, N., Lincoln, P., Mitchell, J., and Scedrov, A. 1999. A
Meta-notation for Protocol Analysis. In 12th Computer Security Foundations Workshop
(CSFW’99). IEEE Computer Society Press, Mordano, Italy, 55–69.
Delzanno, G. 2001. An Assertional Language for Systems Parametric in Several Dimensions. In Verification of Parameterized Systems - VEPAS 2001. ENTCS, vol. 50.
Delzanno, G. 2005. Constraint Multiset Rewriting. Tech. Rep. TR-05-08, Dipartimento
Informatica e Scienze dell’Informazione, Università di Genova, Italia.
Delzanno, G. and Podelski, A. 1999. Model checking in CLP. In Proceedings 5th
International Conference on Tools and Algorithms for Construction and Analysis of
Systems (TACAS’99). Lecture Notes in Computer Science, vol. 1579. Springer-Verlag,
Amsterdam, The Netherlands, 223–239.
Finkel, A. and Schnoebelen, P. 2001. Well-Structured Transition Systems Everywhere!
Theoretical Computer Science 256, 1-2, 63–92.
German, S. M. and Sistla, A. P. 1992. Reasoning about Systems with Many Processes.
Journal of the ACM 39, 3, 675–735.
Gordon, A. D. 2001. Notes on nominal calculi for security and mobility. In Foundations
of Security Analysis and Design, Tutorial Lectures. Lecture Notes in Computer Science,
vol. 2171. Springer, 262–330.
Kesten, Y., Maler, O., Marcus, M., Pnueli, A., and Shahar, E. 2001. Symbolic
model checking with rich assertional languages. Theoretical Computer Science 256, 1,
93–112.
Schnoebelen, P. 2002. Verifying Lossy Channel Systems has Nonprimitive Recursive
Complexity. Information Processing Letters 83, 5, 251–261.
Appendix A Symbolic Predecessor Operator
Given a set of MSRN C configurations S, consider the MSRN C predecessor operator
P re(S) = {M|M ⇒MSR M′ , M′ ∈ S}. In our assertional language, we can define a
symbolic version SP reN C of P re defined on a set S containing MSRN C constrained
26
Giorgio Delzanno
multisets (with disjoint variables) as follows:
SP reN C (S) = { (A ⊕ N : ξ) |
(A −→ B : ψ) ∈ R, (M : ϕ) ∈ S,
M′ ⪯ M, B′ ⪯ B,
(M′ : ϕ) =θ (B′ : ψ), N = M ⊖ M′ ,
ξ ≡ (∃x1 . . . xk . θ)
and x1 , . . . , xk are all variables not in A ⊕ N }.
where =θ is a matching relation between constrained configurations that also takes
in consideration the constraint satisfaction, namely
(A1 | . . . | An : ϕ) =θ (B1 | . . . | Bm : ψ)
provided m = n and there exists a permutation j1 , . . . , jn of 1, . . . , n such that
the constraint θ = ϕ ∧ ψ ∧ (A1 = Bj1 ) ∧ . . . ∧ (An = Bjn ) is satisfiable; here p(x1 , . . . , xr ) =
q(y1 , . . . , ys ) is an abbreviation for the constraints x1 = y1 ∧ . . . ∧ xr = ys if p = q
and s = r, false otherwise.
As proved in (Delzanno 2005), the symbolic operator SP reN C returns a set of
MSRN C constrained configurations and it is correct and complete with respect to
P re, i.e., [[SP reN C (S)]] = P re([[S]]) for any S. It is important to note the difference
between SP reN C and a simple backward rewriting step.
For instance, given the constrained configuration M defined as p(x, z) | f (y) : z >
y and the rule s(u, m) | r(t, v) → p(u′ , m′ ) | r(t′ , v ′ ) : u = t, m′ = v, v ′ =
v, u′ = u, t′ = t (that simulates a rendez-vous (u, t are channels) and value passing
(m′ = v)), the application of SP re returns s(u, m) | r(t, v) | f (y) : u = t, v > y as
well as s(u, m) | r(t, v) | p(x, z) | f (y) : u = t, x > y (the common multiset here is
ǫ).
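As a final illustration, the matching relation =θ can be sketched in a few lines of Python; the satisfiability test is assumed to be supplied externally (e.g. by a solver for linear constraints over the rationals), and the data layout is ours, not the one used in the actual CLP prototype.

from itertools import permutations

def match_theta(conf1, conf2, satisfiable):
    """Sketch of the matching relation =theta between constrained configurations.

    conf1, conf2 : pairs (atoms, constraint), where atoms is a list of
                   (predicate, variable tuple) and constraint is a list of
                   atomic NC-constraints
    satisfiable  : external satisfiability check for NC-constraints (assumed given)

    Returns the combined constraint theta for the first permutation that works,
    or None when the two constrained configurations cannot be matched.
    """
    (atoms1, phi), (atoms2, psi) = conf1, conf2
    if len(atoms1) != len(atoms2):
        return None
    for perm in permutations(range(len(atoms2))):
        equalities, ok = [], True
        for (p, xs), j in zip(atoms1, perm):
            q, ys = atoms2[j]
            if p != q or len(xs) != len(ys):      # p(...) = q(...) reduces to false
                ok = False
                break
            equalities.extend((x, "=", y) for x, y in zip(xs, ys))
        if ok:
            theta = list(phi) + list(psi) + equalities
            if satisfiable(theta):
                return theta
    return None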
Proceedings of the First EELA-2 Conference
R. Mayo et al. (Eds.)
CIEMAT 2009
© 2009 The authors. All rights reserved
Challenges and characterization of a Biological system on Grid by
means of the PhyloGrid application
R. Isea1, E. Montes2, A.J. Rubio-Montero2 and R. Mayo2
1
Fundación IDEA, Hoyo de la Puerta, Valle de Sartenejal, Baruta 1080 (Venezuela)
risea@idea.gob.ve
2
CIEMAT, Avda. Complutense, 22 - 28040 Madrid (Spain)
{esther.montes,antonio.rubio,rafael.mayo}@ciemat.es
Abstract
In this work we present PhyloGrid, a new application under development that is able to
perform large-scale phylogenetic calculations such as those made to estimate the
phylogeny of all the sequences already stored in the public NCBI database. The subsequent
analysis has focused on checking the origin of the HIV-1 disease by means of a huge
number of sequences that sum up to 2900 taxa. Such a study was made possible by the
implementation of a workflow in Taverna.
1. Introduction
The determination of the evolutionary history of different species is nowadays one of the most
exciting challenges currently emerging in computational Biology [1]. In this
framework, Phylogeny is able to determine the relationships among species and, in this
way, to understand the interplay between hosts and viruses [2]. As an example, we can
mention the work published in 2007 on the origin of the oldest HIV/AIDS strains
(apart from the African one), which were found in Haiti [3]; it is therefore clear that the test of
vaccines for this disease must include both African and Haitian sequences. This work was
performed with the MrBayes tool, the Bayesian inference method used in PhyloGrid.
With respect to the computational aspect, the main characteristic of phylogenetic
executions is that they are extremely intensive, even more so when the number of sequences
increases, so it is crucial to develop efficient tools for obtaining optimized solutions. Thus,
1000 taxa (sequences) generate 2.5·10^1167 trees, among which the potential consensus
tree must be found. That is why it is easy to understand the computational challenge that this work
represents by studying the origin of HIV-1, since it deals with 2900 sequences.
Several techniques for estimating phylogenetic trees have been developed, such as the
distance-based methods (which generate a dendrogram, i.e. they do not show the evolution
but the changes among the sequences), the maximum parsimony technique, the maximum
likelihood method and the Bayesian analysis (mainly Markov Chain Monte Carlo inference).
The last one is based on probabilistic techniques for evaluating the topology of the phylogenetic
trees; it is also important to point out that maximum parsimony methods are not statistically
consistent, that is, the consensus tree cannot be found with the highest probability because of
the long branch attraction effect [4].
In this work, we have used the MrBayes software [5] to obtain the phylogenetic
trees. It is worth noting that MrBayes is relatively new in the construction of these
trees, as the reader can check in the pioneering work of Rannala and Yang in 1996 [6]. This
methodology relies on the Bayesian statistics previously proposed by Felsenstein in 1968, as
indicated by Huelsenbeck [7], a technique for maximizing the posterior probability. The reason
for using this kind of approach is that it offers higher computational speed, so all the
possible values for the generated trees can be taken into account without any of them
ruling out the others. With respect to the consensus tree found in our work, the values
obtained with this methodology, known as parametric bootstrap percentages or Bayesian
posterior probability values, give information about the reproducibility of the different
parts of the trees, but this kind of data does not represent a significant statistical value [8].
Thus, based on the MrBayes tool, the PhyloGrid application aims to offer the scientific
community an easy interface for calculating phylogenetic trees by means of a workflow in
Taverna [9]. In this way, the user is able to define the parameters for the Bayesian
calculation, to determine the model of evolution and to check the accuracy of the results in the
intermediate stages. In addition, no knowledge about the computational procedure is
required on his/her side. More details about this workflow can be found in [10].
As a consequence of this development, several biological results have already been achieved in the
Duffy domain of Malaria [10] and in the HPV classification [11]. In this work, we study
how the successful characterization of a biological system depends on the number of
sequences to be aligned. To do so, the HIV case has been selected, taking into account over a
thousand different sequences.
2. Tools
The structure of the implementation of the different tools present in the PhyloGrid
application can be seen in Figure 1. We here briefly explain them.
2.1. MrBayes
Bayesian inference is a powerful method, implemented in the program MrBayes
[12], for estimating phylogenetic trees based on the posterior probability distribution
of the trees. It is currently included in different scientific software suites for cluster computing
such as Rocks Cluster (Bio Roll), as well as in other Linux distributions where the user does not
need to compile it, for example Ubuntu, Gentoo, Mandriva and so on. This ease of deployment allows
the program to be ported to the Grid. On the other hand, its main drawback is that its
execution for millions of iterations (generations) requires a large amount of computational
time and memory. As an example, in the work of Cadotte et al. [13] the phylogeny of 143
angiosperms was determined by means of MrBayes. In that work, four independent Markov
Chains were run, each with 3 heated chains, for 100 million generations; the authors sampled
the runs every 10,000 generations and used a burn-in of 70 million steps to generate a
majority rule consensus tree that was used to calculate PDC. Such a methodology is basically
the same for other calculations, and the number of iterations will depend on the convergence
of the chains. In fact, this kind of scientific calculation is extremely complex from a
mathematical point of view, but also from the computational one, where the phylogenetic
estimation for a medium size dataset (50 sequences, 300 nucleotides per sequence)
typically requires a simulation of 250,000 generations, which normally runs for 50 hours on a
PC with an Intel Pentium4 2.8 GHz processor.
Figure 1. EELA Schema of PhyloGrid
Even more, the recent publication by Pérez et al. [14] shows that the study of 626 domains
of prokaryotic genomes, a major class of regulatory proteins in eukaryotes,
represents by itself a computational challenge, so the effort must be devoted to the analysis of the
results rather than to limiting the number of sequences because of the computational time
they imply.
2.2. Gridsphere
The GridSphere project aims to develop a standards-based portlet framework for building
web portals and a set of portlet web applications. It is based on Grid computing and integrates
into the GridSphere portal framework a collection of grid portlets provided as an add-on
module, i.e. a portfolio of tools that forms a cohesive "grid portal" end-user environment for
managing users and providing access to information services. Grid security based on public
key infrastructure (PKI) and emerging IETF and OASIS standards is also a well-defined
characteristic. In this way, it is important to mention that the users are able to access the Grid
by means of an implementation of the JSR 168 portlet API standard. As a consequence, they
can also interact with other standards such as those of GT4 and all of its capabilities.
An important key point of this application is that the researcher who uses it can do so with his/her
personal Grid user certificate ("myproxy" initialization), the execution of which is already
integrated in the GridSphere release. There is also the possibility of running the jobs with a
proxy directly managed by the Administrator, which would be renewed from time to time in
order to allow longer jobs to be completed. Thus, all the technical details are transparent to the
user, so the whole methodology is automated and the application can either be run directly by a
certified user or by letting GridSphere assign him/her a provisional proxy, registered in a map log.
Within the portal framework and in a future release, the possibility of doing a multiple
alignment of the sequences will be available to the user.
2.3. Taverna, a tool for the implementation of Workflows
The workflow is fully built in Taverna [9] and structured in different services that are
equivalent to the different sections run in a common MrBayes job; it performs a
complete calculation by building the input file together with a common
Grid jdl file. The front end for the final user is a web portal built upon the GridSphere
framework. This solution makes it very easy for the researcher to calculate molecular
phylogenies without typing any command at all.
The main benefit that this kind of workflow offers is that it integrates in a common
development several modules (tools) connected to each other. Moreover, the deployment
of the application with Taverna allows any researcher to download the complete workflow,
which eases its spread in the scientific community.
Figure 2. The Taverna Workflow used in this work
This is so because Taverna allows the user to construct complex analysis workflows from
components located on both remote and local machines, run these workflows on their own
data and visualise the results. To support this core functionality it also allows various
operations on the components themselves such as discovery and description and the selection
of personalised libraries of components previously discovered to be useful to a particular
application. Finally, we can indicate that Taverna is based on a workbench window where a
main section of an Advanced Model Explorer shows a tabular view of the entities within the
workflow. This includes all processors, workflow inputs and outputs, data connections and
coordination links. For using Taverna, it is necessary to create a web service for the required
application that will be integrated into the software. Later on, this web service will call
MrBayes inside the workflow.
2.4. The PhyloGrid Workflow: a short description
The structure of the Taverna workflow can be seen in Figure 2; it has been improved since
the first works performed with the PhyloGrid application [10]. Once the
user has logged into the PhyloGrid portal, registered a valid proxy and introduced all the data
needed for his/her job, a background process performs the execution of the PhyloGrid
workflow with the input provided by the user.
The workflow receives several inputs: the MrBayes parameters (labelled as
MrBayes_params in Fig. 2) that define the evolutionary model that will be used in the
analysis and the settings for Markov Chain Monte Carlo (MCMC) analysis; the parameters
needed to construct the appropriate Grid job file (JobParams); the input file with the
aligned sequences (SeqFile) to analyse; and, the format of the file (SeqFileFormat).
The first three processors of the workflow (JobParams2jdl, formatFileToNexus
and InputParams2runMB) perform some tasks prior to MrBayes execution. Thus, as
aforementioned, the processor named JobParams2jdl creates the appropriate file for the
job submission to Grid; the formatFileToNexus processor, if necessary, converts the file
with the aligned sequences to NEXUS format (not available in the first PhyloGrid release);
and the InputParams2runMB constructs an executable file with MrBayes commands from
the MrBayes_params.
The output of these processors is sent to the core processor of the workflow, i.e.
runMrBayes. This processor submits a MrBayes analysis job (see below the Methodology
section). A call to a nested workflow (MrBayes_poll_job) is included to check for job
status, and wait if job is not ended. When the MrBayes analysis job is finished, the
runMrBayes processor receives a notification from the MrBayes_poll_job processor.
As a final step, an additional processor is included to store the output of the MrBayes
analysis in a Storage Element. This step is needed due to the large size that MrBayes output
files can reach. Once the output files are stored in a SE, the workflow execution ends and the
user can retrieve the results from the PhyloGrid portal.
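The chain of processors just described can be summarised by the following Python sketch. All names in the services dictionary are placeholders standing in for the Taverna processors and Grid operations mentioned above; this is not the real PhyloGrid code or API.

def run_phylogrid_workflow(services, job_params, mrbayes_params, seq_file, seq_format):
    """Illustrative sketch of the PhyloGrid workflow logic (placeholder services)."""
    jdl = services["build_jdl"](job_params)                        # JobParams2jdl
    if seq_format.lower() != "nexus":                              # formatFileToNexus
        seq_file = services["to_nexus"](seq_file, seq_format)
    script = services["build_mrbayes_script"](mrbayes_params, seq_file)   # InputParams2runMB
    job_id = services["submit"](jdl, script, seq_file)             # runMrBayes: Grid job submission
    services["poll"](job_id)                                       # MrBayes_poll_job: wait until the job ends
    return services["store"](job_id)                               # copy the .p/.t/.mcmc output to a Storage Element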
2.5. GridWay: The next step
Even when PhyloGrid is producing reliable scientific results, the improvement of the tool
is a key factor for this work team. That is why we are planning to introduce GridWay [15], the
standard metascheduler in Globus Toolkit 4, into the whole structure of the application in
order to perform the submission of jobs in a more efficient way. Since GridWay enables
large-scale, reliable and efficient sharing of heterogeneous computing resources managed by
different LRM systems, it is able to determine in real time the nodes that best fit the
characteristics of the submitted jobs. In the case of MrBayes and its parallel mode, this
functionality is not limited to matching, in any resource, the adequate tags of the
information systems whatever mpich version the node has installed; it also selects the
best resources by means of several statistical methods based on the accounting of previous
executions. As a consequence, the calculation will be done in the lowest possible time.
At the same time, GridWay can recover the information in case of a cancelled job since it
creates checkpoints. It also automatically migrates failed jobs to other Grid resources. Thus,
it performs a reliable and unattended execution of jobs transparently to Taverna, which
simplifies the workflow development and increases the global performance of PhyloGrid.
The porting process of GridWay into PhyloGrid will be done by means of the plugins
already available for Globus in Taverna that allow the use of resources by a standardized way.
3. Methodology
Once the user has determined his/her work area and has connected to the Internet,
new PhyloGrid jobs can be submitted. To do so, he/she logs into the application and a new
window is then opened. On this page, the user is able to define the name of the job to be
submitted as well as its description, to upload the file with the alignment, to select the model
of evolution and the number of iterations with a fixed frequency and, finally, to run the
experiment.
The Taverna workflow builds the script that will rule the process with the input data once
they are set in the MrBayes format. In addition, the user must configure the following parameters: the
selected model for the evolution of the system (labelled as the lset part/command in the
example below); the number of simultaneous, completely independent, analyses (nruns set
to 1); the number of generations that will be performed in the calculation (ITER); and, finally,
the location of the file where the sequences are present (file-aligned.nex). To date,
this file must be in Nexus format, but in further releases the workflow will be able to translate
to a NEXUS format the input alignment if it is written in any other kind of format (Clustal,
Phylip, MSA and so on). Thus, our example would be written as:
begin mrbayes;
set autoclose=yes nowarn=yes;
execute /Path-to-file/file-aligned.nex;
lset nst=6 rates=gamma;
mcmcp nruns=1 ngen=ITER samplefreq=FREQ;
mcmc ;
mcmc ;
mcmc ;
end;
That is, the workflow must perform its load section by section. Since the first two
instructions are always the same for any kind of calculation, the workflow has to begin with
the third one (execute...) making a call for bringing the input data, i.e. the aligned
sequences to be studied, and following with the rest of commands.
PhyloGrid needs to know the evolution model, so the fourth instruction (lset...) is used.
Here, two options are possible: to allow the researcher to select a specific one or to allow the
workflow to do so. The fifth instruction (mcmcp nruns…) sets the parameters of the
MCMC analysis without actually starting the chain. In our example, it sets the number of
independent analyses (nruns) and the number of executions to be made (the ITER data, one
million of iterations for example) with a concrete frequency (FREQ). This command is
identical in all respects to the mcmc one, except that the analysis will not start after this
command is issued. The following instructions (mcmc) perform the MCMC analysis (three
consecutive ones in our example), which will be able also to be monitored. All these options
are able to be changed by the final user, who at the beginning of the process has simply
defined the evolutionary model that will be used in the analysis, the settings for MCMC
analysis and the name of the input file with the aligned sequences in NEXUS format. By
default, if the name of the output files is not provided, MrBayes sets the corresponding
extensions for them (files that will be generated adding the .p, .t and .mcmc extensions to
the name of the input file).
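For illustration, a helper in the spirit of the InputParams2runMB processor could assemble the command block of the example above from the user's choices; this is only a sketch written for this description, not the code shipped with PhyloGrid.

def build_mrbayes_block(nexus_path, lset="nst=6 rates=gamma",
                        nruns=1, ngen=1_000_000, samplefreq=1000, mcmc_calls=3):
    """Return a MrBayes command block following the example given in the text."""
    lines = [
        "begin mrbayes;",
        "set autoclose=yes nowarn=yes;",
        "execute %s;" % nexus_path,
        "lset %s;" % lset,
        "mcmcp nruns=%d ngen=%d samplefreq=%d;" % (nruns, ngen, samplefreq),
    ]
    lines += ["mcmc ;"] * mcmc_calls          # three consecutive MCMC analyses by default
    lines.append("end;")                      # mandatory closing command for MrBayes
    return "\n".join(lines)

print(build_mrbayes_block("/Path-to-file/file-aligned.nex"))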
The job must always have the final command end since it is mandatory for MrBayes.
Once the workflow has started, MrBayes automatically validates the number of iterations
while it begins to write the output file and sets the burn-in value for generating the
phylogenetic trees. When the whole calculation has ended and the packed output files are stored
in an appropriate Storage Element, they can be downloaded by the user for further analysis.
4. Results and conclusions
PhyloGrid has been used to calculate the Phylogeny of the HIV-1 sequences stored in the
NCBI database, the number of which is up to 2900. All these sequences were previously
aligned with MPIClustal. The calculation lasted for 576 hours on 20 cores inside 5 Intel Xeon
X5365 (3GHz, 4 MB L2 cache per 2 cores in the quad-core processor) with 2GB of memory
per core. The output file was 835 MB in size; the resulting tree can be seen in Figure 3.
Figure 3. Phylogenetic tree construction using PhyloGrid of HIV-1 sequences
To the knowledge of the authors, these are the first results obtained for HIV taking into
account its whole number of sequences. In the future, the results will be analysed
separately, but we can already point out the periodicity of the branches in the obtained trees, so a
further study on the obtained patterns will be done in order to simplify the analysis of the
results.
As can be seen from the parameters of the performed calculation (output file size or time
consumed), it is clear that the Grid is a useful paradigm that allows biologists to cope with
such complex research.
Acknowledgements
This work makes use of results produced by the EELA-2 project (http://www.eu-eela.eu),
co-funded by the European Commission within its Seventh Framework Programme and the
VII Cuba-Venezuela Scientific Collaboration Framework and the Annual Operative Plan of
Fundación IDEA.
References
[1] C.R. Woese. Procs. Nat. Academy of Sciences 95, 6854-6859 (1998)
[2] B. Korber et al. Science 288, 1789-1796 (2000)
[3] M. Thomas et al. PNAS 104 (47), 18566-18570 (2007)
[4] J. Bergsten. Cladistics 21 (2), 163-193 (2005)
[5] The MrBayes tool, available from http://www.mrbayes.net
[6] B. Rannala and Z. Yang. J. Mol. Evolut. 43, 304-311 (1996)
[7] J. P. Huelsenbeck et al. Syst. Biol. 51, 673-688 (2002)
[8] B.G. Hall and S.J. Salipante. PLoS Comput Biol. March. 3 (3), e51 (2007)
[9] T. Oinn et al. Concurrency and Computation: Practice and Experience 18, 1067-1100 (2005)
[10] E. Montes, R. Isea and R. Mayo. Iberian Grid Infrastructure Conf. Proc. 2, 378-387 (2008)
[11] R. Isea, E. Montes and R. Mayo. UK-AHM 2008, Edinburgh, United Kingdom (2008)
[12] F. Ronquist and J. P. Huelsenbeck. Bioinformatics 19, 1572-1574 (2003)
[13] M.W. Cadotte, B.J. Cardinale and T.H. Oakley. PNAS 105 (44), 17012-17017 (2008)
[14] J. Pérez et al. PNAS 105 (41), 15950-15955 (2008)
[15] E. Huedo, R.S. Montero and I.M. Llorente. J. Scalable Comp.–Prac. Exp. 6 (3), 1-8 (2005)
Taxi Dispatch with Real-Time Sensing Data in Metropolitan Areas:
A Receding Horizon Control Approach
arXiv:1603.04418v1 [] 14 Mar 2016
Fei Miao, Student Member, IEEE, Shuo Han, Member, IEEE, Shan Lin, John A. Stankovic, Fellow, IEEE and
ACM, Desheng Zhang, Sirajum Munir, Hua Huang, Tian He, and George J. Pappas Fellow, IEEE
Abstract—Traditional taxi systems in metropolitan areas often
suffer from inefficiencies due to uncoordinated actions as system
capacity and customer demand change. With the pervasive
deployment of networked sensors in modern vehicles, large
amounts of information regarding customer demand and system
status can be collected in real time. This information provides
opportunities to perform various types of control and coordination for large-scale intelligent transportation systems. In this
paper, we present a receding horizon control (RHC) framework
to dispatch taxis, which incorporates highly spatiotemporally
correlated demand/supply models and real-time GPS location
and occupancy information. The objectives include matching
spatiotemporal ratio between demand and supply for service
quality with minimum current and anticipated future taxi idle
driving distance. Extensive trace-driven analysis with a data set
containing taxi operational records in San Francisco shows that
our solution reduces the average total idle distance by 52%, and
reduces the supply demand ratio error across the city during one
experimental time slot by 45%. Moreover, our RHC framework
is compatible with a wide variety of predictive models and
optimization problem formulations. This compatibility property
allows us to solve robust optimization problems with corresponding demand uncertainty models that provide disruptive event
information.
Note to Practitioners—With the development of mobile sensor
and data processing technology, the competition between traditional “hailed on street” taxi service and “on demand” taxi
service has emerged in the US and elsewhere. In addition, large
amounts of data sets for taxi operational records provide potential
demand information that is valuable for better taxi dispatch
systems. Existing taxi dispatch approaches are usually greedy
algorithms that focus on reducing customer waiting time instead of
the total idle driving distance of taxis. Our research is motivated
by the increasing need for more efficient, real-time taxi dispatch
methods that utilize both historical records and real-time sensing
information to match the dynamic customer demand. This paper
suggests a new receding horizon control (RHC) framework
aiming to utilize the predicted demand information when making
taxi dispatch decisions, so that passengers at different areas of a
city are fairly served and the total idle distance of vacant taxis is reduced. We formulate a multi-objective optimization problem based on the dispatch requirements and practical constraints. The dispatch center updates GPS and occupancy status information of each taxi periodically and solves the computationally tractable optimization problem at each iteration step of the RHC framework. Experiments for a data set of taxi operational records in San Francisco show that the RHC framework in our work can redistribute taxi supply across the whole city while reducing the total idle driving distance of vacant taxis. In future research, we plan to design control algorithms for various types of demand models and experiment on data sets with a larger scale.
This work was supported by NSF grant numbers CNS-1239483, CNS-1239108, CNS-1239226, and CPS-1239152 with project title: CPS: Synergy: Collaborative Research: Multiple-Level Predictive Control of Mobile Cyber Physical Systems with Correlated Context. The preliminary conference version of this work can be found in [19].
F. Miao, S. Han and G. J. Pappas are with the Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA 19104, USA. Email: {miaofei, hanshuo, pappasg}@seas.upenn.edu.
S. Lin and H. Huang are with the Department of Electrical and Computer Engineering, Stony Brook University, Long Island, NY 11794, USA. Email: {shan.x.lin, hua.huang}@stonybrook.edu.
J.A. Stankovic and S. Munir are with the Department of Computer Science, University of Virginia, Charlottesville, VA 22904, USA. Email: stankovic@cs.virginia.edu, sm7hr@virginia.edu.
D. Zhang and T. He are with the Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN 55455, USA. Email: {tianhe, zhang}@cs.umn.edu
Index Terms—Intelligent Transportation System, Real-Time
Taxi Dispatch, Receding Horizon Control, Mobility Pattern
I. INTRODUCTION
More and more transportation systems are equipped with
various sensors and wireless radios to enable novel mobile
cyber-physical systems, such as intelligent highways, traffic
light control, supply chain management, and autonomous
fleets. The embedded sensing and control technologies in these
systems significantly improve their safety and efficiency over
traditional systems. In this paper, we focus on modern taxi
networks, where real-time occupancy status and the Global
Positioning System (GPS) location of each taxi are sensed
and collected to the dispatch center. Previous research has
shown that such data contains rich information about passenger
and taxi mobility patterns [31], [24], [23]. Moreover, recent
studies have shown that the passenger demand information
can be extracted and used to reduce passengers’ waiting time,
taxi cruising time, or future supply rebalancing cost to serve
requests [16], [25], [32].
Efficient coordination of taxi networks at a large scale is
a challenging task. Traditional taxi networks in metropolitan
areas largely rely on taxi drivers’ experience to look for
passengers on streets to maximize individual profit. However,
such self-interested, uncoordinated behaviors of drivers usually result in spatiotemporal mismatch between taxi supply
and passenger demand. In large taxi companies that provide
dispatch services, greedy algorithms based on locality are
widely employed, such as finding the nearest vacant taxi to
pick up a passenger [18], or first-come, first-served. Though
these algorithms are easy to implement and manage, they
prioritize immediate customer satisfaction over global resource
utilization and service fairness, as the cost of rebalancing the
entire taxi supply network for future demand is not considered.
Our goal is to utilize real-time information to optimize taxi
network dispatch for anticipated future idle driving cost and
global geographical service fairness, while fulfilling current,
local passenger demand. To accomplish such a goal, we
2
incorporate both system models learned from historical data
and real-time taxi data into a taxi network control framework.
To the best of our knowledge, this is the first work to consider
this problem. The preliminary version of this work can be
found in [19], and more details about problem formulation,
algorithm design, and numerical evaluations are included in
this manuscript.
In this paper, we design a computationally efficient moving
time horizon framework for taxi dispatch with large-scale realtime information of the taxi network. Our dispatch solutions in
this framework consider future costs of balancing the supply
demand ratio under realistic constraints. We take a receding
horizon control (RHC) approach to dynamically control taxis
in large-scale networks. Future demand is predicted based
on either historical taxi data sets [5] or streaming data [31].
The real-time GPS and occupancy information of taxis is also
collected to update supply and demand information for future
estimation. This design iteratively regulates the mobility of
idle taxis for high performance, demonstrating the capacity of
large-scale smart transportation management.
The contributions of this work are as follows,
• To the best of our knowledge, we are the first to design
an RHC framework for large-scale taxi dispatching. We
consider both current and future demand, saving costs under constraints by involving expected future idle driving
distance for re-balancing supply.
• The framework incorporates large-scale data in real-time
control. Sensing data is used to build predictive passenger
demand, taxi mobility models, and serve as real-time
feedback for RHC.
• Extensive trace driven analysis based on a San Francisco
taxi data set shows that our approach reduces average
total taxi network idle distance by 52% as in Figure 5, and
the error between local and global supply demand ratio
by 45% as in Figure 7, compared to the actual historical
taxi system performance.
• Spatiotemporal context information such as disruptive
passenger demand is formulated as uncertainty sets of
parameters into a robust dispatch problem. This allows
the RHC framework to provide more robust control
solutions under uncertain contexts as shown in Figure 8.
The error between local and global supply demand ratio
is reduced by 25% compared with the error of solutions
without considering demand uncertainties.
A. State-of-the-Art
There are three categories of research topics related to our
work: taxi dispatch systems, transportation system modeling,
and multi-agent coordination and control.
A number of recent works study approaches of taxi dispatching services or allocating transportation resources in
modern cities. Zhang and Pavone [32] designed an optimal
rebalancing method for autonomous vehicles, which considers
both global service fairness and future costs, but they did not
take idle driving distance and real-time GPS information into
consideration. Truck schedule methods to reduce costs of
idle cruising and missing tasks are designed from the temporal
perspective in work [30], but the real-time location information
is not utilized in the algorithm. Seow et al. focus on minimizing
total customer waiting time by concurrently dispatching multiple taxis and allowing taxis to exchange their booking assignments [27]. A shortest time path taxi dispatch system based
on real-time traffic conditions is proposed by Lee et al. [15].
In [26], [13], [25], the authors aim to maximize drivers' profits
by providing routing recommendations. These works give
valuable results, but they only consider the current passenger
requests and available taxis. Our design uses receding horizon
control to consider both current and predicted future requests.
Various mobility and vehicular network modeling techniques have been proposed for transportation systems [6],
[4]. Researchers have developed methods to predict travel
time [8], [11] and traveling speed [2], and to characterize
taxi performance features [16]. A network model that describes
the demand and supply equilibrium in a regulated
market is investigated in [29]. These works provide insights
into transportation system properties and suggest potential enhancements of transportation system performance. Our design
takes a step further to develop dispatch methods based on
available predictive data analysis.
There is a large number of works on mobility coordination
and control. Different from taxi services, these works usually
focus on region partition and coverage control so that coordinated agents can perform tasks in their specified regions [7],
[1], [12]. Aircraft dispatch systems and air traffic management
in the presence of uncertainties have been addressed [3],
[28], while the task models and design objectives are different from the taxi dispatching problem. Also, receding horizon
control (RHC) has been widely applied for process control,
task scheduling, and multi-agent transportation networks [20],
[14], [17]. These works provide solid results for related
mobility scheduling and control problems. However, none of
these works incorporates both the real-time sensing data and
historical mobility patterns into a receding horizon control
design, leveraging the taxi supply based on the spatiotemporal
dynamics of passenger demand.
The rest of the paper is organized as follows. The background of the taxi monitoring system and the control problems are
introduced in Section II. The taxi dispatch problem is formally
formulated in Section III, followed by the RHC framework
design in Section IV. A case study with a real taxi data set
from San Francisco to evaluate the RHC framework is shown
in Section V. Concluding remarks are made in Section VI.
II. TAXI DISPATCH PROBLEM: MOTIVATION AND SYSTEM
Taxi networks provide a primary transportation service
in modern cities. Most street taxis respond to passengers’
requests on their paths when passengers hail taxis on streets.
This service model has successfully served up to 25% public
passengers in metropolitan areas, such as San Francisco and
New York [10], [21]. However, passenger’s waiting time
varies at different regions of one city and taxi service is not
satisfying. In the recent years, ”on demand” transportation
service providers like Uber and Lyft aim to connect a passenger directly with a driver to minimize passenger’s waiting
time. This service model is still uncoordinated, since drivers
may have to drive idly without receiving any requests, and
randomly traverse to some streets in hoping to receive a
request nearby based on experience.
Our goal in this work is a centralized dispatch framework to
coordinate service behavior of large-scale taxi Cyber-Physical
system. The development of sensing, data storage and processing technologies provide both opportunities and challenges to
improve existing taxi service in metropolitan areas. Figure 1
shows a typical monitoring infrastructure, which consists
of a dispatch center and a large number of geographically
distributed sensing and communication components in each
taxi. The sensing components include a GPS unit and a trip
recorder, which provides real-time geographical coordinates
and occupancy status of every taxi to the dispatch center via
cellular radio. The dispatch center collects and stores data.
Then, the monitoring center runs the dispatch algorithm to
calculate a dispatch solution and sends decisions to taxi drivers
via cellular radio. Drivers are notified over the speaker or on
a special display.
Given both historical data and real-time taxi monitoring information described above, we are able to learn the spatiotemporal characteristics of passenger demand and taxi mobility
patterns. This paper focuses on the dispatch approach with
the model learned based on either historical data or streaming
data. One design requirement is balancing spatiotemporal taxi
supply across the whole city from the perspective of system
performance. It is worth noting that heading to the allocated
position is part of idle driving distance for a vacant taxi. Hence,
there exists a trade-off between the objective of matching supply
and demand and reducing total idle driving distance. We
aim at a scalable control framework that directs vacant taxis
towards demand, while balancing between minimum current
and anticipated future idle driving distances.
III. TAXI DISPATCH PROBLEM FORMULATION
Informally, the goal of our taxi dispatch system is to schedule vacant taxis towards predicted passengers both spatially
and temporally with minimum total idle mileage. We use
supply demand ratio of different regions within a time period
as a measure of service quality, since sending more taxis
for more requests is a natural system-level requirement to
make customers at different locations equally served. Similar
service metric of service node utilization rate has been applied
in resource allocation problems and in autonomous driving car
mobility control approaches [32].
Figure 1. A prototype of the taxi dispatch system (dispatch center with passenger distribution, taxi mobility and real-time control modules; taxis with GPS and occupancy sensing, connected via cellular radio for pickup and delivery dispatch).
The dispatch center receives real-time sensing streaming
data including each taxi’s GPS location and occupancy status
with a time stamp periodically. The real-time data stream is
then processed at the dispatch center to predict the spatiotemporal patterns of passenger demand. Based on the prediction,
the dispatch center calculates a dispatch solution in real-time,
and sends decisions to vacant taxis with dispatched regions to
go in order to match predicted passenger demands.
Besides balancing supply and demand, another consideration in taxi dispatch is minimizing the total idle cruising
distance of all taxis. A dispatch algorithm that introduces large
idle distance in the future after serving current demands can
decrease total profits of the taxi network in the long run. Since
it is difficult to perfectly predict the future of a large-scale
taxi service system in practice, we use a heuristic estimation
of idle driving distance to describe anticipated future cost associated with meeting customer requests. Considering control
objectives and computational efficiency, we choose a receding
horizon control approach. We assume that the optimization
time horizon is T , indexed by k = 1, . . . , T , given demand
prediction during time [1, T ].
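As a rough illustration of the receding horizon idea (not the exact formulation developed in the following sections, and with hypothetical helper functions), the control loop solves the dispatch problem over the next T slots, applies only the first-slot decisions, and then re-solves with fresh sensing data.

def receding_horizon_dispatch(predict_demand, solve_dispatch, apply_dispatch,
                              update_measurements, T, num_steps):
    """Generic receding horizon control loop (sketch; the helpers are assumed given).

    predict_demand(state, T)  -> demand forecast r^1, ..., r^T over the horizon
    solve_dispatch(state, r)  -> dispatch decisions X^1, ..., X^T over the horizon
    apply_dispatch(X_first)   -> send only the first-slot decisions to vacant taxis
    update_measurements()     -> fresh GPS and occupancy data for the next iteration
    """
    state = update_measurements()            # initial positions and occupancy status
    for _ in range(num_steps):
        r = predict_demand(state, T)         # spatiotemporal demand over [1, T]
        X = solve_dispatch(state, r)         # plan for the whole horizon
        apply_dispatch(X[0])                 # execute only the first slot
        state = update_measurements()        # real-time feedback before re-planning
    return state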
Notation
In this paper, we denote by 1_N a length-N column vector of all 1s, and by 1_N^T its transpose. Superscripts of variables as in X^k, X^{k+1} denote discrete time. We denote the j-th column of matrix X^k as X^k_{·j}.
A. Supply and demand in taxi dispatch
We assume that the entire area of a city is divided into n
regions such as administrative sub-districts. We also assume
that within a time slot k, the total number of passenger
requests at the j-th region is denoted by r_j^k. We also use
r^k ≜ [r_1^k, . . . , r_n^k] ∈ R^{1×n} to denote the vector of all
requests. These are the demands we want to meet during time
k = 1, . . . , T with minimal idle driving cost. Then the total
number of predicted requests in the entire city is denoted by
R^k = ∑_{j=1}^{n} r_j^k.
We assume that there are in total N vacant taxis in the entire
city that can be dispatched according to the real-time occupancy status of all taxis. The initial supply information consists
of the real-time GPS positions of all available taxis, denoted by
P^0 ∈ R^{N×2}, whose i-th row P_i^0 ∈ R^{1×2} corresponds to the
position of the i-th vacant taxi. While the dispatch algorithm
does not make decisions for occupied taxis, information of
occupied taxis affects the predicted total demand to be served
by vacant taxis, and the interaction between the information of
occupied taxis and our dispatch framework will be discussed
in Section IV.
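For concreteness, the demand and supply quantities introduced above can be held in plain numpy arrays; the numbers below are made up purely for illustration.

import numpy as np

n, N = 4, 6                                  # number of regions and of vacant taxis
r_k = np.array([[4, 1, 2, 9]])               # predicted requests per region, shape (1, n)
R_k = int(r_k.sum())                         # total predicted requests in the city
P_0 = np.array([[37.78, -122.41],            # real-time GPS positions of the N vacant taxis,
                [37.76, -122.43],            # shape (N, 2); values are fictitious
                [37.74, -122.42],
                [37.77, -122.45],
                [37.75, -122.40],
                [37.79, -122.44]])

print("R^k =", R_k, "   P^0 shape:", P_0.shape)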
The basic idea of the dispatch problem is illustrated in
Figure 2. Specifically, each region has a predicted number of
requests that need to be served by vacant taxis, as well as locations of all vacant taxis with IDs given by real-time sensing
Parameters:
N : the total number of vacant taxis
n : the number of regions
r^k ∈ R^{1×n} : the total number of predicted requests to be served by current vacant taxis at each region
C^k ∈ [0, 1]^{n×n} : matrix that describes taxi mobility patterns during one time slot
P^0 ∈ R^{N×2} : the initial positions of vacant taxis provided by GPS data
W_i ∈ R^{n×2} : preferred positions of the i-th taxi at n regions
α ∈ R^N : the upper bound of distance each taxi can drive for balancing the supply
β ∈ R_+ : the weight factor of the objective function
R^k ∈ R_+ : total number of predicted requests in the city
Variables:
X^k ∈ {0, 1}^{N×n} : the dispatch order matrix that represents the region each vacant taxi should go
P^k ∈ R^{N×2} : predicted positions of dispatched taxis at the end of time slot k
d_i^k ∈ R_+ : lower bound of idle driving distance of the i-th taxi for reaching the dispatched location
Table I. PARAMETERS AND VARIABLES OF THE RHC PROBLEM (8).
(a) A dispatch solution: taxi 2 goes to region 4, and taxi 4 goes to region 4. (b) A dispatch solution: taxi 2 goes to region 4, taxi 4 goes to region 3, and taxi 6 goes to region 4.
Figure 2. Unbalanced supply and demand at different regions before dispatching, and possible dispatch solutions. A circle represents a region, with a number of predicted requests ([·] inside the circle) and vacant taxis ({taxi IDs} outside the circle) before dispatching. A black dash edge means adjacent regions. A red edge with a taxi ID means sending the corresponding vacant taxi to the pointed region according to the predicted demand.
information. We would like to find a dispatch solution that
balances the supply demand ratio, while satisfying practical
constraints and not introducing large current and anticipated
future idle driving distance. Once dispatch decisions are sent to
vacant taxis, the dispatch center waits until the sensing information is updated in the next period before computing a new decision problem.
B. Optimal dispatch under operational constraints
The decision we want to make is the region each vacant taxi
should go to. With the above initial information about supply and
predicted demand, we define a binary matrix X^k ∈ {0, 1}^{N×n}
as the dispatch order matrix, where X_{ij}^k = 1 if and only if the
i-th taxi is sent to the j-th region during time k. Then
X^k 1_n = 1_N,   k = 1, . . . , T
must be satisfied, since every taxi should be dispatched to one
region during time k.
1) Two objectives: One design requirement is to fairly serve
the customers at different regions of the city: vacant taxis
should be allocated to each region according to predicted
demand across the entire city during each time slot. To
measure how supply matches demand at different regions, we
use the supply demand ratio as the metric. For region j, its supply
demand ratio for time slot k is defined as the total number of
vacant taxis divided by the total number of customer requests
during time slot k. When the supply demand ratio of every
region equals that of the whole city, we have the following
equations for j = 1, . . . , n, k = 1, . . . , T,
(1_N^T X_{·j}^k) / r_j^k = N / R^k   ⟺   (1_N^T X_{·j}^k) / N = r_j^k / R^k,        (1)
For convenience, we rewrite equation (1) as the following
equation about two row vectors
(1/N) 1_N^T X^k = (1/R^k) r^k,   k = 1, · · · , T.        (2)
However, equation (2) can be too strict if used as a constraint, and there may be no feasible solutions satisfying (2). This is because the decision variables X^k, k = 1, . . . , T are integer matrices, and taxis' driving speed is limited, so they may not be able to serve the requests from an arbitrary region during time slot k. Instead, we convert the constraint (2) into a soft constraint by introducing a supply-demand mismatch penalty function J_E for the requirement that the supply demand ratio should be balanced across the whole city, and one objective of the dispatch problem is to minimize the following function:
  J_E = Σ_{k=1}^{T} ‖ (1/N) 1_N^T X^k − (1/R^k) r^k ‖_1.    (3)
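For concreteness, the per-slot mismatch term in (3) can be evaluated directly from a (possibly relaxed) dispatch matrix. The following is a minimal numpy sketch with illustrative variable names; it is not part of the original implementation.

    import numpy as np

    def mismatch_cost(X, r, R_total):
        """Per-slot supply-demand mismatch term of (3).
        X: (N, n) dispatch matrix for one slot, r: (n,) predicted requests, R_total: total requests."""
        N = X.shape[0]
        supply_ratio = X.sum(axis=0) / N              # (1/N) 1_N^T X, per-region share of vacant taxis
        return np.abs(supply_ratio - r / R_total).sum()   # l1 norm in (3)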
The other objective is to reduce the total idle driving distance of all taxis. The process of traversing from the initial location to the dispatched region introduces an idle driving distance for a vacant taxi, and we aim to minimize the idle driving distance associated with meeting the dispatch solutions.
We begin by estimating the total idle driving distance associated with meeting the dispatch solutions. For the convenience of the routing process, the dispatch center is required to send the GPS location of the destination to vacant taxis. The decision variable X^k only provides the region each vacant taxi should go to, hence we map the region ID to a specific longitude and latitude position for every taxi. In practice, there are taxi stations on roads in metropolitan areas, and we assume that each taxi has a preferred station or is randomly assigned one at every region by the dispatch system. We denote the preferred geometric location matrix for the i-th taxi by W_i ∈ R^{n×2} and its j-th row by [W_i]_j, where each row of W_i is a two-dimensional geometric position on the map. The j-th row of W_i is the dispatch position sent to the i-th taxi when X^k_{ij} = 1.
Once X_i^k is chosen, the i-th taxi will go to the location X_i^k W_i, because the following equation holds:

  X_i^k W_i = Σ_{q ≠ j} X_{iq}^k [W_i]_q + X_{ij}^k [W_i]_j = [W_i]_j ∈ R^{1×2}.

With a binary vector X_i^k such that X_{ij}^k = 1 and X_{iq}^k = 0 for q ≠ j, we have X_{iq}^k [W_i]_q = [0 0] for q ≠ j. Since W_i does not need to change with time k, the preferred location of each taxi at every region in the city is stored as a matrix W in the dispatch center before the process of calculating dispatch solutions starts. When the information of vacant taxis is updated, the matrix W_i is also updated for every current vacant taxi i.
The initial position P_i^0 is provided by GPS data. Traversing from position P_i^0 to position X_i^1 W_i for predicted demand will introduce a cost, since the taxi drives towards the dispatched location without picking up a passenger. Hence, we consider minimizing the total idle driving distance introduced by dispatching taxis. Driving in a city is approximated as traveling on a grid road. To estimate the distance without knowing the exact path, we use the Manhattan norm (one norm) between two geometric positions, which is widely applied as a heuristic distance in path planning algorithms [22]. We define d_i^k ∈ R as the estimated idle driving distance of the i-th taxi for reaching the dispatched location X_i^k W_i. For k = 1, a lower bound of d_i^1 is given by

  d_i^1 ≥ ‖ P_i^0 − X_i^1 W_i ‖_1,   i = 1, . . . , N.    (4)
For k ≥ 2, to estimate the anticipated future idle driving distance induced by reaching the dispatched position X_i^k W_i at time k, we consider that the trip at the beginning of time slot k starts at the ending location of time slot k − 1. However, during time k − 1, taxis' mobility patterns are related to the pick-up and drop-off locations of passengers, which are not directly controlled by the dispatch center. So we assume the predicted ending position for a pick-up location X_i^{k−1} W_i during time k − 1 is related to the starting position X_i^{k−1} W_i as follows:

  P_i^{k−1} = f^k (X_i^{k−1} W_i),   f^k : R^{1×2} → R^{1×2},    (5)

where f^k is a convex function, called a mobility pattern function. To reach the dispatched location X_i^k W_i at the beginning of time k from position P_i^{k−1}, the approximated driving distance is

  d_i^k ≥ ‖ f^k (X_i^{k−1} W_i) − X_i^k W_i ‖_1.    (6)

The process to calculate a lower bound for d_i^k is illustrated in Figure 3. Within time slot k, the distance that every taxi can drive should be bounded by a constant vector α^k ∈ R^N:

  d^k ≤ α^k.

The total idle driving distance of all vacant taxis through time k = 1, . . . , T to satisfy service fairness is then denoted by

  J_D = Σ_{k=1}^{T} Σ_{i=1}^{N} d_i^k.    (7)

The dispatch problem is then to solve

  min_{X^k, d^k}  J = J_E + β J_D

under the constraints introduced above.
It is worth noting that the idle distance we estimate here is the region-level distance to pick up predicted passengers; the distance is nonzero only when a vacant taxi is dispatched to a different region. We also require that the estimated distance is a closed-form function of the locations of the original and dispatched regions, without knowledge about accurate traffic conditions or the exact time to reach the dispatched region. Hence, in this work we use the Manhattan norm to approximate the idle distance, since it is a closed-form function of the locations of the original and dispatched regions. When accessibility information of the road traffic network is considered in estimating street-level distances, for the case that a taxi may not drive on rectangular grids to pick up a passenger (for instance, when a U-turn is necessary), Lee et al. have proposed a shortest time path approach to pick up passengers in the shortest time [15].
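As an illustration of the lower bounds (4) and (6), the minimal numpy sketch below evaluates them for given dispatch rows; the mobility function f is a placeholder argument and all names are illustrative.

    import numpy as np

    def d1_lower_bound(P0_i, X1_i, W_i):
        """Eq. (4): Manhattan distance from taxi i's GPS position to its first dispatched station."""
        return np.abs(P0_i - X1_i @ W_i).sum()

    def dk_lower_bound(Xprev_i, Xcur_i, W_i, f):
        """Eq. (6): distance from the predicted end position f(X_i^{k-1} W_i) to the new dispatch position."""
        return np.abs(f(Xprev_i @ W_i) - Xcur_i @ W_i).sum()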
2) An RHC problem formulation: Since there exists a trade-off between the two objectives as discussed in Section II, we define a weight parameter β^k when summing up the costs related to both objectives. A list of parameters and variables is shown in Table I. When mixed integer programming is not efficient enough for a large-scale taxi network with respect to the problem size, one standard relaxation method is replacing the constraint X_{ij}^k ∈ {0, 1} by 0 ≤ X_{ij}^k ≤ 1.
To summarize, we formulate the following problem (8) based on the definitions of variables, parameters, constraints and objective function:

  min_{X^k, d^k}  Σ_{k=1}^{T} ( ‖ (1/N) 1_N^T X^k − (1/R^k) r^k ‖_1 + β^k Σ_{i=1}^{N} d_i^k )
  s.t.  d_i^1 ≥ ‖ P_i^0 − X_i^1 W_i ‖_1,   i = 1, . . . , N,
        d_i^k ≥ ‖ f^k (X_i^{k−1} W_i) − X_i^k W_i ‖_1,   i = 1, . . . , N,  k = 2, . . . , T,
        d^k ≤ α^k,   k = 1, 2, . . . , T,
        X^k 1_n = 1_N,   k = 1, 2, . . . , T,
        0 ≤ X_{ij}^k ≤ 1,   i ∈ {1, . . . , N}, j ∈ {1, . . . , n}.    (8)
After getting an optimal solution X^1 of problem (8), for the i-th taxi we may recover a binary solution through rounding, by setting the largest value of X_i^1 to 1 and the others to 0. This may violate the constraint on d_i^k, but since we set a conservative upper bound α^k and the rounding process returns a solution that satisfies d_i^k ≤ α^k + ε with bounded ε, the dispatch solution can still be executed during time slot k.
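To make the relaxed formulation concrete, the sketch below builds a single-step (T = 1) instance of problem (8) in Python with cvxpy on randomly generated toy data. It only illustrates the structure of (8); it is not the Matlab/CVX implementation used in Section V, and all data values are placeholders.

    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    N, n = 20, 4                                  # vacant taxis, regions (toy sizes)
    W = rng.random((N, n, 2))                     # preferred station positions W_i (n x 2 per taxi)
    P0 = rng.random((N, 2))                       # initial GPS positions of vacant taxis
    r = rng.integers(1, 10, n).astype(float)      # predicted requests per region
    R_total = r.sum()                             # total predicted requests R^k
    alpha, beta = 1.5, 0.1                        # idle-distance threshold and objective weight

    X = cp.Variable((N, n))                       # relaxed dispatch matrix, 0 <= X <= 1
    d = cp.Variable(N)                            # idle-distance lower bounds d_i^1

    mismatch = cp.norm(cp.sum(X, axis=0) / N - r / R_total, 1)       # J_E term, eq. (3)
    objective = cp.Minimize(mismatch + beta * cp.sum(d))
    constraints = [cp.sum(X, axis=1) == 1, X >= 0, X <= 1, d <= alpha]
    constraints += [d[i] >= cp.norm(P0[i] - X[i, :] @ W[i], 1) for i in range(N)]   # eq. (4)

    cp.Problem(objective, constraints).solve()

    # Recover a binary dispatch order by rounding each row to its largest entry.
    order = np.argmax(X.value, axis=1)
    X_bin = np.zeros((N, n))
    X_bin[np.arange(N), order] = 1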
Figure 3. Illustration of the process to estimate the idle driving distance to the dispatched location for the i-th taxi at k = 2: predict the ending location of k = 1, denoted by EP_i^1 in (9), then get the distance between locations EP_i^1 and X_i^2 W_i, denoted by d_i^2 in (10).
C. Discussions on the optimal dispatch formulation
1) Why use supply demand ratio as a metric: An intuitive measurement of the difference between the number of vacant taxis and the predicted total requests at all regions is

  e = Σ_{j=1}^{n} | s_j^k − r_j^k |,

where s_j^k is the total number of vacant taxis sent to the j-th region. However, when the total numbers of vacant taxis and requests in the city are different, this error e can be large even in the case that more vacant taxis are already allocated to busier regions and fewer vacant taxis are left to regions with less predicted demand. We therefore have no evidence of whether the dispatch center already fairly allocates supply according to varying demand given the value of the above error e.
2) The meaning of α^k: For instance, when the length of time slot k is one hour and α^k is the distance one taxi can traverse during 20 minutes of that hour, this constraint means that a dispatch solution requires a taxi to be able to arrive at the dispatched position within 20 minutes in order to fulfill predicted requests. With traffic condition
monitoring and traffic speed predicting method [2], αk can
be adjusted according to the travel time and travel speed
information available for the dispatch system. This constraint
also gives the dispatch system the freedom to consider the
fact that drivers may be reluctant to drive idly for a long
distance to serve potential customers, and a reasonable amount
of distance to go according to predicted demand is acceptable.
The threshold αk is related to the length of time slot. In
general, the longer a time slot is, the larger αk can be, because
of constraints like speed limit.
3) One example of mobility pattern function f^k: Suppose the taxi's mobility pattern during time slot k is described by a matrix C^k ∈ R^{n×n} satisfying Σ_{j=1}^{n} C^k_{ij} = 1, where C^k_{ij} is the probability that a vacant taxi starting within region i will end within region j during time k. From the queueing-theoretical perspective, such a probability transition matrix approximately describes passengers' mobility [32]. Given X_i^{k−1} and the mobility pattern matrix C^{k−1} ∈ [0, 1]^{n×n}, the probability of ending at each region for taxi i is

  p = Σ_{j=1}^{n} [C^{k−1}]_j I(X_{ij}^{k−1} = 1) = X_i^{k−1} C^{k−1} ∈ R^{1×n},

where the indicator function I(X_{ij}^{k−1} = 1) = 1 if and only if X_{ij}^{k−1} = 1, and [C^{k−1}]_j is the j-th row of C^{k−1}.
However, introducing a stochastic formula into the objective function would cause high computational complexity for a large-scale problem. Hence, instead of involving the probability of taking different paths in the objective function to formulate a stochastic optimization problem, we take the expected value of the position of the i-th taxi by the end of time k − 1:

  P_i^{k−1} = Σ_{j=1}^{n} p_j [W_i]_j = p W_i = X_i^{k−1} C^{k−1} W_i.    (9)

Here P_i^{k−1} ∈ R^{1×2} is a vector representing a predicted ending location of the i-th taxi on the map in each dimension. Then a lower bound on the idle driving distance for heading to X_i^k W_i to meet demand during k is given by the distance between P_i^{k−1} defined in (9) and X_i^k W_i:

  d_i^k ≥ ‖ (X_i^{k−1} C^{k−1} − X_i^k) W_i ‖_1.    (10)

In particular, when the transition probabilities C^k, k = 1, . . . , T are available, we can replace the constraint on d_i^k by d_i^k ≥ ‖ (X_i^{k−1} C^{k−1} − X_i^k) W_i ‖_1.
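A minimal numpy sketch of (9) and (10) for a single taxi follows; the argument names mirror the notation above and are illustrative only.

    import numpy as np

    def expected_end_position(X_prev_i, C_prev, W_i):
        """Eq. (9): expected end-of-slot position of taxi i, P_i^{k-1} = X_i^{k-1} C^{k-1} W_i."""
        p = X_prev_i @ C_prev          # probabilities of ending in each region (1 x n)
        return p @ W_i                 # expected two-dimensional position on the map

    def idle_distance_bound(X_prev_i, X_cur_i, C_prev, W_i):
        """Eq. (10): Manhattan lower bound on the idle distance of taxi i during slot k."""
        return np.abs((X_prev_i @ C_prev - X_cur_i) @ W_i).sum()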
It is worth noting that d_i^k is a function of X_i^{k−1} and X_i^k, and the estimation accuracy of the idle driving distance to dispatched positions X_i^k (k = 2, . . . , T) depends on the prediction accuracy of the mobility pattern during each time slot k, i.e., of P_i^{k−1}. The distance d^1 is calculated based on the real-time GPS location P^0 and the dispatch position X^1, and we use the estimations d^2, . . . , d^T to measure the anticipated future idle driving distances for meeting requests.
The error of the estimated C^k mainly affects the choice of the idle distance d^k when the true ending region of a taxi by the end of time slot k is not as predicted based on its starting region at time slot k. This is because C^k determines the constraint for d^k (k = 2, 3, . . . , T) as described by inequality (10). However, the system also collects real-time GPS positions to make a new decision based on the current true positions of all taxis, instead of only applying the predicted locations provided by the mobility pattern matrix. According to constraint (4), the distance d^1 is determined by GPS sensing data P^0 and the dispatch decision X^1, and only X^1 will be executed and sent to vacant taxis as the dispatch solution after the system solves problem (8). From this perspective, real-time GPS and occupancy status sensing data is significant for improving the system's performance when we utilize both historical data and real-time sensing data.
We also consider the effect of an inaccurate mobility pattern estimation C^k when choosing the prediction time horizon T: a large prediction horizon will induce accumulating prediction error in the matrix C^k, and the dispatch performance can even become worse. Evaluation results in Section V show how real-time sensing data helps to reduce the total idle driving distance and how the mobility pattern error under different prediction horizons T affects the system's performance.
4) Information on road congestion and passenger destination: When road congestion information is available to the dispatch system, the function in (5) can be generalized to include real-time congestion information. For instance, there is a high probability that a taxi stays within the same region during one time slot under congestion.
We do not assume that information of passenger’s destination is available to the system when making dispatch decisions,
since many passengers just hail a taxi on the street or at taxi
stations instead of reserving one in advance in metropolitan
areas. When the destination and travel time of all trips are
provided to the dispatch center via additional software or devices as prior knowledge, the trip information is incorporated
to the definition of ending position function (5) for problem
formulation (8). With more accurate trip information, we get a
better estimation of future idle driving distance when making
dispatch decisions for k = 1.
5) Customers’ satisfaction under balanced supply demand
ratio: The problem we consider in this work is reaching
fair service to increase global level of customers’ satisfaction,
7
which is indicated by a balanced supply demand ratio across
different regions of one city, instead of minimizing each individual customer’s waiting time when a request arrives at the
dispatch system. A similar service fairness metric has been applied in mobility-on-demand systems [32], and the supply demand ratio, considered as an indication of the utilization ratio of taxis, is also one regulating objective in the taxi service market [29].
For the situation that taxi i will not pick up passengers in its
original region but will be dispatched to another region, the
dispatch decision results from the fact that global customers’
satisfaction level will be increased. For instance, when the
original region of taxi i has a higher supply demand ratio
than the dispatched region, going to the dispatched region
will help to increase customer’s satisfaction in that region. By
sending taxi i to some other region, customers’ satisfaction in
the dispatched region can be increased, and the value of the
supply-demand cost-of-mismatch function JE can be reduced
without introducing much extra total idle driving distance JD .
D. Robust RHC formulations
Previous work has developed multiple ways to learn passenger demand and taxi mobility patterns [2], [8], [13], and
accuracy of the predicted model will affect the results of
dispatch solutions. We do not have perfect knowledge of
customer demand and taxi mobility models in practice, and
the actual spatial-temporal profile of passenger demands can
deviate from the predicted value due to random factors such
as disruptive events. Hence, we discuss formulations of robust
taxi dispatch problems based on (8).
Formulation (8) is one computationally tractable approach to describe the design requirements with a nominal model. One advantage of formulation (8) is its flexibility to adjust the constraints and objective function according to different conditions. With prior knowledge of scheduled events that disturb the demand or the mobility pattern of taxis, we are able to take the effects of the events into consideration by setting uncertainty parameters. For instance, suppose we have basic knowledge that the total demand in the city during time k is about R̃^k, but each region's demand r_j^k belongs to some uncertainty set, denoted by the entrywise inequality

  R_1^k ⪯ r^k ⪯ R_2^k,

given R_1^k ∈ R^n and R_2^k ∈ R^n. Then

  r_j^k ∈ [R_{1j}^k, R_{2j}^k],   j = 1, . . . , n    (11)

is an uncertainty parameter instead of a fixed value as in problem (8). Without additional knowledge about the change of total demand in the whole city, we denote by R̃^k the approximated total demand in the city under uncertain r_j^k for each region. By introducing the interval uncertainty (11) to r^k and fixing R̃^k in the denominator, we have the following robust optimization problem (12):

  min_{X^k, d^k}  Σ_{k=1}^{T} ( max_{R_1^k ⪯ r^k ⪯ R_2^k} ‖ (1/N) 1_N^T X^k − (1/R̃^k) r^k ‖_1 + β^k Σ_{i=1}^{N} d_i^k )
  s.t.  constraints of problem (8).    (12)

The robust optimization problem (12) is computationally tractable, and we have the following Theorem 1 to show an equivalent form that provides real-time dispatch decisions.
Theorem 1. The robust RHC problem (12) is equivalent to the following computationally efficient convex optimization problem:

  min_{X^k, d^k, t^k}  J_0 = Σ_{k=1}^{T} ( Σ_{j=1}^{n} t_j^k + β^k Σ_{i=1}^{N} d_i^k )
  s.t.  t_j^k ≥ (1/N) 1_N^T X_{·j}^k − R_{1j}^k / R̃^k,   t_j^k ≥ R_{1j}^k / R̃^k − (1/N) 1_N^T X_{·j}^k,
        t_j^k ≥ (1/N) 1_N^T X_{·j}^k − R_{2j}^k / R̃^k,   t_j^k ≥ R_{2j}^k / R̃^k − (1/N) 1_N^T X_{·j}^k,
        j = 1, . . . , n,   k = 1, . . . , T,
        constraints of problem (8).    (13)
Proof: In the objective function, only the first term is related to r^k. To avoid the maximization over an uncertain r^k, we first optimize this term over r^k for any fixed X^k. Let X_{·j}^k represent the j-th column of X^k; then

  max_{R_1^k ⪯ r^k ⪯ R_2^k} ‖ (1/N) 1_N^T X^k − (1/R̃^k) r^k ‖_1
    = max_{R_1^k ⪯ r^k ⪯ R_2^k} Σ_{j=1}^{n} | (1/N) 1_N^T X_{·j}^k − r_j^k / R̃^k |
    = Σ_{j=1}^{n} max_{r_j^k ∈ [R_{1j}^k, R_{2j}^k]} | (1/N) 1_N^T X_{·j}^k − r_j^k / R̃^k |.    (14)

The second equality holds because each r_j^k can be optimized separately in this equation. For R_{1j}^k ≤ r_j^k ≤ R_{2j}^k, we have

  R_{1j}^k / R̃^k ≤ r_j^k / R̃^k ≤ R_{2j}^k / R̃^k.

Then the problem is to maximize each absolute value in (14) for j = 1, . . . , n. Consider the following problem for x, a, b ∈ R to examine the character of a maximization problem over an absolute value:

  max_{x_0 ∈ [a, b]} |x − x_0| = { |x − a|,  if x > (a + b)/2;   |x − b|,  otherwise }
                              = max{ |x − a|, |x − b| } = max{ x − a, a − x, x − b, b − x }.
Similarly, for the problem related to r_j^k, we have

  max_{r_j^k ∈ [R_{1j}^k, R_{2j}^k]} | (1/N) 1_N^T X_{·j}^k − r_j^k / R̃^k |
    = max{ | (1/N) 1_N^T X_{·j}^k − R_{1j}^k / R̃^k |,  | (1/N) 1_N^T X_{·j}^k − R_{2j}^k / R̃^k | }.    (15)

Thus, with slack variables t^k ∈ R^n, we re-formulate the robust RHC problem as (13).
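The slack-variable reformulation of Theorem 1 maps directly to additional linear constraints. The following cvxpy sketch shows the robust part for a single time slot with placeholder demand bounds; the constraints of problem (8) would be appended as in the earlier sketch, and none of the values come from the paper's data.

    import numpy as np
    import cvxpy as cp

    N, n = 20, 4
    beta = 0.1
    R1 = np.array([2.0, 1.0, 4.0, 3.0])        # lower demand bounds R_1^k (placeholder values)
    R2 = np.array([4.0, 3.0, 8.0, 5.0])        # upper demand bounds R_2^k (placeholder values)
    R_tilde = R2.sum()                          # approximated total demand

    X = cp.Variable((N, n))
    d = cp.Variable(N, nonneg=True)
    t = cp.Variable(n)                          # slack variables t^k of Theorem 1

    s = cp.sum(X, axis=0) / N                   # per-region supply ratio (1/N) 1_N^T X_{.j}^k
    robust_constraints = [
        t >= s - R1 / R_tilde, t >= R1 / R_tilde - s,
        t >= s - R2 / R_tilde, t >= R2 / R_tilde - s,
    ]
    objective = cp.Minimize(cp.sum(t) + beta * cp.sum(d))
    # The constraints of problem (8) on X and d must be added before solving, as in (13).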
Taxi mobility patterns during disruptive events cannot, in general, be easily estimated; however, we may have knowledge such as a rough number of people taking part in a conference or competition, or additional customer reservations because of future events. The uncertain set of predicted demand r^k can be constructed purely from empirical data, such as a confidence region of the model, or from external information about disruptive events. By introducing extra knowledge besides the historical data model, the dispatch system responds to such disturbances with better solutions than those obtained without considering model uncertainties. A comparison of the results of (13) and problem (8) is shown in Section V.
IV. RHC FRAMEWORK DESIGN
Demand and taxi mobility patterns can be learned from
historical data, but they are not sufficient to calculate a
dispatch solution with dynamic positions of taxis when the
positions of the taxis change in real time. Hence, we design
an RHC framework to adjust dispatch solutions according to
real-time sensing information in conjunction with the learned
historical model. Real-time GPS and occupancy information
then act as feedback by providing the latest taxi locations, and
demand-predicting information for an online learning method
like [31]. Solving problem (8) or (12) is the key iteration step
of the RHC framework to provide dispatch solutions.
RHC works by solving the cost optimization over the
window [1, T ] at time k = 1. Though we get a sequence
of optimal solutions in T steps – X 1 , . . . , X T , we only
send dispatch decisions to vacant taxis according to X 1 . We
summarize the complete process of dispatching taxis with both
historical and real-time data as Algorithm 1, followed by a
detailed computational process of each iteration. The lengths of the time slots for learning historical models (t1) and updating real-time information (t2) do not need to be the same, hence in Algorithm 1 we consider a general case with different t1, t2.
Algorithm 1: RHC Algorithm for real-time taxi dispatch
Inputs: Time slot length t1 minutes; period of sending dispatch solutions t2 minutes (t1/t2 is an integer); a preferred station location table W for every taxi in the network; estimated request vectors r̂(h1), h1 = 1, . . . , 1440/t1; mobility patterns f̂(h2), h2 = 1, . . . , 1440/t2; prediction horizon T ≥ 1.
Initialization: The predicted requests vector r = r̂(h1) for the corresponding algorithm start time h1.
while time is the beginning of a t2 time slot do
  (1) Update sensor information for the initial position of vacant taxis P^0 and occupied taxis P^{00}, the total number of vacant taxis N, and the preferred dispatch location matrices W_i.
  if time is the beginning of an h1 time slot then
    Calculate r̂(h1) if the system applies an online training method; count the total number of occupied taxis n_o(h1); update the vector r.
  end
  (2) Update the demand vectors r^k based on the predicted demand r̂(h1) and the potential service ability of the n_o(h1) occupied taxis; update the mobility functions f^k(·) (for example, C^k); set up values for the idle driving distance threshold α^k and the objective weight β^k, k = 1, 2, . . . , T.
  (3) if there is knowledge of the demand r^k as an uncertainty set ahead of time then
    solve problem (13);
  else
    solve problem (8) for a certain demand model;
  end
  (4) Send dispatch orders to vacant taxis according to the optimal solution of matrix X^1. Let h2 = h2 + 1.
end
Return: Stored sensor data and dispatch solutions.
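Read as code, Algorithm 1 is a standard receding-horizon loop. The schematic Python skeleton below shows only its control flow; the step functions are passed in as placeholders and are not taken from any published implementation.

    def rhc_loop(update_sensing, update_demand, solve_dispatch, send_orders, T, num_slots):
        """Schematic receding-horizon loop of Algorithm 1: every t2 slot, re-sense,
        re-predict, re-solve the T-step dispatch problem, and execute only X^1."""
        for h2 in range(num_slots):
            sensing = update_sensing()                 # step (1): latest GPS and occupancy data
            demand = update_demand(sensing, h2)        # step (2): r^k, f^k, alpha^k, beta^k
            X = solve_dispatch(demand, sensing, T)     # step (3): problem (8) or (13)
            send_orders(X[0])                          # step (4): only the first-step solution is sent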
A. RHC Algorithm
Remark 1. Predicted values of the requests r̂(h1) depend on the modeling method of the dispatch system. For instance, if the system only applies a historical data set to learn each r̂(h1), then r̂(h1) is not updated with real-time sensing data. When the system applies an online training method such as [31] to update r̂(h1) for each h1, the values of r and r^k are calculated based on the real-time value of r̂(h1).
1) Update r: We receive sensing data of both occupied and vacant taxis in real time. The predicted requests that vacant taxis should serve during h1 are re-estimated at the beginning of each h1 slot. To approximate the service capability when an occupied taxi turns vacant during time h1, we define the total number of drop off events at different regions as a vector dp(h1) ∈ R^{n×1}. Given dp(h1), the probability that a drop off event happens at region j is

  pd_j(h1) = dp_j(h1) / (1_n^T dp(h1)),

where dp_j(h1) is the number of drop off events at region j during h1. We assume that an occupied taxi will pick up at least one passenger within the same region after turning vacant, and we approximate the future service ability of occupied taxis at region j during time h1 as

  ro_j(h1) = ⌈ pd_j(h1) × n_o(h1) ⌉,    (16)

where ⌈·⌉ is the ceiling function and n_o(h1) is the total number of currently occupied taxis at the beginning of time h1, provided by real-time sensor information of occupied taxis. Let

  r = r̂(h1) − ro(h1);

then the estimated service capability of occupied taxis is deducted from r for time slot h1.
2) Update r^k for problem (8): We assume that requests are uniformly distributed during h1. Then for each time k of length t2, if the corresponding physical time is still in the current h1 time slot, the request is estimated as an average part of r; else, it is estimated as an average part for time slot h1 + 1, h1 + 2, etc. The rule for choosing r^k is

  r^k = (1/H) r,   if (k + h2 − 1) t2 ≤ h1 t1,
  r^k = (1/H) r̂( ⌈ (k + h2 − 1) t2 / t1 ⌉ ),   otherwise,

where H = t1/t2.
3) Update r^k for robust dispatch (13): When there are disruptive events and the predicted request number is a range r̂(h1) ∈ [R̂1(h1), R̂2(h1)], we similarly set the uncertain set of r^k as the following interval for the computationally efficient form of the robust dispatch problem (13):

  r^k ∈ (1/H) [ R̂1(h1) − ro(h1),  R̂2(h1) − ro(h1) ],   if (k + h2 − 1) t2 ≤ h1 t1,
  r^k ∈ (1/H) [ R̂1( ⌈ (k + h2 − 1) t2 / t1 ⌉ ),  R̂2( ⌈ (k + h2 − 1) t2 / t1 ⌉ ) ],   otherwise.
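The updates in steps 1) and 2) reduce to a few lines of arithmetic. Here is a minimal numpy sketch with illustrative function names, following the notation above.

    import numpy as np

    def residual_demand(r_hat_h1, dp_h1, n_occupied):
        """Eq. (16) and the update r = r_hat(h1) - ro(h1)."""
        pd = dp_h1 / dp_h1.sum()                # drop-off probability per region
        ro = np.ceil(pd * n_occupied)           # service ability of occupied taxis
        return r_hat_h1 - ro

    def demand_for_slot(k, h1, h2, t1, t2, r, r_hat):
        """Rule for choosing r^k in problem (8); H = t1/t2, r_hat maps a t1-slot index to a vector."""
        H = t1 // t2
        if (k + h2 - 1) * t2 <= h1 * t1:
            return r / H
        return r_hat(int(np.ceil((k + h2 - 1) * t2 / t1))) / H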
4) Spatial and temporal granularity of Algorithm 1: The main computational cost of each iteration is in step (3), and t2 should be no shorter than the computational time of the optimization problem. We regulate parameters according to experimental results based on a given data set, since there are no closed-form equations to decide optimal design values of these parameters.
For the parameters we estimate from a given GPS dataset, the method we use in the experiments (but not restricted to it) will be discussed in Section V. The length of every time slot depends on the precision of the prediction, the desired control outcome, and the available computational resources. Modeling techniques are beyond the scope of this work. If we have perfect knowledge of customer demand and taxi mobility models, we can set a large time horizon to consider future costs in the long run. However, in practice we do not have perfect predictions, thus a large time horizon may amplify the prediction error over time. Likewise, if we choose a small look-ahead horizon, then the dispatch solution may not account for the idle distance cost of the future. Applying real-time information to adjust taxi supply is a remedy to this problem. With an approximated mobility pattern matrix C^k, the dispatch solution with large T can even be worse than with small T.
5) Selection process of parameters β^k, α^k, and T: The process of choosing parameter values for Algorithm 1 is a trial-and-adjustment process: we increase or decrease a parameter value and observe the changing trend of the dispatch cost, until a desired performance is reached or a turning point occurs after which the cost is not reduced any more. For instance, the objective weight β^k is related to the objective of the dispatch system, i.e., whether it is more important to reach fair service or to reduce the total idle distance. Some parameters are related to additional information available to the system besides real-time GPS and occupancy status data; for instance, α^k can be
adjusted according to the average speed of vehicles or traffic
conditions during time k as discussed in subsection III-C2.
Adjustments of parameters such as the objective weight β^k, the idle distance threshold α^k, and the prediction horizon T, when considering the effects of model accuracy and control objectives, are shown in Section V. A formal parameter selection method is a direction
for future work.
B. Multi-level dispatch framework
We do not restrict the data source of customer demand – it
can be either predicted results or customer reservation records.
Some companies provide taxi service according to the current
requests in the queue. For reservations received by the dispatch
center ahead of time, the RHC framework in Algorithm 1 is
compatible with this type of demand information: we then assign the values of the waiting request vector r^k and the taxi mobility function f^k in (8) according to the reservations, and the solution is subject to customer bookings.
For customer requests received in real-time, a multi-level
dispatch framework is available based on Algorithm 1. The
process is as follows: run Algorithm 1 with predicted demand
rk , and send dispatch solutions to vacant taxis. When vacant
taxis arrive at dispatched locations, the dispatch center updates
real-time demand such as bookings that recently appear in the
system, then the dispatch method based on current demand
such as the algorithm designed by Lee et al. [15] can be
applied. By this multi-level dispatch framework, vacant taxis
are pre-dispatched at a regional level according to predicted
demand using the RHC framework, and then the specific location to pick up a passenger who just booked a taxi is sent to a vacant taxi according to the shortest time path [15], with the
benefit of real-time traffic conditions.
V. CASE STUDY: METHOD EVALUATION
We conduct trace-driven simulations based on a San Francisco taxi data set [24] summarized in Table II. In this data
set, a record for each individual taxi includes four entries: the
geometric position (latitude and longitude), a binary indication
of whether the taxi is vacant or with passengers, and the Unix
epoch time. With these records, we learn the average requests
and mobility patterns of taxis, which serve as the input of
Algorithm 1. We note that our learning model is not restricted
to the data set used in this simulation, and other models [31] and data sets can also be incorporated.
We implement Algorithm 1 in Matlab using the optimization
toolbox called CVX [9]. We assume that all vacant taxis follow
the dispatch solution and go to suggested regions. Inside a
target region, we assume that a vacant taxi automatically
picks up the nearest request recorded by the trace data, and
we calculate the total idle mileage including distance across
regions and inside a region by simulation. The trace data
records the change of GPS locations of a taxi in a relatively
small time granularity such as every minute. Moreover, there
is no additional information about traffic conditions or the
exact path between two consecutive data points when they
were recorded. Hence, we consider the path of each taxi as
connected road segments determined by each two consecutive
points of the trace data we use in this section. Assume the
latitude and longitude values of two consecutive points in the
trace data are [lx1 , ly1 ] and [lx2 , ly2 ], for a short road segment,
the mileage distance between the two points (measured in one
minute) is approximated as being proportional to the value
(|lx1 − lx2 | + |ly1 − ly2 |). The geometric location of a taxi is
directly provided by GPS data. Hence, we calculate geographic
distance directly from the data first, and then convert the result
to mileage.
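For reference, the distance approximation used here is a one-line computation. The sketch below assumes the rough 0.1 degree ≈ 7 miles conversion quoted in Subsection V-D and is only illustrative.

    def manhattan_mileage(p1, p2, miles_per_degree=70.0):
        """Approximate mileage between two consecutive GPS points (lat, lon) via the Manhattan norm."""
        return (abs(p1[0] - p2[0]) + abs(p1[1] - p2[1])) * miles_per_degree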
Experimental figures shown in Subsections V-B and V-D are average results over all weekday data from the data set in Table II. Results shown in Subsection V-C are based on weekend data.
Taxicab GPS Data set
Number of Taxis: 500    Data Size: 90 MB    Collection Period: 05/17/08-06/10/08    Record Number: 1,000,000
Format: ID, Date and Time, Status, Direction, Speed, GPS Coordinates
Table II
San Francisco data in the evaluation section. The Giants baseball game in AT&T Park on May 31, 2008 is a disruptive event we use for evaluating the robust optimization formulation.
Figure 4. Requests at different hours during weekdays and weekends, for four selected regions (Regions 3, 4, 7, and 10): (a) requests during weekdays; (b) requests during weekends; (c) drop off events during weekdays; (d) drop off events during weekends. A given historical data set provides basic spatiotemporal information about customer demands, which we utilize with real-time data to dispatch taxis.
A. Predicted demand based on historical data
Requests during different times of a day in different regions vary a lot, and Figure 4(a) and Figure 4(b) compare bootstrap results of the requests r̂(h1) on weekdays and weekends for selected regions. This shows a motivation of this work: it is necessary to dispatch vacant taxis according to the demand from the perspective of system-level optimal performance. The detailed process is described as follows.
The original SF data set does not provide the number of pick up events, hence one intuitive way to determine a pick up (drop off) event is as follows. When the occupancy binary turns from 0 to 1 (1 to 0), it means a pick up (drop off) event has happened. Then we use the corresponding geographical position to determine which region this pick up (drop off) belongs to, and use the time stamp data to decide during which time slot this pick up (drop off) happened. After counting the total number of pick up and drop off events during each time slot at every region, we obtain a set of vectors r_{d'}(h_k), dp_{d'}(h_k), d' = 1, . . . , d, where d is the number of days of historical data. In the following analysis, each time slot h1 is the time slot of the demand prediction model chosen by the RHC framework. The SF data set includes about 24 days of data, so we use d = 18 for weekdays and d = 6 for weekends. The bootstrap process for a given number of bootstrap samples B = 1000 is given as follows.
(a) Randomly sample a size-d dataset with replacement from the data set {r_1(h1), . . . , r_d(h1)}, and calculate

  r̂^1(h1) = (1/d) Σ_{d'=1}^{d} r_{d'}(h1),   for h1 = 1, . . . , 1440/t1.

(b) Repeat step (a) for (B − 1) times, so that we have B estimates for each h1: r̂^b(h1), b = 1, . . . , B. The estimated mean value of r̂(h1) based on the B samples is r̂(h1) = (1/B) Σ_{l=1}^{B} r̂^l(h1).
(c) Calculate the sample variance of the B estimates of r(h1) for each h1:

  V̂_{r̂(h1)} = (1/B) Σ_{b=1}^{B} ( r̂^b(h1) − (1/B) Σ_{l=1}^{B} r̂^l(h1) )^2.    (17)
To estimate the demand range for the robust dispatch problem (13) according to historical data, we construct the uncertain set of the demand r^k based on the mean and variance of the bootstrapped demand model. For every region j, the boundary of the demand interval is defined as

  R̃_{1,j}(h1) = r̂_j(h1) − √( V̂_{r̂(h1),j} ),
  R̃_{2,j}(h1) = r̂_j(h1) + √( V̂_{r̂(h1),j} ),    (18)

where r̂_j(h1) is the average value from step (b) and V̂_{r̂(h1),j} is the variance of the estimated request number defined in (17). This one-standard-deviation range is used for evaluating the performance of the robust dispatch framework in this work. The estimated drop off event vectors dp(h1) are also calculated via a similar process. Figures 4(c) and 4(d) show bootstrap results of passenger drop off events dp(h1) on weekdays and weekends for selected regions.
For evaluation convenience, we partition the city map into regions with equal area. To get the longitude and latitude positions W_i ∈ R^{n×2} of vacant taxi i, we randomly pick a station position in the city drawn from the uniform distribution.
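The bootstrap procedure (a)-(c) and the interval (18) can be sketched in a few lines of numpy. The code below is an illustrative re-implementation on a (days x regions) request matrix, not the original evaluation script.

    import numpy as np

    def bootstrap_demand(daily_requests, B=1000, seed=0):
        """Bootstrap mean and variance of the request vector for one h1 slot (steps (a)-(c)),
        and the one-standard-deviation interval of (18). daily_requests: (d, n) array."""
        rng = np.random.default_rng(seed)
        d = daily_requests.shape[0]
        means = np.array([daily_requests[rng.integers(0, d, d)].mean(axis=0) for _ in range(B)])
        r_hat = means.mean(axis=0)                      # bootstrap mean, step (b)
        var = ((means - r_hat) ** 2).mean(axis=0)       # bootstrap variance, eq. (17)
        return r_hat, r_hat - np.sqrt(var), r_hat + np.sqrt(var)   # eq. (18) interval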
B. RHC with real-time sensor information
To estimate a mobility pattern matrix Ĉ(h2), we define a matrix T(h2), where T(h2)_{ij} is the total number of passenger trajectories starting at region i and ending at region j during time slot h2. We also apply the bootstrap process to get T̂(h2), and Ĉ(h2)_{ij} = T̂(h2)_{ij} / ( Σ_j T̂(h2)_{ij} ). Table III shows one row of Ĉ(h2) for 5:00-6:00 pm during weekdays, the
transition probability for different regions. The average cross
validation error for estimated mobility matrix Ĉ(h2 ) of time
slot h2 , h2 = 1, . . . , 24 during weekdays is 34.8%, which
is a reasonable error for estimating total idle distance in the
RHC framework when real-time GPS and occupancy status
data is available. With only estimated mobility pattern matrix
Ĉ(h2 ), the total idle distance is reduced by 17.6% compared
with the original record without a dispatch method, as shown
in Figure 5. We also tested the case when the dispatch
algorithm is provided with the true mobility pattern matrix
C k , which is impossible in practice, and the dispatch solution
reduces the total idle distance by 68% compared with the
original record. When we only have estimated mobility pattern
matrices instead of the true value to determine ending locations
and potential total idle distance for solving problem (8) or (13),
updating real-time sensing data compensates the mobility
pattern error and improves the performance of the dispatch
framework.
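Estimating the mobility pattern matrix amounts to row-normalizing trajectory counts. A minimal numpy sketch follows, assuming trips is a list of (start_region, end_region) index pairs observed in a slot; the function name is illustrative.

    import numpy as np

    def estimate_mobility_matrix(trips, n):
        """Row-normalized transition counts: C_hat[i, j] estimates P(end in j | start in i)."""
        counts = np.zeros((n, n))
        for i, j in trips:
            counts[i, j] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)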
Real-time GPS and occupancy data provide the latest position information of all vacant and occupied taxis. When dispatching available taxis with their true initial positions, the total idle distance is reduced by 52% compared with the result without dispatch methods, as shown in Figure 5, which is compatible with the performance when both the true mobility pattern matrix C^k and real-time sensing data are available. This is because the optimization problem (8) returns a solution with a smaller idle distance cost given the true initial position information P^0, instead of the estimated initial position provided only by the mobility pattern of the previous time slot in the RHC framework. Figure 5 also shows that even applying a dispatch solution calculated without real-time information is better than the non-dispatched result.
Figure 5. Average idle distance comparison for no dispatch, dispatch without
real-time data, and dispatch with real-time GPS and occupancy information.
Idle distance is reduced by 52% given real-time information, compared with
historical data without dispatch solutions.
Figure 6. Heat map of passenger pick-up events in San Francisco (SF) with a region partition method. Region 3 covers several busy areas, including Financial District, Chinatown, and Fisherman Wharf. Region 7 is mainly Mission District, Mission Bay, and the downtown area of SF.
Based on the partition of Figure 6, Figure 7 shows that the supply demand ratio at each region under the dispatch solution with real-time information is closest to the supply demand ratio of the whole city, and the error ‖ (1/N) 1_N^T X^k − (1/R^k) r^k ‖_1 is reduced by 45% compared with the no-dispatch results. Even the supply demand ratio error of dispatching without real-time information is better than that of no dispatch solutions. We still allocate vacant taxis to reach a nearly balanced supply demand ratio regardless of their initial positions, but the idle distance is increased without real-time data, as shown in Figure 5. Based on the costs of the two objectives shown in Figures 5 and 7, the total cost is higher without real-time information, mainly resulting from a higher idle distance.
C. Robust taxi dispatch
One disruptive event in the San Francisco data set is the Giants baseball game at AT&T Park, and we choose the historical record on May 31, 2008 as an example to evaluate the robust optimization formulation (12). The customer request number for areas near AT&T Park is affected, especially in Region 7 around the ending time of the game, where it increases by about 40% compared with the average value.
Figure 7.
Supply demand ratio of the whole city and each region for
different dispatch solutions. With real-time GPS and occupancy data, the
supply demand ratio of each region is closest to the global level. The supply
demand ratio mismatch error is reduced by 45% with real-time information,
compared with historical data without dispatch solutions.
Region ID             1       2       3       4       5       6       7       8
Transit probability   0.0032  0.0337  0.5144  0.0278  0.0132  0.0577  0.1966  0.0263
Region ID             9       10      11      12      13      14      15      16
Transit probability   0.0001  0.0050  0.0340  0.0136  0.0018  0.0082  0.0248  0.0396
Table III
An estimation of the state transition matrix by bootstrap: one row of the matrix Ĉ(h2).
β^k                 s/d error   idle distance   total cost
0                   0.645       3.056           0.645
2                   1.998       1.718           5.434
10                  2.049       1.096           13.009
without dispatch    2.664       4.519           47.854
Table IV
Average cost comparison for different values of β^k.
Figure 8 shows that with the dispatch solution of the robust optimization formulation (12), the supply demand mismatch error ‖ (1/N) 1_N^T X^k − (1/R^k) r^k ‖_1 is reduced by 25% compared with the solution of (8) and by 46% compared with historical data without dispatch. The performance of robust dispatch solutions does not vary significantly, and it depends on what type of predicted uncertain demand is available when selecting the formulation of the robust dispatch method. Even under solutions of (8), the total supply demand ratio error is reduced by 28% compared with historical data without dispatch. In general, we consider the factor of disruptive events in a robust RHC iteration, thus the system-level supply distribution responds to the demand better under disturbances.
D. Design parameters for Algorithm 1
Parameters like the length of time slots, the region division function, the objective weight parameter, and the prediction horizon T of Algorithm 1 affect the resulting dispatch cost in practice. Optimal values of parameters for each individual data set can be different. Given a data set, we change one parameter to a larger/smaller value while keeping the others the same, and compare results to choose a suboptimal value of the varying parameter. We compare the cost of choosing different parameters for Algorithm 1, and explain how to adjust parameters according to experimental results based on a given historical data set with both GPS and occupancy records.
How the objective weight β^k of (8) affects the cost: The cost function includes two parts, the idle geographical distance (mileage) cost and the supply demand ratio mismatch cost.
Figure 8. Comparison of the supply demand ratio at each region under disruptive events, for solutions of the robust optimization problem (12), problem (8) in the RHC framework, and historical data without dispatch. With the robust dispatch solutions of (12), the supply demand ratio mismatch error is reduced by 46%.
This trade-off between the two parts is addressed by β^k, and the weight of the idle distance increases with β^k. A larger β^k returns a solution with a smaller total idle geographical distance but a larger supply demand ratio error, i.e., a larger ‖ (1/N) 1_N^T X^k − (1/R^k) r^k ‖_1 value. The two components of the cost with different β^k under Algorithm 1, and for historical data without Algorithm 1, are shown in Table IV. The supply demand ratio mismatch is shown in the s/d error column.
We calculate the total cost as (s/d error + β^k × idle distance), using β^k = 10 for the without-dispatch column. Though with β^k = 0 we can dispatch vacant taxis to make the supply demand ratio of each region closest to that of the whole city, a larger idle geographical distance cost is introduced compared with β^k = 2 and β^k = 10. Comparing the idle distance for β^k = 0 with the data without dispatch, we get a 23% reduction; comparing the supply demand ratio error for β^k = 10 with the data without dispatch, we get a 32% reduction.
Average total idle distance during different hours of one
day for a larger β k is smaller, as shown in Figure 10. The
supply demand ratio error at different regions of one time slot
is increased with larger β k , as shown in Figure 9.
How to set the idle distance threshold α^k: Figure 11 compares the error between the local supply demand ratio and the global supply demand ratio. Since we directly use the geographical distance measured by the difference between the longitude and latitude values of two points (GPS locations) on the map, the threshold value α^k is small: a 0.1 difference in GPS data corresponds to almost 7 miles of distance on the ground. When α^k increases, the error between the local supply demand ratio and the global supply demand ratio decreases, since vacant taxis are more flexible to traverse further to meet demand.
Figure 9. Comparison of supply demand ratios at each region during one time slot for different β^k values. When β^k is smaller, we put less cost weight on the idle distance, so taxis are allowed to run longer to some regions, and the taxi supply matches the customer requests better.
Figure 10. Average total idle distance of taxis at different hours. When β^k is larger, the idle distance cost weighs more in the total cost, and the dispatch solution causes less total idle distance.
Figure 11. Comparison of supply demand ratios at each region during one
time slot for different αk . When αk is larger, vacant taxis can traverse longer
to dispatched locations and match with customer requests better.
Figure 13. Average total idle distance at different times of one day, compared for different prediction horizons. When T = 4, the idle distance is decreased at most hours compared with T = 2. For T = 8 the costs are the worst.
Figure 12. Average total idle distance of all taxis during one day, for different
region partitions. Idle distance decreases with a larger region-division number,
till the number increases to a certain level.
Figure 14. Comparison of average total idle distance for different t2, the length of the time slot for updating sensor information. With a smaller t2, the cost is smaller. But when t2 = 1 min is too small to complete calculating problem (8), the dispatch result is not guaranteed to be better than t2 = 10 mins.
Decide the length of time slot t2: For simplicity, we choose the time slot t1 as one hour to estimate requests. A smaller time slot t2 for updating GPS information can reduce the total idle geographical distance with real-time taxi positions. However, one iteration of Algorithm 1 is required to finish in less than t2 time; otherwise the dispatch order will not work for the latest positions of vacant taxis, and the cost will increase. Hence t2 is constrained by the problem size and computation capability. Figure 14 shows that a smaller t2 returns a smaller idle distance, but when t2 = 1 min Algorithm 1 cannot finish one iteration in one minute, and the idle distance is not reduced. The supply demand ratio at each region does not vary much for t2 = 30 mins, t2 = 10 mins, and t2 = 1 hour, as shown in Figure 15. Comparing the two parts of the cost, we find that t2 mainly affects the idle driving distance cost in practice.
How to choose the number of regions: In general, the dispatch solution of problem (8) for a vacant taxi is more accurate when the city is divided into regions of smaller area, since the dispatch is closer to road-segment level. However, we should consider other factors when deciding the number of regions, such as the process of predicting request vectors and mobility patterns based on historical data. The linear model we assume in this work is not a good prediction for future events when the region area is too small, since pick up and drop off events are more irregular in over-partitioned regions. While increasing n, we also increase the computational complexity. Note that the area of each region does not need to be the same as we divide the city in this experiment.
Figure 12 shows that the idle distance decreases with a larger region division number, but the decreasing rate slows down; when the region number increases to a certain level, the idle distance stays almost steady.
How to decide the prediction horizon T: In general, when T is larger, the total idle distance needed to reach a good supply demand ratio in future time slots should be smaller. However, when T is large enough, increasing T cannot reduce the total idle distance any more, since the model prediction error compensates the advantage of considering future costs. For T = 2 and T = 4, Figure 13 shows that the average total idle distance of vacant taxis at most hours of one day decreases as T increases. For T = 8 the driving distance is the largest. Theoretical reasons are discussed in Section IV.
Figure 15. Comparison of the supply demand ratio at different regions for
different lengths of time slot t2 . For t2 = 30, t2 = 10 mins and t2 = 1 hour,
results are similar. For t2 = 1 min, the supply demand ratio is even worse at
some regions, since the time slot is too short to complete one iteration.
VI. CONCLUSION
In this paper, we propose an RHC framework for the
taxi dispatch problem. This method utilizes both historical
and real-time GPS and occupancy data to build demand
models, and applies predicted models and sensing data to
decide dispatch locations for vacant taxis considering multiple
objectives. From a system-level perspective, we compute suboptimal dispatch solutions for reaching a globally balanced
supply demand ratio with least associated cruising distance
under practical constraints. Demand model uncertainties under
disruptive events are considered in the decision making process
via robust dispatch problem formulations. By applying the
RHC framework on a data set containing taxi operational
records in San Francisco, we show how to regulate parameters such as objective weight, idle distance threshold, and
prediction horizon in the framework design process according
to experiments. Evaluation results support system level performance improvements of our RHC framework. In the future, we
plan to develop a privacy-preserving control framework for the case when data of some taxis are not shared with the dispatch center.
REFERENCES
[1] I. Amundson and X. D. Koutsoukos. A survey on localization
for mobile wireless sensor networks. In Mobile Entity Localization and Tracking in GPS-less Environnments, pages 235–254.
Springer Berlin Heidelberg, 2009.
[2] M. Asif, J. Dauwels, C. Goh, A. Oran, E. Fathi, M. Xu,
M. Dhanya, N. Mitrovic, and P. Jaillet. Spatio and temporal patterns in large-scale traffic speed prediction. IEEE Transactions
on Intelligent Transportation Systems, (15):797–804, 2014.
[3] M. E. Berge and C. A. Hopperstad. Demand driven dispatch: A
method for dynamic aircraft capacity assignment, models and
algorithms. Operations Research, 41(1):153–168, 1993.
[4] S. Blandin, D. Work, P. Goatin, B. Piccoli, and A. Bayen.
A general phase transition model for vehicular traffic. SIAM
Journal on Applied Mathematics, 71(1):107–127, 2011.
[5] E. Bradley. Bootstrap methods: Another look at the jackknife.
The Annals of Statistics, 7(1-26), 1979.
[6] D. R. Choffnes and F. E. Bustamante. An integrated mobility
and traffic model for vehicular wireless networks. In Proceedings of the 2nd ACM International Workshop on Vehicular Ad
Hoc Networks, pages 69–78, 2005.
[7] J. Cortes, S. Martinez, T. Karatas, and F. Bullo. Coverage
control for mobile sensing networks. IEEE Transactions on
Robotics and Automation, 20(2):243–255, 2004.
[8] R. Ganti, M. Srivatsa, and T. Abdelzaher. On limits of travel
time predictions: Insights from a new york city case study. In
IEEE 34th ICDCS, pages 166–175, June 2014.
[9] M. Grant and S. Boyd. CVX: Matlab software for disciplined
convex programming, version 2.1. http://cvxr.com/cvx, 2014.
[10] Hara Associates Inc. and Corey, Canapary & Galanis. http://www.sfmta.com/sites/default/files/Draft%20SF%20UserSurvey%2055%20WEB%20version04042013.pdf.
[11] J. Herrera, D. Work, R. Herring, X. Ban, Q. Jacobson, and
A. Bayen. Evaluation of traffic data obtained via GPS-enabled
mobile phones: The Mobile Century field experiment. Transportation Research Part C, 18(4):568–583, 2010.
[12] A. Howard, M. Matarić, and G. Sukhatme. Mobile sensor
network deployment using potential fields: A distributed, scalable solution to the area coverage problem. In Distributed
Autonomous Robotic Systems 5. Springer Japan, 2002.
[13] Y. Huang and J. W. Powell. Detecting regions of disequilibrium
in taxi services under uncertainty. In Proceedings of the 20th
International Conference on Advances in Geographic Information Systems, number 10, pages 139–148, 2012.
[14] K.-D. Kim. Collision free autonomous ground traffic: A
model predictive control approach. In Proceedings of the
4th ACM/IEEE International Conference on Cyber-Physical
Systems, number 10, pages 51–60, 2013.
[15] D.-H. Lee, R. Cheu, and S. Teo. Taxi dispatch system based on
current demands and real-time traffic conditions. Transportation
Research Record:Journal of the Transportation Research Board,
8(1882):193–200, 2004.
[16] B. Li, D. Zhang, L. Sun, C. Chen, S. Li, G. Qi, and Q. Yang.
Hunting or waiting? discovering passenger-finding strategies
from a large-scale real-world taxi dataset. In IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops), pages 63–38, 2011.
[17] W. Li and C. Cassandras. A cooperative receding horizon
controller for multivehicle uncertain environments. IEEE Transactions on Automatic Control, 51(2):242–257, 2006.
[18] Z. Liao. Real-time taxi dispatching using global positioning
systems. Communications of the ACM, 46(5):81–83, May 2003.
[19] F. Miao, S. Lin, S. Munir, J. A. Stankovic, H. Huang, D. Zhang,
T. He, and G. J. Pappas. Taxi dispatch with real-time sensing
data in metropolitan areas: A receding horizon control approach.
In Proceedings of the 6th ACM/IEEE International Conference
on Cyber-Physical Systems, ICCPS ’15, pages 100–109, 2015.
[20] R. Negenborn, B. D. Schutter, and J. Hellendoorn. Multi-agent
model predictive control for transportation networks: Serial
versus parallel schemes. Engineering Applications of Artificial
Intelligence, 21(3):353 – 366, 2008.
[21] New York City Taxi and Limousine Commission.
http://www.nyc.gov/html/tlc/downloads/pdf/tot survey results
02 10 11.pdf.
[22] C. Petres, Y. Pailhas, P. Patron, Y. Petillot, J. Evans, and
D. Lane. Path planning for autonomous underwater vehicles.
IEEE Transactions on Robotics, 23(2):331–341, 2007.
[23] S. Phithakkitnukoon, M. Veloso, C. Bento, A. Biderman, and
C. Ratti. Taxi-aware map: Identifying and predicting vacant
taxis in the city. In the First International Joint Conference on
Ambient Intelligence, pages 86–95, 2010.
[24] M. Piorkowski, N. Sarafijanovic-Djukic, and M. Grossglauser.
A parsimonious model of mobile partitioned networks with
clustering. In First International Communication Systems and
Networks and Workshops (COMSNETS), pages 1–10, 2009.
[25] J. W. Powell, Y. Huang, F. Bastani, and M. Ji. Towards reducing taxicab cruising time using spatio-temporal profitability
maps. In Proceedings of the 12th International Conference on
Advances in Spatial and Temporal Databases, number 19, pages
242–260, 2011.
[26] M. Qu, H. Zhu, J. Liu, G. Liu, and H. Xiong. A cost-effective
recommender system for taxi drivers. In 20th ACM SIGKDD
International Conference on KDD, pages 45–54, 2014.
[27] K.-T. Seow, N. H. Dang, and D.-H. Lee. A collaborative multiagent taxi-dispatch system. IEEE Transactions on Automation
Science and Engineering, 7(3):607–616, 2010.
[28] C. Tomlin, G. Pappas, and S. Sastry. Conflict resolution for air
traffic management: a study in multiagent hybrid systems. IEEE
Transactions on Automatic Control, 43(4):509–521, 1998.
[29] H. Yang, S. Wong, and K. Wong. Demand supply equilibrium
of taxi services in a network under competition and regulation.
Transportation Research Part B: Methodological, 36:799–819,
2002.
[30] J. Yang, P. Jaillet, and H. Mahmassani. Real-time multivehicle
truckload pickup and delivery problems. Transportation Science, 38(2):135–148, 2004.
[31] D. Zhang, T. He, S. Lin, S. Munir, and J. A. Stankovic. Dmodel:
Online taxicab demand model from big sensor data in. In IEEE
International Congress on Big Data, pages 152–159, 2014.
[32] R. Zhang and M. Pavone. Control of robotic mobility-ondemand systems: a queueing-theoretical perspective. In Proceedings of Robotics: Science and Systems, July 2014.
Fei Miao (S’13) received the B.Sc. degree in
Automation from Shanghai Jiao Tong University,
Shanghai, China in 2010. Currently, she is working toward the Ph.D. degree in the Department of
Electrical and Systems Engineering at University of
Pennsylvania. Her research interests include datadriven real-time control frameworks of large-scale
interconnected cyber-physical systems under model
uncertainties, and resilient control frameworks to
address security issues of cyber-physical systems.
She was a Best Paper Award Finalist at the 6th
ACM/IEEE International Conference on Cyber-Physical Systems in 2015.
Shuo Han (S’08-M’14) received the B.E. and M.E.
degrees in Electronic Engineering from Tsinghua
University, Beijing, China in 2003 and 2006, and
the Ph.D. degree in Electrical Engineering from the
California Institute of Technology, Pasadena, USA in
2013. He is currently a postdoctoral researcher in the
Department of Electrical and Systems Engineering at
the University of Pennsylvania. His current research
interests include control theory, convex optimization,
applied probability, and their applications in large-scale interconnected systems.
Dr. Shan Lin is an assistant professor of the
Electrical and Computer Engineering Department at
Stony Brook University. He received his PhD in
computer science at the University of Virginia in
2010. His PhD dissertation is on Taming Networking
Challenges with Feedback Control. His research is
in the area of networked systems, with an emphasis
on feedback control based design for cyber-physical
systems and sensor systems. He works on wireless
network protocols, interoperable medical devices,
smart transportation systems, and intelligent sensing
systems.
Professor John A. Stankovic is the BP America
Professor in the Computer Science Department at
the University of Virginia. He is a Fellow of both
the IEEE and the ACM. He has been awarded an
Honorary Doctorate from the University of York.
He won the IEEE Real-Time Systems Technical
Committee’s Award for Outstanding Technical Contributions and Leadership. He also won the IEEE
Technical Committee on Distributed Processing’s
Distinguished Achievement Award. He has seven
Best Paper awards, including one for ACM SenSys
2006. Stankovic has an h-index of 105 and over 40,000 citations. Prof.
Stankovic received his PhD from Brown University.
Desheng Zhang is a Research Associate at Department of Computer Science and Engineering of the
University of Minnesota. Previously, he was awarded
his Ph.D in Computer Science from University of
Minnesota. His research includes big data analytics, mobile cyber physical systems, wireless sensor
networks, and intelligent transportation systems. His
research results are uniquely built upon large-scale
urban data from cross-domain urban systems, including cellphone, smartcard, taxi, bus, truck, subway,
bike, personal vehicle, electric vehicle, and road
networks. Desheng designs and implements large-scale data-driven models
and real-world services to address urban sustainability challenges.
Sirajum Munir received his PhD in Computer
Science from the University of Virginia in 2014.
He is currently working at Bosch Research and
Technology Center as a Research Scientist. His
research interest lies in the areas of cyber-physical
systems, wireless sensor and actuator networks, and
ubiquitous computing. He has published papers in
major conferences in these areas, two of which
were nominated for best paper awards at ACM/IEEE
ICCPS.
Hua Huang received the BE degree from Huazhong
University of Science and Technology in 2012, MS
degree in Temple University in 2014. He is currently
working towards the PhD degree in the Department
of Electrical and Computer Engineering in the Stony
Brook University. His research interests include activity recognition in wearable devices and smart
building, device-free indoor localization, deployment
and scheduling in wireless sensor networks.
Dr. Tian He is currently an associate professor in the
Department of Computer Science and Engineering
at the University of Minnesota-Twin City. Dr. He
is the author and co-author of over 200 papers in
premier network journals and conferences with over
17,000 citations (H-Index 52). Dr. He is the recipient
of the NSF CAREER Award, George W. Taylor
Distinguished Research Award and McKnight Land-Grant Professorship, and many best paper awards
in networking. His research includes wireless sensor
networks, cyber-physical systems, intelligent transportation systems, real-time embedded systems and distributed systems.
George J. Pappas (S’90-M’91-SM’04-F’09) received the Ph.D. degree in electrical engineering
and computer sciences from the University of California, Berkeley, CA, USA, in 1998. He is currently the Joseph Moore Professor and Chair of the
Department of Electrical and Systems Engineering,
University of Pennsylvania, Philadelphia, PA, USA.
He also holds a secondary appointment with the
Department of Computer and Information Sciences
and the Department of Mechanical Engineering and
Applied Mechanics. He is a Member of the GRASP
Lab and the PRECISE Center. He had previously served as the Deputy
Dean for Research with the School of Engineering and Applied Science. His
research interests include control theory and, in particular, hybrid systems,
embedded systems, cyber-physical systems, and hierarchical and distributed
control systems, with applications to unmanned aerial vehicles, distributed
robotics, green buildings, and bimolecular networks. Dr. Pappas has received
various awards, such as the Antonio Ruberti Young Researcher Prize, the
George S. Axelby Award, the Hugo Schuck Best Paper Award, the George
H. Heilmeier Award, the National Science Foundation PECASE award and
numerous best student papers awards at ACC, CDC, and ICCPS.
| 3 |
arXiv:1001.0641v1 [cs.LO] 5 Jan 2010
Least and Greatest Fixpoints
in Game Semantics
Pierre Clairambault
pierre.clairambault@pps.jussieu.fr
PPS — Université Paris 7
Abstract
We show how solutions to many recursive arena equations can be computed in a natural way by allowing loops in arenas. We then equip arenas
with winning functions and total winning strategies. We present two natural winning conditions compatible with the loop construction which respectively provide initial algebras and terminal coalgebras for a large class
of continuous functors. Finally, we introduce an intuitionistic sequent calculus, extended with syntactic constructions for least and greatest fixed
points, and prove it has a sound and (in a certain weak sense) complete
interpretation in our game model.
1
Introduction
The idea to model logic by game-theoretic tools can be traced back to the work
of Lorenzen [21]. The idea is to interpret a formula by a game between two
players O and P, O trying to refute the formula and P trying to prove it. The
formula A is then valid if P has a winning strategy on the interpretation of A.
Later, Joyal remarked [18] that it is possible to compose strategies in Conway
games [8] in an associative way, thus giving rise to the first category of games
and strategies. This, along with parallel developments in Linear Logic and
Geometry of Interaction, led to the more recent construction of compositional
game models for a large variety of logics [3, 23, 9] and programming languages
[17, 4, 22, 5].
We aim here to use these tools to model an intuitionistic logic with induction
and coinduction. Inductive/coinductive definitions in syntax have been defined
and studied in a large variety of settings, such as linear logic [6], λ-calculus [1] or
Martin-Löf’s type theory [10]. Motivations are multiple, but generally amount
to increasing the expressive power of a language without paying the price of
exponential modalities (as in [6]) or impredicativity (as in [1] or [10]). However,
less work has been carried out when it comes to the semantics of such constructions. Of course we have the famous order-theoretic Knaster-Tarski fixed
point theorem [25], the nice categorical theory due to Freyd [12], set-theoretic
models [10] (for the strictly positive fragment) or PER-models [20], but it seems
they have been ignored by the current trend for intensional models (i.e. games
semantics, GoI . . . ). We fix this issue here, showing that (co)induction admits
a nice game-theoretic model which arises naturally if one enriches McCusker’s
[22] work on recursive types with winning functions inspired by parity games
[24].
In Section 2, we first recall the basic definitions of the Hyland-Ong-Nickau
setting of game semantics. Then we sketch McCusker’s interpretation of recursive types, and show how most of these recursive types can be modelled by
means of loops in the arenas. For this purpose, we define a class of functors
called open functors, including in particular all the endofunctors built out of
the basic type constructors. We also present a mechanism of winning functions
inspired by [16], allowing us to build a category Gam of games and total winning strategies. In section 3, we present µLJ, the intuitionistic sequent calculus
with least and greatest fixpoints that we aim to model. We briefly discuss its
proof-theoretic properties, then present its semantic counterpart: we show how
to build initial algebras and terminal coalgebras to most positive open functors. Finally, we use this semantic account of (co)induction to give a sound and
(weakly) complete interpretation of µLJ in Gam.
2
Arena Games
2.1
Arenas and Plays
We recall briefly the now usual definitions of arena games, introduced in [17].
More detailed accounts can be found in [22, 14]. We are interested in games with
two participants: Opponent (O, the environment ) and Player (P, the program).
Possible plays are generated by directed graphs called arenas, which are semantic
versions of types or formulas. Hence, a play is a sequence of moves of the ambient
arena, each of them being annotated by a pointer to an earlier move — these
pointers being required to comply with the structure of the arena. Formally, an
arena is a structure A = (MA, λA, ⊢A) where:
• MA is a set of moves,
• λA : MA → {O, P} × {Q, A} is a labelling function indicating whether a move is an Opponent or Player move, and whether it is a question (Q) or an answer (A). We write λ^OP_A for the projection of λA to {O, P} and λ^QA_A for its projection on {Q, A}. λ̄A will denote λA where the {O, P} part has been reversed.
• ⊢A is a relation between MA + {⋆} and MA, called enabling, satisfying:
  – ⋆ ⊢A m =⇒ λA(m) = OQ;
  – m ⊢A n ∧ λ^QA_A(n) = A =⇒ λ^QA_A(m) = Q;
  – m ⊢A n ∧ m ≠ ⋆ =⇒ λ^OP_A(m) ≠ λ^OP_A(n).
In other terms, an arena is a directed bipartite graph, with a set of distinguished initial moves (m such that ⋆ ⊢A m) and a distinguished set of answers (m such that λ^QA_A(m) = A) such that no answer points to another answer. We now
define plays as justified sequences over A: these are sequences s of moves
of A, each non-initial move m in s being equipped with a pointer to an earlier
move n in s, satisfying n ⊢A m. In other words, a justified sequence s over A is
such that each reversed pointer chain sφ(0) ← sφ(1) ← . . . ← sφ(n) is a path on
A, viewed as a directed bipartite graph.
The role of pointers is to allow reopenings in plays. Indeed, a path on A may
be (slightly naively) understood as a linear play on A, and a justified sequence
as an interleaving of paths, with possible duplications of some of them. This
intuition is made precise in [15]. When writing justified sequences, we will often
omit the justification information if this does not cause any ambiguity. ⊑ will
denote the prefix ordering on justified sequences. If s is a justified sequence on
A, |s| will denote its length.
Given a justified sequence s on A, it has two subsequences of particular
interest: the P-view and O-view. The view for P (resp. O) may be understood
as the subsequence of the play where P (resp. O) only sees his own duplications.
In a P-view, O never points more than once to a given P-move, thus he must
always point to the previous move. Concretely, P-views correspond to branches
of Böhm trees [17]. Practically, the P-view ⌜s⌝ of s is computed by forgetting everything under Opponent's pointers, in the following recursive way:
• ⌜sm⌝ = ⌜s⌝m if λ^OP_A(m) = P;
• ⌜sm⌝ = m if ⋆ ⊢A m and m has no justification pointer;
• ⌜s1 m s2 n⌝ = ⌜s1⌝mn if λ^OP_A(n) = O and n points to m.
The O-view ⌞s⌟ of s is defined dually. Note that in some cases — in fact if s does not satisfy the visibility condition introduced below — ⌜s⌝ and ⌞s⌟ may not be correct justified sequences, since some moves may have pointed to erased parts of the play. However, we will restrict to plays where this does not happen.
The legal sequences over A, denoted by LA, are the justified sequences s on A satisfying the following conditions:
• Alternation. If tmn ⊑ s, then λ^OP_A(m) ≠ λ^OP_A(n);
• Bracketing. A question q is answered by a if a is an answer and a points to q. A question q is open in s if it has not yet been answered. We require that each answer points to the pending question, i.e. the last open question.
• Visibility. If tm ⊑ s and m is not initial, then if λ^OP_A(m) = P the justifier of m appears in ⌜t⌝, otherwise its justifier appears in ⌞t⌟.
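These conditions are purely combinatorial and can be checked mechanically. The following Python sketch is only an illustration (the encoding of moves, labels and justification pointers by tuples and list indices is our own assumption, not the paper's formalism); it represents an arena by its moves, labelling and enabling relation and tests alternation and bracketing on a pointed sequence.

class Arena:
    def __init__(self, moves, label, enabling):
        self.moves = moves          # set of moves
        self.label = label          # move -> ('O' or 'P', 'Q' or 'A')
        self.enabling = enabling    # set of pairs (m, n), with m a move or '*'

    def is_initial(self, m):
        return ('*', m) in self.enabling

def alternating(arena, play):
    # play is a list of (move, pointer) pairs; consecutive moves must switch polarity
    pols = [arena.label[m][0] for m, _ in play]
    return all(a != b for a, b in zip(pols, pols[1:]))

def well_bracketed(arena, play):
    # every answer must point to the pending question (the last still-open one)
    open_questions = []
    for i, (m, ptr) in enumerate(play):
        if arena.label[m][1] == 'Q':
            open_questions.append(i)
        else:
            if not open_questions or ptr != open_questions[-1]:
                return False
            open_questions.pop()
    return True

On the flat arena with a single initial Opponent question and one Player answer pointing to it, the two-move play passes both checks.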
2.2
The cartesian closed category of Innocent strategies
A strategy σ on A is a prefix-closed set of even-length legal plays on A. A
strategy is deterministic if only Opponent branches, i.e. ∀smn, smn′ ∈ σ, n =
n′ . Of course, if A represents a type (or formula), there are often many more
strategies on A than programs (or proofs) on this type. To address this issue
we need innocence. An innocent strategy is a strategy σ such that
sab ∈ σ ∧ t ∈ σ ∧ ta ∈ LA ∧ ⌜sa⌝ = ⌜ta⌝ =⇒ tab ∈ σ
We now recall how arenas and innocent strategies organize themselves into
a cartesian closed category. First, we build the product A × B of two arenas
A and B:
M_{A×B} = MA + MB
λ_{A×B} = [λA, λB]
⊢_{A×B} = ⊢A + ⊢B
We mention the empty arena I = (∅, ∅, ∅), which will be terminal for the category of arenas and innocent strategies. We mention as well the arena ⊥ = (•, • ↦ OQ, (⋆, •)) with only one initial move, which will be a weak initial object. We define the arrow A ⇒ B as follows:
M_{A⇒B} = MA + MB
λ_{A⇒B} = [λ̄A, λB]
m ⊢_{A⇒B} n ⇔ (m ≠ ⋆ ∧ m ⊢A n) ∨ (m ≠ ⋆ ∧ m ⊢B n) ∨ (⋆ ⊢B m ∧ ⋆ ⊢A n) ∨ (m = ⋆ ∧ ⋆ ⊢B n)
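As an illustration of the two constructions just given, here is a small Python sketch. It is ours, under an assumed encoding (an arena is a triple (moves, label, enabling), and the disjoint union MA + MB is realised by tagging moves with 'A' or 'B'); arrow() follows the usual convention that the O/P part of A's labelling is reversed in A ⇒ B.

def _tag(side, arena):
    moves, label, enab = arena
    return ({(side, m) for m in moves},
            {(side, m): label[m] for m in moves},
            {(('*' if m == '*' else (side, m)), (side, n)) for m, n in enab})

def product(A, B):
    mA, lA, eA = _tag('A', A)
    mB, lB, eB = _tag('B', B)
    return (mA | mB, {**lA, **lB}, eA | eB)

def arrow(A, B):
    flip = {'O': 'P', 'P': 'O'}
    mA, lA, eA = _tag('A', A)
    mB, lB, eB = _tag('B', B)
    lA = {m: (flip[p], qa) for m, (p, qa) in lA.items()}   # reverse O/P on the A side
    init_A = {n for m, n in eA if m == '*'}
    init_B = {n for m, n in eB if m == '*'}
    enab = {(m, n) for m, n in eA if m != '*'} | {(m, n) for m, n in eB if m != '*'}
    enab |= {(b, a) for b in init_B for a in init_A}        # initial B-moves enable A's roots
    enab |= {('*', b) for b in init_B}                      # only B's roots remain initial
    return (mA | mB, {**lA, **lB}, enab)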
We define composition of strategies by the usual parallel interaction plus
hiding mechanism. If A, B and C are arenas, we define the set of interactions
I(A, B, C) as the set of justified sequences u over A, B and C such that u↾A,B ∈
LA⇒B , u↾B,C ∈ LB⇒C and u↾A,C ∈ LA⇒C . Then, if σ : A ⇒ B and τ : B ⇒ C,
we define parallel interaction:
σ||τ = {u ∈ I(A, B, C) | u↾A,B ∈ σ ∧ u↾B,C ∈ τ }
Composition is then defined as σ; τ = {u↾A,C | u ∈ σ||τ }. It is associative and
preserves innocence (a proof of these facts can be found in [17] or [14]). We also
define the identity on A as the copycat strategy (see [22] or [14] for a definition)
on A ⇒ A. Thus, there is a category Inn which has arenas as objects and
innocent strategies on A ⇒ B as morphisms from A to B. In fact, this category
is cartesian closed, the cartesian structure given by the arena product above and
the exponential closure given by the arrow construction. This category is also
equipped with a weak coproduct A + B [22], which is constructed as follows:
M_{A+B} = MA + MB + {q, L, R}
λ_{A+B} = [λA, λB, q ↦ OQ, L ↦ PA, R ↦ PA]
m ⊢_{A+B} n ⇔ (m, n ∈ MA ∧ m ⊢A n) ∨ (m, n ∈ MB ∧ m ⊢B n) ∨ (m = ⋆ ∧ n = q) ∨ (m = q ∧ n = L) ∨ (m = q ∧ n = R) ∨ (m = L ∧ ⋆ ⊢A n) ∨ (m = R ∧ ⋆ ⊢B n)
2.3 Recursive types and Loops
Let us recall briefly the interpretation of recursive types in game semantics, due
to McCusker [22]. Following [22], we first define an ordering E on arenas as
follows. For two arenas A and B, A E B iff
MA ⊆ MB
λA = λB ↾MA
⊢A = ⊢B ∩ ((MA + {⋆}) × MA)
This defines a (large) dcpo, with least element I and directed sups given by
the componentwise union. If F : Inn → Inn is a functor which is continuous
with respect to E, we can find an arena D such that D = F(D) in the usual way by setting D = ⨆_{n=0}^{∞} F^n(I). McCusker showed [22] that when the functors
are closed (i.e. their action can be internalized as a morphism (A ⇒ B) →
(F A ⇒ F B)), and when they preserve inclusion and projection morphisms
(i.e. partial copycat strategies) corresponding to E, this construction defines
minimal invariants [12]. Note that the crucial cases of these constructions are
the functors built out of the product, sum and function space constructions.
We give now a concrete and new (up to the author’s knowledge) description
of a large class of continuous functors, that we call open functors. These
include all the functors built out of the basic constructions, and allow a rereading
of recursive types, leading to the model of (co)induction.
2.3.1
Open arenas.
Let T be a countable set of names. An open arena is an arena A with distinguished question moves called holes, each of them labelled by an element of T. We denote by X the holes annotated by X ∈ T. We will sometimes write X^P to denote a hole of Player polarity, or X^O to denote a hole of Opponent polarity. If A has holes labelled by X1, . . . , Xn, we denote it by A[X1, . . . , Xn]. By abuse of notation, the corresponding open functor we are going to build will be also denoted by A[X1, . . . , Xn] : (Inn × Inn^op)^n → Inn.
2.3.2
Image of arenas.
If A[X1, . . . , Xn] is an open arena and B1, . . . , Bn, B1′, . . . , Bn′ are arenas (possibly open as well), we build a new arena A(B1, B1′, . . . , Bn, Bn′) by replacing each occurrence of X_i^P by Bi and each occurrence of X_i^O by Bi′. More formally:
M_{A(B1,B1′,...,Bn,Bn′)} = (MA \ {X1, . . . , Xn}) + Σ_{i=1}^{n} (MBi + MBi′)
λ_{A(B1,B1′,...,Bn,Bn′)} = [λA, λB1, λB1′, . . . , λBn, λBn′]
m ⊢_{A(B1,B1′,...,Bn,Bn′)} p ⇔ (m ⊢A X_i^P ∧ ⋆ ⊢Bi p) ∨ (m ⊢A X_i^O ∧ ⋆ ⊢Bi′ p) ∨ (⋆ ⊢Bi m ∧ X_i^P ⊢A p) ∨ (⋆ ⊢Bi′ m ∧ X_i^O ⊢A p) ∨ (m ⊢Bi p) ∨ (m ⊢Bi′ p) ∨ (m ⊢A p)
Note that in this definition, we assimilate all the moves sharing the same hole
label Xi and with the same polarity. This helps to clarify notations, and is
justified by the fact that we never need to distinguish moves with the same hole
label, apart from when they have different polarity.
2.3.3
Image of strategies.
If A is an arena, we will, by abuse of notation, denote by IA both the set of initial
moves of A and the subarena of A with only these moves. Let A[X1 , . . . , Xn ] be
an open arena, B1′ , B1 , . . . , Bn′ , Bn and C1′ , C1 , . . . , Cn′ , Cn be arenas. Consider
the application ξ defined on moves as follows:
ξ(x) = Xi if x ∈ ⋃_{i∈{1,...,n}} (IBi′ ∪ IBi ∪ ICi′ ∪ ICi), and ξ(x) = x otherwise,
and then extended recursively to an application ξ∗ on legal plays as follows:
ξ∗(sa) = ξ∗(s) if a is a non-initial move of Bi, Bi′, Ci or Ci′, and ξ∗(sa) = ξ∗(s)ξ(a) otherwise.
ξ∗ erases moves in the inner parts of Bi′, Bi, Ci′, Ci and agglomerates all the initial moves back to the holes. This way we will be able to compare the resulting play with the identity on A[X1, . . . , Xn]. Now, if σi : Bi → Ci and τi : Ci′ → Bi′ are strategies, we can now define the action of open functors on them by stating:
s ∈ A(σ1, τ1, . . . , σn, τn) ⇔ ∀i ∈ {1, . . . , n}, s↾Bi⇒Ci ∈ σi, ∀i ∈ {1, . . . , n}, s↾Ci′⇒Bi′ ∈ τi, and ξ∗(s) ∈ id_{A[X1,...,Xn]}
Proposition 1. For any A[X1 , . . . , Xn ], this defines a functor A[X1 , . . . , Xn ] :
(Inn × Innop )n → Inn, which is monotone and continuous with respect to E.
Proof sketch. Preservation of identities and composition are rather direct. A
little care is needed to show that the resulting strategy is innocent: this relies
on two facts: First, for each Player move the three definition cases are mutually
exclusive. Second, a P-view of s ∈ A(σ1 , τ1 , . . . , σn , τn ) is (essentially) an initial
copycat appended with a P-view of one of σi or τi , hence the P-view of s
determines uniquely the P-view presented to one of σi , τi or idA[X1 ,...,Xn ] .
Example. Consider the open arena A[X] = X ⇒ X . For any arena B,
we have A(B) = B ⇒ B and for any σ : B1 → C1 and τ : C2 → B2 , we have
A(σ, τ ) = τ ⇒ σ : (B2 ⇒ B1 ) → (C2 ⇒ C1 ), the strategy which precomposes
its argument by τ and postcomposes it by σ.
2.3.4
Loops for recursive types.
Since these open functors are monotone and continuous with respect to E, solutions to their corresponding recursive equations can be obtained by computing
the infinite expansion of arenas (i.e. infinite iteration of the open functors).
However, for a large subclass of the open functors, this solution can be expressed in a simple way by replacing holes with a loop up to the initial moves.
Suppose A[X1 , . . . , Xn ] is an open functor, and i is such that Xi appears only
in non-initial, positive positions in A. Then we define an arena µXi .A as follows:
M_{µXi.A} = MA \ Xi
λ_{µXi.A} = λA ↾_{M_{µXi.A}}
m ⊢_{µXi.A} n ⇔ (m ⊢A n) ∨ (m ⊢A Xi ∧ ⋆ ⊢A n)
A simple argument ensures that the obtained arena is isomorphic to the one
obtained by iteration of the functor. For this issue we take inspiration from
Laurent [19] and prove a theorem stating that two arenas are isomorphic in the
categorical sense if and only if their set of paths are isomorphic. A path in A
is a sequence of moves a1 , . . . , an such that for all i ∈ {1, . . . , n − 1} we have
ai ⊢A ai+1 . A path isomorphism between A and B is a bijection φ between
the set of paths of A and the set of paths on B such that for any non-empty
path p on A, φ(ip(p)) = ip(φ(p)) (where ip(p) denotes the immediate prefix of
p). We have then the theorem:
Theorem 1. Let A and B be two arenas. They are categorically isomorphic if
and only if there is a path isomorphism between their respective sets of paths.
Now, it is clear by construction that, if A[X] is an open functor such
F∞that X
appears only in non-initial positive positions in A, the set of paths of n=0 An (I)
and of µX.A are isomorphic. Therefore µX.A is solution of the recursive equation X = A(X), and when A[X] is closed and preserves inclusions and projections, µX.A defines as well a minimal invariant for A[X]. But in fact, we have
the following fact:
Proposition 2. If A[X] is an open functor, then it is closed and preserves
inclusions and projections. Hence µX.A is a minimal invariant for A[X].
This interpretation of recursive types as loops preserves finiteness of the
arena, and as we shall see, allows to easily express the winning conditions necessary to model induction and coinduction.
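To make the loop construction concrete, here is a Python sketch (ours, under an assumed encoding: an open arena is a 4-tuple whose last component is the set of moves standing for the hole X). It removes the hole and redirects every enabling edge that pointed into it towards the initial moves, as in the definition of µX.A above.

def loop(open_arena):
    moves, label, enab, holes = open_arena
    keep = moves - holes
    init = {n for m, n in enab if m == '*' and n not in holes}
    new_enab = {(m, n) for m, n in enab
                if m != '*' and m not in holes and n not in holes}
    new_enab |= {('*', n) for n in init}
    # m enabled the hole in A, so in muX.A it now enables every initial move
    new_enab |= {(m, n) for m, h in enab
                 if h in holes and m != '*' and m not in holes
                 for n in init}
    return (keep, {m: label[m] for m in keep}, new_enab)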
2.4
Winning and Totality
A total strategy on A is a strategy σ : A such that for all s ∈ σ, if there is a
such that sa ∈ LA , then there is b such that sab ∈ σ. In other words, σ has a
response to any legal Opponent move. This is crucial to interpret logic because
the interpretation of proofs in game semantics always gives total strategies: this
is a counterpart in semantics to the cut elimination property in syntax. To model
induction and coinduction in logic, we must therefore restrict to total strategies.
However, it is well-known that the class of total strategies is not closed under
composition, because an infinite chattering can occur in the hidden part of the
interaction. This is analogous to the fact that in λ-calculus, the class of strongly
normalizing terms is not closed under application: δ = λx.xx is a normal form,
however δδ is certainly not normalizable. This problem is discussed in [2, 16]
and more recently in [7]. We take here the solution of [16], and equip arenas with
winning functions: for every infinite play we choose a loser, hence restricting to
winning strategies has the effect of blocking infinite chattering.
The definition of legal plays extends smoothly to infinite plays. Let L^ω_A denote the set of infinite legal plays over A. If s ∈ L^ω_A, we say that s ∈ σ when for all s′ ⊏ s, s′ ∈ σ. We write L̄A = LA + L^ω_A. A game will be a pair A = (A, GA) where A is an arena, and GA is a function from infinite threads on A (i.e. infinite legal plays with exactly one initial move) to {W, L}. The winning function GA extends naturally to potentially finite threads by setting, for each finite s:
GA(s) = W if |s| is even, and GA(s) = L otherwise.
Finally, GA extends to legal plays by saying that GA (s) = W iff GA (t) = W for
every thread t of s. By abuse of notation, we keep the same notation for this
extended function. The constructions on arenas presented in section 2.2 extend
to constructions on games as follows:
• GA×B (s) = [GA , GB ] (indeed, a thread on A × B is either a thread on A or
a thread on B) ;
• GA+B (s) = W iff all threads of s↾A are winning for GA and all threads of
s↾B are winning for GB .
• GA⇒B (s) = W iff if all threads of s↾A are winning for GA , then GB (s↾B ) =
W.
It is straightforward to check that these constructions commute with the
extension of winning functions from infinite threads to potentially infinite legal
plays. We now define winning strategies σ : A as innocent strategies σ : A
such that for all s ∈ σ, GA (s) = W . Now, the following proposition is satisfied:
Proposition 3. Let σ : A ⇒ B and τ : B ⇒ C be two total winning strategies.
Then σ; τ is total winning.
Proof sketch. If σ; τ is not total, there must be infinite s in their parallel interaction σ||τ , such that s↾A,C is finite. By switching, we have in fact |s↾A | even and
|s↾C | odd. Thus GA (s↾A ) = W and GC (s↾C ) = L. We reason then by disjunction
of cases. Either GB (s↾B ) = W in which case GB⇒C (s↾B,C ) = L and τ cannot
be winning, or GB (s↾B ) = L in which case GA⇒B (s↾A,B ) = L and σ cannot be
winning. Therefore σ; τ is total.
σ; τ must be winning as well. Suppose there is s ∈ σ; τ such that GA⇒C (s) =
L. By definition of GA⇒C , this means that GA (s↾A ) = W and GC (s↾C ) = L. By
definition of composition, there is u ∈ σ||τ such that s = u↾A,C . But whatever
the value of GB (u↾B ) is, one of σ or τ is losing. Therefore σ; τ is winning.
It is clear from the definitions that all plays in the identity are winning. It
is also clear that all the structural morphisms of the cartesian closed structure
of Inn are winning (they are essentially copycat strategies), thus this defines a
cartesian closed category Gam of games and innocent total winning strategies.
3 Fixpoints
3.1 µLJ: an intuitionistic sequent calculus with fixpoints
3.1.1 Formulas.
S ::= S ⇒ T | S ∨ T | S ∧ T | µX.T | νX.T | X | ⊤ | ⊥
A formula F is valid if for any subformula of F of the form µX.F ′ ,
(1) X appears only positively in F ′ ,
(2) X does not appear at the root of F ′ (i.e. X appears at least under a ∨
or a ⇒ in the abstract syntax tree of F ′ ).
(2) corresponds to the restriction to arenas where loops allow to express recursive types, whereas (1) is the usual positivity condition. We could of course
hack the definition to get rid of these restrictions, but we choose not to obfuscate the treatment for an extra generality which is neither often considered in
the literature, nor useful in practical examples of (co)induction.
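Conditions (1) and (2) are easy to check mechanically. The following Python sketch is ours (the tuple encoding of the syntax is an assumption) and transcribes the two conditions literally for µ-subformulas, which is what the text above requires: an occurrence stops being "at the root" exactly when it goes under a ∨ or a ⇒.

# formulas: ('var', X), ('top',), ('bot',), ('imp', S, T), ('or', S, T),
#           ('and', S, T), ('mu', X, T), ('nu', X, T)

def occurrences(X, f, positive=True, at_root=True, acc=None):
    # collect (positive?, at_root?) for every free occurrence of X in formula f
    if acc is None:
        acc = []
    tag = f[0]
    if tag == 'var':
        if f[1] == X:
            acc.append((positive, at_root))
    elif tag == 'imp':
        occurrences(X, f[1], not positive, False, acc)
        occurrences(X, f[2], positive, False, acc)
    elif tag == 'or':
        occurrences(X, f[1], positive, False, acc)
        occurrences(X, f[2], positive, False, acc)
    elif tag == 'and':
        occurrences(X, f[1], positive, at_root, acc)
        occurrences(X, f[2], positive, at_root, acc)
    elif tag in ('mu', 'nu') and f[1] != X:   # the binder shadows X
        occurrences(X, f[2], positive, at_root, acc)
    return acc

def valid(f):
    tag = f[0]
    if tag in ('var', 'top', 'bot'):
        return True
    if tag in ('imp', 'or', 'and'):
        return valid(f[1]) and valid(f[2])
    if tag in ('mu', 'nu'):
        if tag == 'mu':
            occ = occurrences(f[1], f[2])
            if any((not pos) or root for pos, root in occ):
                return False
        return valid(f[2])
    return False

# example: the natural numbers type muX.(T or X) is valid
# valid(('mu', 'X', ('or', ('top',), ('var', 'X'))))  -> True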
3.1.2
Derivation rules.
We present the rules with the usual dichotomy.
Identity group
  ------- ax
  A ⊢ A

  Γ ⊢ A    ∆, A ⊢ B
  ------------------- Cut
  Γ, ∆ ⊢ B
Structural group
  Γ, A, A ⊢ B
  ------------ C
  Γ, A ⊢ B

  Γ ⊢ B
  ---------- W
  Γ, A ⊢ B

  Γ, A, B, ∆ ⊢ C
  ---------------- γ
  Γ, B, A, ∆ ⊢ C
Logical group
  Γ, A ⊢ B
  ----------- ⇒r
  Γ ⊢ A ⇒ B

  Γ ⊢ A    ∆, B ⊢ C
  -------------------- ⇒l
  Γ, ∆, A ⇒ B ⊢ C

  Γ ⊢ A    Γ ⊢ B
  ----------------- ∧r
  Γ ⊢ A ∧ B

  Γ, A ⊢ C                  Γ, B ⊢ C
  -------------- ←∧l        -------------- →∧l
  Γ, A ∧ B ⊢ C              Γ, A ∧ B ⊢ C

  Γ ⊢ A                 Γ ⊢ B
  ----------- ←∨r       ----------- →∨r
  Γ ⊢ A ∨ B             Γ ⊢ A ∨ B

  Γ, A ⊢ C    ∆, B ⊢ C
  ----------------------- ∨l
  Γ, ∆, A ∨ B ⊢ C

  ----------- ⊥l            --------- ⊤r
  Γ, ⊥ ⊢ A                  Γ ⊢ ⊤
Fixpoints
  Γ ⊢ T[µX.T/X]
  --------------- µr
  Γ ⊢ µX.T

  T[A/X] ⊢ A
  ------------ µl
  µX.T ⊢ A

  T[νX.T/X] ⊢ B
  --------------- νl
  νX.T ⊢ B

  A ⊢ T[A/X]
  ------------ νr
  A ⊢ νX.T
Note that the µl , νl and νr rules are not relative to any context. In fact,
the general rules with a context Γ at the left of the sequent are derivable from
these ones (even if, for µl and νr , the construction of the derivation requires
an induction on T ), and we stick with the present ones to clarify the game
model. Cut elimination on the ⇒, ∧, ∨ fragment is the same as usual. For the
reduction of µ and ν, we need an additional rule to handle the unfolding of
formulas. For this purpose, we add a new rule [T ] for each type T with free
variables. This method can already be found in [1] for strictly positive functors:
no type variable appears on the left of an implication. From now on, T [A/X]
will be abbreviated T (A). This notation implies that, unless otherwise stated,
X will be the variable name for which T is viewed as a functor. In the following
rules, X appears only positively in T and only negatively in N :
Functors
  A ⊢ B
  -------------- [T]
  T(A) ⊢ T(B)

  A ⊢ B
  -------------- [N]
  N(B) ⊢ N(A)
The dynamic behaviour of this rule is to locally perform the unfolding. We give
some of the reduction rules. These are of two kinds: the rules for the elimination
of [T ], and the cut elimination rules. Here are the main cases:
  π
  A ⊢ B
  -------- [T]   (X ∉ FV(T))          reduces to          -------- ax
  T ⊢ T                                                    T ⊢ T

  π
  A ⊢ B                                                      π
  -------- [X]                        reduces to            A ⊢ B
  A ⊢ B

  π
  A ⊢ B
  ----------------------------- [N ⇒ T]
  N(A) ⇒ T(A) ⊢ N(B) ⇒ T(B)

        reduces to

     π                          π
   A ⊢ B                      A ⊢ B
  ------------- [N]          ------------- [T]
  N(B) ⊢ N(A)                T(A) ⊢ T(B)
  ------------------------------------------ ⇒l
  N(A) ⇒ T(A), N(B) ⊢ T(B)
  ----------------------------- ⇒r
  N(A) ⇒ T(A) ⊢ N(B) ⇒ T(B)

  π
  A ⊢ B
  --------------------- [µY.T]
  µY.T(A) ⊢ µY.T(B)

        reduces to

  π
  A ⊢ B
  ------------------------------------------ [T[µY.T(B)/Y]]
  T(A)[µY.T(B)/Y] ⊢ T(B)[µY.T(B)/Y]
  ------------------------------------------ µr
  T(A)[µY.T(B)/Y] ⊢ µY.T(B)
  ------------------------------------------ µl
  µY.T(A) ⊢ µY.T(B)
We omit the rule for ν, which is dual, and for ∧ and ∨, which are simple
pairing and case manipulations. Note also that most of these cases have a
counterpart where T is replaced by negative N , which has the sole effect of π
being a proof of B ⊢ A instead of A ⊢ B in the expansion rules. With that, we
can express the cut elimination rule for fixpoints:
     π1                          π2
  Γ ⊢ T[µX.T/X]              T[A/X] ⊢ A
  --------------- µr         ----------- µl
  Γ ⊢ µX.T                   µX.T ⊢ A
  --------------------------------------- Cut
  Γ ⊢ A

        reduces to

                                  π2
                              T[A/X] ⊢ A
                              ----------- µl
       π1                      µX.T ⊢ A
  Γ ⊢ T[µX.T/X]          ------------------------ [T]
                         T[µX.T/X] ⊢ T[A/X]
  ---------------------------------------------- Cut          π2
                 Γ ⊢ T[A/X]                                T[A/X] ⊢ A
  --------------------------------------------------------------------- Cut
                 Γ ⊢ A
We skip once again the rule for ν, which is dual to µ. We choose consciously
not to recall the usual cut elimination rules nor the associated commutation
rules, since they are not central to our goals. µLJ, as presented above, does
not formally eliminate cuts since there is no rule to reduce the following (and
its dual with ν):
     π1
  T(A) ⊢ A                      π2
  ----------- µl             Γ, A ⊢ B
  µX.T ⊢ A
  ------------------------------------- Cut
  Γ, µX.T ⊢ B
This cannot be reduced without some prior unfolding of the µX.T on the left.
This issue is often solved [6] by replacing the rule for µ presented here above by
the following:
  T(A) ⊢ A    Γ, A ⊢ B
  ---------------------- µ′
  Γ, µX.T ⊢ B
With the corresponding reduction rule, and analogously for ν. We choose here
not to do this, first because our game model will prove consistency without
the need to prove cut elimination, and second because we want to preserve the
proximity with the categorical structure of initial algebras / terminal coalgebras.
3.2
The games model
We present the game model for fixpoints. We wish to model a proof system,
therefore we need our strategies to be total. The base arenas of the interpretation of fixpoints will be the arenas with loops presented in section 2.3.4, to which
we will adjoin a winning function. While the base arenas will be the same for
greatest and least fixpoints, they will be distinguished by the winning function:
intuitively, Player loses if a play grows infinite in a least fixpoint (inductive)
game, and Opponent loses if this happens in a greatest fixpoint (coinductive)
game. The winning functions we are going to present are strongly influenced
by Santocanale’s work on games for µ-lattices [24]. A win open functor is
a functor T : (Gam × Gamop )n → Gam such that there is an open functor
T [X1 , . . . , Xn ] such that for all games A1 , . . . , A2n of base arenas A1 , . . . , A2n ,
the base arena of T(A1 , . . . , A2n ) is T (A1 , . . . , An ). In other terms, it is the
natural lifting of open functors to the category of games. By abuse of notation,
we denote this by T[X1 , . . . , Xn ], and T [X1 , . . . , Xn ] will denote its underlying
open functor.
3.2.1
Least fixed point.
Let T[X1 , . . . , Xn ] be a win open functor such that X1 appears only positively
and at depth higher than 0 in T [X1 , . . . , Xn ]. Then we define a new win open
functor µX1 .T[X2 , . . . , Xn ] as follows:
• Its base arena is µX1 .T [X2 , . . . , Xn ] ;
• If A3 , . . . , A2n ∈ Gam, GµX1 .T(A3 ,...,A2n ) (s) = W iff
– There is N ∈ N such that no path of s takes the external loop more than N times, and
– s is winning in the subgame inside the loop, or more formally:
GT(I,I,A3 ,...,A2n ) (s↾T(I,I,A3 ,...,A2n ) ) = W .
3.2.2
Greatest fixed point.
Dually, if the same conditions are satisfied, we define the win open functor
νX1 .T[X1 , . . . , Xn ] as follows:
• Its base arena is µX1 .T [X2 , . . . , Xn ] ;
• If A3 , . . . , A2n ∈ Gam, GνX1 .T(A3 ,...,A2n ) (s) = W iff
– For any N ∈ N, there is a path of s crossing the external loop more
than N times, or ;
– s is winning in the subgame inside the loop, or more formally:
GT(I,I,A3 ,...,A2n ) (s↾T(I,I,A3 ,...,A2n ) ) = W .
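The two winning conditions differ only in how they treat unbounded unfolding of the loop. As a rough illustration (ours, and a deliberate simplification: a play is summarised by the loop-crossing counts of finitely many of its paths, with math.inf standing for a path crossing the external loop infinitely often, plus a flag recording whether the play wins in the inner subgame), the following Python sketch contrasts them.

import math

def mu_wins(crossings, inner_winning):
    # least fixpoint: the crossings must be bounded AND the inner subgame must be won
    bounded = all(c != math.inf for c in crossings)
    return bounded and inner_winning

def nu_wins(crossings, inner_winning):
    # greatest fixpoint: an unbounded unfolding is already winning, otherwise fall
    # back on the inner subgame
    unbounded = any(c == math.inf for c in crossings)
    return unbounded or inner_winning

# a play that keeps unfolding the fixpoint forever loses in muX.T but wins in nuX.T:
# mu_wins([3, math.inf], True) -> False ; nu_wins([3, math.inf], False) -> True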
It is straightforward to check that these are still functors, and in particular
win open functors. There is one particular case that is worth noticing: if T[X]
has only one hole which appears only in positive position and at depth greater
than 0, then µX.T is a constant functor, i.e. a game. Moreover, theorem 1
implies that it is isomorphic in Inn to T(µX.T). It is straightforward to check
that this isomorphism iT : T(µX.T) → µX.T is winning (it is nothing but the
identity strategy), which shows that they are in fact isomorphic in Gam. Then,
one can prove the following theorem:
Theorem 2. If T[X] has only one hole which appears only in positive position
and at depth greater than 0, then the pair (µX.T, iT ) defines an initial algebra
for T[X] and (νX.T, iT⁻¹) defines a terminal coalgebra for T[X].
Proof. We give the proof for initial algebras, the second part being dual. Let (B, σ) be another algebra of T[X]. We need to show that there is a unique σ† : µX.T ⇒ B such that
  T(µX.T) --T(σ†)--> T(B)
     |                 |
     iT                σ
     v                 v
   µX.T -----σ†----->  B

commutes. The idea is to iterate σ:

  · · · --T^3(σ)--> T^3(B) --T^2(σ)--> T^2(B) --T(σ)--> T(B) --σ--> B
and somehow to take the limit. In fact we can give a direct definition of σ†:
  σ^(1) = σ
  σ^(n+1) = T^n(σ); σ^(n)
  σ† = {s ∈ L_{µX.T⇒B} | ∃n ∈ N∗, s ∈ σ^(n)}
This defines an innocent strategy, since when restricted to plays of µX.T, these
strategies agree on their common domain. This strategy is winning. Indeed,
take an infinite play s ∈ σ † . Suppose s↾µX.T is winning. By definition of GµX.T ,
this means that there is N ∈ N such that no path of s↾µX.T takes the external loop
more than N times. Thus, s ∈ LTn (I)⇒B . But this implies that s ∈ σ (n) , and
σ (n) is a composition of winning strategies thus winning, therefore s is winning.
Moreover, σ † is the unique innocent strategy making the diagram commute:
suppose there is another f making this square commute. Since T(µX.T) and
µX.T have the same set of paths, iT is in fact the identity, thus we have T(f ); σ =
f . By applying T and post-composing by σ, we get:
T2 (f ); T(σ); σ = T(f ); σ = f
And by iterating this process, we get for all n ∈ N:
Tn+1 (f ); Tn (σ); . . . ; T(σ); σ = f
Thus:
Tn+1 (f ); σ (n) = f
Now take s ∈ f , and let n be the length of the longest path in s. Since T[X]
has no hole at the root, no path of length n can reach B in Tn+1 (B), thus
s ∈ σ (n) , therefore s ∈ σ † . The same reasoning also works for the other inclusion.
Likewise, if σ : B → T(B), we build a unique σ ‡ : B → νX.T making the
coalgebra diagram commute.
3.3
Interpretation of µLJ
3.3.1
Interpretation of Formulas.
As expected, we give the interpretation of valid formulas.
J⊤K = I
J⊥K = ⊥
JA ∨ BK = JAK + JBK
JA ∧ BK = JAK × JBK
JA ⇒ BK = JAK ⇒ JBK
JXK = X
JµX.T K = µX.JT K
JνX.T K = νX.JT K
3.3.2 Interpretation of Proofs.
As usual, the interpretation of a proof π of a sequent A1 , . . . , An ⊢ B will be a
morphism JπK : JA1 K × . . . × JAn K −→ JBK. The interpretation is computed by
induction on the proof tree. The interpretation of the rules of LJ is standard
and its correctness follows from the cartesian closed structure of Gam. Here
are the interpretations for the fixpoint and functor rules:
J
  π
  Γ ⊢ T[µX.T/X]
  --------------- µr
  Γ ⊢ µX.T
K = JπK; iJT K

J
  π
  T[A/X] ⊢ A
  ------------ µl
  µX.T ⊢ A
K = JπK†

J
  π
  T[νX.T/X] ⊢ B
  --------------- νl
  νX.T ⊢ B
K = iJT K⁻¹; JπK

J
  π
  A ⊢ T[A/X]
  ------------ νr
  A ⊢ νX.T
K = JπK‡

J
  π
  A ⊢ B
  -------------- [T]
  T(A) ⊢ T(B)
K = JT K(JπK)
We do not give the details of the proof that this defines an invariant of
reduction. The main technical point is the validity of the interpretation of the
functor rule; more precisely when the functor is a (least or greatest) fixpoint.
Given that, we get the following theorem.
Theorem 3. If π reduces to π′, then JπK = Jπ′K.
In particular, this proves the following theorem which is certainly worth
noticing, because µLJ has large expressive power. In particular, it contains
Gödel’s system T [13].
Theorem 4. µLJ is consistent: there is no proof of ⊥.
Proof. There is no total strategy on the game ⊥.
3.3.3
Completeness.
When it comes to completeness, we run into the issue that the total winning
innocent strategies are not necessarily finite, hence the usual definability process
does not terminate. Nonetheless, we get a definability theorem in an infinitary
version of µLJ. Whether a more precise completeness theorem is possible is
a subtle point. First, we would need to restrict to an adequate subclass of
the recursive total winning strategies (for example, the Ackermann function is
definable in µLJ). Then again, the problem to find a proof whose interpretation
is exactly the original strategy would be highly non-trivial: if σ : µX.T ⇒ A, we
have to guess an invariant B, a proof π1 of T (B) ⊢ B and a proof π2 of B ⊢ A
such that Jπ1 K† ; Jπ2 K = σ. Perhaps it would be more feasible to look for a proof
whose interpretation is observationally equivalent to the original strategy, which
would be very similar to the universality result in [17].
4
Conclusion and Future Work
We have successfully constructed a games model of a propositional intuitionistic sequent calculus µLJ with inductive and coinductive types. It is striking
that the adequate winning conditions on legal plays to model (co)induction
are almost identical to those used in parity games to model least and greatest
fixpoints, to the extent that the restriction of our winning condition to paths
coincides exactly with the winning condition used in [24]. It would be worthwhile
to investigate this connection further: given a game viewed as a bipartite graph
along with winning conditions for infinite plays, under which assumptions can
these winning conditions be canonically lifted to the set of legal plays on this
graph, viewed as an arena? Results in this direction might prove useful, since
they would allow to import many game-theoretic results into game semantics,
and thus programming languages.
This work is part of a larger project to provide game-theoretic models to
total programming languages with dependent types, such as COQ or Agda.
In these settings, (co)induction is crucial, since they deliberately lack general
recursion. We believe that in the appropriate games setting, we can push the
present results further and model Dybjer’s Inductive-Recursive[11] definitions.
4.0.4
Acknowledgements.
We would like to thank Russ Harmer, Stephane Gimenez and David Baelde for
stimulating discussions, and the anonymous referees for useful comments and
suggestions.
References
[1] A. Abel and T. Altenkirch. A predicative strong normalisation proof for a
lambda-calculus with interleaving inductive types. In TYPES, 1991.
[2] S. Abramsky. Semantics of interaction: an introduction to game semantics.
Semantics and Logics of Computation, pages 1–31, 1996.
[3] S. Abramsky and R. Jagadeesan. Games and full completeness for multiplicative linear logic. J. Symb. Log., 59(2):543–574, 1994.
[4] S. Abramsky, R. Jagadeesan, and P. Malacaria. Full Abstraction for PCF.
Info. & Comp, 2000.
[5] S. Abramsky, H. Kohei, and G. McCusker. A fully abstract game semantics
for general references. In LICS, pages 334–344, 1998.
[6] D. Baelde and D. Miller. Least and greatest fixed points in linear logic. In
LPAR, pages 92–106, 2007.
[7] P. Clairambault and R. Harmer. Totality in Arena Games. Submitted.,
2008.
[8] J.H. Conway. On Numbers and Games. AK Peters, Ltd., 2001.
[9] J. De Lataillade. Second-order type isomorphisms through game semantics.
Ann. Pure Appl. Logic, 151(2-3):115–150, 2008.
[10] P. Dybjer. Inductive sets and families in Martin-Löf's Type Theory and their set-theoretic semantics: An inversion principle for Martin-Löf's type theory. Logical Frameworks, 14:59–79, 1991.
[11] P. Dybjer. A general formulation of simultaneous inductive-recursive definitions in type theory. J. Symb. Log., 65(2):525–549, 2000.
[12] P. Freyd. Algebraically complete categories. In Proc. 1990 Como Category
Theory Conference, volume 1488, pages 95–104. Springer, 1990.
[13] K. Gödel. Über eine bisher noch nicht benützte Erweiterung des finiten Standpunktes. Dialectica, 1958.
[14] R. Harmer. Innocent game semantics. Lecture notes, 2004.
[15] R. Harmer, J.M.E. Hyland, and P.-A. Melliès. Categorical combinatorics
for innocent strategies. In LICS, pages 379–388, 2007.
[16] J.M.E. Hyland. Game semantics. Semantics and Logics of Computation,
1996.
[17] J.M.E. Hyland and C.H.L. Ong. On full abstraction for PCF: I, II, and
III. Inf. Comput., 163(2):285–408, 2000.
[18] A. Joyal. Remarques sur la théorie des jeux à deux personnes. Gaz. Sc.
Math. Qu., 1977.
[19] O. Laurent. Classical isomorphisms of types. Mathematical Structures in
Computer Science, 15(5):969–1004, 2005.
[20] R. Loader. Equational theories for inductive types. Ann. Pure Appl. Logic,
84(2):175–217, 1997.
[21] P. Lorenzen. Logik und Agon. Atti Congr. Internat. di Filosofia, 1960.
[22] G. McCusker. Games and full abstraction for FPC. Inf. Comput., 160(1-2):1–61, 2000.
[23] P.-A. Melliès. Asynchronous games 4: A fully complete model of propositional linear logic. In LICS, pages 386–395, 2005.
[24] L. Santocanale. Free µ-lattices. J. Pure Appl. Algebra, 168(2-3):227–264,
2002.
[25] A. Tarski. A lattice-theoretical fixpoint theorem and its applications. Pacific Journal of Mathematics, 5(2):285–309, 1955.
| 6 |
arXiv:1512.05105v1 [] 16 Dec 2015
INVARIANTS OF LINKAGE OF MODULES
TONY J. PUTHENPURAKAL
Abstract. Let (A, m) be a Gorenstein local ring and let M, N be two Cohen-Macaulay A-modules with M linked to N via a Gorenstein ideal q. Let L be another finitely generated A-module. We show that Ext^i_A(L, M) = 0 for all i ≫ 0 if and only if Tor^A_i(L, N) = 0 for all i ≫ 0. If D is Cohen-Macaulay then we show that Ext^i_A(M, D) = 0 for all i ≫ 0 if and only if Ext^i_A(D†, N) = 0 for all i ≫ 0, where D† = Ext^r_A(D, A) and r = codim D. As a consequence we get that Ext^i_A(M, M) = 0 for all i ≫ 0 if and only if Ext^i_A(N, N) = 0 for all i ≫ 0. We also show that End_A(M)/rad End_A(M) ≅ (End_A(N)/rad End_A(N))^op.
We also give a negative answer to a question of Martsinkovsky and Strooker.
1. Introduction
Let (A, m) be a Gorenstein local ring. Recall an ideal q in A is said to be a
Gorenstein ideal if q is perfect and A/q is a Gorenstein local ring. A special class
of Gorenstein ideals are CI(complete intersection) ideals, i.e., ideals generated by
an A-regular sequence. In this paper, a not necessarily perfect ideal q such that
A/q is a Gorenstein ring will be called a quasi-Gorenstein ideal.
Two ideals I and J are linked by a Gorenstein ideal q if q ⊆ I ∩ J; J = (q : I)
and I = (q : J). We write it as I ∼q J. If q is a complete intersection then we say I
is CI-linked to J via q. If q is a quasi-Gorenstein ideal then we say I is quasi-linked
to J via q. Note that traditionally only CI-linkage used to be considered. However
in recent times more general types of linkage are studied.
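As a toy illustration (ours, not from the paper): in the one-dimensional regular local ring A = k[[x]], take q = (x^n), a complete intersection ideal, and for 0 < a < n set I = (x^a) and J = (x^(n−a)). Then (q : I) = (x^(n−a)) = J and (q : J) = (x^a) = I, so I ∼q J; equivalently, the cyclic modules A/I and A/J are CI-linked via q.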
We say ideals I and J is in the same linkage class if there is a sequence of ideals
I0 , . . . , In in A and Gorenstein ideals q0 . . . , qn−1 such that
(i) Ij ∼qj Ij+1 , for j = 0, . . . , n − 1.
(ii) I0 = I and In = J.
If n is even then we say that I and J are evenly linked. We can analogously define
CI-linkage class, quasi-linkage class, even CI-linkage class and even quasi-linkage
class (of ideals).
A natural question is that if I and J are in the same linkage class then what
properties of I is shared by J. This was classically done when I and J are in
the same CI-linkage class (or even CI-linkage class). In their landmark paper [12],
Peskine and Szpiro proved that if I and J are in the same CI-linkage class and I
is a Cohen-Macaulay ideal (i.e., A/I is a Cohen-Macaulay ring) then so is J. This
can be proved more generally for ideals in a quasi-linkage class, see [10, Corollary
15, p. 616]. In another landmark paper [8, 1.14], Huneke proved that if I is in
the CI-linkage class of a complete intersection then the Koszul homology Hi (I) are
Date: March 1, 2018.
1991 Mathematics Subject Classification. Primary 13C40; Secondary 13D07.
Key words and phrases. liaison of modules, Gorenstein ideals, vanishing of Ext, Tor.
Cohen-Macaulay for all i ≥ 0. It is known that this result is not true if I is linked
to a complete intersection ( via Gorenstein ideals and not-necessarily CI-ideals). If
A = k[[X1 , . . . , Xn ]] (where k is a field or a complete discrete valuation ring) then
Huneke defined some invariants of even CI-linkage class of equidimensional unmixed
ideals, see [9, 3.2]. Again these invariants are not stable under Gorenstein (even) liaison.
In a remarkable paper, Martsinkovsky and Strooker [10] introduced liaison for
modules. See section two for definition. We note here that ideals I and J are linked
as ideals if and only if A/I is linked to A/J as modules. One can analogously define
linkage class of modules, even linkage of modules etc. We can also define CI linkage
of modules, quasi-linkage of modules etc.
Thus a natural question arises: If M, N are in the same linkage class of modules
(or same even linkage class of modules) then what properties of M are shared by N .
The generalization of Peskine and Szpiro’s result holds. If M is Cohen-Macaulay
and N is quasi-linked to M then N is also Cohen-Macaulay, see [10, Corollary
15, p. 616]. To state another property which is preserved under linkage first let us
recall the definition of Cohen-Macaulay approximation from [1]. A Cohen-Macaulay
approximation of a finitely generated A-module M is a exact sequence
0 −→ Y −→ X −→ M −→ 0
where X is a maximal Cohen-Macaulay A-module and Y has finite projective dimension. Such a sequence is not unique but X is known to unique up to a free
summand and so is well defined in the stable category CM(A) of maximal CohenMacaulay A-modules. We denote by X(M ) the maximal Cohen-Macaulay approximation of M . In [10, Theorem 13, p. 620], Martsinkovsky and Strooker proved
that if M is evenly linked to N then X(M ) ∼
= X(N ) in CM(A). They also asked
if this result holds for M and N are in the same even quasi-linkage class, see [10,
Question 3, p. 623]. A motivation for this paper was to try and solve this question.
We answer this question in the negative. We prove
Theorem 1.1. There exists a complete intersection A of dimension one and finite
length modules M, N such that M is evenly quasi-linked to N but X(M ) ≇ X(N ).
To prove our next result we introduce a construction (essentially due to Ferrand)
which is very useful, see section three. Let CMg (A) be the full subcategory of
Cohen-Macaulay A-modules of codimension g. We prove:
Theorem 1.2. Let (A, m) be a Gorenstein local ring of dimension d. Let M ∈
CMg (A). Assume M ∼q N where q is a Gorenstein ideal in A. Let L be a finitely
generated A-module and let D ∈ CMr (A). Set D† = ExtrA (D, A). Then
(1) Ext^i_A(L, M) = 0 for all i ≫ 0 if and only if Tor^A_i(L, N) = 0 for all i ≫ 0.
(2) Ext^i_A(M, D) = 0 for all i ≫ 0 if and only if Ext^i_A(D†, N) = 0 for all i ≫ 0.
We should note that this result is new even in the case for cyclic modules. A
remarkable consequence of Theorem 1.2 is the following result
Corollary 1.3. Let (A, m) be a Gorenstein local ring of dimension d. Let M ∈
CMg (A) and C ∈ CMr (A). Assume M ∼q N and C ∼n D where q, n are Gorenstein ideals in A. Then
ExtiA (M, C) = 0 for all i ≫ 0 ⇐⇒ ExtiA (D, N ) = 0 for all i ≫ 0.
In particular
ExtiA (M, M ) = 0 for all i ≫ 0 ⇐⇒ ExtiA (N, N ) = 0 for all i ≫ 0.
We also prove the following surprising invariant of quasi-linkage.
Theorem 1.4. Let (A, m) be a Gorenstein local ring and let M, N, N ′ ∈ CMg (A).
Assume M is quasi-evenly linked to N and that it is quasi-oddly linked to N ′ . Then
(1) End(M)/rad End(M) ≅ End(N)/rad End(N).
(2) End(M)/rad End(M) ≅ (End(N′)/rad End(N′))^op.
Here if Γ is a ring then Γop is its opposite ring.
We now describe in brief the contents of this paper. In section two we recall some
preliminaries regarding linkage of modules as given in [10]. In section three we give
a construction which is needed to prove our results. We prove Theorem 1.2(1) in
section four and Theorem 1.2(2) in section five. We recall some facts regarding
cohomological operators in section six. This is needed in section seven where we
prove Theorem 1.1. Finally in section eight we prove Theorem 1.4.
2. Some preliminaries on Liaison of Modules
In this section we recall the definition of linkage of modules as given in [10].
Throughout (A, m) is a Gorenstein local ring of dimension d.
φ
→ F0 → M → 0
2.1. Let us recall the definition of transpose of a module. Let F1 −
be a minimal presentation of M . Let (−)∗ = Hom(−, A). The transpose Tr(M ) is
defined by the exact sequence
φ∗
0 → M ∗ → F0∗ −→ F1∗ → Tr(M ) → 0.
Also let Ω(M ) be the first syzygy of M .
Definition 2.2. Two A-modules M and N are said to be horizontally linked if M ≅ Ω(Tr(N)) and N ≅ Ω(Tr(M)).
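For instance (our computation, included only as an illustration): over the artinian Gorenstein ring A = k[[x]]/(x^2) the residue field k is horizontally linked to itself. Indeed, A −x→ A → k → 0 is a minimal presentation of k, so dualizing gives Tr(k) = coker(A −x→ A) ≅ A/(x) ≅ k, and Ω(Tr(k)) = Ω(k) = (x) ≅ k. Hence k ≅ Ω(Tr(k)), and M = N = k satisfies the definition.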
Next we define linkage in general.
Definition 2.3. Two A-modules M and N are said to be linked via a Gorenstein
ideal q if
(1) q ⊆ ann M ∩ ann N , and
(2) M and N are horizontally linked as A/q-modules.
We write it as M ∼q N .
If q is a complete intersection we say M is CI-linked to N via q. If q is a quasi
Gorenstein ideal then we say M is quasi-linked to N via q.
Remark 2.4. It can be shown that ideals I and J are linked by a quasi-Gorenstein
ideal q (definition as in the introduction) if and only if the module A/I is quasilinked to A/J by q, see [10, Proposition 1, p. 592].
2.5. We say M, N are in same linkage class of modules if there is a sequence of
A-modules M0 , . . . , Mn and Gorenstein ideals q0 . . . , qn−1 such that
(i) Mj ∼qj Mj+1 , for j = 0, . . . , n − 1.
(ii) M0 = M and Mn = N .
If n is even then we say that M and N are evenly linked. Analogously we can define
the notion of CI-linkage class, quasi-linkage class, even CI-linkage class and even
quasi-linkage class (of modules).
3. A Construction
In this section we describe a construction essentially due to Ferrand. Throughout
(A, m) is a Gorenstein local ring of dimension d.
3.1. We note the following well-known result, see [4, 3.3.10]. Let D ∈ CMg(A). Then Ext^i_A(D, A) = 0 for i ≠ g. Set D† = Ext^g_A(D, A). Then D† ∈ CMg(A). Furthermore (D†)† ≅ D.
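A quick sanity check of 3.1 (ours): if x ∈ m is A-regular and D = A/(x), then the free resolution 0 → A −x→ A → D → 0 gives Ext^i_A(D, A) = 0 for i ≠ 1 and D† = Ext^1_A(D, A) ≅ A/(x) = D, which again lies in CM1(A); applying the same computation once more recovers (D†)† ≅ D.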
The following result is well-known. However we give a proof as we do not have
a reference.
Lemma 3.2. Let (A, m) be a Gorenstein local ring of dimension d and let M ∈
CMg (A). Let q be a quasi-Gorenstein ideal of grade g contained in ann M . Set
B = A/q. Then Ext^g_A(M, A) ≅ Hom_B(M, B).
Proof. Let y = y1 , . . . , yg ∈ q be a regular sequence. Set C = A/(y). Then we
have a natural ring homomorphism C → B. As C, B are Gorenstein rings we have
HomC (B, C) ∼
= B, see [4, 3.3.7]. We now note that
Extg (M, A) ∼
see [4, 3.1.16]
= HomC (M, C),
A
= HomC (M ⊗B B, C),
∼
= HomB (M, HomC (B, C)),
= HomB (M, B).
3.3. Construction: Let M ∈ CMg (A) and let q be a quasi-Gorenstein ideal in A of
codimension g contained in ann M . Let M ∼q N . Let P be minimal free resolution
of M and let Q be a minimal free resolution of P0 /qP0 . We have a natural map
P0 /qP0 → M → 0. We lift this to a chain map φ : Q → P. We then dualize this
map to get a chain map φ∗ : P∗ → Q∗ . Let C = cone(φ∗ ).
Lemma 3.4.
i
H (C) =
(
N,
0,
if i = g,
if i =
6 g.
Proof. We have an exact sequence of complexes
0 → Q∗ → C → P∗ (−1) → 0.
Notice
i
∗
H (P ) =
ExtiA (M, A)
=
(
M † = ExtgA (M, A),
0,
if i = g,
if i =
6 g.
We also have
H i (Q∗ ) = ExtiA (P0 /qP0 , A) = 0 if i 6= g.
It is now immediate that
H i (C) = 0
for i 6= g − 1, g.
Set B = A/q and P = P0 /qP0 . Then by 3.2 we have
Extg (M, A) ∼
= HomB (M, B), and
A
ExtgA (P , A)
∼
= HomB (P , B).
We have an exact sequence of MCM B-modules
ǫ
→ M → 0.
0→K→P −
Dualizing with respect to B we get an exact sequence
ǫ∗
0 → HomB (M, B) −→ HomB (P , B) → K ∗ → 0.
∼ K ∗ . We note that the map H g (P∗ ) → H g (Q∗ ) is ǫ∗ .
As M ∼q N we get that N =
As a consequence we obtain that
H g−1 (C) = 0 and H g (C) = N.
Remark 3.5.
(1) If A is regular local, M = A/I, q is a complete intersection
and I ∼q J, then this construction was used by Ferrand to give a projective
resolution of A/J, see [12, Proposition 2.6].
(2) If M is perfect A-module of codimension g and q is a Gorenstein ideal
then this construction was used by Martsinkovsky and Strooker to give a
projective resolution of N , see [10, Proposition 10, p. 597].
Our interest in this construction is due to the following:
3.6. Observation: Let B g (C) be the module of g-boundaries of C and Z g (C) be
the module of g-cocycles of C. Then projdimA B g (C) is finite and Z g (C) is a
maximal Cohen-Macaulay A-module. Thus the sequence
0 → B g (C) → Z g (C) → N → 0,
is a maximal Cohen-Macaulay approximation of N .
Proof This follows from Lemma 3.4.
4. Proof of Theorem 1.2(1)
In this section we prove Theorem 1.2(1). It is an easy consequence of the following result:
Theorem 4.1. Let (A, m) be a Gorenstein local ring of dimension d. Let M ∈
CMg (A). Assume M ∼q N where q is a Gorenstien ideal in A. Let L be a maximal
Cohen-Macaulay A-module. Let P be minimal free resolution of N and let Q be
a minimal free resolution of P0 /qP0 . We do the construction as in 3.3 and let
C = cone(φ∗ ). Set X = Z g (C) and Y = B g (C). For s ≥ 1, let Xs be the image
of the map P∗s−1 → P∗s . Let x = x1 , . . . , xd be a maximal regular A-sequence. Set
A = A/(x) and L = L/xL. Then the following assertions are equivalent:
(i)
(ii)
(iii)
(iv)
(v)
(vi)
(vii)
(viii)
ExtiA (L, M ) = 0 for all i ≫ 0.
ExtiA (L, X) = 0 for all i ≫ 0.
For all s ≫ 0 and for all i ≥ 1 we have ExtiA (L, Xs ) = 0.
H i (HomA (L, P∗ )) = 0 for all i ≫ 0.
H i (HomA (L ⊗A P, A)) = 0 for all i ≫ 0.
H i (HomA (L ⊗A P, A)) = 0 for all i ≫ 0.
TorA
i (L, N ) = 0 for i ≫ 0.
TorA
i (L, N ) = 0 for i ≫ 0.
We need a few preliminary results before we are able to prove Theorem 4.1.
d
n
Kn+1 → · · · be a co-chain complex. Let s be an integer.
4.2. Let K : · · · → Kn −→
By K≥s we mean the co-chain complex
d
d
n
s
Kn+1 → · · ·
Ks+1 → · · · → Kn −→
0 → Ks −→
Clearly H i (K≥s ) = H i (K) for all i ≥ s + 1.
4.3. Let P, Q, C be as in Theorem 4.1. As q is a Gorenstein ideal, it has in
particular finite projective dimension. It follows that for i ≥ d+1, we get Ci = P∗i+1
and the map Ci → Ci+1 is same as the map P∗i+1 → P∗i+2 . In particular if s ≥ d+1
then C≥s = P∗≥s+1 .
We need the following:
Lemma 4.4. Let (A, m) be a Gorenstein local ring and let D be a chain-complex
with Dn = 0 for n ≤ −1. Assume that Dn is a maximal Cohen-Macaulay Amodule for all n ≥ 0. Let x ∈ m be A-regular. Let A = A/(x) and let D be the the
∗
complex D ⊗A A. Let D∗ be the complex HomA (D, A) and let D be the complex
HomA (D, A). Then the following are equivalent:
(i) H i (D∗ ) = 0 for i ≫ 0.
∗
(ii) H i (D ) = 0 for i ≫ 0.
Proof. Let E be a maximal Cohen-Macaulay A-module. Notice x is E-regular. Furthermore HomA (E, A) is a maximal Cohen-Macaulay A-module and ExtiA (M, A) =
0 for i ≥ 1. Set E = E/xE.
x
The exact sequence 0 → A −
→ A → A → 0 induces an exact sequence
x
0 → HomA (E, A) −
→ HomA (E, A) → HomA (E, A) → 0.
Thus we have an exact sequence of co-chain complexes of A-modules
x
∗
0 → D∗ −
→ D∗ → D → 0.
This in turn induces a long-exact sequence in cohomology
(4.4.1)
x
∗
· · · → H i (D∗ ) −
→ H i (D∗ ) → H i (D ) → H i+1 (D∗ ) → · · ·
We now prove (i) =⇒ (ii). This follows from (4.4.1).
x
(ii) =⇒ (i). From (4.4.1) we get that for all i ≫ 0 the map H i (D∗ ) −
→ H i (D∗ )
is surjective. The result now follows from Nakayama’s Lemma.
As an easy consequence of 4.4 we get the following:
Corollary 4.5. (with same hypotheses as in Lemma 4.4). Let x = x1 , . . . , xd be
a maximal regular sequence in A. Set B = A/(x), K = D ⊗A B, Let K∗ be the
complex HomB (K, B). Then the following are equivalent:
(i) H i (D∗ ) = 0 for i ≫ 0.
(ii) H i (K∗ ) = 0 for i ≫ 0.
We now give
Proof of Theorem 4.1. (i) ⇐⇒ (ii). By 3.6 we get that X is a maximal CohenMacaulay A-module, projdimA Y is finite and we have an exact sequence
0 → Y → X → M → 0.
As A is Gorenstein we have that injdimA Y is finite. It follows that ExtiA (L, X) ∼
=
ExtiA (L, M ) for all i ≥ d + 1. The result follows.
(ii) ⇐⇒ (iii). For s ≥ 1 let X̃_s = image(C_s → C_{s+1}). For s ≥ g + 1, by Lemma 3.4 we get that X̃_s is maximal Cohen-Macaulay. By Lemma 3.4 we also have an exact sequence
0 → X → C_g → C_{g+1} → · · · → C_s → X̃_s → 0.
It follows that if s ≫ 0 then Ext^i_A(L, X) = 0 for i ≫ 0 is equivalent to Ext^i_A(L, X̃_s) = 0 for i ≥ 1. The result now follows as q is a Gorenstein ideal; so we have X̃_s = X_{s+1} for s ≥ d + 1, see 4.3.
(iii) =⇒ (iv). Suppose Ext^i_A(L, X_s) = 0 for all i ≥ 1 and all s ≥ c. Set a = max{g + 1, c}. As H^i(P^*) = Ext^i_A(N, A) = 0 for i ≥ g + 1 we have an exact sequence
0 → X_a → P^*_{a+1} → P^*_{a+2} → · · · → P^*_n → P^*_{n+1} → · · ·
As Ext^i_A(L, X_s) = 0 for all i ≥ 1 and all s ≥ a we get that the induced sequence
0 → Hom_A(L, X_a) → Hom_A(L, P^*_{a+1}) → · · ·
is exact. It follows that H^i(L, P^*_{≥a+1}) = 0 for i ≥ a + 2. Thus by 4.2 we get that H^i(L, P^*) = 0 for i ≥ a + 2.
(iv) =⇒ (iii). Suppose H^i(L, P^*) = 0 for i ≥ r. If s ≥ r then H^i(L, P^*_{≥s}) = 0 for all i ≥ s + 1. Now let a = max{g + 1, r}. Let s ≥ a. As argued before we have an exact sequence
0 → X_s → P^*_{s+1} → P^*_{s+2} → · · · → P^*_n → P^*_{n+1} → · · ·
As H^i(L, P^*_{≥s}) = 0 for all i ≥ s + 1 we get that the induced sequence
0 → Hom_A(L, X_s) → Hom_A(L, P^*_{s+1}) → · · ·
is exact. In particular Ext^1_A(L, X_s) = 0. Notice Ext^2_A(L, X_s) ≅ Ext^1_A(L, X_{s+1}). The latter module is zero by the same argument. Iterating we get Ext^i_A(L, X_s) = 0 for all i ≥ 1.
(iv) ⇐⇒ (v). We have an isomorphism of complexes
Hom_A(L, P^*) ≅ Hom_A(L ⊗_A P, A).
The result follows.
(v) ⇐⇒ (vi). D = L ⊗_A P is a complex of maximal Cohen-Macaulay A-modules since L is maximal Cohen-Macaulay and P_n is a finitely generated free A-module. Also D_n = 0 for n ≤ −1. Set K = D ⊗_A Ā = L̄ ⊗_A P. The result now follows from Corollary 4.5.
(vi) ⇐⇒ (vii). We note that Ā is a zero-dimensional Gorenstein local ring and so is injective as an Ā-module. Furthermore Ā is the injective hull of its residue field. It follows that
H^i(Hom_Ā(L̄ ⊗_A P, Ā)) ≅ Hom_Ā(H_i(L̄ ⊗_A P), Ā) = Hom_Ā(Tor^A_i(L̄, N), Ā).
Therefore by [4, 3.2.12] we get the result.
(vii) ⇐⇒ (viii). As L is a maximal Cohen-Macaulay A-module we get that x is an L-regular sequence. Set L_j = L/(x_1, . . . , x_j)L for j = 1, . . . , d. Note L_d = L̄. We have an exact sequence 0 → L →^{x_1} L → L_1 → 0. This induces a long exact sequence
· · · → Tor^A_i(L, N) →^{x_1} Tor^A_i(L, N) → Tor^A_i(L_1, N) → Tor^A_{i−1}(L, N) → · · ·
Clearly if Tor^A_i(L, N) = 0 for i ≫ 0 then Tor^A_i(L_1, N) = 0 for all i ≫ 0. Conversely, if Tor^A_i(L_1, N) = 0 for all i ≫ 0 then the map Tor^A_i(L, N) →^{x_1} Tor^A_i(L, N) is surjective for all i ≫ 0. By Nakayama's Lemma we get Tor^A_i(L, N) = 0 for i ≫ 0.
We also have an exact sequence 0 → L_1 →^{x_2} L_1 → L_2 → 0. A similar argument gives that Tor^A_i(L_1, N) = 0 for i ≫ 0 if and only if Tor^A_i(L_2, N) = 0 for all i ≫ 0. Combining with the previous result we get that Tor^A_i(L, N) = 0 for i ≫ 0 if and only if Tor^A_i(L_2, N) = 0 for all i ≫ 0.
Iterating this argument yields the result.
We now give
Proof of Theorem 1.2(1). Let 0 → W → E → L → 0 be a maximal Cohen-Macaulay approximation of L. As projdim_A W is finite we get that Ext^i_A(L, M) ≅ Ext^i_A(E, M) for i ≥ d + 2 and Tor^A_i(L, N) ≅ Tor^A_i(E, N) for i ≥ d + 2. The result now follows from Theorem 4.1.
5. Proof of Theorem 1.2(2) and Corollary 1.3
In this section we prove Theorem 1.2(2). We need a few preliminary facts to
prove this result.
5.1. Let (A, m) be a Noetherian local ring and let E be the injective hull of its
residue field. If G is an A-module then set G∨ = HomA (G, E). Let ℓ(G) denote
the length of G. The following result is known. We give a proof as we are unable
to find a reference.
Lemma 5.2. Let (A, m) be a Noetherian local ring and let E be the injective hull of its residue field. Let M be a finitely generated A-module and let L be an A-module with ℓ(L) < ∞. Then for all i ≥ 0 we have an isomorphism
Ext^i_A(M, L) ≅ (Tor^A_i(M, L^∨))^∨.
Proof. We note that (L^∨)^∨ ≅ L, see [4, 3.2.12]. Let P be a minimal projective resolution of M. We have the following isomorphism of complexes:
Hom_A(P ⊗_A L^∨, E) ≅ Hom_A(P, Hom_A(L^∨, E)) ≅ Hom_A(P, L).
We have H^i(Hom_A(P, L)) = Ext^i_A(M, L). Notice as E is an injective A-module we have
H^i(Hom_A(P ⊗_A L^∨, E)) ≅ Hom_A(H_i(P ⊗_A L^∨), E) ≅ Hom_A(Tor^A_i(M, L^∨), E) = (Tor^A_i(M, L^∨))^∨.
The following result is also known. We give a proof as we are unable to find a
reference.
Lemma 5.3. Let (A, m) be a Gorenstein local ring of dimension d and let E be the
injective hull of its residue field. Let D ∈ CMr (A) and let x = x1 , . . . , xc ∈ m be a
D-regular sequence (note c ≤ d − r). Then
(1) x is a D† -regular sequence.
(2) (D/xD)^† ≅ D^†/xD^†.
(3) If r = d (and so ℓ(D) < ∞) then D^† ≅ D^∨ (= Hom_A(D, E)).
(4) If T is another finitely generated D-module then
(a) ExtiA (T, D) = 0 for i ≫ 0 if and only if ExtiA (T, D/xD) = 0 for i ≫ 0.
(b) ExtiA (D, T ) = 0 for i ≫ 0 if and only if ExtiA (D/xD, T ) = 0 for i ≫ 0.
Proof. (1) and (2): Let x be D-regular. Then notice D/xD is a Cohen-Macaulay A-module of codimension r + 1. We have a short exact sequence 0 → D →^x D → D/xD → 0. Applying the functor Hom_A(−, A) yields a long exact sequence which after applying 3.1 reduces to a short exact sequence
0 → Ext^r_A(D, A) →^x Ext^r_A(D, A) → Ext^{r+1}_A(D/xD, A) → 0.
It follows that x is D^†-regular and D^†/xD^† ≅ (D/xD)^†.
Iterating this argument yields (1) and (2).
(3): Let y = y_1, . . . , y_d ⊂ ann_A D be a maximal A-regular sequence. Set B = A/(y). Then
D^† = Ext^d_A(D, A) ≅ Hom_B(D, B).
We have
D^∨ = Hom_A(D, E) = Hom_A(D ⊗_A B, E) ≅ Hom_B(D, Hom_A(B, E)).
Now Hom_A(B, E) is an injective B-module of finite length. As B is an Artin Gorenstein local ring we get that Hom_A(B, E) is free as a B-module. As ℓ(Hom_A(B, E)) = ℓ(B) (see [4, 3.2.12]) it follows that Hom_A(B, E) = B. The result follows.
(4): For i = 1, . . . , c set D_i = D/(x_1, . . . , x_i)D.
4(a): The exact sequence 0 → D →^{x_1} D → D_1 → 0 induces a long exact sequence
(5.3.2)   · · · → Ext^i_A(T, D) →^{x_1} Ext^i_A(T, D) → Ext^i_A(T, D_1) → Ext^{i+1}_A(T, D) → · · ·
If Ext^i_A(T, D) = 0 for all i ≫ 0 then by (5.3.2) we get Ext^i_A(T, D_1) = 0 for all i ≫ 0. Conversely if Ext^i_A(T, D_1) = 0 for all i ≫ 0 then by (5.3.2) we get that the map Ext^i_A(T, D) →^{x_1} Ext^i_A(T, D) is surjective for all i ≫ 0. So by Nakayama's Lemma Ext^i_A(T, D) = 0 for all i ≫ 0.
We also have an exact sequence 0 → D_1 →^{x_2} D_1 → D_2 → 0. A similar argument to the above yields that Ext^i_A(T, D_1) = 0 for all i ≫ 0 if and only if Ext^i_A(T, D_2) = 0 for all i ≫ 0. Combining this with the previous result we get Ext^i_A(T, D) = 0 for all i ≫ 0 if and only if Ext^i_A(T, D_2) = 0 for all i ≫ 0.
Iterating this argument yields the result.
4(b): This is similar to 4(a).
We now give
Proof of Theorem 1.2(2). Let x = x1 , . . . , xd−r be a maximal D-regular sequence.
Then by 5.3, x is also a D† regular sequence. Also by 5.3, we get D† /xD† =
(D/xD)† . Let E be the injective hull of the residue field of A.
We have
Ext^i_A(M, D) = 0 for all i ≫ 0
⇐⇒ Ext^i_A(M, D/xD) = 0 for all i ≫ 0; see 5.3,
⇐⇒ Tor^A_i(M, (D/xD)^∨) = 0 for all i ≫ 0; see 5.2 and [4, 3.2.12],
⇐⇒ Tor^A_i(M, D^†/xD^†) = 0 for all i ≫ 0; see 5.3,
⇐⇒ Ext^i_A(D^†/xD^†, N) = 0 for all i ≫ 0; see Theorem 1.2(1),
⇐⇒ Ext^i_A(D^†, N) = 0 for all i ≫ 0; see 5.3.
We now give
Proof of Corollary 1.3. By Theorem 1.2(2) we get that
Ext^i_A(M, C) = 0 for all i ≫ 0 ⇐⇒ Ext^i_A(C^†, N) = 0 for all i ≫ 0.
We note that C^† = Ext^r_A(C, A) ≅ Hom_{A/n}(C, A/n), see 3.2. As C ∼_n D we have an exact sequence 0 → C^† → G → D → 0, where G is a finitely generated free A/n-module. As n is a Gorenstein ideal in A we get projdim_A G is finite. The result follows.
6. Some preliminaries to prove Theorem 1.1
In this section we discuss a few preliminaries which will enable us to prove
Theorem 1.1. More precisely we need the notion of cohomological operators over a
complete intersection ring; see [7] and [6].
6.1. Let f = f1 , . . . , fc be a regular sequence in a local Noetherian ring (Q, n). We
assume f ⊆ n2 . Set I = (f ) and A = Q/I.
6.2. The Eisenbud operators [6] are constructed as follows. Let F : · · · → F_{i+2} →^∂ F_{i+1} →^∂ F_i → · · · be a complex of free A-modules.
Step 1: Choose a sequence of free Q-modules F̃_i and maps ∂̃ between them,
F̃ : · · · → F̃_{i+2} →^{∂̃} F̃_{i+1} →^{∂̃} F̃_i → · · ·
so that F = A ⊗ F̃.
Step 2: Since ∂̃^2 ≡ 0 modulo (f), we may write ∂̃^2 = Σ_{j=1}^{c} f_j t̃_j, where t̃_j : F̃_i → F̃_{i−2} are linear maps for every i.
Step 3: Define, for j = 1, . . . , c, the map t_j = t_j(Q, f, F) : F → F(−2) by t_j = A ⊗ t̃_j.
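For orientation, here is a small worked instance (our own illustration, not taken from the paper): take c = 1, Q = k[[x]], f_1 = x^2 and A = Q/(x^2), and let F be the minimal free resolution · · · → A →^x A →^x A of the residue field k. Lifting every differential to multiplication by x on F̃_i = Q gives ∂̃^2 = x^2 = f_1 · id, so t̃_1 = id and the Eisenbud operator t_1 : F → F(−2) is the identity shift; on cohomology it realizes the eventual 2-periodicity of Ext^*_A(k, k).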
6.3. The operators t_1, . . . , t_c are called Eisenbud's operators (associated to f). It can be shown that
(1) the t_i are uniquely determined up to homotopy;
(2) t_i, t_j commute up to homotopy.
6.4. Let R = A[t_1, . . . , t_c] be a polynomial ring over A with variables t_1, . . . , t_c of degree 2. Let M, N be finitely generated A-modules. By considering a free resolution F of M we get well defined maps
t_j : Ext^n_A(M, N) → Ext^{n+2}_A(M, N)   for 1 ≤ j ≤ c and all n,
which turn Ext^*_A(M, N) = ⊕_{i≥0} Ext^i_A(M, N) into a module over R. Furthermore these structures depend on f, are natural in both module arguments and commute with the connecting maps induced by short exact sequences.
6.5. Gulliksen, [7, 3.1], proved that if projdimQ M is finite then Ext∗A (M, N ) is a
finitely generated R-module. For N = k, the residue field of A, Avramov in [2,
3.10] proved a converse; i.e., if Ext∗A (M, k) is a finitely generated R-module then
projdimQ M is finite.
6.6. We need to recall the notion of complexity of a module. This notion was introduced by Avramov in [2]. Let β^A_i(M) = ℓ(Tor^A_i(M, k)) be the i-th Betti number of M over A. The complexity of M over A is defined by
cx_A M = inf { b ∈ ℕ | lim_{n→∞} β^A_n(M)/n^{b−1} < ∞ }.
If A is a local complete intersection of codimension c then cx_A M ≤ c. Furthermore all values between 0 and c occur.
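For instance (an illustration of ours, not from the paper): over a regular local ring every finitely generated module has complexity 0; over the hypersurface A = k[[x]]/(x^2) the residue field k has β^A_n(k) = 1 for all n, hence cx_A k = 1; and over a complete intersection of codimension c one has cx_A k = c.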
6.7. Since m ⊆ ann ExtiA (M, k) for all i ≥ 0 we get that Ext∗A (M, k) is a module
over S = R/mR = k[t1 , . . . , tc ]. If projdimQ M is finite then Ext∗A (M, k) is a finitely
generated S-module of Krull dimension cx M .
6.8. If (Q, n) is regular then by [3, Theorem I(3)] we get that dimS Ext∗A (M, k) =
dimS Ext∗A (k, M ). In particular if M is maximal Cohen-Macaulay A-module then
cx M = cx M ∗ . Using this fact it is not difficult to show that if M is a CohenMacaulay A-module then cx M = cx M † .
6.9. Let Q is regular local with infinite residue field and let M be a finitely generated A-module with cx(M ) = r. The surjection Q → A factors as Q → R → A,
with the kernels of both maps generated by regular sequences, projdimR M < ∞
and cxA M = projdimR A (see [2, 3.9]).
We need the following:
Proposition 6.10. Let Q = k[x, y, z](x,y,z) where k is an infinite field. Let m be
the maximal ideal of Q. Let q be an m-primary Gorenstein ideal such that q is not a
complete intersection. Suppose q ⊇ (u, v) where u, v ∈ m2 is an Q-regular sequence.
Set A = Q/(u, v) and q = q/(u, v). Then cx A/q = 2.
Proof. As codim A = 2 we have that cx A/q = 0, 1 or 2. We prove cx A/q ≠ 0, 1.
If cx A/q = 0 then A/q has finite projective dimension over A. So by the Auslander-Buchsbaum formula projdimA A/q = 1. Therefore q is a principal ideal. It follows
that q is a complete intersection, a contradiction.
If cx A/q = 1 then by 6.9, the surjection Q → A factors as Q → R → A, with
the kernels of both maps generated by regular sequences, projdimR A/q < ∞ and
cxA A/q = projdimR A = 1. Thus dim R = 2. So R = Q/(h) for some h.
As A/q has finite length, by Auslander-Buchsbaum formula projdimR A/q = 2.
Consider the minimal resolution of A/q over R:
0 → Rb → Ra → R → A/q → 0.
As A/q is a Gorenstein ring we have b = 1 and so a = 2. Thus there exists α, β ∈ R
with A/q = R/(α, β) = Q/(h, α, β). It follows that q is a complete intersection
ideal, a contradiction.
7. Proof of Theorem 1.1
In this section we prove Theorem 1.1. We first make the following:
7.1. Construction: Let Q = Q[x, y, z](x,y,z) and let m be its maximal ideal. We
construct a m-primary Gorenstein ideal q in Q such that
(1) q is not a complete intersection.
(2) There exists a Q-regular sequence f, u, v ∈ m2 such that
(f a , v b , u) ⊆ q ⊆ (f, u, v) for some a, b ≥ 2.
Set u = x2 + y 2 + z 2 , R = Q/(u), n = m/(u). Then note (x, y) is a reduction
of n and n2 = (x, y)n. It follows that x7 , y 7 is a regular sequence in R. So I =
(x7 , y 7 ) : (xy + yz + xz) is a Gorenstein ideal. Using Singular, [5], it can be shown
that I has 12 minimal generators and I ⊆ n6 . In particular I is not a complete
intersection in R. Also note that n6 ⊆ (x, y)5 ⊆ (x2 , y 2 ). Let q be an ideal in Q
containing u such that q/(u) = I. Then q has 12 or 13 minimal generators. So q is
not a complete intersection. Also clearly q is m-primary. Note
((x2 )4 , (y 2 )4 , u) ⊆ (x7 , y 7 , u) ⊆ q ⊆ (x2 , y 2 , u).
Set f = x2 , v = y 2 and a = b = 4.
We now give
Proof of Theorem 1.1. Let Q = Q[x, y, z](x,y,z) and let m be its maximal ideal.
We make the construction as in 7.1. Let E be a non-free stable maximal CohenMacaulay Q/(f )-module. Let
0 → Qr → Qr → E → 0
be a minimal resolution of E as a Q-module. Note u is Q/(f )-regular and so
E-regular. Thus we have an exact sequence
0 → (Q/(u))r → (Q/(u))r → E/uE → 0.
Set A = Q/(u, f a ) and q = q/(u, f a ). Then note that q is a quasi-Gorenstein
ideal in A which is not a complete intersection. Furthermore notice that E/uE is
a maximal Cohen-Macaulay Q/(f, u)-module and so a maximal Cohen-Macaulay
A-module. Furthermore notice that cx E/uE = 1 as an A-module.
We now note that v is A-regular and so E/uE-regular. Set M = E/(u, v)E.
Thus M is an A-module of finite length. Also cxA M = 1. Furthermore
q ⊆ (f, u, v) ⊆ annQ M.
So we have q ⊆ annA M . Thus we have finite length module M of complexity one
and a quasi-Gorenstein ideal q of complexity two. Set B = A/q. Clearly M does
not have B as a direct summand. Set N = ΩB (TrB (M )). Then M is horizontally
linked to N as B-modules, see [10, Proposition 8, p. 596]. So M ∼q N as Amodules. If t ∈ annA M is a regular element then it is not difficult to show that
there exists i ≥ 1 such that the C = A/(ti )-module M has no free summands as a
C-module. Let M ∼ti L as A-modules.
By Lemma 7.2 we have: cxA N = 2 and cxA L = 1.
This finishes the proof as for any module P the complexity of a maximal CohenMacaulay approximation of P is equal to complexity of P . We note that N is
evenly linked to L and cx X(N ) = cx N = 2 while cx X(L) = cx L = 1. It follows
that X(N ) is not stably isomorphic to X(L).
We now state and prove the Lemma we need to finish the proof of Theorem 1.1.
Lemma 7.2. Let (Q, n) be a regular local ring and let f = f1 , . . . , fc ∈ n2 be a
regular sequence. Set A = Q/(f ). Let M ∈ CMg (A). Also let M ∼q N where q is
a quasi-Gorenstein ideal in A. Then
(1) If q is a Gorenstein ideal then cx M = cx N .
(2) If projdim A/q = ∞ and cx A/q > cx M then cx N = cx A/q.
Proof. Set B = A/q. Let M^† = Ext^g_A(M, A) ≅ Hom_B(M, B), by Lemma 3.2. As M ∼_q N we have an exact sequence
(7.2.3)   0 → M^† → G → N → 0,
where G is a free B-module. By [3, 3.3] we get cx M^† = cx M. The result now follows from the exact sequence (7.2.3) and 6.7.
8. Proof of Theorem 1.4
In this section we prove Theorem 1.4. We need to prove several preliminary
results first.
8.1. Let M, N be finitely generated A-modules. By β(M, N ) we mean the subset
of HomA (M, N ) which factor through a finitely generated free A-module. We first
prove the following
Proposition 8.2. Let (A, m) be a Gorenstein local ring. Let M be a maximal
Cohen-Macaulay A-module with no free summands. Then β(M, M ) ⊆ rad End(M ).
Proof. Let f ∈ β(M, M ). Say f = v ◦ u where u : M → F , v : F → M and
F = An . Let u = (u1 , . . . , un ) where ui : M → A. As M does not have a free
summand we get that ui (M ) ⊆ m for each i. Thus u(M ) ⊆ mF . It follows that
f (M ) ⊆ mM . Thus f ∈ HomA (M, mM ). However it is well-known and easy to
prove that HomA (M, mM ) ⊆ rad End(M ).
8.3. It can be easily seen that β(M, M) is a two-sided ideal in End(M). Assume A is Gorenstein and M is maximal Cohen-Macaulay with no free summands. Let
0 → N →^u F →^π M → 0
be a minimal presentation. Note that N is also maximal Cohen-Macaulay with no free summands. We construct a ring homomorphism
σ : End(M)/β(M, M) → End(N)/β(N, N)
as follows. Let θ ∈ End(M) and let δ ∈ End(N) be a lift of θ. We first note that if δ′ is another lift of θ then it can be easily shown that there exists ξ : F → N such that ξ ◦ u = δ − δ′. Thus we have a well defined element σ(θ) ∈ End(N)/β(N, N), and hence a map
σ̃ : End(M) → End(N)/β(N, N).
It is easy to see that σ̃ is a ring homomorphism. We prove
Proposition 8.4. (with hypotheses as above) If f ∈ β(M, M) then σ̃(f) = 0.
Proof. Let F be a minimal resolution of M with F_0 = F. Suppose f = ψ ◦ φ where ψ : G → M, φ : M → G and G = A^m for some m. We may take G to be a minimal resolution of G with G_0 = G and G_n = 0 for n > 0. We can construct a lift f̃ : F → F of f by composing a lift of φ with a lift of ψ. It follows that f̃_n = 0 for n ≥ 1. An easy computation shows that for this lift f̃ the corresponding map δ : N → N is in fact zero. Thus σ̃(f) = 0.
Thus we have a ring homomorphism σ : End(M)/β(M, M) → End(N)/β(N, N). Our next result is
Proposition 8.5. (with hypotheses as above) σ is an isomorphism.
Proof. We construct a ring homomorphism τ : End(N)/β(N, N) → End(M)/β(M, M) which we show is the inverse of σ.
Let θ : N → N be A-linear. Then θ^* : N^* → N^* is also A-linear. We dualize the exact sequence 0 → N → F → M → 0 to get an exact sequence
0 → M^* →^{π^*} F^* →^{u^*} N^* → 0.
We can lift θ^* to an A-linear map δ : M^* → M^*. Also if δ′ is another lift then as before it is easy to see δ − δ′ ∈ β(M^*, M^*). We define τ̃ : End(N) → End(M)/β(M, M) by τ̃(θ) = δ^*. It is clear that τ̃ is a ring homomorphism. As in Proposition 8.4, it can be easily proved that if θ ∈ β(N, N) then τ̃(θ) = 0. Thus we have a ring homomorphism τ : End(N)/β(N, N) → End(M)/β(M, M). Finally it is tautological that
τ ◦ σ = 1 on End(M)/β(M, M) and σ ◦ τ = 1 on End(N)/β(N, N).
8.6. If φ : R → S is an isomorphism of rings then it is easy to see that φ(rad R) = rad S, and thus we have an isomorphism of rings R/rad R ≅ S/rad S. As a corollary we obtain
Corollary 8.7. (with hypotheses as above)
End(M)/rad End(M) ≅ End(N)/rad End(N).
Proof. By Proposition 8.5 we have an isomorphism σ : End(M)/β(M, M) → End(N)/β(N, N). By 8.2 we have that β(M, M) ⊆ rad End(M). It follows that
rad(End(M)/β(M, M)) = (rad End(M))/β(M, M).
Similarly rad(End(N)/β(N, N)) = (rad End(N))/β(N, N). The result follows from 8.6.
We now give
Proof of Theorem 1.4. It suffices to prove that if M_0 ∼_q M_1 then
End_A(M_1)/rad End_A(M_1) ≅ (End_A(M_0)/rad End_A(M_0))^op.
By assumption M_0, M_1 ∈ CM_g(A). It follows that q is a codimension g quasi-Gorenstein ideal [10, Lemma 14, p. 616]. Set B = A/q. Then B is a Gorenstein ring. Notice M_0, M_1 are maximal Cohen-Macaulay B-modules. Furthermore they are stable B-modules, see [10, Proposition 3, p. 593]. Notice Hom_A(M_i, M_i) = Hom_B(M_i, M_i) for i = 0, 1.
As M_0 is horizontally linked to M_1 we get that M_0^* ≅ Ω(M_1). By 8.7 we get that
End(M_1)/rad End(M_1) ≅ End(M_0^*)/rad End(M_0^*).
Furthermore it is easy to see that End_B(M_0) ≅ End_B(M_0^*)^op and this is preserved when we go mod radicals. Thus
End(M_1)/rad End(M_1) ≅ (End(M_0)/rad End(M_0))^op.
References
[1] M. Auslander and R.-O. Buchweitz, The homological theory of maximal Cohen-Macaulay approximations, Colloque en l'honneur de Pierre Samuel (Orsay, 1987), Mém. Soc. Math. France (N.S.) No. 38 (1989), 5–37.
[2] L. L. Avramov, Modules of finite virtual projective dimension, Invent. Math. 96 (1989), 71–101.
[3] L. L. Avramov and R-O. Buchweitz, Support varieties and cohomology over complete intersections, Invent. Math. 142 (2000), no. 2, 285-318.
[4] W. Bruns and J. Herzog, Cohen-Macaulay Rings, revised edition, Cambridge Stud. Adv.
Math., vol. 39, Cambridge University Press, Cambridge, (1998).
[5] W. Decker, G. -M. Greuel, G. Pfister and H. Schönemann, Singular 4-0-2 — A computer
algebra system for polynomial computations. http://www.singular.uni-kl.de (2015).
[6] D. Eisenbud, Homological algebra on a complete intersection, with an application to group
representations, Trans. Amer. Math. Soc. 260 (1980), 35–64.
[7] T. H. Gulliksen, A change of ring theorem with applications to Poincaré series and intersection multiplicity, Math. Scand. 34 (1974), 167–183.
[8] C. Huneke, Linkage and the Koszul homology of ideals, Amer. J. Math. 104 (1982), no. 5,
1043-1062.
[9] C. Huneke, Numerical invariants of liaison classes, Invent. Math. 75 (1984), no. 2, 301-325.
[10] A. Martsinkovsky and J. R. Strooker, Linkage of modules, J. Algebra 271 (2004), no. 2,
587-626.
[11] H. Matsumura, Commutative ring theory, Translated from the Japanese by M. Reid. Second edition. Cambridge Studies in Advanced Mathematics, 8. Cambridge University Press,
Cambridge, 1989.
[12] C. Peskine and L. Szpiro, Liaison des variétés algébriques. I, Invent. Math. 26 (1974), 271–302.
Department of Mathematics, Indian Institute of Technology Bombay, Powai, Mumbai
400 076, India
E-mail address: tputhen@math.iitb.ac.in
| 0 |
arXiv:1604.02691v1 [] 10 Apr 2016
On an algorithm for receiving Sudoku matrices
Krasimir Yordzhev
Faculty of Mathematics and Natural Sciences, South-West University
Ivan Mihaylov 66, Blagoevgrad, 2700, Bulgaria
E-mail: yordzhev@swu.bg
Abstract
This work examines the problem to describe an efficient algorithm for
obtaining n2 × n2 Sudoku matrices. For this purpose, we define the concepts of n × n Πn -matrix and disjoint Πn -matrices. The article, using the
set-theoretical approach, describes an algorithm for obtaining n2 -tuples of
n × n mutually disjoint Πn matrices. We show that in input n2 mutually
disjoint Πn matrices, it is not difficult to receive a Sudoku matrix.
Keywords: Sudoku matrix; S-permutation matrix; Πn -matrix; disjoint matrices; data type set
2010 Mathematics Subject Classification: 05B20, 68Q65
1
Introduction and notation
Let n be a positive integer. Throughout, [n] denotes the set [n] = {1, 2, . . . , n},
U_n = [n] × [n] = {⟨a, b⟩ | a, b ∈ [n]},
and V_n denotes the set of all subsets of U_n.
Let P_ij, 1 ≤ i, j ≤ n, be n^2 square n × n matrices whose entries are elements of the set [n^2] = {1, 2, . . . , n^2}. The n^2 × n^2 matrix
P =
  | P_11 P_12 · · · P_1n |
  | P_21 P_22 · · · P_2n |
  |  ...             ... |
  | P_n1 P_n2 · · · P_nn |
is called a Sudoku matrix if every row, every column and every submatrix P_ij, 1 ≤ i, j ≤ n, comprises a permutation of the elements of the set [n^2], i.e., every integer s ∈ {1, 2, . . . , n^2} is found just once in each row, column, and submatrix P_ij. The submatrices P_ij are called blocks of P.
This work is dedicated to the problem of finding an algorithm for getting all
n2 × n2 Sudoku matrices for an arbitrary integer n ≥ 2. This task is solved for
n = 2 and n = 3 [3]. When n > 3, according to our information, this problem is
still open. Finding algorithm to obtain n2 × n2 , n ≥ 4 Sudoku matrices will lead
to solving the problem of constructing Sudoku puzzle of higher order, which will
increase the interest in this entertaining game. Here we not going to examine
and compare different algorithms for solving any Sudoku puzzle. Here we will
examine some algebraic properties of n2 × n2 Sudoku matrices, which are the
basis for obtaining various Sudoku puzzles.
A binary (or boolean, or (0,1)-matrix ) is a matrix all of whose elements
belong to the set B = {0, 1}. With Bn we will denote the set of all n × n binary
matrices.
Two n × n binary matrices A = (aij ) ∈ Bn and B = (bij ) ∈ Bn will be
called disjoint if there are not integers i, j ∈ [n] such that aij = bij = 1, i.e. if
aij = 1 then bij = 0 and if bij = 1 then aij = 0.
A matrix A ∈ B_{n^2} is called an S-permutation if in each row, in each column, and in each block of A there is exactly one 1. Let the set of all n^2 × n^2 S-permutation matrices be denoted by Σ_{n^2}.
A formula for calculating the number of all pairs of disjoint S-permutation matrices is given in [11].
S-permutation matrices and their algebraic properties have an important
part in the description of the discussed in [6] algorithm.
The concept of S-permutation matrix was introduced by Geir Dahl [2] in
relation to the popular Sudoku puzzle. It is well known that Sudoku matrices
are special cases of Latin squares. It is widespread puzzle nowadays, which
presents in the entertaining pages in most of the newspapers and magazines
and in entertaining web sites. Sudoku, or Su Doku, is a Japanese word (or
phrase) meaning something like Number Place.
Obviously a square n^2 × n^2 matrix M with elements of [n^2] = {1, 2, . . . , n^2} is a Sudoku matrix if and only if there are matrices A_1, A_2, . . . , A_{n^2} ∈ Σ_{n^2}, each two of them disjoint, such that M can be given in the following way:
M = 1 · A_1 + 2 · A_2 + · · · + n^2 · A_{n^2}    (1)
Thus, the problem to describe an efficient algorithm for obtaining all n2 tuples of mutually disjoint S-permutation matrices naturally arises. This work
is devoted to this task. For this purpose, in the next section using the settheoretical approach, we define the concepts of Πn -matrix and disjoint Πn matrices. We will prove that so defined task can be reduced to the task of
receiving all n2 -tuples of mutually disjoint Πn -matrices.
In section 3 we will describe an algorithm for obtaining n2 -tuples of n ×
n mutually disjoint Πn matrices and we will show that in input n2 mutually
disjoint Πn matrices, it is not difficult to receive a Sudoku matrix. Described
in this article algorithm essentially differs from the algorithm described in [3].
2
A representation of S-permutation matrices
Let n be a positive integer. If z_1 z_2 . . . z_n is a permutation of the elements of the set [n] = {1, 2, . . . , n}, let us briefly denote this permutation by σ. In this case we will denote by σ(i) the i-th element of this permutation, i.e. σ(i) = z_i, i = 1, 2, . . . , n.
Definition 1 Let Π_n denote the set of all n × n matrices constructed such that π ∈ Π_n if and only if the following three conditions are true:
i) the elements of π are ordered pairs of integers ⟨i, j⟩, where 1 ≤ i, j ≤ n;
ii) if [⟨a_1, b_1⟩ ⟨a_2, b_2⟩ · · · ⟨a_n, b_n⟩] is the i-th row of π for any i ∈ [n] = {1, 2, . . . , n}, then a_1 a_2 . . . a_n in this order is a permutation of the elements of the set [n];
iii) if [⟨a_1, b_1⟩ ⟨a_2, b_2⟩ · · · ⟨a_n, b_n⟩]^T is the j-th column of π for any j ∈ [n], then b_1, b_2, . . . , b_n in this order is a permutation of the elements of the set [n].
From Definition 1, it follows that we can represent each row and each column
of a matrix M ∈ Πn with the help of a permutation of elements of the set [n].
Conversely for every (2n)-tuple
hhρ1 , ρ2 , . . . , ρn i, hσ1 , σ2 , . . . , σn ii,
where
ρi = ρi (1) ρi (2) . . . ρi (n),
1≤i≤n
σj = σj (1) σj (2) . . . σj (n),
1≤j≤n
are 2n permutations of elements of [n] (not necessarily different), then the matrix
hρ1 (1), σ1 (1)i hρ1 (2), σ2 (1)i · · · hρ1 (n), σn (1)i
hρ2 (1), σ1 (2)i hρ2 (2), σ2 (2)i · · · hρ2 (n), σn (2)i
π=
..
..
..
..
.
.
.
.
hρn (1), σ1 (n)i
hρn (2), σ2 (n)i
is matrix of Πn . Hence
· · · hρn (n), σn (n)i
|Πn | = (n!)2n
(2)
Definition 2 We say that matrices π′ = [p′_ij]_{n×n} ∈ Π_n and π″ = [p″_ij]_{n×n} ∈ Π_n are disjoint if p′_ij ≠ p″_ij for every i, j ∈ [n].
Definition 3 Let π′, π″ ∈ Π_n, π′ = [p′_ij]_{n×n}, π″ = [p″_ij]_{n×n}, and let the integers i, j ∈ [n] be such that p′_ij = p″_ij. In this case we will say that p′_ij and p″_ij are component-wise equal elements.
Obviously two Π_n-matrices are disjoint if and only if they do not have component-wise equal elements.
Example 1 We consider the following Π_3-matrices:
π′ = [p′_ij] =
  | ⟨3,1⟩ ⟨2,1⟩ ⟨1,2⟩ |
  | ⟨2,3⟩ ⟨3,2⟩ ⟨1,1⟩ |
  | ⟨3,2⟩ ⟨1,3⟩ ⟨2,3⟩ |
π″ = [p″_ij] =
  | ⟨3,2⟩ ⟨1,3⟩ ⟨2,1⟩ |
  | ⟨3,3⟩ ⟨1,1⟩ ⟨2,2⟩ |
  | ⟨2,1⟩ ⟨1,2⟩ ⟨3,3⟩ |
π‴ = [p‴_ij] =
  | ⟨3,1⟩ ⟨1,3⟩ ⟨2,2⟩ |
  | ⟨2,2⟩ ⟨3,1⟩ ⟨1,1⟩ |
  | ⟨2,3⟩ ⟨1,2⟩ ⟨3,3⟩ |
Matrices π′ and π″ are disjoint, because they do not have component-wise equal elements.
Matrices π′ and π‴ are not disjoint, because they have two component-wise equal elements: p′_11 = p‴_11 = ⟨3,1⟩ and p′_23 = p‴_23 = ⟨1,1⟩.
Matrices π″ and π‴ are not disjoint, because they have three component-wise equal elements: p″_12 = p‴_12 = ⟨1,3⟩, p″_32 = p‴_32 = ⟨1,2⟩, and p″_33 = p‴_33 = ⟨3,3⟩.
h3, 3i.
The relationship between S-permutation matrices and the matrices from the set Π_n is given by the following theorem:
Theorem 1 Let n be an integer, n ≥ 2. Then there is a one-to-one correspondence θ : Π_n ⇆ Σ_{n^2}.
Proof. Let π = [p_ij]_{n×n} ∈ Π_n, where p_ij = ⟨a_i, b_j⟩, i, j ∈ [n], a_i, b_j ∈ [n]. Then for every i, j ∈ [n] we construct a binary n × n matrix A_ij with only one 1, placed at coordinates (a_i, b_j). Then we obtain the matrix
A =
  | A_11 A_12 · · · A_1n |
  | A_21 A_22 · · · A_2n |
  |  ...             ... |
  | A_n1 A_n2 · · · A_nn |    (3)
According to the properties i), ii) and iii), it is obvious that the obtained matrix A is an n^2 × n^2 S-permutation matrix.
Conversely, let A ∈ Σ_{n^2}. Then A is in the form shown in (3), and for every i, j ∈ [n] the block A_ij contains only one 1; let this 1 have coordinates (a_i, b_j). For every i, j ∈ [n] we obtain the ordered pair of integers ⟨a_i, b_j⟩ corresponding to these coordinates. As in every row and every column of A there is only one 1, the matrix π = [p_ij]_{n×n}, where p_ij = ⟨a_i, b_j⟩, 1 ≤ i, j ≤ n, obtained from these ordered pairs is a matrix of Π_n, i.e. a matrix for which the conditions i), ii) and iii) are true.
Corollary 1 Let π ′ , π ′′ ∈ Πn and let A′ = θ(π ′ ), A′′ = θ(π ′′ ), where θ is the
bijection defined in Theorem 1. Then A′ and A′′ are disjoint if and only if π ′
and π ′′ are disjoint.
Proof. It is easy to see that with respect of the described in Theorem 1 one
to one correspondence, every pair of disjoint matrices of Πn will correspond to
a pair of disjoint matrices of Σn2 and conversely every pair of disjoint matrices
of Σn2 will correspond to a pair of disjoint matrices of Πn .
Corollary 2 [2] The number of all n^2 × n^2 S-permutation matrices is equal to
|Σ_{n^2}| = (n!)^{2n}    (4)
Proof. It follows immediately from Theorem 1 and formula (2).
3 Description of the algorithm
Algorithm 2 Receive n^2 mutually disjoint Π_n-matrices.
Input: Integer n
Output: P_1, P_2, . . . , P_{n^2} ∈ Π_n such that P_i and P_j are disjoint when i ≠ j
1. Construct n × n arrays P_1, P_2, . . . , P_{n^2} whose entries assume values in the set V_n;
2. Initialize all entries of P_1, P_2, . . . , P_{n^2} with U_n;
3. For every k = 1, 2, . . . , n^2 do loop
4.   For every i = 1, 2, . . . , n do loop
5.     For every j = 1, 2, . . . , n do loop
6.       Choose ⟨a, b⟩ ∈ P_k[i][j];
7.       P_k[i][j] = {⟨a, b⟩};
8.       For every t = k + 1, k + 2, . . . , n^2 remove the element ⟨a, b⟩ from the set P_t[i][j];
9.       For every t = j + 1, j + 2, . . . , n remove from the set P_k[i][t] all elements ⟨x, y⟩ such that x = a;
10.      For every t = i + 1, i + 2, . . . , n remove from the set P_k[t][j] all elements ⟨x, y⟩ such that y = b;
       end loop 5;
     end loop 4;
   end loop 3.
Algorithm 3 Receive a S-permutation matrix from a Πn -matrix.
Input: P = [hak , bl i]n×n ∈ Πn , 1 ≤ k, l ≤ n.
Output: S = [sij ]n2 ×n2 ∈ Σn2 , 1 ≤ i, j ≤ n2 .
1. Construct an n2 × n2 integer array S = [sij ], 1 ≤ i, j ≤ n2 and initialize
sij = 0 for all i, j ∈ {1, 2, . . . , n2 };
2. For every k, l ∈ {1, 2, . . . , n} do loop
3.
i = (k − 1) ∗ n + ak ;
4.
j = (l − 1) ∗ n + bl ;
5.
sij = 1
end loop.
Algorithm 4 Receive Sudoku matrices.
Input: Integer n.
Output: Sudoku matrix A.
1. Get n2 mutually disjoint Πn matrices P1 , P2 , . . . Pn2 (Algorithm 2);
2. For every k = 1, 2, . . . , n2 from Pk receive Sk ∈ Σn2 (Algorithm 3);
3. A = 1 ∗ S1 + 2 ∗ S2 + · · · + n2 ∗ Sn2 .
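The following Python sketch illustrates Algorithms 2–4 using the built-in set type (our own illustrative transcription, not code from the paper). It assumes that the pruning in items 9–10 of Algorithm 2 acts on the remaining entries of the current row and column of P_k, and it simply restarts the greedy pass whenever a candidate set becomes empty, a case the pseudocode does not treat.

import random

def disjoint_pi_matrices(n, rng=random):
    # Algorithm 2: one greedy pass; entries are subsets of U_n = [n] x [n] (1-based pairs).
    U = [(a, b) for a in range(1, n + 1) for b in range(1, n + 1)]
    P = [[[set(U) for _ in range(n)] for _ in range(n)] for _ in range(n * n)]
    chosen = [[[None] * n for _ in range(n)] for _ in range(n * n)]
    for k in range(n * n):
        for i in range(n):
            for j in range(n):
                if not P[k][i][j]:
                    return None                      # greedy pass failed; caller retries
                a, b = rng.choice(sorted(P[k][i][j]))
                chosen[k][i][j] = (a, b)
                for t in range(k + 1, n * n):        # keep the n^2 matrices mutually disjoint
                    P[t][i][j].discard((a, b))
                for t in range(j + 1, n):            # row i of P_k: first components must not repeat
                    P[k][i][t] = {(x, y) for (x, y) in P[k][i][t] if x != a}
                for t in range(i + 1, n):            # column j of P_k: second components must not repeat
                    P[k][t][j] = {(x, y) for (x, y) in P[k][t][j] if y != b}
    return chosen

def s_permutation(pi, n):
    # Algorithm 3: expand a Pi_n-matrix into an n^2 x n^2 S-permutation matrix.
    S = [[0] * (n * n) for _ in range(n * n)]
    for k in range(1, n + 1):
        for l in range(1, n + 1):
            a, b = pi[k - 1][l - 1]
            S[(k - 1) * n + a - 1][(l - 1) * n + b - 1] = 1
    return S

def sudoku_matrix(n, rng=random):
    # Algorithm 4: A = 1*S_1 + 2*S_2 + ... + n^2*S_{n^2}.
    pis = None
    while pis is None:
        pis = disjoint_pi_matrices(n, rng)
    A = [[0] * (n * n) for _ in range(n * n)]
    for idx, pi in enumerate(pis, start=1):
        S = s_permutation(pi, n)
        for r in range(n * n):
            for c in range(n * n):
                A[r][c] += idx * S[r][c]
    return A

For n = 2 the sketch produces random 4 × 4 Sudoku matrices; making the choice in item 6 deterministic or exhaustive recovers the variants discussed in Section 4.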
4 Conclusion and remarks
• The algorithms described in Section 3 will work more efficiently if the programmer uses programming languages and programming environments with integrated tools for working with the data structure set [1, 4, 5, 7, 8, 9].
• If in item 6 of Algorithm 2 we choose the ordered pair ⟨a, b⟩ ∈ P_k[i][j] randomly, then we will get a random Sudoku matrix [10]. Thus we tested the effectiveness of the algorithm.
• If in item 6 of Algorithm 2 we choose all ordered pairs ⟨a, b⟩ ∈ P_k[i][j], then finally we will get all n^2 × n^2 Sudoku matrices. We do not know a general formula for the number θ_n of all n^2 × n^2 Sudoku matrices for each integer n ≥ 2; we consider this an open mathematical problem. Using a computer program based on the algorithms described in Section 3, we calculated that when n = 2 there are θ_2 = 288 Sudoku matrices of size 4 × 4. This number coincides with our results obtained using other methods, described in [12]. In [3], it has been shown that there are exactly θ_3 = 9! · 72^2 · 2^7 · 27 704 267 971 = 6 670 903 752 021 072 936 960 Sudoku matrices of size 9 × 9. The next step is to calculate the number θ_4 of 16 × 16 Sudoku matrices.
References
[1] Pavel Azalov. Object-oriented programming. Data structures and STL.
Ciela, Sofia, 2008.
[2] Geir Dahl. Permutation matrices related to sudoku. Linear Algebra and
its Applications, 430(8–9):2457–2463, 2009.
[3] Bertram Felgenhauer and Frazer Jarvis. Enumerating possible sudoku
grids, 2005.
[4] Ivor Horton. Beginning STL: Standard Template Library. Apress, 2015.
[5] Kathleen Jensen and Niklaus Wirth. PASCAL - User Manual and Report.
Lecture Notes in Computer Science. Springer, Berlin Heidelberg, 1985.
[6] Pallavi Mishra, D. K. Gupta, and Rakesh P. Badoni. A new algorithm for
enumerating all possible sudoku squares. Discrete Mathematics, Algorithms
and Applications, 8(2):1650026 (14 pages), 2016.
[7] Herbert Schildt. Java: The Complete Reference, Ninth Edition. McGraw-Hill Education, 2014.
[8] Kiat Shi Tan, Willi-Hans Steeb, and Yorick Hardy. Symbolic C++: An
Introduction to Computer Algebra using Object-Oriented Programming.
Springer-Verlag, London, 2000.
[9] Magdalina Todorova. Data structures and programming in C ++. Ciela,
Sofia, 2011.
[10] Krasimir Yordzhev. Random permutations, random sudoku matrices and
randomized algorithms. International J. of Math. Sci. & Engg. Appls.,
6(VI):291 – 302, 2012.
[11] Krasimir Yordzhev. Calculation of the number of all pairs of disjoint S-permutation matrices. Applied Mathematics and Computation, 268:1–11,
2015.
[12] Krasimir Yordzhev and Hristina Kostadinova. On some entertaining applications of the concept of set in computer science course. Informational
Technologies in Education, (10):24–29, 2011.
7
| 8 |
Surprisal-Driven Zoneout
arXiv:1610.07675v6 [cs.LG] 13 Dec 2016
Kamil Rocki (kmrocki@us.ibm.com), Tomasz Kornuta (tkornut@us.ibm.com)
IBM Research, San Jose, CA 95120, USA
Tegan Maharaj (tegan.maharaj@polymtl.ca)
École Polytechnique de Montréal
Abstract
We propose a novel method of regularization for
recurrent neural networks called suprisal-driven
zoneout. In this method, states zoneout (maintain
their previous value rather than updating), when
the suprisal (discrepancy between the last state’s
prediction and target) is small. Thus regularization is adaptive and input-driven on a per-neuron
basis. We demonstrate the effectiveness of this
idea by achieving state-of-the-art bits per character of 1.31 on the Hutter Prize Wikipedia dataset,
significantly reducing the gap to the best known
highly-engineered compression methods.
1. Introduction
An important part of learning is to go beyond simple memorization, to find as general dependencies in the data as
possible. For sequences of information, this means looking for a concise representation of how things change over
time. One common way of modeling this is with recurrent
neural networks (RNNs), whose parameters can be thought
of as the transition operator of a Markov chain. Training
an RNN is the process of learning this transition operator.
Generally speaking, temporal dynamics can have very different timescales, and intuitively it is a challenge to keep
track of long-term dependencies, while accurately modeling more short-term processes as well.
The Long-Short Term Memory (LSTM) (Hochreiter and
Schmidhuber, 1997) architecture, a type of RNN, has
proven to be exceptionally well suited for learning longterm dependencies, and is very widely used to model sequence data. Learned, parameterized gating mechanisms
control what is retrieved and what is stored by the LSTM’s
state at each timestep via multiplicative interactions with
LSTM’s state. There have been many approaches to capturing temporal dynamics at different timescales, e.g. neural networks with kalman filters, clockwork RNNs, narx
TEGAN . MAHARAJ @ POLYMTL . CA
Figure 1.1: Illustration of the adaptive zoneout idea.
networks, and recently hierarchical multiscale neural networks.
It has been proven (Solomonoff, 1964) that the most general solution to a problem is the one with the lowest Kolmogorov complexity, that is its code is as short as possible. In terms of neural networks one could measure the
complexity of a solution by counting the number of active neurons. According to Redundancy-Reducing Hypothesis (Barlow, 1961) neurons within the brain can code messages using different number of impulses. This indicates
that the most probable events should be assigned codes
with fewer impulses in order to minimize energy expenditure, or, in other words, that the more frequently occuring
patterns in lower level neurons should trigger sparser activations in higher level ones. Keeping that in mind, we
have focused on the problem of adaptive regularization, i.e.
minimization of a number of neurons being activated depending on the novelty of the current input.
Zoneout is a recently proposed regularizer for recurrent
neural networks which has shown success on a variety of
benchmark datasets (Krueger et al., 2016; Rocki, 2016a).
Zoneout regularizes by zoning out activations, that is:
freezing the state for a time step with some fixed probability. This mitigates the unstable behavior of a standard dropout applied to recurrent connections. However,
since the zoneout rate is fixed beforehand, one has to decide
a priori to prefer faster convergence or higher stochasticity,
The main contribution of this paper is the introduction of
Surprisal-Driven Adaptive Zoneout, where each neuron is
encouraged to be active as rarely as possible with the most
preferred state being no operation. The motivation behind
this idea is that low complexity codes will provide better
generalization.
Normalize every output unit:
p^i_t = exp(y^i_t) / Σ_i exp(y^i_t)    (2.13)
whereas we would like to be able to set this per memory
cell according to learning phase, i.e. lower initially and
higher later to prevent memorization/unnecessary activation. This is why we have decided to add surprisal-driven
feedback (Rocki, 2016b), since it gives a measurement of
current progress in learning. The provided (negative) feedback loop enables to change the zoneout rate online within
the scope of a given cell, allowing the zoneout rate to adapt
to current information. As learning progresses, the activations of that cell become less frequent in time and more
iterations will just skip memorization, thus the proposed
mechanism in fact enables different memory cells to operate on different time scales. The idea is illustrated in Fig.
1.1.
Figure 2.1: Sparse LSTM basic operational unit with surprisal-driven adaptive zoneout. Dashed line denotes the zoneout memory lane.
3. Experiments
3.1. Datasets
Hutter Prize Wikipedia Hutter Prize Wikipedia (also
known as enwik8) dataset (Hutter, 2012).
2. The model
We used the surprisal-feedback LSTM (Rocki, 2016b):
st = log pt−1 · xTt
(2.1)
Linux This dataset comprises approximately 603MB of
raw Linux 4.7 kernel source code∗
Next we compute the gate activations:
3.2. Methodology
ft = σ(Wf · xt + Uf · ht−1 + Vf · st + bf )
(2.2)
it = σ(Wi · xt + Ui · ht−1 + Vi · st + bi )
(2.3)
ot = σ(Wo · xt + Uo · ht−1 + Vo · st + bo )
(2.4)
ut = tanh(Wu · xt + Uu · ht−1 + Vu · st + bu )
(2.5)
The zoneout rate is adaptive; it is a function of st , τ is a
threshold parameter added for numerical stability, Wy is a
h → y connection matrix:
St = pt−1 − xt
(2.6)
zt = min(τ + |St · WyT |, 1)
(2.7)
Sample a binary mask Zt according to zoneout probability
zt :
Zt ∼ zt
(2.8)
New memory state depends on Zt . Intuitively, Zt = 0
means NOP, that is dashed line in Fig. 2.1.
ct = (1 − ft
Zt )
ct−1 + Zt
ĉt = tanh(ct )
ht = ot
ĉt
it
ut
3.3. Results and discussion
(2.11)
Remark: Surprisal-Driven Feedback has sometimes been
wrongly characterized as a ’dynamic evaluation’ method.
This is incorrect for the following reasons : 1. It never actually sees test data during training. 2. It does not adapt
weights during testing. 3. The evaluation procedure is exactly the same as using standard RNN - same inputs. Therefore it is fair to compare it to ’static’ methods.
(2.12)
∗
http://olab.is.s.u-tokyo.ac.jp/˜kamil.
rocki/data/
(2.9)
(2.10)
Outputs:
yt = Wy · ht + by
In both cases the first 90% of each corpus was used for
training, the next 5% for validation and the last 5% for reporting test accuracy. In each iteration sequences of length
10000 were randomly selected. The learning algorithm
used was Adadelta with a learning rate of 0.001. Weights
were initialized using the so-called Xavier initialization
(Glorot and Bengio, 2010). Sequence length for BPTT was
100 and batch size 128. In all experiments only one layer
of 4000 LSTM cells was used. States were carried over for
the entire sequence of 10000 emulating full BPTT. Forget
bias was set initially to 1. Other parameters were set to
zero. The algorithm was written in C++ and CUDA 8 and
ran on GTX Titan GPU for up to 2 weeks.
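To make the update order concrete, the following is a rough single-step numpy sketch of the surprisal-feedback LSTM cell with adaptive zoneout described by equations (2.1)–(2.13) (our own illustrative transcription, not the authors' C++/CUDA implementation; the parameter container, the shapes, the small constant inside the logarithm and the transpose convention used for (2.7) are assumptions).

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def sf_lstm_step(x_t, p_prev, h_prev, c_prev, par, tau=0.05, rng=np.random):
    # x_t, p_prev: one-hot input and previous prediction (vocabulary size m)
    # h_prev, c_prev: hidden and cell state (n cells)
    # par: dict with Wf/Uf/Vf/bf (and i, o, u analogues), plus Wy (m x n) and by (m,)
    s_t = float(np.log(p_prev + 1e-12) @ x_t)                 # (2.1) surprisal of the observed symbol
    gates = {}
    for g in "fiou":                                          # (2.2)-(2.5) gate activations
        pre = par["W" + g] @ x_t + par["U" + g] @ h_prev + par["V" + g] * s_t + par["b" + g]
        gates[g] = np.tanh(pre) if g == "u" else sigmoid(pre)
    S_t = p_prev - x_t                                        # (2.6) prediction error
    z_t = np.minimum(tau + np.abs(par["Wy"].T @ S_t), 1.0)    # (2.7) adaptive per-cell zoneout rate
    Z_t = (rng.random(h_prev.shape[0]) < z_t).astype(float)   # (2.8) binary mask; Z_t = 0 means NOP
    c_t = (1.0 - gates["f"] * Z_t) * c_prev + Z_t * gates["i"] * gates["u"]   # (2.9) memory update
    h_t = gates["o"] * np.tanh(c_t)                           # (2.10)-(2.11) hidden state
    y_t = par["Wy"] @ h_t + par["by"]                         # (2.12) outputs
    p_t = np.exp(y_t - y_t.max()); p_t /= p_t.sum()           # (2.13) softmax prediction
    return p_t, h_t, c_t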
2.2
SF-LSTM (Test)
LSTM (Test)
Sparse Zoneout SF-LSTM (Test)
Zoneout SF-LSTM (Test)
SF-LSTM (Train)
LSTM (Train)
Sparse Zoneout SF-LSTM (Train)
Zoneout SF-LSTM (Train)
2.1
2
1.9
1.8
1.7
1.6
1.5
1.4
1.3
1.2
1.1
1
(a)
(b)
0.9
Colors indicate I/O gate activations: Red – write (i), Green –
read (o), Blue - erase (f), White – all, Black – no operation
0.8
4h
12h
24h 32h
48h
72h
96h
144h
Figure 3.1: Learning progress on enwik8 dataset
Table 3.1: Bits per character on the enwik8 dataset (test)
BPC
mRNN∗ (Sutskever et al., 2011)
GF-RNN (Chung et al., 2015)
Grid LSTM (Kalchbrenner et al., 2015)
Layer-normalized LSTM (Ba et al., 2016)
Standard LSTM‡
MI-LSTM (Wu et al., 2016)
Array LSTM (Rocki, 2016a)
HM-LSTM (Chung et al., 2016)
HyperNetworks (Ha et al., 2016)
SF-LSTM (Rocki, 2016b)
RHN (Zilly et al., 2016)
1.60
1.58
1.47
1.46
1.45
1.44
1.40
1.40
1.38
1.37
1.32
Surprisal-Driven Zoneout
1.31
†
cmix v11
(c)
(d)
Hidden state
1.245
Table 3.2: Bits per character on the Linux dataset (test)
(e)
BPC
SF-LSTM
Surprisal-Driven Zoneout
(f)
Memory cell state
1.38
1.18
We observed substantial improvements on enwik8 (Table
3.1) and Linux (Table 3.2) datasets. Our hypothesis is that
it is due to the presence of memorizable tags and nestedness
in this dataset, which are ideal for learning with suprisaldriven zoneout. Patterns such as < timestamp > or long
periods of spaces can be represented by a single impulse
in this approach, zoning out entirely until the end of the
pattern. Without adaptive zoneout, this would have to be
controlled entirely by learned gates, while suprisal allows
quick adaptation to this pattern. Fig 3.2 shows side by side
comparison of a version without and with adaptive zoneout, demonstrating that in fact the dynamic span of memory cells is greater when adaptive zoneout is used. Fur-
(g) Mean: 0.27/timestep
(h) Mean: 0.092/timestep
Memory cell change (L1-norm)
Figure 3.2: Visualization of memory and network states
in time (left to right, 100 time steps, Y-axis represents cell
index); Left: without Adaptive Zoneout, Right: with Adaptive Zoneout
thermore, we show that the activations using adaptive zoneout are in fact sparser than without it, which supports
our intuition about the inner workings of the network. An
especially interesting observation is the fact that adaptive
zoneout seems to help separate instructions which appear
mixed otherwise (see Fig 3.2). A similar approach to the
same problem is called Hierarchical Multiscale Recurrent
Neural Networks (Chung et al., 2016). The main difference is that we do not design an explicit hierarchy of levels, instead allowing each neuron to operate on arbitrary
timescale depending on its zoneout rate. Syntactic patterns
in enwik8 and linux datasets are highly nested. For example (< page >, < revision >, < comment >, [[:en,
..., not mentioning parallel semantic context (movie, book,
history, language). We believe that in order to learn such
complex structure, we need distributed representations with
every neuron operating at arbitrary time scale independent
of another. Hardcoded hierarchical architecture will have
problems solving such a problem.
4. Summary
The proposed surprisal-driven zoneout appeared to be a
flexible mechanism to control the activation of a given cell.
Empirically, this method performs extremely well on the
enwik8 and linux datasets.
5. Further work
We would like to explore variations of suprisal-driven zoneout on both state and cell. Another interesting direction to pursue is the connection with sparse coding - using suprisal-driven zoneout, the LSTM’s cell contents are
more sparsely revealed through time, potentially resulting
in information being used more effectively.
Acknowledgements
This work has been supported in part by the Defense Advanced Research Projects Agency (DARPA).
References
J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization.
arXiv preprint arXiv:1607.06450, 2016.
H. B. Barlow. Possible principles underlying the transformations of sensory messages. 1961.
J. Chung, Ç. Gülçehre, K. Cho, and Y. Bengio. Gated feedback recurrent neural networks. CoRR, abs/1502.02367,
†
‡
Best known compressor: http://mattmahoney.net/dc/text.html
our implementation
2015.
URL http://arxiv.org/abs/1502.
02367.
J. Chung, S. Ahn, and Y. Bengio. Hierarchical multiscale recurrent neural networks.
arXiv preprint
arXiv:1609.01704, 2016.
X. Glorot and Y. Bengio. Understanding the difficulty of
training deep feedforward neural networks. In In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS10). Society for Artificial Intelligence and Statistics, 2010.
D. Ha, A. Dai, and Q. V. Le. Hypernetworks. arXiv
preprint arXiv:1609.09106, 2016.
S. Hochreiter and J. Schmidhuber. Long short-term
memory.
Neural Comput., 9(8):1735–1780, Nov.
1997. ISSN 0899-7667. doi: 10.1162/neco.1997.
9.8.1735. URL http://dx.doi.org/10.1162/
neco.1997.9.8.1735.
N. Kalchbrenner, I. Danihelka, and A. Graves. Grid long
short-term memory. CoRR, abs/1507.01526, 2015. URL
http://arxiv.org/abs/1507.01526.
D. Krueger, T. Maharaj, J. Kramár, M. Pezeshki, N. Ballas, N. R. Ke, A. Goyal, Y. Bengio, H. Larochelle,
A. C. Courville, and C. Pal. Zoneout: Regularizing
rnns by randomly preserving hidden activations. CoRR,
abs/1606.01305, 2016. URL http://arxiv.org/
abs/1606.01305.
K. Rocki. Recurrent memory array structures.
preprint arXiv:1607.03085, 2016a.
arXiv
K. M. Rocki. Surprisal-driven feedback in recurrent networks. arXiv preprint arXiv:1608.06027, 2016b.
R. J. Solomonoff. A formal theory of inductive inference.
part i. Information and control, 7(1):1–22, 1964.
I. Sutskever, J. Martens, and G. Hinton. Generating
text with recurrent neural networks. In L. Getoor and
T. Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning (ICML-11),
ICML ’11, pages 1017–1024, New York, NY, USA, June
2011. ACM. ISBN 978-1-4503-0619-5.
Y. Wu, S. Zhang, Y. Zhang, Y. Bengio, and R. Salakhutdinov. On multiplicative integration with recurrent neural
networks. CoRR, abs/1606.06630, 2016. URL http:
//arxiv.org/abs/1606.06630.
J. G. Zilly, R. K. Srivastava, J. Koutnı́k, and J. Schmidhuber.
Recurrent highway networks.
CoRR,
abs/1607.03474, 2016. URL http://arxiv.org/
abs/1607.03474.
| 9 |
Event-triggered leader-following tracking control for
multivariable multi-agent systems ⋆
arXiv:1603.04125v1 [] 14 Mar 2016
Yi Cheng a and V. Ugrinovskii a
a
School of Engineering and Information Technology, University of New South Wales at the Australian Defence Force
Academy, Canberra, ACT 2600, Australia.
Abstract
The paper considers event-triggered leader-follower tracking control for multi-agent systems with general linear dynamics.
For both undirected and directed follower graphs, we propose event triggering rules which guarantee bounded tracking errors.
With these rules, we also prove that the systems do not exhibit Zeno behavior, and the bounds on the tracking errors can be
tuned to a desired small value. We also show that the combinational state required for the proposed event triggering conditions
can be continuously generated from discrete communications between the neighboring agents occurring at event times. The
efficacy of the proposed methods is discussed using a simulation example.
Key words: Event-triggered control, leader-follower tracking, consensus control, multi-agent systems.
1
Introduction
Cooperative control of multi-agent systems has received
increasing attention in the past decade, see [1] and
references therein. However, many control techniques
developed so far rely on continuous communication
between agents and their neighbors. This limits
practicality of these techniques.
To address this concern, several approaches have been
proposed in recent years. One approach is to apply
sampled control [2]. However in sampled data control
schemes control action updates continue periodically
with the same frequency even after the system has
reached the control goal with sufficient accuracy and no
longer requires intervention from the controller. Efforts
to overcome this shortcoming have led to the idea of
triggered control. Self-triggered control strategies [3,4,9]
employ a triggering mechanism to proactively predict
the next time for updating the control input ahead of
time, using the current measurements. On the other
hand, event-triggered controllers [5,6,7,8,9,10] trigger
control input updates by reacting to excessive deviations
⋆ This work was supported by the Australian Research
Council under the Discovery Projects funding scheme
(project DP120102152). Accepted for publication in
Automatica on March 14, 2016.
Email addresses: y.cheng@adfa.edu.au (Yi Cheng),
v.ugrinovskii@gmail.com (V. Ugrinovskii).
Preprint submitted to Automatica
of the decision variable from an acceptable value, i.e.,
when a continuously monitored triggering condition is
violated. This latter approach is the main focus in this
paper.
The development of event-triggered controllers remains
challenging, because the agents in a multi-agent
system do not have access to the complete system
state information required to make decisions about
control input updates. To prove the concept of eventtriggering, the early work was still assuming continuous
communication between the neighboring agents [7,9].
To circumvent this limitation, several approaches have
been proposed, e.g., see [11,12,13,14,15,16,17]. For
instance, different from [7,9] where state-dependent
event triggering conditions were used, [11] proposed an
event-triggered control strategy using a time-dependent
triggering function which did not require neighbors’
information. In [12], a state-dependent event triggering
condition was employed, complemented by an iterative
algorithm to estimate the next triggering time, so that
continuous communications between neighboring agents
were no longer needed. In [13], sampled-data event
detection has been used. It must be noted that these
results as well as many other results in this area were
developed for multi-agent systems with single or double
integrator dynamics. Most recently, similar results have
been developed for multi-agent systems with general
dynamics [14,15] and nonlinear dynamics [16,17].
Accepted for publication in Automatica on March 14, 2016.
In comparison with the recent work on event-triggered
control for general linear systems [14,30,15,25], the
main distinction of our method is computing the
combinational state directly using the neighbors’
information. This allowed us to avoid additional
sampling when checking event triggering conditions,
cf. [14,30]. In contrast in [15], to avoid continuous
transmission of information, each agent was equipped
with models of itself and its neighbors. In [25], estimators
were embedded into each node to enable the agents
to estimate their neighbors’ states. Both approaches
make the controller rather complex, compared with
our controller which does not require additional models
or estimators. The leader-follower context and the
treatment of both directed and undirected versions of
the problem are other distinctions.
All the papers mentioned above considered the eventtriggered control problem for leaderless systems. The
leader-following control is one of the important
problems in cooperative control of multi-agent systems
[18,19,20,1], and the interest in event-based solutions
to this problem is growing [21,22,23,24]. General
multidimensional leader following problems still remain
technically challenging, and the development is often
restricted to the study of single or double integrator
dynamics [21,22,23,24]. Zeno behavior presents another
challenge, and is not always excluded [21,22]. Excluding
Zeno behavior is an important requirement on control
protocols since excessively frequent communications
reduce the advantages of using the event-triggered
control.
In this paper, we also consider the event-triggered leaderfollowing control problem for multi-agent systems.
Unlike [21,22,23,24], the class of systems considered
allows for general linear dynamics. Also, the leader
can be marginally stable or even unstable. For both
undirected and directed system interconnections, we
propose sufficient conditions for the design of controllers
which guarantee that the leader tracking errors are
contained within certain bounds; these bounds can
be optimized by tuning the parameters of the design
procedure. We also show that with the proposed eventtriggered control protocols, the system does not exhibit
Zeno behavior. These results are the main contribution
of the paper.
The paper is organized as follows. Section 2 includes the
problem formulation and preliminaries. The main results
are given in Sections 3 and 4. In Section 3 we consider
the case when the system of followers is connected over a
directed graph. Although these results are applicable to
systems connected over an undirected graph as well, the
symmetry of the graph Laplacian makes it possible to
derive an alternative control design scheme in Section 4.
In Section 5, the generation of the combinational state
is discussed. Section 6 provides an illustrative example.
The conclusions are given in Section 7.
Throughout the paper, ℜn and ℜn×m are a real
Euclidean n-dimensional vector space and a space of real
n × m matrices. ⊗ denotes the Kronnecker product of
two matrices. λmax (·) and λmin (·) will denote the largest
and the smallest eigenvalues of a real symmetric matrix.
For q ∈ ℜn , diag{q} denotes the diagonal matrix with
the entries of q as its diagonal elements. IN is the N × N
identity matrix. When the dimension is clear from the
context, the subscript N will be suppressed.
Another contribution of the paper is the event-triggered control
protocols that do not require the neighboring agents to
communicate continuously. Instead, the combinational
state to be used in the event triggering condition
is generated continuously within the controllers, by
integrating the information obtained from the neighbors
during their communication events. The idea is inspired
by [12]; however, the procedure in [12], developed for
single integrator systems, cannot be applied to the multi-agent systems with general linear dynamics considered
here, since in our case the dynamics of the measurement
error depend explicitly on the combinational state. Also,
different from [12], the proposed algorithm involves one-way communications between the neighboring agents.
The combinational state is computed continuously
by each agent and is broadcast to its neighbors
only at the time when the communication event is
triggered at this node, and only in one direction. The
neighbors then use this information for their own
computation, and do not send additional requests to
measure the combinational state. This is an important
advantage of our protocol compared with the event-triggered control strategies proposed in [12,21,23,24,30].
In these references, when an event is triggered at
one agent, it must request additional information from its
neighbors to update the control signals. Owing to this,
our scheme is applicable to systems with a directed graph
which only involves one-way communications.
2 Problem formulation and preliminaries

2.1 Communication graph

Consider a communication graph Ḡ = (V̄, Ē, Ā), where V̄ = {0, . . . , N} is a finite nonempty node set, Ē ⊆ V̄ × V̄ is an edge set of pairs of nodes, and Ā is an adjacency matrix. Without loss of generality, node 0 will be assigned to represent the leader, while the nodes from the set V = {1, . . . , N} will represent the followers.

The (in general, directed) subgraph G = (V, E, A) obtained from Ḡ by removing the leader node and the corresponding edges describes communications between the followers; the edge set E ⊆ V × V represents the communication links between them, with the ordered pair (j, i) ∈ E indicating that node i obtains information from node j; in this case j is the neighbor of i. The set of neighbors of node i in the graph G is denoted as Ni = {j | (j, i) ∈ E}. Following the standard convention, we assume that G does not have self-loops or repeated edges. The adjacency matrix A = [aij] ∈ ℜ^{N×N} of G is defined as aij = 1 if (j, i) ∈ E, and aij = 0 otherwise. Let di = Σ_{j=1}^{N} aij be the in-degree of node i ∈ V and D = diag{d1, . . . , dN} ∈ ℜ^{N×N}. Then L = D − A is the Laplacian matrix of the graph G; it is symmetric when G is undirected.

We assume throughout the paper that the leader is observed by a subset of followers. If the leader is observed by follower i, then the directed edge (0, i) is included in Ē and is assigned the weighting gi = 1, otherwise we let gi = 0. We refer to node i with gi ≠ 0 as a pinned node. Let G = diag{g1, . . . , gN} ∈ ℜ^{N×N}. The system is assumed to have at least one follower which can observe the leader, hence G ≠ 0.

In addition, we assume the graph G contains a spanning tree rooted at a pinned node ir, i.e., gir > 0. Then −(L + G) is a Metzler matrix. According to [28], the matrix −(L + G) is Hurwitz stable¹, which implies that −(L + G) is diagonally stable [29]. That is, there exists a positive definite diagonal matrix Θ = diag{ϑ1, . . . , ϑN} such that H = Θ⁻¹(L + G) + (L + G)′Θ⁻¹ > 0. We will also use the following notation: α = ½ λmin(H), ϑmin = min_i ϑi, ϑ = min_i (ϑi⁻¹), P = Θ⁻¹(L + G)(L + G)′Θ⁻¹ and F = (L + G)′(L + G).

¹ These properties of the matrix L + G can be guaranteed under weaker assumptions on the graph G [28].

2.2 Problem formulation

Consider a multi-agent system consisting of a leader agent and N follower agents. Dynamics of the ith follower are described by the equation

ẋi = A xi + B ui,   (1)

where xi ∈ ℜ^n is the state, ui ∈ ℜ^p is the control input. Also, the dynamics of the leader agent are given by

ẋ0 = A x0.   (2)

Note that the matrix A is not assumed to be Hurwitz; it can be marginally stable or even unstable.

We wish to find a distributed event-triggered control law for each follower to be able to track the leader. For each agent i, introduce a combinational state zi(t),

zi(t) = Σ_{j∈Ni} ( xj(t) − xi(t) ) + gi ( x0(t) − xi(t) ).   (3)

We seek to develop a control scheme where agent i updates its control input at event times, which are denoted by ti0, ti1, . . ., based on samples zi(tik) of its combinational state. The value of the combinational state is held constant between updates, thus giving rise to the measurement signal ẑi(t) = zi(tik), t ∈ [tik, tik+1). Based on this model, consider the following control law

ui(t) = −K ẑi(t),   t ∈ [tik, tik+1),   (4)

where K ∈ ℜ^{p×n} is a feedback gain matrix to be defined later. The problem in this paper is to find a control law (4) and an event triggering strategy which achieve the following leader-following property

lim sup_{t→∞} Σ_{i=1}^{N} ‖x0(t) − xi(t)‖² ≤ ∆,   (5)

where ∆ is a given positive constant. Furthermore, the closed loop dynamics of the followers must not exhibit Zeno behavior with the proposed event triggering rule.

Definition 1 We say that the leader-follower system (1), (2) with a control law (4) does not exhibit Zeno behavior if over any finite time period there are only a finite number of communication events between the follower systems, i.e., for every agent i the sequence of event times tik has the property inf_k (tik+1 − tik) > 0.

3 Event-triggered leader-following control under a directed graph G

In this section, we propose an event triggering rule and a leader-following tracking control for multi-agent systems where the followers are connected over a directed graph. Our result will involve certain symmetric positive definite matrices R, Q, and Y related through the following Riccati inequality

Y A + A′Y − 2 ϑmin Y B R⁻¹ B′Y + Q ≤ 0,   (6)

and constants ω > 0 and µi > 0 chosen so that ρi = α1 − µi − (ω/α) α2 > 0, where α1 = λmin(Q)/λmax(Y) and α2 = λmax(P) λmax²(Y B R⁻¹ B′Y) / (ϑ λmin(Y)). Let ρmin = min_i ρi and select νi > 0, σi ∈ (0, ρmin) and γ > 0. Introduce the combinational state measurement error for agent i

si(t) = ẑi(t) − zi(t).   (7)

Theorem 1 Given R = R′ > 0, Q = Q′ > 0, suppose there exists Y = Y′ > 0 such that (6) holds. Then under the control law (4) with K = −(1/α) R⁻¹ B′Y, the system (1), (2) achieves the leader-follower tracking property of the form (5) with ∆ = Nγ / (ϑ λmin(Y) λmin(F) ρmin), if the communication events are triggered at

tik = inf{ t > tik−1 : ‖si‖² ≥ αω ( µi ϑi⁻¹ zi′ Y zi + νi e^{−σi t} + γ ) }.   (8)

In addition, the system does not exhibit Zeno behavior.
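To make the event-triggered protocol concrete, the sketch below shows how a single follower could evaluate the triggering rule (8) and the control law (4) at a simulation step. It is only an illustration of the logic above, not the authors' implementation; the design quantities (K, Y, α, ω, µi, νi, σi, γ, ϑi) are assumed to have been computed beforehand, and all names are placeholders.

```python
import numpy as np

def follower_step(z_i, z_hat_i, t, params):
    """One evaluation of trigger (8) and control law (4) for agent i.

    z_i     : current combinational state z_i(t), built from neighbor data
    z_hat_i : sample z_i(t_k^i) held since the previous event
    params  : dict with the design quantities of Theorem 1 (placeholder values)
    """
    K, Y = params["K"], params["Y"]
    alpha, omega = params["alpha"], params["omega"]
    mu_i, nu_i = params["mu_i"], params["nu_i"]
    sigma_i, gamma = params["sigma_i"], params["gamma"]
    theta_i = params["theta_i"]            # diagonal entry of Theta for agent i

    s_i = z_hat_i - z_i                    # measurement error (7)
    threshold = alpha * omega * (mu_i / theta_i * (z_i @ Y @ z_i)
                                 + nu_i * np.exp(-sigma_i * t) + gamma)

    event = s_i @ s_i >= threshold         # triggering condition (8)
    if event:
        z_hat_i = z_i.copy()               # broadcast and reset the held sample
    u_i = -K @ z_hat_i                     # control law (4)
    return u_i, z_hat_i, event
```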
Remark 1 The Riccati inequality (6) is similar to the Riccati inequality employed in [15]. However, [15] considers an undirected topology, and the design uses the second smallest eigenvalue of L. In contrast, (6) uses ϑmin associated with the directed graph G. Also, the inequality (6) is equivalent to the following LMI in Y⁻¹, which can be solved using the existing LMI solvers,

[ A Y⁻¹ + Y⁻¹ A′ − 2 ϑmin B R⁻¹ B′    Y⁻¹
  Y⁻¹                                 −Q⁻¹ ] ≤ 0.   (9)

Remark 2 We note that the event triggering condition (8) involves monitoring of the combinational state zi(t), hence the means for generating zi(t) continuously are needed to implement it. A computational algorithm will be introduced later to generate the combinational state using event-triggered communications.

Proof of Theorem 1: We first prove that ‖zi(t)‖ are bounded. This fact will then be used to prove that under the proposed control law the system does not exhibit Zeno behavior. Also, the property (5) will be proved after Zeno behavior is excluded.

Define the tracking error εi(t) = x0(t) − xi(t) at node i. It follows from (7) and (4) that

ε̇i(t) = A εi(t) + B K zi(t) + B K si(t).   (10)

Let the Lyapunov function candidate for the system comprised of the systems (10) be V(ε) = z′(Θ⁻¹ ⊗ Y)z, where z = ((L + G) ⊗ In)ε, and ε = [ε1′ . . . εN′]′. Then

dV(ε)/dt = 2 z′( (Θ⁻¹(L + G) ⊗ Y B K) z + (Θ⁻¹ ⊗ Y A) z + (Θ⁻¹(L + G) ⊗ Y B K) s ).   (11)

Since K = −(1/α) R⁻¹ B′Y, the following inequality holds

2 z′( (Θ⁻¹(L + G)) ⊗ (Y B K) ) z = −z′( H ⊗ ((1/α) Y B R⁻¹ B′Y) ) z ≤ −2α z′( IN ⊗ ((1/α) Y B R⁻¹ B′Y) ) z.   (12)

Using (6), it follows from (11) and (12) that

dV(ε)/dt ≤ −z′( Θ⁻¹ ⊗ Q ) z + (1/(αω)) s′s + (ω/α) z′( P ⊗ (Y B R⁻¹ B′Y)² ) z.   (13)

Since the triggering condition (8) enforces the property

(αω)⁻¹ ‖si‖² ≤ µi ϑi⁻¹ zi′ Y zi + νi e^{−σi t} + γ   (14)

on every interval [tik, tik+1), then it follows from (13) that

dV(ε)/dt ≤ − Σ_{i=1}^{N} ρi ϑi⁻¹ zi′ Y zi + Σ_{i=1}^{N} νi e^{−σi t} + Nγ ≤ −ρmin V(ε) + Σ_{i=1}^{N} νi e^{−σi t} + Nγ.   (15)

Thus, we have

V(ε) ≤ e^{−ρmin t} ( V(ε(0)) − Σ_{i=1}^{N} νi/(ρmin − σi) − Nγ/ρmin ) + Σ_{i=1}^{N} e^{−σi t} νi/(ρmin − σi) + Nγ/ρmin   (16)

≤ V(ε(0)) + Σ_{i=1}^{N} νi/(ρmin − σi) + Nγ/ρmin = κ,   (17)

where the constant κ depends on the initial conditions. It then follows from (17) that for all t ≥ 0

κ ≥ V(ε) = z′(Θ⁻¹ ⊗ Y) z ≥ ϑ λmin(Y) Σ_{i=1}^{N} ‖zi‖².   (18)

This implies that for all i, ‖zi(t)‖ is bounded,

‖zi(t)‖ ≤ √( κ (ϑ λmin(Y))⁻¹ ) = κ̄.   (19)

Next, we prove that the system does not exhibit Zeno behavior. Suppose t1, t2 are two adjacent zero points of si(t) on the interval [tik, tik+1), tik ≤ t1 < t2 < tik+1. Then ‖si(t)‖ > 0 for all t ∈ (t1, t2) ⊆ (tik, tik+1), and the following inequality holds on the interval (t1, t2)

d‖si‖/dt = d(si′si)^{1/2}/dt = si′ṡi/‖si‖ ≤ ‖si‖‖ṡi‖/‖si‖ = ‖ṡi‖.   (20)

Furthermore, note that on the interval [t1, t2)², si(t1+) = 0 and

ṡi(t) = −żi(t).   (21)

It follows from (2) that ∀t ∈ [t1, t2)

‖ṡi(t)‖ = ‖ Σ_{j∈Ni} ( ẋj(t) − ẋi(t) ) + gi ( ẋ0(t) − ẋi(t) ) ‖
        = ‖ A zi(t) + B K ( Σ_{j∈Ni} ( zi(tik) − ẑj(t) ) + gi zi(tik) ) ‖ ≤ ‖A‖ ‖si(t)‖ + Mki,   (22)

where Mki = max_{t∈[tik, tik+1)} ‖ A zi(tik) + B K ( Σ_{j∈Ni} ( zi(tik) − ẑj(t) ) + gi zi(tik) ) ‖. Hence, using (20) and (22) we obtain

‖si‖ ≤ (Mki/‖A‖) ( e^{‖A‖(t−t1)} − 1 ) ≤ (Mki/‖A‖) ( e^{‖A‖(tik+1 − tik)} − 1 )   (23)

for all t ∈ (t1, t2). Since si(t1+) = 0, (23) holds for all t ∈ [t1, t2) ⊆ [tik, tik+1). The expression on the right hand side of (23) is independent of t; hence the above reasoning applies to all such intervals [t1, t2). Hence, (23) holds for all t ∈ [tik, tik+1). Thus, from the definition of the event time tik+1 in (8) and (23) we obtain

√(αωγ) ≤ ‖si((tik+1)−)‖ ≤ (Mki/‖A‖) ( e^{‖A‖(tik+1 − tik)} − 1 ).   (24)

According to (19), for any k, Mki ≤ ( ‖A‖ + (2di + gi)‖BK‖ ) κ̄ = ηi κ̄ ≤ η̄ κ̄, where η̄ = max_i ηi. Hence, it follows from (24) that

tik+1 − tik ≥ (1/‖A‖) ln( 1 + ‖A‖ √(αωγ) / (η̄ κ̄) ).   (25)

Thus, the inter-event intervals are bounded from below uniformly in k, that is, Zeno behavior does not occur.

Since Zeno behavior has been ruled out, it follows from (16) and the rightmost inequality in (18) that for all i,

lim sup_{t→∞} Σ_{i=1}^{N} ‖zi(t)‖² ≤ Nγ (ϑ λmin(Y) ρmin)⁻¹.   (26)

Since z = ((L + G) ⊗ In)ε, this further implies

lim sup_{t→∞} Σ_{i=1}^{N} ‖εi(t)‖² ≤ Nγ / (ϑ λmin(Y) λmin(F) ρmin),   (27)

i.e., (5) holds. This concludes the proof. ✷

² As usual, s(a+) ≜ lim_{t↓a} s(t), s(b−) ≜ lim_{t↑b} s(t).

According to (25) and (27), the parameter γ not only helps to exclude Zeno behavior, but also determines the upper bound of the tracking errors. We now show that after a sufficiently large time, the lower bound on the inter-event intervals becomes independent of γ. More precisely, the following statement holds.

Corollary 1 For any δ > 0, there exists a sufficiently large tδ such that with the control law and event triggering condition proposed in Theorem 1,

inf_{k: tik > tδ} (tik+1 − tik) ≥ (1/‖A‖) ln( 1 + ‖A‖ √(αω ϑ λmin(Y) ρmin) / (η̄ (1 + δ) √N) ) = π.   (28)

Proof: According to (26), for any δ > 0, there exists tδ such that ‖zi(t)‖ < (1 + δ) √( Nγ / (ϑ λmin(Y) ρmin) ) ≜ ϖ1 for all i and t > tδ. Therefore, for a sufficiently large k, Mki ≤ ( ‖A‖ + (2di + gi)‖BK‖ ) ϖ1 ≤ η̄ ϖ1. Then (28) follows from (24). ✷

Remark 3 From (27), the upper bound on the tracking error depends on the parameter γ and the size of the network N. Therefore, the tracking performance can be guaranteed even for larger systems, if γ is sufficiently small. On the other hand, the lower bound on the inter-event times in (25) reduces if γ is reduced. This means that a higher tracking precision can be achieved by reducing γ, but the communications may become more frequent. However, Corollary 1 shows that when γ is reduced, the frequency of communication events may increase only on an initial interval [0, tδ], and after time tδ the minimum inter-event time π is independent of γ.

Remark 4 Selecting the parameters for the event-triggering condition (8) involves the following steps: (a) choose matrices Q > 0 and R > 0 and solve the Riccati inequality (6), or equivalently the LMI (9), to obtain the matrix Y, then compute α1 and α2; (b) choose µi > 0 and ω > 0 to compute ρi > 0; (c) choose σi ∈ (0, ρmin); (d) based on the desired upper bound ∆, select γ, see (27); (e) lastly, choose νi. Note that the term νi e^{−σi t} in (8) governs the triggering threshold during the initial stage of the tracking process. Thus it determines the frequency of communication events during this stage. The value of νi depends on the selected σi. If σi is large, then typically a relatively large νi must be chosen to ensure the communication events occur less frequently.
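As a rough illustration of steps (a)–(b) of Remark 4, the sketch below solves the LMI (9) and forms the gain K = −(1/α)R⁻¹B′Y of Theorem 1. The choice of CVXPY as the LMI solver interface, the strict-feasibility margin, and all variable names are our own assumptions, not prescribed by the paper; the graph quantities ϑmin and α are assumed to be available.

```python
import numpy as np
import cvxpy as cp

def design_gain(A, B, Q, R, theta_min, alpha):
    """Solve LMI (9) for X = Y^{-1}, then return Y and K = -(1/alpha) R^{-1} B' Y."""
    n = A.shape[0]
    X = cp.Variable((n, n), symmetric=True)        # X stands for Y^{-1}
    Rinv = np.linalg.inv(R)
    lmi = cp.bmat([
        [A @ X + X @ A.T - 2 * theta_min * (B @ Rinv @ B.T), X],
        [X, -np.linalg.inv(Q)],
    ])
    lmi = 0.5 * (lmi + lmi.T)                      # enforce symmetry for the PSD constraint
    constraints = [lmi << 0, X >> 1e-6 * np.eye(n)]  # small margin approximates Y > 0
    cp.Problem(cp.Minimize(0), constraints).solve()
    Y = np.linalg.inv(X.value)
    K = -(1.0 / alpha) * Rinv @ B.T @ Y            # gain of Theorem 1
    return Y, K
```

With Y in hand, α1, α2 and the ρi of Section 3 follow directly from eigenvalue computations on Q, Y, P and Y B R⁻¹B′Y.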
4 Event-triggered leader-following control under an undirected graph G

Although the problem for an undirected G can be regarded as a special case of the problem in Section 3, an independent derivation is of interest, which uses the symmetry of the matrix L + G. Accordingly, a different event triggering condition is proposed for this case.

Theorem 2 Let R = R′ > 0, Q = Q′ > 0 be given matrices. Suppose there exists a matrix Y = Y′ > 0, Y ∈ ℜ^{n×n}, solving the following Riccati inequality

Y A + A′Y − 2λ Y B R⁻¹ B′Y + Q ≤ 0,   (29)

where λ = min_i λi and λi are the eigenvalues of L + G. Then under the control law (4) with K = −R⁻¹B′Y the system (1), (2) achieves the leader-follower tracking property of the form (5) with ∆ = Nγ (ρ λmin(Y) λmin(F))⁻¹, if the communication events are triggered at

tik = inf{ t > tik−1 : ‖zi‖ ‖si‖ ≥ ( µi zi′ Q zi + νi e^{−σi t} + γ ) / (2ϖ2) };   (30)

here ϖ2 = λmax(L + G) λmax(Y B R⁻¹ B′Y), and µi, νi, σi and γ are positive constants chosen so that 0 < µi < 1, νi > 0, γ > 0 and σi ∈ (0, ρ), where ρ = (1 − µmax) λmin(Q)/λmax(Y), µmax = max_i µi. In addition, under this control law, Zeno behavior is ruled out:

inf_k (tik+1 − tik) ≥ (1/‖A‖) ln( 1 + ‖A‖ γ / (2 ϖ2 η̄ ℏ²) );   (31)

here ℏ = √( h/λmin(Y) ), h is defined in (34) below.

Remark 5 The Riccati inequality (29) in this theorem is similar to the Riccati inequality employed in [15]. However, our condition (29) depends on the smallest eigenvalue λ of the matrix L + G. In contrast, in [15] the second smallest eigenvalue of the graph Laplacian matrix is required to build the consensus algorithm. When the graph topology is completely known at each node, λ can be readily computed. But even when the graph G is not known at each node, λ can be estimated in a decentralized manner [26]. Errors between the true eigenvalue λ and its estimate λ̂ can be accommodated by replacing (29) with a slightly more conservative condition. Suppose |λ − λ̂| < ϱ1; then the following Riccati inequality can be used in lieu of (29):

Y A + A′Y − 2(λ̂ − ϱ1) Y B R⁻¹ B′Y + Q ≤ 0.

Remark 6 It can be shown that the observation made in Remark 3 applies in this case as well. The parameters in the event triggering condition (30) can be selected following a process similar to that outlined in Remark 4.

Proof of Theorem 2: The proof is similar to the proof of Theorem 1 except for the procedure of obtaining an upper bound of zi(t). Therefore, we only outline the proof of boundedness of zi(t). The closed loop system consisting of the error dynamics (10) is represented as

ε̇ = ( IN ⊗ A + (L + G) ⊗ B K ) ε + ( IN ⊗ B K ) s,   (32)

where as before ε = [ε1′ . . . εN′]′ and s = [s1′ . . . sN′]′. It follows from [20] that all the eigenvalues of the matrix L + G are positive. Let T ∈ ℜ^{N×N} be an orthogonal matrix such that T⁻¹(L + G)T = Λ = diag{λ1, . . . , λN}. Also, let ζ = (T⁻¹ ⊗ In)ε, ζ = [ζ1′ . . . ζN′]′. Using this coordinate transformation, the system (32) can be represented in terms of ζ and s, as

ζ̇ = ( IN ⊗ A + Λ ⊗ (B K) ) ζ + ( T⁻¹ ⊗ (B K) ) s.   (33)

Consider the following Lyapunov function candidate for the system (33), V(ζ) = ζ′(Λ² ⊗ Y)ζ. Using (29), the coordinate transformation ζ = (T⁻¹ ⊗ In)ε, the identity z = ((L + G) ⊗ In)ε, z = [z1′, . . . , zN′]′, and condition (30) we can show that on every interval [tik, tik+1),

V(ζ) ≤ e^{−ρt} ( V(ζ(0)) − Σ_{i=1}^{N} νi/(ρ − σi) − Nγ/ρ ) + Σ_{i=1}^{N} e^{−σi t} νi/(ρ − σi) + Nγ/ρ
     ≤ V(ζ(0)) + Σ_{i=1}^{N} νi/(ρ − σi) + Nγ/ρ = h.   (34)

V(ζ) can be expressed in terms of ε using the inverse transformation ζ = (T⁻¹ ⊗ In)ε and z = ((L + G) ⊗ In)ε:

V(ζ) = ε′((L + G)² ⊗ Y)ε = z′(IN ⊗ Y)z.   (35)

It then follows from (35) that λmin(Y) Σ_{i=1}^{N} ‖zi(t)‖² ≤ V(ζ). The rest of the proof of this theorem is similar to the proof of Theorem 1 and is omitted for brevity. ✷

5 Generation of the combinational state

To implement the event triggering conditions (8) and (30) in Theorems 1 and 2, the combinational state zi(t) must be known at all times. We now describe how node i can generate zi(t) continuously using only discrete communications from its neighbors at event times. This eliminates the need for agent i to monitor and communicate with its neighbors continuously.

According to (1), for t ∈ [tik, tik+1), the dynamics of xi(t) and xj(t), j ∈ Ni, on this interval can be expressed as

xi(t) = e^{A(t−tik)} xi(tik) − ∫_{tik}^{t} e^{A(t−τ)} B K zi(tik) dτ,   (36)

xj(t) = e^{A(t−tik)} xj(tik) − ∫_{tik}^{t̂j} e^{A(t−τ)} B K zj(tjl) dτ − Σ_{m: tik < tjm < t} ∫_{tjm}^{min(t, tjm+1)} e^{A(t−τ)} B K zj(tjm) dτ,   (37)

t̂j = { tjl+1, if j has at least one event on [tik, t);  t, otherwise },

where tjl+1 = min( tjm : tjm ∈ [tik, t) ). Equation (37) accounts for the fact that agent j may experience several events at times tjm, m = l + 1, . . ., within the time interval [tik, tik+1). When [tik, t) contains no event triggered by agent j, the last term in (37) vanishes. Similarly, the dynamics of the tracking error εi(t) can be expressed as

εi(t) = e^{A(t−tik)} εi(tik) + ∫_{tik}^{t} e^{A(t−τ)} B K zi(tik) dτ.   (38)

Using the notation Φ(t, t′) = ∫_{t}^{t′} e^{A(t′−τ)} B K dτ, it follows from (3), (36), (37) and (38) that

zi(t) = e^{A(t−tik)} zi(tik) + gi Φ(tik, t) zi(tik)
        + Σ_{j∈Ni} Σ_{m: tik < tjm < t} Φ(tjm, min(t, tjm+1)) ( zi(tik) − zj(tjm) )
        + Σ_{j∈Ni} Φ(tik, t̂j) ( zi(tik) − zj(tjl) ).   (39)

According to (39), to generate zi(t), agent i must know zi(tik) and zj(tjm), tik < tjm < t. It has zi(tik) in hand and thus it must only receive zj(tjm) when an event is triggered at node j during [tik, tik+1). To ensure this, we propose an algorithm to allow every agent to compute its combinational state and broadcast it to its neighbors at time instants determined by its triggering condition. This algorithm has a noteworthy feature that follows from (39) in that only one-way communications occur between the neighboring agents at the triggering time, even when the graph G is undirected.

Before presenting the algorithm formally, let us illustrate (39) with an example involving three agents, A1, A2 and A3; see Figs. 1 and 2.

Fig. 1. Communication between followers in an undirected network: (a) The graph; (b) Communication events.

Fig. 2. Communication between followers in a directed network: (a) The graph; (b) Communication events.

E.g., consider the timeline in Fig. 1(b). According to the timeline of A2, an event has been triggered for A2 at time t2k. Until it receives communications from the neighbors, A2 computes z2(t) using (39) with the information z1(t1p−1) and z3(t3q−1) received from A1 and A3 prior to time t2k. This information is used in the third line of (39). Note that t̂1 = t̂3 = t until A2 receives the first message; the terms in the second line are zero until then. At time t1p, an event occurs at node 1, and A2 receives the value of A1's combinational state, z1(t1p). From this time on, it starts using Φ(t1p, t)(z2(t2k) − z1(t1p)) in (39), until the next message arrives, this time from A3. Overall, during the interval [t2k, t2k+1), A2 receives z1(t1p) and z1(t1p+1) from A1 and z3(t3q) from A3, which it uses in (39) to compute z2(t). When G is directed, A2 computes z2(t) in the same manner, but it only receives z1(t1p) and z1(t1p+1) from A1, as shown in Fig. 2(b).

As one can see, the algorithm uses only one-directional communications between agents at event times: the information is received from j ∈ Ni when an event occurs at node j and is sent to r, i ∈ Nr, when an event occurs at node i; e.g., see Figs. 1(b) and 2(b).

We conclude this section by summarizing the algorithm for generating the combinational state zi(t) at each node.

Initialization. (a) Synchronize local clocks to set t = 0 at each node, also set the event counter k = 0, the local event time record tik = 0, and the local measurement error si(t) = 0;
(b) Receive xj(0) from all neighbors j ∈ Ni;
(c) Send xi(0) to agents r such that i ∈ Nr;
(d) Compute zi(0) using the received xj(0), j ∈ Ni;
(e) Receive zj(0), j ∈ Ni, and send zi(0) to agents r such that i ∈ Nr.
Do While (8) (if G is directed) or (30) (if G is undirected) is not satisfied:
(a) Compute zi(t) with the latest received ẑj(t), j ∈ Ni, using (39), then update si(t) using (7);
Else
(a) Advance the event counter k = k + 1, and set tik = t, si(t) = 0;
(b) Set zi(tik) = zi(t) and send zi(tik) to agents r such that i ∈ Nr;
(c) Update the control signal ui = −K zi(tik).
End
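A possible numerical realization of (39) is sketched below. The quadrature for Φ and the bookkeeping of neighbor broadcasts are our own choices for illustration; the paper does not prescribe a particular data structure or integration scheme.

```python
import numpy as np
from scipy.linalg import expm

def Phi(A, BK, t0, t1, steps=200):
    """Approximate Phi(t0, t1) = integral_{t0}^{t1} e^{A(t1 - tau)} B K dtau by trapezoid rule."""
    taus = np.linspace(t0, t1, steps)
    vals = np.stack([expm(A * (t1 - tau)) @ BK for tau in taus])
    return np.trapz(vals, taus, axis=0)

def combinational_state(A, BK, g_i, t, t_k, z_i_tk, neighbors):
    """Evaluate (39) for agent i at time t in [t_k^i, t_{k+1}^i).

    neighbors maps each j in N_i to (z_prev, events): z_prev is the last sample
    z_j received before t_k^i, and events is the time-ordered list of broadcasts
    [(t_m^j, z_j(t_m^j)), ...] received on (t_k^i, t).
    """
    z = expm(A * (t - t_k)) @ z_i_tk + g_i * Phi(A, BK, t_k, t) @ z_i_tk
    for z_prev, events in neighbors.values():
        t_hat = events[0][0] if events else t          # \hat{t}_j in (37)
        z += Phi(A, BK, t_k, t_hat) @ (z_i_tk - z_prev)
        for idx, (t_m, z_m) in enumerate(events):
            t_next = events[idx + 1][0] if idx + 1 < len(events) else t
            z += Phi(A, BK, t_m, min(t, t_next)) @ (z_i_tk - z_m)
    return z
```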
6 Example

Consider a system consisting of twenty identical pendulums. Each pendulum is subject to an input as shown in Fig. 3. The dynamics of the i-th pendulum are governed by the following linearized equation
m l² α̈i = −m g l αi − ui,   i = 1, . . . , 20,   (40)

where l is the length of the pendulum, g = 9.8 m/s² is the gravitational acceleration constant, m is the mass of each pendulum and ui is the control torque (realized using a DC motor). In addition, consider the leader pendulum which is identical to those given and whose dynamics are described by the linearized equation

m l² α̈0 = −m g l α0.   (41)

Fig. 3. The system consisting of twenty pendulums and the leader pendulum.

Choosing the state vectors as xi = (αi, α̇i), i = 0, . . . , 20, equations (40) and (41) can be written in the form of (1), (2), where

A = [ 0  1 ; −g/l  0 ],   B = [ 0 ; −1/(m l²) ].

In this example, we let m = 1 kg, l = 1 m.

Both undirected and directed follower graphs G are considered in the example, shown in Fig. 4(a) and 4(b), respectively. According to Fig. 4, in both cases agents 1, 8, 12 and 15 measure the leader's state; however, in the graph in Fig. 4(b) follower i is restricted to receiving information from follower i − 1 only, whereas in Fig. 4(a) it can receive information from both i − 1 and i + 1.

Fig. 4. Communication graphs for the example: (a) Undirected follower graph; (b) Directed follower graph.
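For concreteness, the pendulum model and the directed follower graph of Fig. 4(b) can be assembled as follows. The chain structure and the pinned nodes {1, 8, 12, 15} are taken from the description above; the variable names and the stability check are merely illustrative scaffolding.

```python
import numpy as np

m, l, g = 1.0, 1.0, 9.8
A = np.array([[0.0, 1.0], [-g / l, 0.0]])
B = np.array([[0.0], [-1.0 / (m * l**2)]])

N = 20
# Directed chain of Fig. 4(b): follower i receives information from follower i-1 only.
Adj = np.zeros((N, N))
for i in range(1, N):
    Adj[i, i - 1] = 1.0                     # a_{ij} = 1 when i obtains information from j
D = np.diag(Adj.sum(axis=1))
L = D - Adj                                 # graph Laplacian
G = np.diag([1.0 if i + 1 in (1, 8, 12, 15) else 0.0 for i in range(N)])  # pinned nodes
F = (L + G).T @ (L + G)

# -(L+G) should be Hurwitz when G contains a spanning tree rooted at a pinned node.
print(np.all(np.real(np.linalg.eigvals(-(L + G))) < 0))
```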
We implemented four simulations to compare the results proposed in this paper and also to compare them with the results in [30]. The directed graph in Fig. 4(b) was employed to illustrate Theorem 1 in Simulation 1. In Simulation 2, we implemented the controller designed using Theorem 2 with the undirected graph in Fig. 4(a). We applied Theorem 1 using the same undirected graph in Fig. 4(a) in Simulation 3. In Simulation 4, we applied the event-based control strategy proposed in Theorem 3 of [30], also using the directed graph in Fig. 4(b). Out of the results in [30], we chose Theorem 3 for comparison, because it has a way to avoid continuous communications between the followers; this allows for a fair comparison with our methods.

In the first three simulations, we aimed to restrict the predicted upper bound on the tracking error to ∆ ≤ 0.05. In the design, we chose the same Q matrix and adjusted R to obtain the same control gains in the three simulations. The parameters of the triggering conditions (8) and (30) and the design parameters were set as shown in Table 1. In Simulation 4, using the same matrix Q, we computed the control gain K = [2.63, 7.24] and also chose the parameters required by Theorem 3 of [30] as follows: h = 5, β1 = 0.1, β2 = 0.15, γ = 0.2 and τ = 0 (see [30] for the definition of these parameters). In all simulations we endeavoured to achieve the least number of communication events.

Table 1
The design parameters and simulation results.

              Simulation 1               Simulation 2               Simulation 3
              (Theorem 1, Fig. 4(b))     (Theorem 2, Fig. 4(a))     (Theorem 1, Fig. 4(a))
Q             [10.59 0.42; 0.42 1.05]    [10.59 0.42; 0.42 1.05]    [10.59 0.42; 0.42 1.05]
R             1.1394                                                1.5405
K             [5.23 13.08]               [5.23 13.08]               [5.23 13.08]
α             0.0877                     −                          0.0649
ω             0.001                      −                          0.001
µi            0.1                        0.1                        0.1
σi            0.5025                     0.7198                     0.3007
νi            2.5
γ             2 × 10⁻⁵                   0.1                        1.5518 × 10⁻⁶
∆             2.9769 × 10⁻⁶              0.0462                     7.9990 × 10⁻⁷
J[18,20]      3.4524 × 10⁻⁶              3.5259 × 10⁻⁶              3.1507 × 10⁻⁶
t[0,20]       6.2 ms                     5.9 ms                     1.9 ms
t[0,10]       6.2 ms                     5.9 ms                     1.9 ms
t[10,20]      30.7 ms                    24.9 ms                    26.0 ms
E[0,20]       2240                       2091                       2979
E[0,10]       1285                       1217                       1993
E[10,20]      955                        874                        986

The simulation results achieved in Simulations 1-3 are shown in Table 1. In the table, J[18,20] denotes the maximum actual tracking error Σ_{i=1}^{N} ‖εi(t)‖² observed over the time interval t ∈ [18, 20]; t[0,20], t[0,10], t[10,20] and E[0,20], E[0,10], E[10,20] represent the minimum inter-event intervals and the total number of events occurred in the system on the time intervals [0, 20], [0, 10], [10, 20], respectively. The corresponding results of Simulation 4 are J[18,20] = 1.8778 × 10⁻⁶, t[0,20] = 21.1 ms, t[0,10] = 21.1 ms, t[10,20] = 101.5 ms, E[0,20] = 2857, E[0,10] = 1767 and E[10,20] = 1090. The tracking errors are shown in Fig. 5, which illustrates that all four event-triggered tracking control laws enable all the followers to synchronize to the leader.

Fig. 5. Tracking errors ‖εi‖: (a) Simulation 1; (b) Simulation 2; (c) Simulation 3; (d) Simulation 4.
The first comparison was made between the techniques developed in this paper for systems connected over directed and undirected graphs; these techniques were applied in Simulations 1 and 2, respectively. Although the minimum inter-event intervals in Simulation 2 were observed to be smaller than those in Simulation 1, on average the events were triggered less frequently in Simulation 2. This demonstrates that connecting the followers into an undirected network and using the design scheme based on Theorem 2 may lead to some advantages in terms of usage of communication resources.

Next, we compared Simulations 2 and 3, which used the same undirected follower graph in Fig. 4(a) and were based on Theorems 2 and 1 developed in the paper, respectively. Compared with Simulation 2, more communication events and smaller minimum inter-event intervals were observed in Simulation 3. One possible explanation is that the method based on Theorem 2 takes advantage of the symmetry property of the matrix L + G of the undirected follower graph in the derivation.

Finally, we compared Simulations 1 and 4, where we used the same directed follower graph in Fig. 4(b) for both designs. Although, compared with the method of Theorem 3 of [30], our method produced smaller minimum time intervals between the events, the total number of events that occurred during the simulation using our method was also smaller. We remind the reader that we endeavoured to select the simulation parameters and the controller gains for this simulation to reduce the total number of events. We also tried to compare the performance of the two methods by tuning the controller of [30] to almost the same gain as in Simulation 1, but the results in Simulation 4 were even worse, producing a much greater number of communication events (E[0,20] = 5553, E[0,10] = 3100 and E[10,20] = 2453).

7 Conclusions

The paper has studied the event-triggered leader-follower tracking control problem for multi-agent systems. We have presented sufficient conditions to guarantee that the proposed event-triggered control scheme leads to bounded tracking errors. Furthermore, our results show that by adjusting the parameters of the triggering condition, the upper bound on the tracking errors guaranteed by these conditions can be tuned to a desired small value, at the expense of more frequent communications during an initial stage of the tracking process. Such conditions have been derived for both undirected and directed follower graphs. Also, we showed that the proposed event triggering conditions do not lead to Zeno behavior even if a tight accuracy requirement on the tracking errors is imposed. In fact, with the proposed triggering rules, such tight accuracy requirements do not impact the inter-event intervals after a sufficiently large time. We also presented a computational algorithm which allows the nodes to continuously generate the combinational state that is needed to implement these event triggering schemes. Thus, continuous monitoring of the neighboring states is avoided. The efficacy of the proposed algorithm has been demonstrated using a simulation example. Future work will include the study of robustness of the proposed control scheme.
8 Acknowledgements

The authors thank the Associate Editor and the Reviewers for their helpful and constructive comments.

References

[1] W. Ren and R. W. Beard, Distributed Consensus in Multi-vehicle Cooperative Control. London: Springer-Verlag, 2008.
[2] G. Xie, H. Liu, L. Wang, and Y. Jia, "Consensus in networked multi-agent systems via sampled control: fixed topology case," Proc. ACC, 2009, pp. 3902–3907.
[3] M. Mazo Jr., A. Anta, and P. Tabuada, "An ISS self-triggered implementation of linear controllers," Automatica, 46, 1310–1314, 2010.
[4] W. P. M. H. Heemels, K. H. Johansson, and P. Tabuada, "An introduction to event-triggered and self-triggered control," Proc. 51st IEEE CDC, 2012, pp. 3270–3285.
[5] P. Tabuada, "Event-triggered real-time scheduling of stabilizing control tasks," IEEE Trans. Autom. Contr., 52, 1680–1685, 2007.
[6] J. Lunze and D. Lehmann, "A state-feedback approach to event-based control," Automatica, 46, 211–215, 2010.
[7] D. V. Dimarogonas and K. H. Johansson, "Event-triggered control for multi-agent systems," Proc. 48th IEEE CDC - 28th CCC, 2009, pp. 7131–7136.
[8] X. Wang and M. Lemmon, "Event-triggering in distributed networked control systems," IEEE Trans. Autom. Contr., 56, 586–601, 2011.
[9] D. V. Dimarogonas, E. Frazzoli, and K. H. Johansson, "Distributed event-triggered control for multi-agent systems," IEEE Trans. Autom. Contr., 57, 1291–1297, 2012.
[10] W. P. M. H. Heemels and M. C. F. Donkers, "Model-based periodic event-triggered control for linear systems," Automatica, 49, 698–711, 2013.
[11] G. S. Seyboth, D. V. Dimarogonas, and K. H. Johansson, "Event-based broadcasting for multi-agent average consensus," Automatica, 49, 245–252, 2013.
[12] Y. Fan, G. Feng, Y. Wang, and C. Song, "Distributed event-triggered control for multi-agent systems with combinational measurements," Automatica, 49, 671–675, 2013.
[13] X. Meng and T. Chen, "Event based agreement protocols for multi-agent networks," Automatica, 49, 2125–2132, 2013.
[14] W. Zhu, Z. P. Jiang, and G. Feng, "Event-based consensus of multi-agent systems with general linear models," Automatica, 50, 552–558, 2014.
[15] E. Garcia, Y. Cao, and D. W. Casbeer, "Decentralized event-triggered consensus with general linear dynamics," Automatica, 50, 2633–2640, 2014.
[16] D. Liuzza, D. V. Dimarogonas, M. di Bernardo, and K. H. Johansson, "Distributed model-based event-triggered control for synchronization of multi-agent systems," Proc. IFAC Conf. Nonlinear Contr. Syst., 2013, pp. 329–334.
[17] A. Adaldo, F. Alderisio, D. Liuzza, G. Shi, D. V. Dimarogonas, M. di Bernardo, and K. H. Johansson, "Event-triggered pinning control of complex networks with switching topologies," Proc. 53rd IEEE CDC, 2014, pp. 2783–2788.
[18] A. Jadbabaie, J. Lin, and S. A. Morse, "Coordination of groups of mobile autonomous agents using nearest neighbor rules," IEEE Trans. Automat. Contr., 48, 988–1001, 2003.
[19] W. Ren and E. Atkins, "Distributed multi-vehicle coordinated control via local information exchange," Int. J. Robust and Nonlinear Contr., 17, 1002–1033, 2007.
[20] Y. G. Hong, J. P. Hu, and L. X. Gao, "Tracking control for multi-agent consensus with an active leader and variable topology," Automatica, 42, 1177–1182, 2006.
[21] J. Hu, G. Chen, and H. Li, "Distributed event-triggered tracking control of leader-follower multi-agent systems with communication delays," Kybernetika, 47, 630–643, 2011.
[22] Y. Zhang and Y. Hong, "Distributed event-triggered tracking control of multi-agent systems with active leader," Proc. 10th World Congress on Intelligent Control and Automation, Beijing, China, 2012, pp. 1453–1458.
[23] H. Li, X. Liao, T. Huang, and W. Zhu, "Event-triggering sampling based leader-following consensus in second-order multi-agent systems," IEEE Trans. Autom. Contr., 60, 1998–2003, 2015.
[24] J. Hu, J. Geng, and H. Zhu, "An observer-based consensus tracking control and application to event-triggered tracking," Communications in Nonlinear Science and Numerical Simulation, 20, 559–570, 2015.
[25] T. Liu, M. Cao, C. De Persis, and J. M. Hendrickx, "Distributed event-triggered control for synchronization of dynamical networks with estimators," Proc. IFAC Workshop on Distributed Estimation and Control in Networked Systems, Koblenz, Germany, September 2013, pp. 116–121.
[26] M. Franceschelli, A. Gasparri, A. Giua, and C. Seatzu, "Decentralized estimation of Laplacian eigenvalues in multi-agent systems," Automatica, 49, 1031–1036, 2013.
[27] H. W. Zhang, F. L. Lewis, and Z. H. Qu, "Lyapunov, adaptive, and optimal design techniques for cooperative systems on directed communication graphs," IEEE Trans. Industrial Electronics, 59, 3026–3041, 2012.
[28] J. Hu and Y. Hong, "Leader-following coordination of multi-agent systems with coupling time delays," Physica A, 374, 853–863, 2007.
[29] E. Kaszkurewicz and A. Bhaya, Matrix Diagonal Stability in Systems and Computation, Birkhäuser, Boston, 2000.
[30] W. Zhu and Z. P. Jiang, "Event-based leader-following consensus of multi-agent systems with input time delay," IEEE Trans. Automat. Contr., 60, 1362–1367, 2015.
| 3 |
CROSS-MODAL EMBEDDINGS FOR VIDEO AND AUDIO RETRIEVAL
Didac Surı́s1 , Amanda Duarte2 , Amaia Salvador1 , Jordi Torres2 and Xavier Giró-i-Nieto1
1
2
Universitat Politècnica de Catalunya (UPC)
Barcelona Supercomputing Center (BSC-CNS)
arXiv:1801.02200v1 [cs.IR] 7 Jan 2018
ABSTRACT
The increasing amount of online videos brings several opportunities for training self-supervised neural networks. The
creation of large scale datasets of videos such as the YouTube-8M allows us to deal with this large amount of data in a manageable way. In this work, we find new ways of exploiting
this dataset by taking advantage of the multi-modal information it provides. By means of a neural network, we are able
to create links between audio and visual documents, by projecting them into a common region of the feature space, obtaining joint audio-visual embeddings. These links are used
to retrieve audio samples that fit well to a given silent video,
and also to retrieve images that match a given query audio.
The results in terms of Recall@K obtained over a subset of
YouTube-8M videos show the potential of this unsupervised
approach for cross-modal feature learning. We train embeddings for both scales and assess their quality in a retrieval
problem, formulated as using the feature extracted from one
modality to retrieve the most similar videos based on the features computed in the other modality.
Index Terms— Sonorization, embedding, retrieval,
cross-modal, YouTube-8M
1. INTRODUCTION
Videos have become the next frontier in artificial intelligence.
The rich semantics contained in them make them a challenging data type, posing problems at the perceptual,
reasoning and even computational levels. Mimicking the learning process and knowledge extraction that humans develop
from our visual and audio perception remains an open research question, and videos contain all this information in a
format manageable for science and research.
Videos are used in this work for two main reasons. Firstly,
they naturally integrate both visual and audio data, providing
a weak labeling of one modality with respect to the other. Secondly, the high volume of both visual and audio data allows
training machine learning algorithms whose models are governed by a high amount of parameters. The huge scale video
archives available online and the increasing number of video
cameras that constantly monitor our world, offer more data
than computation power available to process them.
The popularization of deep neural networks among the
computer vision and audio communities has defined a common framework boosting multimodal research. Tasks like
video sonorization, speaker impersonation or self-supervised
feature learning have exploited the opportunities offered by
artificial neurons to project images, text and audio in a feature space where bridges across modalities can be built.
This work exploits the relation between the visual and
audio contents in a video clip to learn a joint embedding
space with deep neural networks. Two multilayer perceptrons
(MLPs), one for visual features and a second one for audio
features, are trained to be mapped into the same cross-modal
representation. We adopt a self-supervised approach, as we
exploit the unsupervised correspondence between the audio
and visual tracks in any video clip.
We propose a joint audiovisual space to address a retrieval
task formulating a query from any of the two modalities. As
depicted in Figure 1, either a video or an audio clip can be
used as a query to search for its matching pair in a large collection
of videos. For example, an animated GIF could be sonorized
by finding an adequate audio track, or an audio recording illustrated with a related video.
In this paper, we present a simple yet very effective model
for retrieving documents with a fast and light search. We do
not address an exact alignment between the two modalities
that would require a much higher computation effort.
The paper is structured as follows. Section 2 introduces
the related work on learned audiovisual embeddings with neural networks. Section 3 presents the architecture of our model
and Section 4 how it was trained. Experiments are reported
in Section 5 and final conclusions drawn in Section 6. The
source code and trained model used in this paper is publicly available from https://github.com/surisdi/
youtube-8m.
2. RELATED WORK
In the past years, the relationship between the audio and the
visual content in videos has been researched in several contexts. Overall, conventional approaches can be divided into
four categories according to the task: generation, classification, matching and retrieval.
As online music streaming and video sharing websites
Fig. 1. A learned cross-modal embedding allows retrieving
video from audio, and vice versa.
have become increasingly popular, some research has been
done on the relationship between music and album covers
[1, 2, 3, 4] and also on music and videos (instead of just images) as the visual modality [5, 6, 7, 8] to explore the multimodal information present in both types of data.
A recent study [9] also explored the cross-modal relations
between the two modalities but using images with people talking and speech. It is done through Canonical Correlation
Analysis (CCA) and cross-modal factor analysis. Also applying CCA, [10] uses visual and sound features and common subspace features for aiding clustering in image-audio
datasets. In a work presented by [11], the key idea was to use
greedy layer-wise training with Restricted Boltzmann Machines (RBMs) between vision and sound.
The present work is focused on using the information
present in each modality to create a joint embedding space to
perform cross-modal retrieval. This idea has been exploited
especially using text and image joint embeddings [12, 13, 14],
but also between other kinds of data, for example creating a
visual-semantic embedding [15] or using synchronous data
to learn discriminative representations shared across vision,
sound and text [16].
However, joint representations between the images
(frames) of a video and its audio have yet to be fully exploited,
with [17] being, to the best of the authors' knowledge, the work
that has explored this option the most. In their paper, they seek a
joint embedding space but only using music videos to obtain
the closest and farthest video given a query video, only based
on either image or audio.
The main idea of the current work is borrowed from [14],
which is the baseline to understand our approach. There, the
authors create a joint embedding space for recipes and their
images. They can then use it to retrieve recipes from any food
image, looking to the recipe that has the closest embedding.
Apart from the retrieval results, they also perform other experiments, such as studying the localized unit activations, or
doing arithmetics with the images.
images of the video and the vector of features representing
the audio. These features are already precomputed and provided in the YouTube-8M dataset [18]. In particular, we use
the video-level features, which represent the whole video clip
with two vectors: one for the audio and another one for the
video. These feature representations are the result of an average pooling of the local audio features computed over windows of one second, and local visual features computed over
frames sampled at 1 Hz.
The main objective of the system is to transform the two
different features (image and audio, separately) into features lying in a joint space. This means that for the same
video, ideally the image features and the audio features will
be transformed to the same joint features, in the same space.
We will call these new features embeddings, and will represent them with Φi , for the image embeddings, and Φa , for the
audio embeddings.
The idea of the joint space is to represent the concept of
the video, not just the image or the audio, but a generalization
of it. As a consequence, videos with similar concepts will
have closer embeddings and videos with different concepts
will have embeddings further apart in the joint space. For
example, the representation of a tennis match video will be
close to the one of a football match, but not to the one of a
maths lesson.
Thus, we use a set of fully connected layers of different
sizes, stacked one after the other, going from the original features to the embeddings. The audio and the image networks
are completely separate. These fully connected layers perform a non-linear transformation on the input features, mapping them to the embeddings; the parameters of this
non-linear mapping are learned during optimization.
After that, a classification is performed from the two embeddings,
also using a fully connected layer from them to the different classes, with a sigmoid as activation function. We
give more insight on this step in Section 4.
Neither the number of hidden layers nor the number of neurons per layer
is fixed, since we experimented with different configurations. Each hidden layer uses
ReLU as activation function, and all the weights in each layer
are regularized using the L2 norm.
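As a rough sketch (not the authors' released implementation), the two branches can be written with Keras layers. The hidden-layer sizes follow the configuration reported in Section 4.3, and the input dimensions assume the standard 1024-d visual and 128-d audio video-level features of YouTube-8M; the regularization strength and all names are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def branch(input_dim, hidden_sizes, embed_dim=250, name=None):
    """One MLP branch mapping modality features to the joint embedding space."""
    inp = tf.keras.Input(shape=(input_dim,))
    x = inp
    for h in hidden_sizes:
        x = layers.Dense(h, activation="relu",
                         kernel_regularizer=regularizers.l2(1e-5))(x)
    out = layers.Dense(embed_dim)(x)        # joint embedding
    return tf.keras.Model(inp, out, name=name)

image_branch = branch(1024, [2000, 2000, 700, 700], name="image_branch")
audio_branch = branch(128, [450, 450, 200, 200], name="audio_branch")
```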
4. TRAINING
In this section we present the used losses as well as their
meaning and intuition.
4.1. Similarity Loss
3. ARCHITECTURE
In this section we present the architecture for our joint embedding model, which is depicted in the Figure 2.
As inputs, we have the vector of features representing the
The objective of this work is to get the two embeddings of the
same video to be as close as possible (ideally, the same), while
keeping embeddings from different videos as far as possible.
Formally, we are given a video vk , represented by the audio and visual features vk = {ik , ak } (ik represents the image
Fig. 2. Schematic of the used architecture.
features and ak the audio features of vk ). The objective is to
maximize the similarity between Φik , the embedding obtained
by transformations on ik , and Φak , the embedding obtained by
transformations on ak .
At the same time, however, we have to prevent embeddings from different videos from being “close” in the joint space. In
other words, we want them to have low similarity. However,
the objective is not to force them to be opposite to each other.
Instead of forcing them to have similarity equal to zero, we
allow a margin of similarity small enough to force the embeddings to be clearly not in the same place in the joint
space. We call this margin α.
During the training, both positive and negative pairs are
used, the positive pairs being those for which ik and ak
correspond to the same video vk, and the negative pairs
those for which ik1 and ak2 do not correspond to the same
video, that is, k1 ≠ k2. The proportion of negative samples
is pnegative .
For the negative pairs, we selected random pairs that did
not have any common label, in order to help the network to
learn how to distinguish different videos in the embedding
space. The notion of “similarity” or “closeness” is mathematically translated into a cosine similarity between the embeddings, the cosine similarity being defined as

similarity = cos(x, z) = ( Σ_{k=1}^{N} x_k z_k ) / ( √(Σ_{k=1}^{N} x_k²) √(Σ_{k=1}^{N} z_k²) )   (1)

for any pair of real-valued vectors x and z.
From this reasoning we get to the first and most important loss:

L_cos((Φa, Φi), y) = 1 − cos(Φa, Φi),            if y = 1,
L_cos((Φa, Φi), y) = max(0, cos(Φa, Φi) − α),    if y = −1,   (2)

where y = 1 denotes positive sampling, and y = −1 denotes negative sampling.
4.2. Classification Regularization
Inspired by the work presented in [14], we provide additional
information to our system by incorporating the video labels
(classes) provided by the YouTube-8M dataset. This information is added as a regularization term that seeks to solve
the high-level classification problem, both from the audio and
from the video embeddings, sharing the weights between the
two branches. The key idea here is to have the classification
weights from the embeddings to the labels shared by the two
modalities.
This loss is optimized together with the previously explained similarity loss, serving as a regularization term. Basically, the system learns to classify the audio and the images of
a video (separately) into different classes or labels provided
by the dataset. We limit its effect by using a regularization
parameter λ.
To incorporate the previously explained regularization to
the joint embedding, we use a single fully connected layer, as
shown in Figure 2. Formally, we can obtain the label probabilities as pi = softmax(W Φi ) and pa = softmax(W Φa ),
where W represents the learned weights, which are shared
between the two branches. The softmax activation is used in
order to obtain probabilities at the output. The objective is to
make pi as similar as possible to ci , and pa as similar as possible to ca , where ci and ca are the category labels for the video
represented by the image features and the audio features, respectively. For positive pairs, ci and ca are the same.
The loss function used for the classification is the well
known cross entropy loss:
L(x, z) = − Σ_k x_k log(z_k).   (3)

Thus, the classification loss is

L_class(pi, pa, ci, ca) = − Σ_k ( pik log(cik) + pak log(cak) ).   (4)

Finally, the loss function to be optimized is

L = L_cos + λ L_class.   (5)
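A minimal sketch of these losses is given below. The similarity term follows (2); for the classification term we use the standard softmax cross-entropy as a stand-in for (3)–(4), which is an assumption on our part rather than a transcription of the authors' code.

```python
import tensorflow as tf

def similarity_loss(phi_a, phi_i, y, alpha=0.2):
    """Loss (2): cosine loss for positive pairs, hinge with margin alpha for negatives."""
    cos = tf.reduce_sum(tf.nn.l2_normalize(phi_a, axis=1) *
                        tf.nn.l2_normalize(phi_i, axis=1), axis=1)
    pos = 1.0 - cos
    neg = tf.maximum(0.0, cos - alpha)
    return tf.reduce_mean(tf.where(tf.equal(y, 1), pos, neg))

def total_loss(phi_a, phi_i, y, logits_a, logits_i, labels, lam=0.02):
    """Loss (5) = similarity loss (2) + lambda * classification regularizer."""
    class_loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits_i) +
        tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits_a))
    return similarity_loss(phi_a, phi_i, y) + lam * class_loss
```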
4.3. Parameters and Implementation Details
For our experiments we used the following parameters:
• Batch size of 1024.
• We saw that starting with λ different than zero led to
a bad embedding similarity because the classification
accuracy was preferred. Thus, we began the training
with λ = 0 and set it to 0.02 at step number 10,000.
• Margin α = 0.2.
• Percentage of negative samples pnegative = 0.6.
• 4 hidden layers in each network branch, the number of
neurons per layer being, from features to embedding,
2000, 2000, 700, 700 in the image branch, and 450,
450, 200, 200 in the audio branch.
• Dimensionality of the feature vector = 250.
• We trained a single epoch.
The simulation was programmed using Tensorflow [19],
having as a baseline the code provided by the YouTube-8M
challenge authors1 .
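Putting the branches and the losses together, one possible training step (our own sketch, reusing the branch and loss functions sketched above, and with a placeholder number of classes) could look as follows.

```python
num_classes = 4716                                  # number of dataset labels; placeholder
classifier = tf.keras.layers.Dense(num_classes)     # weights W shared by both branches
optimizer = tf.keras.optimizers.Adam()

@tf.function
def train_step(img_feat, aud_feat, labels, y):
    with tf.GradientTape() as tape:
        phi_i = image_branch(img_feat, training=True)
        phi_a = audio_branch(aud_feat, training=True)
        logits_i = classifier(phi_i)
        logits_a = classifier(phi_a)
        loss = total_loss(phi_a, phi_i, y, logits_a, logits_i, labels)
    variables = (image_branch.trainable_variables
                 + audio_branch.trainable_variables
                 + classifier.trainable_variables)
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
    return loss
```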
5. RESULTS
5.1. Dataset
The experiments presented in this section were developed
over a subset of 6,000 video clips from the YouTube-8M
dataset [18]. This dataset does not contain the raw video files,
but their representations as precomputed features, both from
audio and video. Audio features were computed using the
method explained in [20] over audio windows of 1 second,
while visual features were computed over frames sampled at
1 Hz with the Inception model provided in TensorFlow [19].
The dataset provides video-level features, which represent
all the video using a single vector (one for audio and another
for visual information), and thus does not maintain temporal information; and also provides frame-level features, which
consist on a single vector representing each second of audio,
and a single vector representing each frame of the video, sampled at 1 frame per second.
The main goal of this dataset is to provide enough data to
reach state of the art results in video classification. Nevertheless, such a huge dataset also permits approaching other tasks
related to videos and cross-modal tasks, such as the one we
1 https://www.kaggle.com/c/youtube8m
Table 1. Evaluation of Recall from audio to video
Number of elements Recall@1 Recall@5 Recall@10
256
21.5%
52.0%
63.1%
512
15.2%
39.5%
52.0%
1024
9.8%
30.4%
39.6%
Table 2. Evaluation of Recall from video to audio
Number of elements Recall@1 Recall@5 Recall@10
256
22.3%
51.7%
64.4%
512
14.7%
38.0%
51.5%
1024
10.2%
29.1%
40.3%
approach in this paper. For this work, and as a baseline, we
only use the video-level features.
5.2. Quantitative Performance Evaluation
We divide our results in two different categories: quantitative
(numeric) results and qualitative results.
To obtain the quantitative results we use the Recall@K
metric. We define Recall@K as the recall rate at top K for
all the retrieval experiments, that is, the percentage of all the
queries for which the corresponding video is retrieved in the top
K; hence higher is better.
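A minimal way to compute this metric from two embedding matrices (one row per video) is sketched below; cosine similarity is assumed as the retrieval score, matching the training objective.

```python
import numpy as np

def recall_at_k(query_emb, target_emb, k):
    """Fraction of queries whose true counterpart (same row index) is in the top K."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    t = target_emb / np.linalg.norm(target_emb, axis=1, keepdims=True)
    scores = q @ t.T                                  # cosine similarities
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = (topk == np.arange(len(q))[:, None]).any(axis=1)
    return hits.mean()
```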
The experiments are performed with different dimensions of the feature vector. Table 1 shows the results of recall from audio to video; in other words, from the audio embedding of a video, how often we retrieve the embedding corresponding to the images of that same video. Table 2
shows the recall from video to audio.
To have a reference, the random guess result would be
k/Number of elements. The obtained results show a very
clear correspondence between the embeddings coming from
the audio features and the ones coming from the video features. It is also interesting to notice that the results from audio
to video and from video to audio are very similar, because the
system has been trained bidirectionally.
5.3. Qualitative Performance Evaluation
In addition to the objective results, we performed some insightful qualitative experiments. They consisted of generating the embeddings of both the audio and the video for a list
of 6,000 different videos. Then, we randomly chose a video,
and from its image embedding, we retrieved the video with
the closest audio embedding, and the other way around (from
one video’s audio we retrieved the video with the closest image embedding). If the closest embedding corresponded to
the same video, we took the second one in the ordered list.
Figure 3 shows some of these experiments. On the left, we
can see the results given a video query and getting the closest
audio; and on the right the input query is an audio. Examples depicting the real videos and audio are available online
Fig. 3. Qualitative results. On the left we show the results obtained when we gave a video as a query. On the right, the results
are based on an audio as a query.
2
. It shows both the results when going from image to audio,
and when going from audio to image. Four different random
examples are shown in each case. For each result and each
query, we also show their YouTube-8M labels, for completeness.
The results show that when starting from the image features of a video, the retrieved audio represents a very accurate
fit for those images. Subjectively, there are non-negligible
cases where the retrieved audio actually fits the video better
than the original one, for example when the original video has
some artificially introduced music, or in cases where there
is some background commentator explaining the video in a
foreign (unknown) language. A similar analysis can also be done
the other way around, that is, with the audio colorization approach, providing images for a given audio.
6. CONCLUSIONS AND FUTURE WORK
We presented an effective method to retrieve audio samples
that fit correctly to a given (muted) video. The qualitative
results show that the already existing online videos, due to its
variety, represent a very good source of audio for new videos,
even in the case of only retrieving from a small subset of this
large amount of videos. Due to the existing difficulty to create
new audio from scratch, we believe that a retrieval approach
is the path to follow in order to give audio to videos.
The range of possibilities to extend the presented work is
excitingly broad. The first idea would be to make use of the
YouTube-8M dataset variety and information. The temporal
2 https://goo.gl/NAcJah
information provided by the individual image and audio features is not used in the current work. The most promising
future work implies using this temporal information to match
audio and images, making use of the implicit synchronization
the audio and the images of a video have, without needing
any supervised control. Thus, the next step in our research is
introducing a recurrent neural network, which will allow us to
create more accurate representations of the video, and also retrieve different audio samples for each image, creating a fully
synchronized system.
Also, it would be very interesting to study the behavior of
the system depending on the class of the input. Observing the
dataset, it is clear that not all the classes have the same degree
of correspondence between audio and image, as for example
some videos have artificially (posterior) added music, which
is not related at all to the images.
In short, we believe the YouTube-8M dataset allows for
promising research in the future in the field of video sonorization and audio retrieval, for it having a huge amount of samples, and for it capturing multi-modal information in a highly
compact way.
7. ACKNOWLEDGEMENTS
This work was partially supported by the Spanish Ministry
of Economy and Competitivity and the European Regional
Development Fund (ERDF) under contract TEC2016-75976R. Amanda Duarte was funded by the mobility grant of the
Severo Ochoa Program at Barcelona Supercomputing Center
(BSC-CNS).
| 1 |
arXiv:1304.5833v1 [] 22 Apr 2013
ON THE EQUALITY OF ORDINARY AND SYMBOLIC POWERS OF
IDEALS
ALINE HOSRY, YOUNGSU KIM AND JAVID VALIDASHTI
Abstract. We consider the following question concerning the equality of ordinary and
symbolic powers of ideals. In a regular local ring, if the ordinary and symbolic powers
of a one-dimensional prime ideal are the same up to its height, then are they the same
for all powers? We provide supporting evidence of a positive answer for classes of prime
ideals defining monomial curves or rings of low multiplicities.
1. Introduction
Let R be a Noetherian local ring of dimension d and P a prime ideal of R. For a
positive integer n, the n-th symbolic power of P , denoted by P (n) , is defined as
P (n) := P n RP ∩ R = {x ∈ R | ∃ s ∈ R \ P, sx ∈ P n }.
Symbolic powers of ideals are central objects in commutative algebra and algebraic geometry for their tight connection to primary decomposition of ideals and the order of
vanishing of polynomials. One readily sees from the definition that P n ⊆ P (n) for all n,
but they are not equal in general. Therefore, one would like to compare the ordinary and
symbolic powers and provide criteria for equality. This problem has long been a subject
of interest, see for instance [2, 8, 11–14, 16, 22]. In this paper, we are interested in criteria
for the equality. In particular, we would like to know if P n = P (n) for all n up to some
value, then they are equal for all n. The following question was posed by Huneke in this
regard.
Question 1.1. Let R be a regular local ring of dimension d and P a prime ideal of height
d − 1. If P n = P (n) for all n ≤ ht P , then is P n = P (n) for all n?
An affirmative answer to Question 1.1 is equivalent to P being generated by a regular
sequence [7]. Furthermore, it is equivalent to showing that if P n = P (n) for all n ≤ d − 1,
then the analytic spread of P is d − 1. This is not known even when P is the defining
ideal of a monomial curve in A4k . Huneke answered Question 1.1 positively in dimension
3, and in dimension 4 if R/P is Gorenstein [15, Corollaries 2.5, 2.6]. One would like
to remove the Gorenstein assumption. There are supporting examples showing that the
Gorenstein property of R/P might follow from P 2 = P (2) . In fact, this is very close to
a conjecture by Vasconcelos which states that if P is syzygetic and R/P and P/P 2 are
Cohen-Macaulay, then R/P is Gorenstein [24]. Note that if P has height d − 1, then
R/P is Cohen-Macaulay, and P/P 2 being Cohen-Macaulay is equivalent to P 2 = P (2) .
Therefore, one is tempted to ask the following question.
Question 1.2. Let R be a regular local ring of dimension d and P a prime ideal of height
d − 1. If P 2 = P (2) , then is R/P Gorenstein?
By Huneke’s result [15, Corollary 2.6], if Question 1.2 has an affirmative answer, then
so does Question 1.1 when dimension of R is 4. The converse of Question 1.2 is true
in dimension 4 by Herzog [9]. Also, Question 1.2 has been answered positively for some
classes of algebras [19], but it is not true in general (see for instance [19, Example 6.1]).
In this paper, we consider the case where P is the defining ideal of a monomial curve
k[[ta1 , . . . , tad ]] and we give an affirmative answer to Questions 1.1 and 1.2 when d = 4 and
{ai } forms an arithmetic sequence. In higher dimensions, if {ai } contains an arithmetic
subsequence of length 5 in which the terms are not necessarily consecutive, we observe
that P 2 6= P (2) , hence we have a positive answer to Questions 1.1 and 1.2. We extend
these results to certain modifications of arithmetic subsequences. We also consider onedimensional prime ideals P of a regular local ring R in general and we show that if R/P
has low multiplicity, then Question 1.1 has a positive answer. We note that if we drop
the height d − 1 assumption on P , then this question does not have a positive answer in
general, due to a counterexample by Guardo, Harbourne and Van Tuyl [10].
2. Monomial Curves
Let a1 , . . . , ad be an increasing sequence of positive integers with gcd(a1 , . . . , ad ) = 1.
Assume that ai ’s generate a numerical semigroup non-redundantly. Consider the monomial curve
A = k[[ta1 , . . . , tad ]] ⊂ k[[t]]
over a field k, with maximal ideal mA := (ta1 , . . . , tad )A. Let R = k[[x1 , . . . , xd ]] be a
formal power series ring with maximal ideal m = (x1 , . . . , xd )R, and let P be the kernel
of the homomorphism
k[[x1 , . . . , xd ]] −→ k[[ta1 , . . . , tad ]]
obtained by mapping xi to tai for all i. Therefore, A is isomorphic to R/P . Note that
P ⊂ m2 because of the non-redundancy assumption on ai ’s. We state the following
well-known properties about monomial curves.
Lemma 2.1. In the above setting,
(1) The ideal ta1 A is a minimal reduction of mA .
(2) The Hilbert-Samuel multiplicity e(mA , A) of A is a1 .
(3) The multiplicity e(mA , A) is at least d, i.e., a1 ≥ d.
For the third property in above, we may assume that k is an infinite field. Then
by [1, Fact (1)], we have e(mA , A) ≥ λ(mA /m2A ), where λ(−) denotes the length, and
observe that λ(mA /m2A ) = d, by the non-redundancy condition on the ai ’s. Note that
the third property also follows from Theorem 3.1. We begin with the following result
that describes the generators of P when d = 4 and the set of exponents {ai } forms an
arithmetic sequence.
Proposition 2.2. Let A be the monomial curve k[[t^a, t^{a+r}, t^{a+2r}, t^{a+3r}]], where a and r are positive integers that are relatively prime. Regard A as R/P, where R = k[[x, y, z, w]] and P is the defining ideal of A. Then P is minimally generated by
$$z^2 - yw,\; yz - xw,\; y^2 - xz,\; x^{k+r} - w^k, \qquad \text{if } a = 3k,$$
$$z^2 - yw,\; yz - xw,\; y^2 - xz,\; x^{k+r}z - w^{k+1},\; x^{k+r}y - zw^k,\; x^{k+r+1} - yw^k, \qquad \text{if } a = 3k+1,$$
$$z^2 - yw,\; yz - xw,\; y^2 - xz,\; x^{k+r+1} - zw^k,\; x^{k+r}y - w^{k+1}, \qquad \text{if } a = 3k+2,$$
where k is a positive integer.
Proof. Since the numerical semigroup is non-redundantly generated, a is greater than or
equal to 4 by Lemma 2.1. Thus k ≥ 2 if a = 3k and k ≥ 1 if a = 3k + 1 or 3k + 2. In
each case, let I be the ideal generated by the above-listed elements and m be the maximal
ideal (x, y, z, w) of R. One can directly check that I ⊂ P . For all cases, we will use the
following method to show I = P . First, we show that (P, x) = (I, x). Then it follows
that P = I + x(P : x). But (P : x) = P , since x is not in P . Thus P = I + xP , which
implies P = I, by Nakayama’s Lemma. To show (P, x) = (I, x), let I˜ = (I, x). The short
exact sequence
$$0 \longrightarrow R/(\tilde I : y) \xrightarrow{\;\cdot y\;} R/\tilde I \longrightarrow R/(\tilde I, y) \longrightarrow 0$$
yields the length equation λ_R(R/Ĩ) = λ_R(R/(Ĩ : y)) + λ_R(R/(Ĩ, y)). Since R/P is Cohen-Macaulay and the image of the ideal (x) in R/P is a minimal reduction of m/P by Lemma 2.1, we have
$$a = e(m, R/P) = \lambda_R(R/(P, x)) \le \lambda_R(R/\tilde I).$$
Thus, it is enough to show
$$\lambda_R(R/(\tilde I : y)) + \lambda_R(R/(\tilde I, y)) \le a.$$
If a = 3k, then Ĩ = (x, z^2 − yw, yz, y^2, w^k). Therefore, (Ĩ, y) = (x, y, z^2, w^k) and the ideal Ĩ : y contains the ideal (x, y, z, w^k). Thus,
$$\lambda_R(R/(\tilde I : y)) + \lambda_R(R/(\tilde I, y)) \le \lambda_R(R/(x, y, z, w^k)) + \lambda_R(R/(x, y, z^2, w^k)) \le k + 2k = a.$$
If a = 3k + 1, then Ĩ = (x, z^2 − yw, yz, y^2, w^{k+1}, zw^k, yw^k). Hence, (x, y, z, w^k) ⊂ Ĩ : y and (Ĩ, y) = (x, y, z^2, zw^k, w^{k+1}). Note that λ_R(R/(x, y, z^2, zw^k, w^{k+1})) = 2k + 1 and λ_R(R/(x, y, z, w^k)) = k. Thus,
$$\lambda_R(R/(\tilde I : y)) + \lambda_R(R/(\tilde I, y)) \le k + (2k + 1) = a.$$
If a = 3k + 2, then Ĩ = (x, z^2 − yw, yz, y^2, zw^k, w^{k+1}). Therefore, (x, y, z, w^{k+1}) ⊂ Ĩ : y and (Ĩ, y) = (x, y, z^2, zw^k, w^{k+1}). Similar to the previous case, λ_R(R/(x, y, z^2, zw^k, w^{k+1})) is 2k + 1 and λ_R(R/(x, y, z, w^{k+1})) = k + 1. Hence we obtain
$$\lambda_R(R/(\tilde I : y)) + \lambda_R(R/(\tilde I, y)) \le (k + 1) + (2k + 1) = a.$$
To show that P is minimally generated by the listed elements in each case, we can compute
µ(P ) = λR (P/mP ). In fact, if we let R̄ = R/xR, then
µ(P R̄) = λR (P R̄/mP R̄) = λR (P + (x)/mP + (x)) = λR (P/mP + P ∩ (x)).
But P ∩ (x) = x(P : x) = xP ⊂ mP . Thus µ(P R̄) = µ(P ). Therefore, to compute the
minimal number of generators in each case, we can go modulo (x) first. If a = 3k, we
will show in Theorem 2.3 that P^2 ≠ P^{(2)}, hence P is not a complete intersection ideal. Thus it cannot have fewer than 4 generators. If a = 3k + 2, we will show
in Theorem 2.3 that P is generated by the 4 by 4 Pfaffians of a 5 by 5 skew-symmetric
matrix. Hence by Buchsbaum-Eisenbud structure theorem for height 3 Gorenstein ideals
in [6], P is minimally generated by the listed elements in this case. Thus, we only need to
deal with the case a = 3k + 1, where one can check directly that the ideal P R̄ is minimally
generated by z 2 − yw, yz, y 2 , w k+1, zw k , yw k .
Theorem 2.3. Let A be the monomial curve A = k[[ta , ta+r , ta+2r , ta+3r ]], where a and r
are positive integers that are relatively prime. Regard A as R/P , where R = k[[x, y, z, w]]
and P is the defining ideal of A.
(1) If a = 3k or 3k + 1, then R/P is not Gorenstein and P^2 ≠ P^{(2)}.
(2) If a = 3k + 2, then R/P is Gorenstein, P^2 = P^{(2)} and P^3 ≠ P^{(3)}.
Proof. If a = 3k, then one can see that P contains the 2 by 2 minors of
$$M = \begin{pmatrix} x & y & z \\ y & z & w \\ zw^{k-1} & x^{k+r} & yx^{k+r-1} \end{pmatrix}.$$
Let D = det(M). Note that D ∉ P^2, since D is not in P^2 modulo (x, y). We will show that D ∈ P^{(2)}. We have det(adj(M)) · D = D^3, where adj(M) is the adjoint matrix of M. Note that D ≠ 0, for example it is not zero modulo (x, y). Thus D^2 = det(adj(M)). But det(adj(M)) ∈ P^3, since the entries of adj(M) are in P. Hence D^2 ∈ P^3. Therefore, the image of D^2 in the associated graded ring G_P := gr_{PR_P}(R_P) is zero. Note that G_P is a domain as R_P is a regular local ring. Hence the image of D is zero in G_P, which shows that the image of D in the localization R_P is in P^2R_P, i.e., D ∈ P^{(2)}. One could also directly show that w · det(M) ∈ P^2, hence det(M) ∈ P^{(2)}, as w is not in P. Now by
Herzog’s theorem [9, Satz 2.8], we conclude that R/P is not Gorenstein. We note that
since in Proposition 2.2 we have shown that P is minimally generated by 4 elements, we
could also use Buchsbaum-Eisenbud structure theorem for height 3 Gorenstein ideals in
[6], or Bresinsky’s result in [3] which states that if a monomial curve in dimension 4 is
Gorenstein, then P is minimally generated by 3 or 5 elements.
If a = 3k + 1, then P contains the 2 by 2 minors of
$$M = \begin{pmatrix} x & y & z \\ y & z & w \\ zw^{k-1} & w^k & x^{k+r} \end{pmatrix}.$$
With a similar argument as in the previous case, one can show that det(M) ∈ P^{(2)} \ P^2.
Thus by Herzog’s result, R/P is not Gorenstein.
If a = 3k + 2, then by Proposition 2.2, one can see that P is generated by the 4 by 4 Pfaffians of
$$M = \begin{pmatrix} 0 & -w^k & 0 & x & y \\ w^k & 0 & x^{k+r} & y & z \\ 0 & -x^{k+r} & 0 & z & w \\ -x & -y & -z & 0 & 0 \\ -y & -z & -w & 0 & 0 \end{pmatrix}.$$
Thus, by Buchsbaum-Eisenbud structure theorem for height 3 Gorenstein ideals in [6],
we obtain that R/P is Gorenstein and P is minimally generated by the 5 listed elements
in Proposition 2.2. Hence, P^2 = P^{(2)} by Herzog’s result [9, Satz 2.8], and P^3 ≠ P^{(3)} by
Huneke’s result [15, Corollary 2.6], as P is not a complete intersection ideal.
Corollary 2.4. Question 1.1 and Question 1.2 have affirmative answers for monomial
curves as in Theorem 2.3.
Now we consider monomial curves in higher dimensions.
Theorem 2.5. Let A be the monomial curve k[[ta1 , . . . , tad ]]. Consider A as R/P , where
R = k[[x1 , . . . , xd ]] and P is the defining ideal of A. If {ai } has an arithmetic subsequence
of length 5, whose terms are not necessarily consecutive, then P^2 ≠ P^{(2)}.
Proof. If {ai } has an arithmetic subsequence {b1 , . . . , b5 } of length 5, without loss of
generality we may assume that x1 , . . . , x5 correspond to tb1 , . . . , tb5 . Then, one can see
that P contains the 2 by 2 minors of
$$M = \begin{pmatrix} x_1 & x_2 & x_3 \\ x_2 & x_3 & x_4 \\ x_3 & x_4 & x_5 \end{pmatrix}.$$
We observe that det(M) ∉ P^2, since det(M) is a polynomial of degree 3 and the generators of P^2 have degree at least 4 as P ⊂ m^2. Also note that det(M) ≠ 0, for example it is not
zero modulo (x2 , x3 ). Thus, by a similar argument as in the proof of Theorem 2.3, one
can show that det(M) ∈ P (2) .
Corollary 2.6. Question 1.1 and Question 1.2 have positive answers for monomial curves
as in Theorem 2.5.
Using a result of Morales [21, Lemma 3.2], we can extend Theorems 2.3 and 2.5 to a
larger class of monomial curves. As before, let A be the monomial curve k[[ta1 , . . . , tad ]].
In the following we will not assume any particular order on the ai ’s. Write A as R/P ,
where R is k[[x1 , . . . , xd ]] and P is the defining ideal of A. For a positive integer c,
relatively prime to a1 , let à be the modified monomial curve k[[ta1 , tca2 , . . . , tcad ]]. Note
that a1 , ca2 , . . . , cad non-redundantly generate their numerical semigroup too. Write à as
R̃/P̃ , where R̃ denotes k[[x1 , . . . , xd ]] and P̃ is the defining ideal of Ã. Consider R̃ as an
R-module via the map φ : R → R̃ that sends x_1 to x_1^c and fixes x_i for all i ≠ 1. For a polynomial f(x_1, . . . , x_d) ∈ R, let f̃ be the polynomial f(x_1^c, x_2, . . . , x_d).
Lemma 2.7 (Morales). R̃ is a faithfully flat extension of R. Moreover, P R̃ ∩ R = P and
P̃ = P R̃. In fact, f ∈ P if and only if f˜ ∈ P̃ , and if {gi } is a minimal generating set for
P , then {g˜i } is a minimal generating set for P̃ . In addition, for all positive integers k,
f ∈ P k if and only if f˜ ∈ P̃ k , and f ∈ P (k) if and only if f˜ ∈ P̃ (k) , i.e., P̃ k ∩ R = P k and
P̃ (k) ∩ R = P (k) .
Using Lemma 2.7, we obtain the following extension of Theorems 2.3 and 2.5.
Corollary 2.8. If Question 1.1 has an affirmative answer for a monomial curve A, then
it also has an affirmative answer for the monomial curve Ã. In particular, Question
1.1 has an affirmative answer for successive modifications of the monomial curves as in
Theorems 2.3 and 2.5 in the sense of Morales.
Proof. If P̃ n = P̃ (n) for all positive integers n ≤ d − 1, then by Lemma 2.7, we obtain
that P n = P (n) for all n ≤ d − 1. Thus, by hypothesis, P is a complete intersection and
hence, by Lemma 2.7, we obtain that P̃ is a complete intersection.
3. Low Multiplicities
Let R be a regular local ring with maximal ideal m and of dimension d. Let P be a
prime ideal of height d − 1. We will show that Question 1.1 has an affirmative answer
when the Hilbert-Samuel multiplicity e(R/P ) is sufficiently small.
Theorem 3.1. Let R be a regular local ring with maximal ideal m and of dimension d.
Assume P is a prime ideal of height d − 1 such that P ⊂ m^2. Then P^n ≠ P^{(n)} for a positive integer n, if
$$e(R/P) < \prod_{r=0}^{d-2} \frac{2n+r}{n+r}.$$
Proof. We may assume the residue field of R is infinite, see for instance [17, Lemma 8.4.2].
Thus, as R/P has dimension one, there exists x ∈ R whose image in R/P is a minimal
reduction of m/P . Note that x cannot be in m2 by Nakayama’s Lemma, hence R/(x)
is regular. Recall that in a regular local ring S with maximal ideal n and of dimension k, λ_S(S/n^n) = $\binom{n+k-1}{k}$ for all positive integers n. Therefore, since P^n ⊂ m^{2n}, we have λ_R(R/(P^n, x)) ≥ λ_R(R/(m^{2n}, x)) = $\binom{2n+d-2}{d-1}$. On the other hand, since R/P is a one-dimensional Cohen-Macaulay ring, using the associativity formula for multiplicities, we obtain
$$\lambda_R(R/(P^{(n)}, x)) = e((x), R/P^{(n)}) = \lambda_{R_P}(R_P/P^nR_P) \cdot e((x), R/P) = \binom{n+d-2}{d-1}\cdot e(R/P).$$
The multiplicity bound in the statement is equivalent to $\binom{n+d-2}{d-1}\cdot e(R/P) < \binom{2n+d-2}{d-1}$. Therefore, λ_R(R/(P^{(n)}, x)) < λ_R(R/(P^n, x)). Thus, P^n and P^{(n)} cannot be the same.
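Purely as a numerical aid (this is not part of the argument), the bound of Theorem 3.1 is easy to tabulate. The short Python snippet below, written by us with exact rational arithmetic, evaluates the right-hand side for given n and d.

```python
from fractions import Fraction

def multiplicity_bound(n, d):
    """Right-hand side of Theorem 3.1: product of (2n+r)/(n+r) for r = 0, ..., d-2."""
    bound = Fraction(1)
    for r in range(d - 1):          # r runs over 0, 1, ..., d-2
        bound *= Fraction(2 * n + r, n + r)
    return bound

print(multiplicity_bound(3, 4))            # 28/5, i.e. 5.6 (used in Corollary 3.3)
print(float(multiplicity_bound(100, 4)))   # close to 2**(4-1) = 8, cf. Remark 4.1
```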
One can easily observe that the multiplicity bound in Theorem 3.1 is increasing with
respect to n. Thus, letting n = d − 1, we obtain the largest bound that guarantees
P^{d−1} ≠ P^{(d−1)}. Therefore, we have the following corollary.
Corollary 3.2. Under the assumptions of Theorem 3.1, Question 1.1 has a positive answer provided
$$e(R/P) < \prod_{r=0}^{d-2} \frac{2d+r-2}{d+r-1}.$$
Note that the multiplicity bound in Corollary 3.2 grows at least exponentially with
respect to d, since each term of the product is greater than 3/2. The next corollary is an
application of Theorem 3.1 in the case of monomial curves in embedding dimension 4.
Corollary 3.3. Let A = k[[t^{a_1}, t^{a_2}, t^{a_3}, t^{a_4}]]. Consider A as R/P, where R = k[[x, y, z, w]] and P is the defining ideal of A. If a_1 = 4 or 5, then P^3 ≠ P^{(3)}. Therefore, Question 1.1 has a positive answer in this case.
Proof. Apply Theorem 3.1 for n = 3 and d = 4. On the one hand e(R/P) = a_1 ≤ 5, and on the other hand the multiplicity bound reduces to 5.6. Hence P^3 ≠ P^{(3)}.
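For the reader's convenience, the bound of Theorem 3.1 for n = 3 and d = 4 works out explicitly as
$$\prod_{r=0}^{2}\frac{2\cdot 3+r}{3+r} = \frac{6}{3}\cdot\frac{7}{4}\cdot\frac{8}{5} = \frac{28}{5} = 5.6,$$
so e(R/P) = a_1 ≤ 5 indeed lies strictly below it.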
We remark that, by Corollary 2.8, Question 1.1 has an affirmative answer for successive
modifications of the monomial curves as in Corollary 3.3 in the sense of Morales.
4. Remarks
We end this paper with some remarks and observations on equality of the ordinary and
symbolic powers of ideals.
Remark 4.1. The multiplicity bound in Theorem 3.1 approaches 2^{d−1} as n tends to infinity. Thus, if e(R/P) < 2^{d−1}, then P^n ≠ P^{(n)} for n large. Hence, if Question 1.1 has a positive answer and P^n = P^{(n)} for all n ≤ d − 1, then e(R/P) ≥ 2^{d−1}. This is consistent with the conclusion of Question 1.1, that P is a complete intersection. To see this, suppose P is generated by a regular sequence a_1, . . . , a_{d−1} and x is a minimal reduction of m/P in R/P. Then, by [20, Theorem 14.9], we have
$$e(m, R/P) = \lambda_R(R/(P, x)) = \lambda_R(R/(a_1, \ldots, a_d)) \ge \prod_{i=1}^{d} \mathrm{ord}_m(a_i) \ge 2^{d-1},$$
where a_d = x. Note that ord_m(x) = 1 and ord_m(a_i) ≥ 2 for i = 1, . . . , d − 1, as we are assuming P ⊂ m^2.
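The limiting value quoted at the start of Remark 4.1 follows directly from the shape of the bound in Theorem 3.1, since each of the d − 1 factors tends to 2:
$$\lim_{n\to\infty}\prod_{r=0}^{d-2}\frac{2n+r}{n+r} = \prod_{r=0}^{d-2}\lim_{n\to\infty}\frac{2n+r}{n+r} = 2^{d-1}.$$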
Remark 4.2. We know that if P n = P (n) for n large, then P is a complete intersection [7].
The conclusion is also true if P n = P (n) for infinitely many n, see for instance Brodmann’s
result on stability of associated primes of R/P n in [4]. This can also be obtained using
superficial elements, at least when R has infinite residue field and P has positive grade.
If P n = P (n) for infinitely many n, then one can show that P n = P (n) for n large. To
see this, let x ∈ P be a superficial element, in the sense that P n+1 : x = P n for n large,
see [17, Proposition 8.5.7]. Hence, if there exists an element b ∈ P (n) \ P n , then we have
xb ∈ P (n+1) \ P n+1 for n large.
Remark 4.3. If P n = P (n) for n large, then the analytic spread of P is at most d − 1
[5]. We note that this can also be seen via ε-multiplicity for one-dimensional primes. For
a prime ideal P of height d − 1, we have
$$H^0_m(R/P^n) = P^{(n)}/P^n,$$
where the left hand side is the zero-th local cohomology of R/P^n with support in m. Thus, if P^n = P^{(n)} for n large, then ε-multiplicity of P is zero, where
$$\varepsilon(P) = \limsup_{n} \frac{d!}{n^d}\cdot \lambda_R\big(H^0_m(R/P^n)\big).$$
Hence, by [18, Theorem 4.7] or [23, Theorem 4.2], the analytic spread of P is at most d−1.
Acknowledgements
This research was initiated at the MRC workshop on commutative algebra at Snowbird,
Utah, in the summer of 2010. We would like to thank AMS, NSF and MRC for providing
funding and a stimulating research environment. We are also thankful to the organizers
of the workshop, David Eisenbud, Craig Huneke, Mircea Mustata and Claudia Polini for
many helpful ideas and suggestions regarding this work.
References
[1] S. S. Abhyankar, Local rings of high embedding dimension, Amer. J. Math. 89 (1967), 1073–1077.
[2] C. Bocci and B. Harbourne, Comparing powers and symbolic powers of ideals, J. Algebraic Geom.
19 (2010), no. 3, 399–417.
[3] H. Bresinsky, Symmetric semigroups of integers generated by 4 elements, Manuscripta Math. 17
(1975), no. 3, 205–219.
[4] M. Brodmann, Asymptotic stability of Ass(M/I n M ), Proc. Amer. Math. Soc. 74 (1979), no. 1,
16–18.
[5] M. Brodmann, The asymptotic nature of the analytic spread, Math. Proc. Cambridge Philos. Soc. 86 (1979),
no. 1, 35–39.
[6] D. Buchsbaum and D. Eisenbud, Algebra structures for finite free resolutions, and some structure
theorems for ideals of codimension 3, Amer. J. Math. 99 (1977), no. 3, 447–485.
[7] R. C. Cowsik and M. V. Nori, On the fibres of blowing up, J. Indian Math. Soc. (N.S.) 40 (1976),
no. 1-4, 217–222 (1977).
[8] L. Ein, R. Lazarsfeld, and K. E. Smith, Uniform bounds and symbolic powers on smooth varieties,
Invent. Math. 144 (2001), no. 2, 241–252.
[9] J. Herzog, Ein Cohen-Macaulay-Kriterium mit Anwendungen auf den Konormalenmodul und den
Differentialmodul, Math. Z. 163 (1978), no. 2, 149–162 (German).
[10] E. Guardo, B. Harbourne, and A. Van Tuyl, Symbolic powers versus regular powers of ideals of
general points in P1 × P1 , arXiv:1107.4906.
[11] M. Hochster, Criteria for equality of ordinary and symbolic powers of primes, Math. Z. 133 (1973),
53–65.
[12] M. Hochster and C. Huneke, Comparison of symbolic and ordinary powers of ideals, Invent. Math.
147 (2002), no. 2, 349–369.
[13] M. Hochster and C. Huneke, Fine behavior of symbolic powers of ideals, Illinois J. Math. 51 (2007), no. 1, 171–183.
[14] S. Huckaba and C. Huneke, Powers of ideals having small analytic deviation, Amer. J. Math. 114
(1992), no. 2, 367–403.
[15] C. Huneke, The primary components of and integral closures of ideals in 3-dimensional regular local
rings, Math. Ann. 275 (1986), no. 4, 617–635.
[16] C. Huneke, D. Katz, and J. Validashti, Uniform equivalence of symbolic and adic topologies, Illinois
J. Math. 53 (2009), no. 1, 325–338.
[17] C. Huneke and I. Swanson, Integral closure of ideals, rings, and modules, London Mathematical
Society Lecture Note Series, vol. 336, Cambridge University Press, Cambridge, 2006.
[18] D. Katz and J. Validashti, Multiplicities and Rees valuations, Collect. Math. 61 (2010), no. 1, 1–24.
[19] P. Mantero and Y. Xie, On the Cohen–Macaulayness of the conormal module of an ideal, J. Algebra
372 (2012), 35–55.
[20] H. Matsumura, Commutative ring theory, Cambridge Studies in Advanced Mathematics, vol. 8,
Cambridge University Press, Cambridge, 1986. Translated from the Japanese by M. Reid.
[21] M. Morales, Noetherian symbolic blow-ups, J. Algebra 140 (1991), no. 1, 12–25.
[22] S. Morey, Stability of associated primes and equality of ordinary and symbolic powers of ideals,
Comm. Algebra 27 (1999), no. 7, 3221–3231.
[23] B. Ulrich and J. Validashti, A criterion for integral dependence of modules, Math. Res. Lett. 15
(2008), no. 1, 149–162.
[24] W. V. Vasconcelos, Koszul homology and the structure of low codimension Cohen-Macaulay ideals,
Trans. Amer. Math. Soc. 301 (1987), no. 2, 591–613.
Department of Mathematics and Statistics, Notre Dame University-Louaize, P.O. Box: 72,
Zouk Mikael, Zouk Mosbeh, Lebanon
E-mail address: ahosry@ndu.edu.lb
Department of Mathematics, Purdue University, West Lafayette, IN 47907
E-mail address: kim455@purdue.edu
Department of Mathematics, University of Illinois at Urbana-Champaign, IL 61801
E-mail address: jvalidas@illinois.edu
| 0 |
A Genetic Algorithm for solving Quadratic
Assignment Problem (QAP)
H. Azarbonyada, R. Babazadehb
a
b
Department of Electrical and computers engineering, University of Tehran, Tehran, Iran, h.azarbonyad@ece.ut.ac.ir
Department of industrial engineering, University of Tehran, Tehran, Iran, r.babazadeh@ut.ac.ir
Abstract— The Quadratic Assignment Problem (QAP) is
one of the models used for the multi-row layout problem
with facilities of equal area. There are a set of n facilities
and a set of n locations. For each pair of locations,
a distance is specified and for each pair of facilities a
weight or flow is specified (e.g., the amount of supplies
transported between the two facilities). The problem is to
assign all facilities to different locations with the aim of
minimizing the sum of the distances multiplied by the
corresponding flows. The QAP is among the most difficult
NP-hard combinatorial optimization problems. Because of
this, this paper presents an efficient Genetic algorithm
(GA) to solve this problem in reasonable time. To validate the proposed GA, some examples are selected from the QAP library. The obtained results, achieved in reasonable time, show the efficiency of the proposed GA.
Key words- Genetic Algorithm, QAP, Multi-row layout problems
optimization problems. The QAP is an NP-hard optimization problem [7], so to practically solve the QAP one has to apply
heuristic algorithms which find very high quality solutions in
short computation time. Moreover, there is no known algorithm for solving this problem in polynomial time, and even small instances may require long computation time [2].
The location of facilities with material flow between them was
first modeled as a QAP by Koopmans and Beckmann [3]. In a facility layout problem, there are n facilities to be assigned to n given locations. The QAP formulation requires an
equal number of facilities and locations. If there are fewer than
n, say m<n, facilities to be assigned to n locations, then to use
the QAP formulation, n-m dummy facilities should be created
and a zero flow between each of these and all others must be
1. INTRODUCTION
assigned ( including the other dummy facilities). If there are
Some multi-row layout problems are the control layout
problem, the machine layout problem in an automated
manufacturing system and office layout problem also
the Travelling Salesman Problem (TSP) may be seen as a
fewer locations than facilities, then the problem is infeasible
[4]. Considering previous works in QAP ([2],[8] and [9]) this
paper is developed to solve this problem with GA in short
computation time.
special case of QAP if one assumes that the flows connect all
facilities only along a single ring, all flows have the same non-
In the next section the QAP is described and formulated. After
zero
of
introducing GA in Section 3, the encoding scheme, solution
standard combinatorial optimization problems may be written
representation and GA operators are described in this section.
in this form (see [1] and [2]).
The computational results are reported in Section 4. Finally,
The Quadratic Assignment Problem (QAP) is one of the
Section 5 concludes this paper and offers some directions for
classical combinatorial optimization problems and is known
further research.
for its diverse applications and is widely regarded as one of
2. Mathematical model
(constant)
value.
Many
other
problems
the most difficult problem in classical combinatorial
The following notation is used in formulation of QAP [4]:
parameters
total number of facilities and locations
n
flow of material from facility I to facility k
fik
distance from location j to location l
djl
Variable
1 If facility I is assigned to location j
=
xij
0 Otherwise
individuals and happens in generations. In each generation, the
fitness of every individual in the population is evaluated,
multiple individuals are stochastically selected from the
current population (based on their fitness), and modified
(recombined and possibly randomly mutated) to form a new
population. The new population is then used in the next
iteration of the algorithm. Commonly, the algorithm
n
n
n
i
j
k
(1)
n
∑∑∑∑
Min
f ik d jl x ij x kl
terminates when either a maximum number of generations has
been produced, or a satisfactory fitness level has been reached
l
for the population. If the algorithm has terminated due to a
n
∑x
ij
=1
(2)
∀i
maximum number of generations, a satisfactory solution may
or may not have been reached [5].
j
n
∑x
ij
=1
(3)
∀j
i
(4)
xij is binary
3.1. Encoding scheme
In this section we describe the encoding scheme for different
component of proposed GA when n=5. For creating initial
The objective function (1) minimizes the total distances and
flows between facilities. Constraints (2) and (4) ensure that
each facility I is assigned to exactly one location. As well as
Constraints (3) and (4) assure that each location j has exactly
one facility which assigned to it. The term quadratic stems
population we consider a chromosome with n gene, as any
gene is depictive of assignment each facility to exactly one
location. For example the chromosome in Figure 1 show that
facilities 2,4,3,1 and 5 is assigned to locations 1,2,3,4 and 5
respectively that it is a feasible solution.
from the formulation of the QAP as an integer optimization
problem with a quadratic objective function.
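Since constraints (2)–(4) force every feasible solution to be a permutation (each facility gets exactly one location and vice versa), the objective (1) can be evaluated directly from a permutation. The short Python sketch below is our own illustration (it is not the authors' Java implementation, and the function and variable names are ours).

```python
def qap_cost(perm, flow, dist):
    """Objective (1): sum of flow[i][k] * dist[perm[i]][perm[k]] over all facility pairs.

    perm[i] is the location assigned to facility i (a permutation of 0..n-1),
    flow[i][k] is the flow between facilities i and k, dist[j][l] the distance
    between locations j and l.
    """
    n = len(perm)
    return sum(flow[i][k] * dist[perm[i]][perm[k]]
               for i in range(n) for k in range(n))

# Tiny made-up instance with n = 3, just to show the call.
flow = [[0, 3, 1],
        [3, 0, 2],
        [1, 2, 0]]
dist = [[0, 1, 4],
        [1, 0, 2],
        [4, 2, 0]]
print(qap_cost([0, 1, 2], flow, dist))  # identity assignment
print(qap_cost([2, 0, 1], flow, dist))  # an alternative assignment
```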
3. Genetic Algorithm
facility:  2  4  3  1  5      location:  1  2  3  4  5
Fig.1. chromosome representation
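The encoding procedure for building a feasible chromosome (Section 3.1) amounts to sampling a random permutation: random facility numbers are drawn and a draw is accepted only if that facility has not been assigned yet. A minimal Python sketch of this procedure, with hypothetical function names of our own choosing, could look as follows.

```python
import random

def random_chromosome(n):
    """Build a feasible chromosome: gene j holds the facility assigned to location j.

    Keep drawing a random facility in 1..n and accept it only if it has not been
    assigned yet, until every location is filled.
    """
    available = [True] * (n + 1)       # availability flag per facility (1-based)
    chromosome = []
    while len(chromosome) < n:
        facility = random.randint(1, n)
        if available[facility]:        # facility still unassigned
            chromosome.append(facility)
            available[facility] = False
    return chromosome

def initial_population(pop_size, n):
    """Generate the required number of feasible chromosomes."""
    return [random_chromosome(n) for _ in range(pop_size)]

print(random_chromosome(5))            # e.g. [2, 4, 3, 1, 5] as in Figure 1
```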
Genetic Algorithms (GAs) are routinely used to generate useful
For making above chromosome (feasible solution) to create
solutions to optimization and search problems. Genetic
initial population, firstly we assume that all content of
algorithms belong to the larger class of evolutionary
chromosome are placed equal to 1, then random integer
algorithms (EA), which generate solutions to optimization
number from 1 to 5 is produced, say that 3, and then it is
problems using techniques inspired by natural evolution. In a
compared with the content of corresponding square, so if the
genetic
(called
corresponding content is equal to 1, facility 3 is assigned to
chromosomes or the genotype of the genome), which encode
location 1 and then this content is set equal to zero (see figure
candidate solutions (called individuals, creatures) to an
2). Again a random integer number from 1 to 5 is produced,
optimization problem, evolves toward better solutions.
then If it is equal to 3 because of the corresponding content is
Traditionally, solutions are represented in binary as strings of
zero (it means that facility 3 is assigned) so, another number
0s and 1s, but other encodings are also possible. The evolution
should be produced. This way continues until all facilities are
algorithm,
a
population
of
strings
usually starts from a population of randomly generated
assigned to locations and also the determined number of initial
2
5
3
1
4
2
4
3
1
5
population of chromosomes is satisfied.
1
1
1
1
1
1
1
0
1
1
Fig.4. Mutation representation
3.1.3. Selection
Fig.2. making chromosome (feasible solution)
In selection operator the number of chromosomes required to
3.1.1. Crossover
complete next generation from crossover, mutation and
We use two point crossovers for implementation of GA. For
making child 1 and child 2, two points are randomly selected
from 1 to 5 and between these points is hold fixed. Then, child
1 is created according to orders of parent 2 and fixed
components of parent 1, as well as child 2 is produced
previous generation are determined special mechanism that we
use roulette wheel mechanism. In roulette wheel selection,
candidate solutions are given a probability of being selected
that is directly proportionate to their fitness. The selection
probability of a candidate solution I is
Fi
, where Fi is
P
∑
according to order of parent 1 and fixed components of parent
Fj
j
2 (see Figure 3). After tuning the crossover probability is
the fitness of chromosome I (objective function in QAP
determined equal to 0.8. This means that 80% of selected
formulation) and P is the population size.
parents participate in crossover operation.
3
5
2
4
1
5
3
4. Computational Results
PARENT 1
2
1
4
PARENT 2
To validate the proposed GA, some examples are selected from the QAP library. The proposed GA was programmed in Java (NetBeans 6.9.1) on a Pentium dual-core 2.66 GHz computer with 4 GB RAM. The objective function values of the best
1
5
2
4
3
As it is illustrated in Table 1 the proposed GA results in
CHILD 1
3
2
4
known solutions are given by Burkard et al. [6].
5
1
the short computational time with acceptable GAP.
CHILD 2
Table 1: Computational results of proposed GA
FIG.3. CROSSOVER REPRESENTATION
3.1.2. Mutation
Mutation operator changes the value of each gene in a
chromosome with probability 0.2. For mutation two genes are
randomly selected from selected chromosome and then their
locations are substituted (see Figure 4).
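To make the operators of Sections 3.1.1–3.1.3 concrete, here is a compact Python sketch (our own illustrative code, not the authors' Java program). The crossover keeps the segment between the two cut points from one parent and fills the remaining genes in the order they appear in the other parent; the mutation swaps two randomly chosen genes; the roulette-wheel selection draws chromosomes with probability proportional to a fitness score. Since the QAP objective is minimized, we use the reciprocal of the cost as that score, which is a common convention and an assumption on our part.

```python
import random

def two_point_crossover(parent1, parent2):
    """Keep parent1's genes between two random cut points; fill the remaining
    positions with the missing facilities in the order they occur in parent2."""
    n = len(parent1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = parent1[a:b + 1]
    fill = [f for f in parent2 if f not in child]   # order taken from parent2
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def swap_mutation(chromosome, prob=0.2):
    """With the given probability, swap the contents of two random genes."""
    child = chromosome[:]
    if random.random() < prob:
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

def roulette_wheel(population, costs):
    """Select one chromosome with probability proportional to 1/cost
    (reciprocal cost used as fitness; costs are assumed positive)."""
    fitness = [1.0 / c for c in costs]
    return random.choices(population, weights=fitness, k=1)[0]
```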
Test Problems   Global/local optimum   Gap%     Run time (second)
Nug 12          global                 0        1
Nug 17          local                  .0034    3
Nug 20          local                  0        3
Nug 24          local                  .0034    4
Nug 28          local                  .012     5
chr12a.dat      local                  0        1
chr12b.dat      local                  0        2
chr15a.dat      local                  1        4
5. Conclusion
In this paper, the QAP is first introduced as a special type of multi-row layout problem with facilities of equal area, and its various applications in different fields are described. Then,
due to NP-hard nature of QAP an efficient GA algorithm is
developed to solve it in reasonable time. Finally, some
computational results from data set of QAP library are
provided to show the efficiency and capability of proposed
GA. Comparing outcome results with those works available in
literature justify the proficiency of proposed GA in achieving
acceptable solutions in reasonable time. For future research
direction, developing other heuristic and/or meta-heuristic
such as simulated annealing (SA) algorithm and PSO
algorithm and then comparison results of them with results of
this paper can be attractive work.
References
[1] Burkard R.E.(1984). Location with spatial interaction – Quadratic
assignment problem in discrete location theory edited by R.L.
Francis and P.B. Mirchandani. New York: Academic.
[2] Ravindra K. Ahuja, James B. Orlin, Ashish Tiwari (2000). A
greedy genetic algorithm for the quadratic assignment Problem.
Computers & Operations Research 27. 917-934.
[3] Koopmans T.C. and Beckmann M. (1957). Assignment problems
and the location of economic activities. Econometrica 25, 1:53-76.
[4] Heragu S. (1997). Facilities Design. Boston, PWS Publishing
company.
[5] Eiben, A. E. et al (1994). Genetic algorithms with multi-parent
recombination. PPSN III: Proceedings
of the International
Conference on Evolutionary Computation. The Third Conference on
Parallel Problem Solving from Nature: 78–87.
[6] Burkard RE, Karisch SE, Rendl F. (1997). QAPLIB – A quadratic
assignment program library. J. Global Optim.10:391- 403.
[7] S. Sahni and T. Gonzales, P-Complete Approximation Problems,
J. ACM, vol. 23, pp. 555-565, 1976.
[8] V. Maniezzo and A. Colorni. The Ant System Applied to the
Quadratic Assignment Problem. Accepted for publication in IEEE
Transactions on Knowledge and Data Engineering, 1999.
[9] P. Merz and B. Freisleben. A Genetic Local Search Approach to
the Quadratic Assignment Problem. In T. B¨ack, editor, Proceedings
of the Seventh International Conference on Genetic Algorithms
(ICGA’97), pages 465–472. Morgan Kaufmann, 1997.
| 9 |
Synchronization Strings: Codes for Insertions and Deletions
Approaching the Singleton Bound∗.
arXiv:1704.00807v1 [] 3 Apr 2017
Bernhard Haeupler
Carnegie Mellon University
haeupler@cs.cmu.edu
Amirbehshad Shahrasbi
Carnegie Mellon University
shahrasbi@cs.cmu.edu
Abstract
We introduce synchronization strings, which provide a novel way of efficiently dealing with synchronization errors, i.e., insertions and deletions. Synchronization errors are
strictly more general and much harder to deal with than more commonly considered half-errors,
i.e., symbol corruptions and erasures. For every ε > 0, synchronization strings allow to index a
sequence with an ε−O(1) size alphabet such that one can efficiently transform k synchronization errors into (1 + ε)k half-errors. This powerful new technique has many applications.
In this paper, we focus on designing insdel codes, i.e., error correcting block codes (ECCs) for
insertion-deletion channels.
While ECCs for both half-errors and synchronization errors have been intensely studied, the
latter has largely resisted progress. As Mitzenmacher puts it in his 2009 survey [22]: “Channels
with synchronization errors . . . are simply not adequately understood by current theory. Given
the near-complete knowledge we have for channels with erasures and errors ... our lack of understanding about channels with synchronization errors is truly remarkable.” Indeed, it took until
1999 for the first insdel codes with constant rate, constant distance, and constant alphabet size
to be constructed and only since 2016 are there constructions of constant rate insdel codes for
asymptotically large noise rates. Even in the asymptotically large or small noise regime these
codes are polynomially far from the optimal rate-distance tradeoff. This makes the understanding of insdel codes up to this work equivalent to what was known for regular ECCs after Forney
introduced concatenated codes in his doctoral thesis 50 years ago.
A straightforward application of our synchronization strings based indexing method gives a
simple black-box construction which transforms any ECC into an equally efficient insdel
code with only a small increase in the alphabet size. This instantly transfers much of the highly
developed understanding for regular ECCs into the realm of insdel codes. Most notably, for the
complete noise spectrum we obtain efficient “near-MDS” insdel codes which get arbitrarily close
to the optimal rate-distance tradeoff given by the Singleton bound. In particular, for any
δ ∈ (0, 1) and ε > 0 we give insdel codes achieving a rate of 1 − δ − ε over a constant size
alphabet that efficiently correct a δ fraction of insertions or deletions.
∗
Supported in part by the National Science Foundation through grants CCF-1527110 and CCF-1618280.
1
Introduction
Since the fundamental works of Shannon, Hamming, and others the field of coding theory has
advanced our understanding of how to efficiently correct symbol corruptions and erasures. The
practical and theoretical impact of error correcting codes on technology and engineering as well as
mathematics, theoretical computer science, and other fields is hard to overestimate. The problem
of coding for timing errors such as closely related insertion and deletion errors, however, while
also studied intensely since the 60s, has largely resisted such progress and impact so far. An expert panel [8] in 1963 concluded: “There has been one glaring hole in [Shannon’s] theory; viz.,
uncertainties in timing, which I will propose to call time noise, have not been encompassed . . . .
Our thesis here today is that the synchronization problem is not a mere engineering detail, but a
fundamental communication problem as basic as detection itself !” however as noted in a comprehensive survey [21] in 2010: “Unfortunately, although it has early and often been conjectured that
error-correcting codes capable of correcting timing errors could improve the overall performance of
communication systems, they are quite challenging to design, which partly explains why a large
collection of synchronization techniques not based on coding were developed and implemented over
the years.” or as Mitzenmacher puts in his survey [22]: “Channels with synchronization errors,
including both insertions and deletions as well as more general timing errors, are simply not adequately understood by current theory. Given the near-complete knowledge we have for channels
with erasures and errors . . . our lack of understanding about channels with synchronization errors is
truly remarkable.” We, too, believe that the current lack of good codes and general understanding
of how to handle synchronization errors is the reason why systems today still spend significant
resources and efforts on keeping very tight controls on synchronization while other noise is handled
more efficiently using coding techniques. We are convinced that a better theoretical understanding
together with practical code constructions will eventually lead to systems which naturally and more
efficiently use coding techniques to address synchronization and noise issues jointly. In addition,
we feel that better understanding the combinatorial structure underlying (codes for) insertions and
deletions will have impact on other parts of mathematics and theoretical computer science.
In this paper, we introduce synchronization strings, a new combinatorial structure which allows
efficient synchronization and indexing of streams under insertions and deletions. Synchronization
strings and our indexing abstraction provide a powerful and novel way to deal with synchronization
issues. They make progress on the issues raised above and have applications in a large variety
of settings and problems. We already found applications to channel simulations, synchronization
sequences [21], interactive coding schemes [4–7, 15, 17], edit distance tree codes [2], and error correcting codes for insertion and deletions and suspect there will be many more. In this paper we
focus on the last application, namely, designing efficient error correcting block codes over large
alphabets for worst-case insertion-deletion channels.
The knowledge on efficient error correcting block codes for insertions and deletions, also called
insdel codes, severely lacks behind what is known for codes for Hamming errors. While Levenshtein [18] introduced and pushed the study of such codes already in the 60s it took until 1999
for Schulman and Zuckerman [25] to construct the first insdel codes with constant rate, constant
distance, and constant alphabet size. Very recent work of Guruswami et al. [10, 13] in 2015 and
2016 gave the first constant rate insdel codes for asymptotically large noise rates, via list decoding.
These codes are however still polynomially far from optimal in their rate or decodable distance
respectively. In particular, they achieve a rate of Ω(ε5 ) for a relative distance of 1 − ε or a relative
distance of O(ε2 ) for a rate of 1 − ε, for asymptotically small ε > 0 (see Section 1.5 for a more
detailed discussion of related work).
This paper essentially closes this line of work by designing efficient “near-MDS” insdel codes
1
which approach the optimal rate-distance trade-off given by the Singleton bound. We prove that
for any 0 ≤ δ < 1 and any constant ε > 0, there is an efficient insdel code over a constant size
alphabet with block length n and rate 1 − δ − ε which can be uniquely and efficiently decoded from
any δn insertions and deletions. The code construction takes polynomial time; and encoding and
decoding can be done in linear and quadratic time, respectively. More formally, let us define the
edit distance of two given strings as the minimum number of insertions and deletions required to
convert one of them to the other one.
Theorem 1.1. For any ε > 0 and δ ∈ (0, 1) there exists an encoding map E : Σ^k → Σ^n and a decoding map D : Σ^* → Σ^k such that if EditDistance(E(m), x) ≤ δn then D(x) = m. Further, k/n > 1 − δ − ε, |Σ| = f(ε), and E and D are explicit and can be computed in linear and quadratic
time in n.
We obtain this code via a black-box construction which transforms any ECC into an equally
efficient insdel code with only a small increase in the alphabet size. This transformation, which
is a straightforward application of our new synchronization strings based indexing method, is so
simple that it can be summarized in one sentence:
For any efficient length n ECC with alphabet bit size log(ε^{−1})/ε, attaching to every
codeword, symbol by symbol, a random or suitable pseudorandom string over an
alphabet of bit size log ε−1 results in an efficient insdel code with a rate and
decodable distance that changed by at most ε.
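As an illustration of this one-sentence construction, the encoding side can be sketched in a few lines of Python (an informal sketch with names of our own choosing; the decoding side, where the synchronization string is used to translate insertions and deletions into half-errors before running the base ECC decoder, is where the actual work described below happens).

```python
def attach_index(ecc_codeword, sync_string):
    """Black-box encoding step: pair every ECC symbol with the corresponding
    symbol of a (pseudo)random synchronization string of the same length.

    The resulting alphabet is the product of the two alphabets, so its bit size
    grows only by the bit size of the synchronization-string alphabet.
    """
    assert len(ecc_codeword) == len(sync_string)
    return list(zip(ecc_codeword, sync_string))

# Toy usage: a length-6 "codeword" indexed by a length-6 synchronization string.
codeword = ['a', 'b', 'r', 'a', 'c', 'a']
sync     = [3, 1, 4, 1, 5, 9]          # stand-in for an ε-synchronization string
print(attach_index(codeword, sync))    # [('a', 3), ('b', 1), ...]
```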
Far beyond just implying Theorem 1.1, this allows to instantly transfer much of the highly
developed understanding for regular ECCs into the realm of insdel codes.
Theorem 1.1 is obtained by using the “near-MDS” expander codes of Guruswami and Indyk [9]
as a base ECC. These codes generalize the linear time codes of Spielman [27] and can be encoded
and decoded in linear time. Our simple encoding strategy, as outlined above, introduces essentially
no additional computational complexity during encoding. Our quadratic time decoding algorithm,
however, is slower than the linear time decoding of the base codes from [9] but still pretty fast. In
particular, a quadratic time decoding for an insdel code is generally very good given that, in contrast
to Hamming codes, even computing the distance between the received and the sent/decoded string
is an edit distance computation. Edit distance computations in general do usually not run in
sub-quadratic time, which is not surprising given the recent SETH-conditional lower bounds [1].
For the settings of for insertion-only and deletion-only errors we furthermore achieve analogs of
Theorem 1.1 with linear decoding complexities.
In terms of the dependence of the alphabet bit size on the parameter ε, which characterizes how
close a code is to achieving an optimal rate/distance pair summing to one, our transformation seem
to inherently produce an alphabet bit size that is near linear in 1ε . However, the same is true for the
state of the art linear-time base ECCs [9] which have an alphabet bit size of Θ( ε12 ). Existentially
it is known that an alphabet bit size logarithmic in 1ε is necessary and sufficient and ECCs based
on algebraic geometry [29] achieving such a bound up to constants are known, but their encoding
and decoding complexities are higher.
1.1
High-level Overview, Intuition and Overall Organization
While extremely powerful, the concept and idea behind synchronization strings is easily demonstrated. In this section, we explain the high-level approach taken and provide intuition for the
formal definitions and proofs to follow. This section also explains the overall organization of the
rest of the paper.
2
1.1.1
Synchronization Errors and Half-Errors
Consider a stream of symbols over a large but constant size alphabet Σ in which some constant
fraction δ of symbols is corrupted.
There are two basic types of corruptions we will consider, half-errors and synchronization errors.
Half-errors consist of erasures, that is, a symbol being replaced with a special “?” symbol indicating
the erasure, and symbol corruptions in which a symbol is replaced with any other symbol in Σ.
The wording half-error comes from the realization that when it comes to code distances erasures
are half as bad as symbol corruptions. An erasure is thus counted as one half-error while a symbol
corruption counts as two half-errors (see Section 2 for more details). Synchronization errors consist
of deletions, that is, a symbol being removed without replacement, and insertions, where a new
symbol from Σ is added anywhere.
It is clear that synchronization errors are strictly more general and harsher than
half-errors. In particular, any symbol corruption, worth two half-errors, can also be achieved
via a deletion followed by an insertion. Any erasure can furthermore be interpreted as a deletion
together with the often very helpful extra information where this deletion took place. This makes
synchronization errors at least as hard as half-errors. The real problem that synchronization errors bring with them however is that they cause sending and receiving parties to become “out of
synch”. This easily changes how received symbols are interpreted and makes designing codes or
other systems tolerant to synchronization errors an inherently difficult and significantly less well
understood problem.
1.1.2
Indexing and Synchronization Strings: Reducing Synchronization Errors to
Half-Errors
There is a simple folklore strategy, which we call indexing, that avoids these synchronization problems: Simply enhance any element with a time stamp or element count. More precisely, consecutively number the elements and attach this position count or index to each stream element. Now,
if we deal with only deletions it is clear that the position of any deletion is easily identified via a
missing index, thus transforming it into an erasure. Insertions can be handled similarly by treating
any stream index which is received more than once as erased. If both insertions and deletions are
allowed one might still have elements with a spoofed or incorrectly received index position caused
by a deletion of an indexed symbol which is then replaced by a different symbol with the same
index. This however requires two insdel errors. Generally this trivial indexing strategy can seen to
successfully transform any k synchronization errors into at most k half-errors.
In many applications, however, this trivial indexing cannot be used, because having to attach a
log n bit1 long index description to each element of an n long stream is prohibitively costly. Consider
for example an error correcting code of constant rate R over some potentially large but nonetheless
constant size alphabet Σ, which encodes logRn|Σ| bits into n symbols from Σ. Increasing Σ by a
factor of n to allow each symbol to carry its log n bit index would destroy the desirable property
of having an alphabet which is independent from the block length n and would furthermore reduce
the rate of the code from R to Θ( logR n ), which approaches zero for large block lengths. For streams
of unknown or infinite length such problems become even more pronounced.
This is where synchronization strings come to the rescue. Essentially, synchronization strings allow to index every element in an infinite stream using only a constant size alphabet while
achieving an arbitrarily good approximate reduction from synchronization errors to half-errors. In
particular, using synchronization strings k synchronization errors can be transformed into
1
Throughout this paper all logarithms are binary.
3
at most (1 + ε)k half-errors using an alphabet of size independent of the stream length
and in fact only polynomial in 1ε . Moreover, these synchronization strings have simple constructions
and fast and easy decoding procedures.
Attaching our synchronization strings to the codewords of any efficient error correcting code,
which efficiently tolerates the usual symbol corruptions and erasures, transforms any such code into
an efficiently decodable insdel code while only requiring a negligible increasing in the alphabet size.
This allows to use the decades of intense research in coding theory for Hamming-type errors to be
transferred into the much harder and less well understood insertion-deletion setting.
1.2
Synchronization Strings: Definition, Construction, and Decoding
Next, we want to briefly motivate and explain how we arrive at a natural definition of these
magical indexing sequences S over a finite alphabet Σ and what intuition lies behind their efficient
constructions and decoding procedures.
Suppose a sender has attached some indexing sequence S one-by-one to each element in a
stream and consider a time t at which a receiver has received a corrupted sequence of the first
t index descriptors, i.e., a corrupted version of the length t prefix of S. When the receiver tries
to guess or decode the current index it should naturally consider all indexing symbols received so
far and find the “best” prefix of S. This suggests that the prefix of length l of a synchronization
string S acts as a codeword for the index position l and that one should think of the set of prefixes
of S as a code associated with the synchronization string S. Naturally one would want such a
code to have good distance properties between any two codewords under some distance measure.
While edit distance, i.e., the number of insertions and deletions needed to transform one string
into another seems like the right notion of distance for insdel errors in general, the prefix nature
of the codes under consideration will guarantee that codewords for indices l and l0 > l will have
edit distance exactly l0 − l. This implies that even two very long codewords only have a tiny edit
distance. On the one hand, this precludes synchronization codes with a large relative edit distance
between its codewords. On the other hand, one should see this phenomenon as simply capturing the
fact that at any time a simple insertion of an incorrect symbol carrying the correct next indexing
symbol will lead to an unavoidable decoding error. Given this natural and unavoidable sensitivity
of synchronization codes to recent corruptions, it makes sense to instead use a distance measure
which captures the recent density of errors. In this spirit, we suggest the definition of a, to our
knowledge, new string distance measure which we call relative suffix distance, which intuitively
measures the worst fraction of insdel errors to transform suffixes, i.e., recently sent parts of two
strings, into each other. This natural measure, in contrast to a similar measure defined in [2], turns
out to induce a metric space on any set of strings.
With this natural definitions for an induced set of codewords and a natural distance metric
associated with any such set the next task is to design a string S for which the set of codewords
has as large of a minimum pairwise distance as possible. When looking for (infinite) sequences
that induce such a set of codewords and thus can be successfully used as synchronization strings
it became apparent that one is looking for highly irregular and non-self-similar strings over a fixed
alphabet Σ. It turns out that the correct definition to capture these desired properties, which we
call ε-synchronization property, states that any two neighboring intervals of S with total length l
should require at least (1−ε)l insertions and deletions to transform one into the other, where ε ≥ 0.
A one line calculation also shows that this clean property also implies a large minimum relative
suffix distance between any two codewords. Not surprisingly, random strings essentially satisfy
this ε-synchronization property, except for local imperfections of self-similarity, such as, symbols
repeated twice in a row, which would naturally occur in random sequences about every |Σ| positions.
4
This allows us to use the probabilistic method and the general Lovász Local Lemma to prove the
existence ε-synchronization strings. This also leads to an efficient randomized construction.
Finally, decoding any string to the closest codeword, i.e., the prefix of the synchronization string
S with the smallest relative suffix distance, can be easily done in polynomial time because the set of
synchronization codewords is linear and not exponential in n and (edit) distance computations (to
each codeword individually) can be done via the classical Wagner-Fischer dynamic programming
approach.
1.3
More Sophisticated Decoding Procedures
All this provides an indexing solution which transforms any k synchronization errors into at most
(5 + ε)k half-errors. This already leads to insdel codes which achieve a rate approaching 1 − 5δ for
any δ fraction of insdel errors with δ < 51 . While this is already a drastic improvement over the
√
previously best 1 − O( δ) rate codes from [10], which worked only for sufficiently small δ, it is a
far less strong result than the near-MDS codes we promised in Theorem 1.1 for every δ ∈ (0, 1).
We were able to improve upon the above strategy slightly by considering an alternative to
the relative suffix distance measure, which we call relative suffix pseudo distance RSPD. RSPD
was introduced in [2] and while neither being symmetric nor satisfying the triangle inequality, can
act as a pseudo distance in the minimum-distance decoder. For any set of k = ki + kd insdel
errors consisting of ki insertions and kd deletions this improved indexing solution leads to at most
(1 + ε)(3ki + kd ) half-errors which already implies near-MDS codes for deletion-only channels but
still falls short for general insdel errors. We leave open the question whether an improved pseudo
distance definition can achieve an indexing solution with negligible number of misdecodings for a
minimum-distance decoder.
In order to achieve our main theorem we developed a different strategy. Fortunately, it turned
out that achieving a better indexing solution and the desired insdel codes does not require any
changes to the definition of synchronization codes, the indexing approach itself, or the encoding
scheme but solely required a very different decoding strategy. In particular, instead of decoding
indices in a streaming manner we consider more global decoding algorithms. We provide several
such decoding algorithms in Section 6. In particular, we give a simple global decoding algorithm
for which the number of misdecodings goes to zero as the quality ε of the ε-synchronization
string used goes to zero, irrespective of how many insdel errors are applied.
Our global decoding algorithms crucially build on another key-property which we prove holds
for any ε-synchronization string S, namely that there is no monotone matching between S and itself
which mismatches more than an ε fraction of indices. Besides being used in our proofs, considering
this ε-self-matching property has another advantage. We show that this property is achieved
more easily than the full ε-synchronization property and that indeed a random string satisfies it with
good probability. This means that, in the context of error correcting codes, one can even use
a simple uniformly random string as a “synchronization string”. Lastly, we show that even
n^(−O(1))-approximate O(log n / log(1/ε))-wise independent random strings satisfy the desired ε-self-matching
property which, using the celebrated small sample space constructions from [24], also leads to a
deterministic polynomial time construction.
deterministic polynomial time construction.
Lastly, we provide simpler and faster global decoding algorithms for the setting of deletion-only
and insertion-only corruptions. These algorithms are essentially greedy algorithms which run in
linear time. They furthermore guarantee that their indexing decoding is error-free, i.e., they only
output “I don’t know” for some indices but never produce an incorrectly decoded index. Such
decoding schemes have the advantage that one can use them in conjunction with error correcting
codes that efficiently recover from erasures (and not necessarily also symbol corruptions).
1.4 Organization of this Paper
The organization of this paper closely follows the flow of the high-level description above.
We start by giving more details on related work in Section 1.5 and introduce notation used in
the paper in Section 2 together with a formal introduction of the two different error types as well as
(efficient) error correcting codes and insdel codes. In Section 3, we formalize the indexing problem
and (approximate) solutions to it. Section 4 shows how any solution to the indexing problem can
be used to transform any regular error correcting codes into an insdel code. Section 5 introduces
the relative suffix distance and ε-synchronization strings, proves the existence of ε-synchronization
strings and provides an efficient construction. Section 5.2 shows that the minimum suffix distance
decoder is efficient and leads to a good indexing solution. We elaborate on the connection between
ε-synchronization strings and the ε-self-matching property in Section 6.1 and provide our improved
decoding algorithms in the remainder of Section 6.
1.5 Related Work
Shannon was the first to systematically study reliable communication. He introduced random error
channels, defined information quantities, and gave probabilistic existence proofs of good codes.
Hamming was the first to look at worst-case errors and code distances as introduced above. Simple
counting arguments on the volume of balls around codewords given in the 50’s by Hamming and
Gilbert-Varshamov produce simple bounds on the rate of q-ary codes with relative distance δ. In
particular, they show the existence of codes with relative distance δ and rate at least 1−Hq (δ) where
Hq(x) = (x log(q − 1) − x log x − (1 − x) log(1 − x)) / log q is the q-ary entropy function. This means that for any
δ < 1 and q = ω(1/δ) there exist codes with distance δ and rate approaching 1 − δ. Concatenated
codes and the generalized minimum distance decoding procedure introduced by Forney in 1966
led to the first codes which could recover from constant error fractions δ ∈ (0, 1) while having
polynomial time encoding and decoding procedures. The rate achieved by concatenated codes for
large alphabets with sufficiently small distance δ comes out to be 1 − O(√δ). On the other hand,
for δ sufficiently close to one, one can achieve a constant rate of O(δ^2). Algebraic geometry codes
suggested by Goppa in 1975 later lead to error correcting codes which for every ε > 0 achieve the
optimal rate of 1 − δ − ε with an alphabet size polynomial in ε while being able to efficiently correct
for a δ fraction of half-errors [29].
While this answered the most basic questions, research since then has developed a tremendously
powerful toolbox and selection of explicit codes. It attests to the importance of error correcting
codes that over the last several decades this research direction has developed into the incredibly
active field of coding theory with hundreds of researchers studying and developing better codes.
A small and highly incomplete subset of important innovations includes rateless codes, such as
LT codes [20], which do not require fixing a desired distance at the time of encoding, explicit
expander codes [9, 27] which allow linear time encoding and decoding, polar codes [12, 14] which
can approach Shannon’s capacity polynomially fast, network codes [19] which allow intermediate
nodes in a network to recombine codewords, and efficiently list decodable codes [11] which allow to
list-decode codes of relative distance δ up to a fraction of about δ symbol corruptions.
While error correcting codes for insertions and deletions have also been intensely studied, our
understanding of them is much less well developed. We refer to the 2002 survey by Sloane [26] on
single-deletion codes, the 2009 survey by Mitzenmacher [22] on codes for random deletions and
the most general 2010 survey by Mercier et al. [21] for the extensive work done around codes
for synchronization errors and only mention the results most closely related to Theorem 1.1 here:
Insdel codes were first considered by Levenshtein [18] and since then many bounds and constructions
for such codes have been given. However, while essentially the same volume and sphere packing
arguments as for regular codes show that there exist insdel codes capable of correcting a fraction δ
of insdel errors with rate 1 − δ, no efficient constructions anywhere close to this rate-distance tradeoff
are known. Even the construction of efficient insdel codes over a constant alphabet with any (tiny)
constant relative distance and any (tiny) constant rate had to wait until Schulman and Zuckerman
gave the first such code in 1999 [25]. Over the last two years Guruswami et al. provided new codes
improving over this state of the art in the asymptotically small and large noise regimes by giving the
first codes which achieve a constant rate for noise rates going to one and codes which provide a
rate going to one for an asymptotically small noise rate. In particular, [13] gave the first efficient
codes over fixed alphabets to correct a deletion fraction approaching 1, as well as efficient
binary codes to correct a small constant fraction of deletions with rate approaching 1. These codes
could, however, only be efficiently decoded for deletions and not insertions. A follow-up work gave
new and improved codes with similar rate-distance tradeoffs which can be efficiently decoded from
insertions and deletions [10]. In particular, these codes achieve rates of Ω(δ^5) and 1 − Õ(√δ) while
being able to efficiently recover from a δ fraction of insertions and deletions. These works put the
current state of the art for error correcting codes for insertions and deletions pretty much equal
to what was known for regular error correcting codes 50 years ago, after Forney’s 1965 doctoral
thesis.
2 Definitions and Preliminaries
In this section, we provide the notation and definitions we will use throughout the rest of the paper.
2.1 String Notation and Edit Distance
String Notation. Let S ∈ Σ^n and S' ∈ Σ^(n') be two strings over alphabet Σ. We define
S · S' ∈ Σ^(n+n') to be their concatenation. For any positive integer k we define S^k to equal k copies
of S concatenated together. For i, j ∈ {1, . . . , n}, we denote the substring of S from the ith index
through and including the j th index as S[i, j]. Such a consecutive substring is also called a factor
of S. For i < 1 we define S[i, j] = ⊥^(−i+1) · S[1, j] where ⊥ is a special symbol not contained in Σ.
We refer to the substring from the ith index through, but not including, the j th index as S[i, j).
The substrings S(i, j] and S(i, j) are similarly defined. Finally, S[i] denotes the ith symbol of S
and |S| = n is the length of S. Occasionally, the alphabets we use are the cross-product of several
alphabets, i.e. Σ = Σ1 × · · · × Σn . If T is a string over Σ, then we write T [i] = [a1 , . . . , an ], where
ai ∈ Σi .
Edit Distance. Throughout this work, we rely on the well-known edit distance metric defined as
follows.
Definition 2.1 (Edit distance). The edit distance ED(c, c0 ) between two strings c, c0 ∈ Σ∗ is the
minimum number of insertions and deletions required to transform c into c0 .
It is easy to see that edit distance is a metric on any set of strings and in particular is symmetric
and satisfies the triangle inequality property. Furthermore, ED(c, c') = |c| + |c'| − 2 · LCS(c, c'),
where LCS(c, c') is the length of the longest common subsequence of c and c'.
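As an illustration, this identity yields a simple quadratic-time procedure for the insertion-deletion edit distance. The following Python sketch (function names are ours, purely illustrative) computes the LCS length by the classical dynamic program and derives ED from it.

def lcs_length(c, cp):
    """Length of the longest common subsequence of c and cp (classical O(|c|*|cp|) DP)."""
    n, m = len(c), len(cp)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if c[i - 1] == cp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

def edit_distance(c, cp):
    """Insertion-deletion edit distance via ED(c, cp) = |c| + |cp| - 2 * LCS(c, cp)."""
    return len(c) + len(cp) - 2 * lcs_length(c, cp)

# Example: turning "abcde" into "axcye" needs 2 deletions and 2 insertions.
assert edit_distance("abcde", "axcye") == 4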
We also use some string matching notation from [2]:
Definition 2.2 (String matching). Suppose that c and c0 are two strings in Σ∗ , and suppose that
∗ is a symbol not in Σ. Next, suppose that there exist two strings τ1 and τ2 in (Σ ∪ {∗})∗ such
that |τ1 | = |τ2 |, del (τ1 ) = c, del(τ2 ) = c0 , and τ1 [i] ≈ τ2 [i] for all i ∈ {1, . . . , |τ1 |}. Here, del is a
function that deletes every ∗ in the input string and a ≈ b if a = b or one of a or b is ∗. Then we
say that τ = (τ1 , τ2 ) is a string matching between c and c0 (denoted τ : c → c0 ). We furthermore
denote with sc (τi ) the number of ∗’s in τi .
Note that the edit distance ED(c, c') between strings c, c' ∈ Σ* is exactly equal to
min_{τ: c→c'} {sc(τ1) + sc(τ2)}.
2.2 Error Correcting Codes
Next we give a quick summary of the standard definitions and formalism around error correcting
codes. This is mainly for completeness and we remark that readers already familiar with basic
notions of error correcting codes might want to skip this part.
Codes, Distance, Rate, and Half-Errors An error correcting code C is an injective function
which takes an input string s ∈ (Σ')^(n') of length n' over alphabet Σ' and generates a codeword
C(s) ∈ Σ^n of length n over alphabet Σ. The length n of a codeword is also called the block
length. The two most important parameters of a code are its distance ∆ and its rate R. The
rate R = (n' log |Σ'|) / (n log |Σ|) measures what fraction of bits in the codewords produced by C carries
non-redundant information about the input. The code distance ∆(C) = min_{s,s'} ∆(C(s), C(s')) is simply
the minimum Hamming distance between any two codewords. The relative distance δ(C) = ∆(C)/n
measures what fraction of output symbols need to be corrupted to transform one codeword into
another.
It is easy to see that if a sender sends out a codeword C(s) of code C with relative distance δ a
receiver can uniquely recover s if she receives a codeword in which less than a δ fraction of symbols
are affected by an erasure, i.e., replaced by a special “?” symbol. Similarly, a receiver can uniquely
recover the input s if less than δ/2 symbol corruptions, in which a symbol is replaced by any other
symbol from Σ, occurred. More generally it is easy to see that a receiver can recover from any
combination of ke erasures and kc corruptions as long as ke + 2kc < δn. This motivates defining
half-errors to incorporate both erasures and symbol corruptions where an erasure is counted as a
single half-error and a symbol corruption is counted as two half-errors. In summary, any code of
distance δ can tolerate any error pattern of less than δn half-errors.
We remark that in addition to studying codes with decoding guarantees for worst-case error
pattern as above one can also look at more benign error models which assume a distribution over
error patterns, such as errors occurring independently at random. In such a setting one looks for
codes which allow unique recovery for typical error patterns, i.e., one wants to recover the input
with probability tending to 1 rapidly as the block length n grows. While synchronization strings
might have applications for such codes as well, this paper focuses exclusively on codes with good
distance guarantees which tolerate an arbitrary (worst-case) error pattern.
Synchronization Errors In addition to half-errors, we study synchronization errors which consist of deletions, that is, a symbol being removed without replacement, and insertions, where a new
symbol from Σ is added anywhere. It is clear that synchronization errors are strictly more
general and harsh than half-errors (see Section 1.1.1). The above formalism of codes, rate,
and distance works equally well for synchronization errors if one replaces the Hamming distance
with edit distance. Instead of measuring the number of symbol corruptions required to transform
one string into another, edit distance measures the minimum number of insertions and deletions to
do so. An insertion-deletion error correcting code, or insdel code for short, of relative distance δ
is a set of codewords for which at least δn insertions and deletions are needed to transform any
codeword into another. Such a code can correct any combination of less than δn/2 insertions and
deletions. We remark that it is possible for two codewords of length n to have edit distance up to
2n putting the (minimum) relative edit distance between zero and two and allowing for constant
rate codes which can tolerate (1 − ε)n insdel errors.
Efficient Codes In addition to codes with a good minimum distance, one furthermore wants
efficient algorithms for the encoding and error-correction tasks associated with the code. Throughout this paper we say a code is efficient if it has encoding and decoding algorithms running in time
polynomial in the block length. While it is often not hard to show that random codes exhibit a good
rate and distance, designing codes which can be decoded efficiently is much harder. We remark
that most codes which can efficiently correct for symbol corruptions are also efficient for half-errors.
For insdel codes the situation is slightly different. While it remains true that any code that can
uniquely be decoded from any δ(C) fraction of deletions can also be decoded from the same fraction
of insertions and deletions [18] doing so efficiently is often much easier for the deletion-only setting
than the fully general insdel setting.
3 The Indexing Problem
In this section, we formally define the indexing problem. In a nutshell, this problem is that of sending a suitably chosen string S of length n over an insertion-deletion channel such that the receiver
will be able to figure out the indices of most of the symbols he receives correctly. This problem
can be trivially solved by sending the string S = 1, 2, . . . , n over the alphabet Σ = {1, . . . , n} of
size n. Interesting solutions to the indexing problem, however, do almost as well while using an
alphabet of finite size. While very intuitive and simple, the formalization of this problem and its solutions
enables an easy use in many applications.
To set up an (n, δ)-indexing problem, we fix n, i.e., the number of symbols which are being sent,
and the maximum fraction δ of symbols that can be inserted or deleted. We further call the string
S the synchronization string. Lastly, we describe the influences of the nδ worst-case insertions
and deletions which transform S into the related string Sτ in terms of a string matching τ . In
particular, τ = (τ1 , τ2 ) is the string matching from S to Sτ such that del(τ1 ) = S, del(τ2 ) = Sτ ,
and for every k

    (τ1[k], τ2[k]) =  (S[i], ∗)        if S[i] is deleted
                      (S[i], Sτ[j])    if S[i] is delivered as Sτ[j]
                      (∗, Sτ[j])       if Sτ[j] is inserted

where i = |del(τ1[1, k])| and j = |del(τ2[1, k])|.
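To make this formalism concrete, the following Python sketch (our own toy model, not part of the paper's algorithms) applies a set of deletions and insertions to S and records the resulting string Sτ together with the string matching τ = (τ1, τ2); the character '*' plays the role of ∗.

def apply_insdels(S, deleted, insertions):
    """Apply deletions/insertions to S and return (S_tau, tau1, tau2).

    deleted    : set of 0-based indices of S removed by the channel.
    insertions : dict mapping a 0-based position i to a list of new symbols
                 inserted right before S[i] is delivered (key len(S) appends at the end).
    """
    S_tau, tau1, tau2 = [], [], []
    for i in range(len(S) + 1):
        for sym in insertions.get(i, []):   # inserted symbol: pair (*, S_tau[j])
            S_tau.append(sym)
            tau1.append('*')
            tau2.append(sym)
        if i == len(S):
            break
        if i in deleted:                    # deleted symbol: pair (S[i], *)
            tau1.append(S[i])
            tau2.append('*')
        else:                               # delivered symbol: pair (S[i], S_tau[j])
            S_tau.append(S[i])
            tau1.append(S[i])
            tau2.append(S[i])
    return S_tau, tau1, tau2

S = list("abcd")
S_tau, tau1, tau2 = apply_insdels(S, deleted={1}, insertions={3: ['x']})
# S_tau == ['a', 'c', 'x', 'd'];  removing '*' from tau1 gives S and from tau2 gives S_tau.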
Definition 3.1 ((n, δ)-Indexing Algorithm). The pair (S, DS ) consisting of a synchronization string
S ∈ Σn and an algorithm DS is called a (n, δ)-indexing algorithm over alphabet Σ if for any set
of nδ insertions and deletions represented by τ which alter S to a string Sτ , the algorithm DS (Sτ )
outputs either ⊥ or an index between 1 and n for every symbol in Sτ .
The ⊥ symbol here represents an “I don’t know” response of the algorithm while an index j
output by DS (Sτ ) for the ith symbol of Sτ should be interpreted as the (n, δ)-indexing algorithm
guessing that this was the j th symbol of S. One seeks algorithms that decode as many indices as
possible correctly. Naturally, one can only correctly decode indices that were correctly transmitted.
Next we give formal definitions of both notions:
Definition 3.2 (Correctly Decoded Index). An (n, δ) indexing algorithm (S, DS ) decodes index
j correctly under τ if DS (Sτ ) outputs i and there exists a k such that i = |del(τ1 [1, k])|, j =
|del(τ2 [1, k])|, τ1 [k] = S[i], τ2 [k] = Sτ [j]
We remark that this definition counts any ⊥ response as an incorrect decoding.
Definition 3.3 (Successfully Transmitted Symbol). For string Sτ , which was derived from a synchronization string S via τ = (τ1 , τ2 ), we call the j th symbol Sτ [j] successfully transmitted if it stems
from a symbol coming from S, i.e., if there exists a k such that |del(τ2 [1, k])| = j and τ1 [k] = τ2 [k].
We now define the quality of an (n, δ)-indexing algorithm by counting the maximum number of
misdecoded indices among those that were successfully transmitted. Note that the trivial indexing
strategy with S = 1, . . . , n which outputs for each symbol the symbol itself has no misdecodings.
One can therefore also interpret our quality definition as capturing how far from this ideal solution
an algorithm is (a gap which likely stems from the smaller alphabet which is used for S).
Definition 3.4 (Misdecodings of an (n, δ)-Indexing Algorithm). Let (S, DS ) be an (n, δ)-indexing
algorithm. We say this algorithm has at most k misdecodings if for any τ corresponding to at most
nδ insertions and deletions the number of correctly transmitted indices that are incorrectly decoded
is at most k.
Now, we introduce two further useful properties that a (n, δ)-indexing algorithm might have.
Definition 3.5 (Error-free Solution). We call (S, DS ) an error-free (n, δ)-indexing algorithm with
respect to a set of deletion or insertion patterns if every index output is either ⊥ or correctly
decoded. In particular, the algorithm never outputs an incorrect index, even for indices which are
not correctly transmitted.
It is noteworthy that error-free solutions are essentially only obtainable when dealing with the
insertion-only or deletion-only setting. In both cases, the trivial solution with S = 1, · · · , n which
decodes any index that was received exactly once is error-free. We later give some algorithms which
preserve this nice property, even over a smaller alphabet, and show how error-freeness can be useful
in the context of error correcting codes.
Lastly, another very useful property of some (n, δ)-indexing algorithms is that their decoding
process operates in a streaming manner, i.e., the decoding algorithm decides the index output for
Sτ [j] independently of Sτ [j 0 ] where j 0 > j. While this property is not particularly useful for the
error correcting block code application put forward in this paper, it is an extremely important and
strong property which is crucial in several applications we know of, such as, rateless error correcting
codes, channel simulations, interactive coding, edit distance tree codes, and other settings.
Definition 3.6 (Streaming Solutions). We call (S, DS ) a streaming solution if the decoded index
for the ith element of the received string Sτ only depends on Sτ [1, i].
Again, the trivial solution for (n, δ)-index decoding problem over an alphabet of size n with zero
misdecodings can be made streaming by outputting for every received symbols the received symbol
itself as an index. This solution is also error-free for the deletion-only setting but not error-free for
the insertion-only setting. In fact, it is easy to show that an algorithm cannot be both streaming
and error-free in any setting which allows insertions.
Overall, the important characteristics of an (n, δ)-indexing algorithm are (a) its alphabet size
|Σ|, (b) the bound on the number of misdecodings, (c) the complexity of the decoding algorithm D,
(d) the preprocessing complexity of constructing the string S, (e) whether the algorithm works for
the insertion-only, the deletion-only or the full insdel setting, and (f) whether the algorithm satisfies
the streaming or error-freeness property. Table 1 gives a summary over the different solutions for
the (n, δ)-indexing problem we give in this paper.
Algorithm      Type      Misdecodings     Error-free   Streaming   Complexity
Section 5.2    ins/del   (2 + ε) · nδ     –            X           O(n^4)
Section 6.3    ins/del   √ε · n           –            –           O(n^2/√ε)
Section 6.4    del       ε · nδ           X            –           O(n)
Section 6.5    ins       (1 + ε) · nδ     X            –           O(n)
Section 6.5    del       ε · nδ           X            –           O(n)
Section 6.6    ins/del   (1 + ε) · nδ     –            X           O(n^4)

Table 1: Properties and quality of (n, δ)-indexing algorithms with S being an ε-synchronization string
4 Insdel Codes via Indexing Algorithms
Next, we show how a good (n, δ)-indexing algorithm (S, DS ) over alphabet ΣS allows one to
transform any regular ECC C with block length n over alphabet ΣC which can efficiently correct
half-errors, i.e., symbol corruptions and erasures, into a good insdel code over alphabet Σ = ΣC ×ΣS .
To this end, we simply attach S symbol-by-symbol to every codeword of C. On the decoding
end, we first decode the indices of the symbols arrived using the indexing part of each received
symbol and then interpret the message parts as if they have arrived in the decoded order. Indices
where zero or multiple symbols are received get considered as erased. We will refer to this procedure
as the indexing procedure. Finally, the decoding algorithm DC for C is used. These two straightforward
algorithms are formally described as Algorithm 1 and Algorithm 2.
Theorem 4.1. If (S, DS ) guarantees k misdecodings for the (n, δ)-index problem, then the indexing
procedure recovers the codeword sent up to nδ + 2k half-errors, i.e., half-error distance of the sent
codeword and the one recovered by the indexing procedure is at most nδ +2k. If (S, DS ) is error-free,
the indexing procedure recovers the codeword sent up to nδ + k half-errors.
Proof. Consider a set of insertions and deletions described by τ, consisting of Dτ deletions and Iτ
insertions. Note that among the n encoded symbols, at most Dτ were deleted and at most k of them are
decoded incorrectly. Therefore, at least n − Dτ − k indices are decoded correctly. On the other
hand, at most Dτ + k of the symbols sent are not decoded correctly. Therefore, if the output only
consisted of correctly decoded indices for successfully transmitted symbols, the output would have
contained up to Dτ + k erasures and no symbol corruption, resulting in a total of Dτ + k half-errors.
However, any symbol which is incorrectly decoded or inserted may cause a correctly
decoded index to become an erasure by making it appear multiple times, or change one of the original
Dτ + k erasures into a corruption error by making the indexing procedure mistakenly decode an
index. Overall, this can increase the number of half-errors by at most Iτ + k for a total of at most
Dτ + k + Iτ + k = Dτ + Iτ + 2k = nδ + 2k half-errors. For error-free indexing algorithms, misdecodings
do not result in incorrect indices, so the number of incorrect indices is Iτ instead
of Iτ + k, leading to the reduced number of half-errors in this case.
This makes it clear that applying an ECC C which is resilient to nδ + 2k half-errors enables the
receiver side to fully recover m.
Algorithm 1 Insertion Deletion Encoder
Input: n, m = m1, · · · , mn
 1: m̃ = EC(m)
 2: for i = 1 to n do
 3:     Mi = (m̃i, S[i])
Output: M

Algorithm 2 Insertion Deletion Decoder
Input: n, M' = (m̃', S')
 1: Dec ← DS(S')
 2: for i = 1 to n do
 3:     if there is a unique j for which Dec[j] = i then
 4:         m'i = m̃'j
 5:     else
 6:         m'i = ?
 7: m = DC(m')
Output: m
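The following Python sketch mirrors Algorithms 1 and 2 at a high level; ecc_encode, ecc_decode, and index_decode are placeholders for the encoder/decoder of C and the indexing decoder DS, which are assumed to be supplied elsewhere.

def insdel_encode(message, sync_string, ecc_encode):
    """Algorithm 1 (sketch): attach the synchronization string symbol-by-symbol."""
    codeword = ecc_encode(message)            # codeword of C, length n = len(sync_string)
    return list(zip(codeword, sync_string))   # symbols over Sigma_C x Sigma_S

def insdel_decode(received, n, index_decode, ecc_decode):
    """Algorithm 2 (sketch): reorder by decoded indices, erase ambiguous positions."""
    msg_part = [pair[0] for pair in received]
    sync_part = [pair[1] for pair in received]
    dec = index_decode(sync_part)             # decoded index (1..n or None) per received symbol
    slots = [[] for _ in range(n + 1)]
    for j, idx in enumerate(dec):
        if idx is not None:
            slots[idx].append(msg_part[j])
    # positions hit zero or multiple times are treated as erasures ('?')
    corrupted = [slots[i][0] if len(slots[i]) == 1 else '?' for i in range(1, n + 1)]
    return ecc_decode(corrupted)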
Next, we formally state how a good (n, δ)-indexing algorithm (S, DS ) over alphabet ΣS allows
one to transform any regular ECC C with block length n over alphabet ΣC which can efficiently
correct half-errors, i.e., symbol corruptions and erasures, into a good insdel code over alphabet
Σ = ΣC × ΣS . The following Theorem is a corollary of Theorem 4.1 and the definition of the
indexing procedure:
Theorem 4.2. Given an (efficient) (n, δ)-indexing algorithm (S, DS ) over alphabet ΣS with at
most k misdecodings, and decoding complexity TDS (n) and an (efficient) ECC C over alphabet ΣC
with rate RC , encoding complexity TEC , and decoding complexity TDC that corrects up to nδ + 2k
half-errors, one obtains an insdel code that can be (efficiently) decoded from up to nδ insertions
and deletions. The rate of this code is RC · (1 − log|ΣS| / log|ΣC|).
The encoding complexity remains TEC , the decoding complexity is TDC +TDS (n) and the preprocessing
complexity of constructing the code is the complexity of constructing C and S.
Furthermore, if (S, DS ) is error-free, then choosing a C which can recover only from nδ + k erasures
is sufficient to produce the same quality code.
Note that if one chooses ΣC such that log|ΣS| / log|ΣC| = o(δ), the rate loss due to the attached symbols will
be negligible. With all this in place one can obtain Theorem 1.1 as a consequence of Theorem 4.2.
Proof of Theorem 1.1. Given the δ and ε from the statement of Theorem 1.1 we choose ε' = (ε/6)^2
and use Theorem 6.13 to construct a string S of length n over an alphabet ΣS of size ε^(−O(1))
with the ε'-self-matching property. We then use the (n, δ)-indexing algorithm (S, DS) given
in Section 6.3 and line 2 of Table 1, which guarantees that it has at most n√ε' = nε/6 misdecodings.
Finally, we choose a near-MDS expander code [9] C which can efficiently correct up to δC = δ + ε/3
half-errors and has a rate of RC > 1 − δC − ε/3 over an alphabet ΣC = exp(ε^(−O(1))) such that
log|ΣC| ≥ 3 log|ΣS| / ε. This ensures that the final rate is indeed at least RC − log|ΣS|/log|ΣC| ≥ 1 − δ − 3 · (ε/3) = 1 − δ − ε
and the number of insdel errors that can be efficiently corrected is δC − 2 · (ε/6) ≥ δ. The encoding and
decoding complexities are furthermore straightforward, as is the polynomial time preprocessing
given Theorem 6.13 and [9].
5 Synchronization Strings
In this section, we formally define and develop ε-synchronization strings, which can be used as our
base synchronization string S in our (n, δ)-indexing algorithms.
As explained in Section 1.2 it makes sense to think of the prefixes S[1, l] of a synchronization
string S as codewords encoding their length l, as the prefix S[1, l], or a corrupted version of it,
will be exactly all the indexing information that has been received by the time the lth symbol is
communicated:
Definition 5.1 (Codewords Associated with a Synchronization String). Given any synchronization string S we define the set of codewords associated with S to be the set of prefixes of S, i.e.,
{S[1, l] | 1 ≤ l ≤ |S|}.
Next, we define a distance metric on any set of strings, which will be useful in quantifying how
good a synchronization string S and its associated set of codewords is:
Definition 5.2 (Relative Suffix Distance). For any two strings S, S 0 ∈ Σ∗ we define their relative
suffix distance RSD as follows:
RSD(S, S') = max_{k>0} ED(S(|S| − k, |S|], S'(|S'| − k, |S'|]) / (2k).
Next we show that RSD is indeed a distance which satisfies all properties of a metric for any
set of strings. To our knowledge, this metric is new. It is, however, similar in spirit to the suffix
“distance” defined in [2], which unfortunately is non-symmetric and does not satisfy the triangle
inequality but can otherwise be used in a similar manner as RSD in the specific context here (see
also Section 6.6).
Lemma 5.3. For any strings S1 , S2 , S3 we have
• Symmetry: RSD(S1 , S2 ) = RSD(S2 , S1 ),
• Non-Negativity and Normalization: 0 ≤ RSD(S1 , S2 ) ≤ 1,
• Identity of Indiscernibles: RSD(S1 , S2 ) = 0 ⇔ S1 = S2 , and
• Triangle Inequality: RSD(S1 , S3 ) ≤ RSD(S1 , S2 ) + RSD(S2 , S3 ).
In particular, RSD defines a metric on any set of strings.
Proof. Symmetry and non-negativity follow directly from the symmetry and non-negativity of edit
distance. Normalization follows from the fact that the edit distance between two length k strings
can be at most 2k. To see the identity of indiscernibles note that RSD(S1, S2) = 0 if and only if for
all k the edit distance of the k-suffixes of S1 and S2 is zero, i.e., if for every k the k-suffixes of S1 and
S2 are identical. This is equivalent to S1 and S2 being equal. Lastly, the triangle inequality also
essentially follows from the triangle inequality for edit distance. To see this let δ1 = RSD(S1, S2)
and δ2 = RSD(S2, S3). By the definition of RSD this implies that for all k the k-suffixes of S1 and
S2 have edit distance at most 2δ1 k and the k-suffixes of S2 and S3 have edit distance at most 2δ2 k.
By the triangle inequality for edit distance, this implies that for every k the k-suffixes of S1 and S3
have edit distance at most (δ1 + δ2) · 2k, which implies that RSD(S1, S3) ≤ δ1 + δ2.
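For concreteness, RSD can be evaluated directly from Definition 5.2. The unoptimized Python sketch below (our own helper) takes an edit-distance routine such as the one sketched in Section 2.1 as a parameter and accounts for the ⊥-padding of suffixes shorter than k.

def relative_suffix_distance(S, T, edit_distance):
    """RSD(S, T) = max over k > 0 of ED(k-suffix of S, k-suffix of T) / (2k)."""
    max_k = max(len(S), len(T))
    best = 0.0
    for k in range(1, max_k + 1):
        s_suf = S[max(0, len(S) - k):]
        t_suf = T[max(0, len(T) - k):]
        # A suffix shorter than k is padded in front with the fresh symbol ⊥; the pads
        # of the two sides match each other, so only their length difference adds to ED.
        pad = abs(min(len(S), k) - min(len(T), k))
        best = max(best, (edit_distance(s_suf, t_suf) + pad) / (2 * k))
    return best

# Two strings ending in different symbols have RSD 1, e.g.:
# relative_suffix_distance("abcd", "abce", edit_distance) == 1.0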
With these definitions in place, it remains to find synchronization strings whose prefixes induce
a set of codewords, i.e., prefixes, with large RSD distance. It is easy to see that the RSD distance
for any two strings ending on a different symbol is one. This makes the trivial synchronization
string, which uses each symbol in Σ only once, induce an associated set of codewords of optimal
minimum-RSD-distance one. Such trivial synchronization strings, however, are not interesting as
they require an alphabet size linear in the length n. To find good synchronization strings over
constant size alphabets, we give the following important definition of an ε-synchronization string.
The parameter 0 < ε < 1 should be thought of measuring how far a string is from the perfect
synchronization string, i.e., a string of n distinct symbols.
Definition 5.4 (ε-Synchronization String). String S ∈ Σn is an ε-synchronization string if for
every 1 ≤ i < j < k ≤ n + 1 we have that ED (S[i, j), S[j, k)) > (1 − ε)(k − i). We call the set of
prefixes of such a string an ε-synchronization code.
The next lemma shows that the ε-synchronization string property is strong enough to imply a
good minimum RSD distance between any two codewords associated with it.
Lemma 5.5. If S is an ε-synchronization string, then RSD(S[1, i], S[1, j]) > 1 − ε for any i < j,
i.e., any two codewords associated with S have RSD distance of at least 1 − ε.
Proof. Let k = j − i. The ε-synchronization string property of S guarantees that
ED (S[i − k, i), S[i, j)) > (1 − ε)2k.
Note that this holds even if i−k < 1. To finish the proof we note that the maximum in the definition
of RSD includes the term ED(S[i − k, i), S[i, j)) / (2k) > 1 − ε, which implies that RSD(S[1, i], S[1, j]) >
1 − ε.
5.1 Existence and Construction
The next important step is to show that the ε-synchronization strings we just defined exist, particularly, over alphabets whose size is independent of the length n. We show the existence of
ε-synchronization strings of arbitrary length for any ε > 0 using an alphabet size which is only
polynomially large in 1/ε. We remark that ε-synchronization strings can be seen as a strong generalization of square-free sequences in which any two neighboring substrings S[i, j) and S[j, k) only
have to be different and not also far from each other in edit distance. Thue [28] famously showed
the existence of arbitrarily large square-free strings over a ternary alphabet. Thue's methods for
constructing such strings, however, turn out to be fundamentally too weak to prove the existence
of ε-synchronization strings, for any constant ε < 1.
Our existence proof requires the general Lovász local lemma which we recall here first:
Lemma 5.6 (General Lovász local lemma). Let A1 , . . . , An be a set of “bad” events. The directed
graph G(V, E) is called a dependency graph for this set of events if V = {1, . . . , n} and each event
Ai is mutually independent of all the events {Aj : (i, j) ∉ E}.

Now, if there exists x1, . . . , xn ∈ [0, 1) such that for all i we have

P[Ai] ≤ xi · ∏_{(i,j)∈E} (1 − xj),

then there exists a way to avoid all events Ai simultaneously and the probability for this to happen
is bounded by

P[ ⋀_{i=1}^{n} Āi ] ≥ ∏_{i=1}^{n} (1 − xi) > 0.
Theorem 5.7. For any ε ∈ (0, 1), n ≥ 1, there exists an ε-synchronization string of length n over
an alphabet of size Θ(1/ε^4).
Proof. Let S be a string of length n obtained by combining two strings T and R position-wise, where T is
simply the repetition of 0, . . . , t − 1 for t = Θ(1/ε^2), and R is a uniformly random string of length n
over alphabet Σ. In particular, Si = (i mod t, Ri).
We prove that S is an ε-synchronization string by showing that there is a positive probability
that S contains no bad triple, where (x, y, z) is a bad triple if ED(S[x, y), S[y, z)) ≤ (1 − ε)(z − x).
First, note that a triple (x, y, z) for which z − x < t cannot be a bad triple as it consists of
completely distinct symbols by courtesy of T . Therefore, it suffices to show that there is no bad
triple (x, y, z) in R for x, y, z such that z − x > t.
Let (x, y, z) be a bad triple and let a1 a2 · · · ak be the longest common subsequence of R[x, y) and
R[y, z). It is straightforward to see that ED(R[x, y), R[y, z)) = (y − x) + (z − y) − 2k = z − x − 2k.
Since (x, y, z) is a bad triple, we have that z − x − 2k ≤ (1 − ε)(z − x), which means that k ≥ (ε/2)(z − x).
With this observation in mind, we say that R[x, z) is a bad interval if it contains a subsequence
a1 a2 · · · ak a1 a2 · · · ak such that k ≥ (ε/2)(z − x).
To prove the theorem, it suffices to show that a randomly generated string does not contain any
bad intervals with a non-zero probability. We first upper bound the probability that an interval of
length l is bad:
Pr_{I∼Σ^l} [I is bad] ≤ C(l, εl) · |Σ|^(−εl/2) ≤ (el/(εl))^(εl) · |Σ|^(−εl/2) = (e / (ε√|Σ|))^(εl),

where the first inequality holds because if an interval of length l is bad, then it must contain a
repeating subsequence of length lε/2. Any such sequence can be specified via εl positions in the
l-long interval and the probability that a given fixed sequence is valid for a random string is |Σ|^(−εl/2).
The second inequality comes from the fact that C(n, k) < (ne/k)^k.
The resulting inequality shows that the probability of an interval of length l being bad is bounded
above by C −εl , where C can be made arbitrarily large by taking a sufficiently large alphabet size
|Σ|.
To show that there is a non-zero probability that the uniformly random string R contains no
bad interval I of size t or larger, we use the general Lovász local lemma stated in Lemma 5.6. Note
that the badness of interval I is mutually independent of the badness of all intervals that do not
intersect I. We need to find real numbers xp,q ∈ [0, 1) corresponding to intervals R[p, q) for which
Pr [Interval R[p, q) is bad] ≤ x_{p,q} · ∏_{R[p,q) ∩ R[p',q') ≠ ∅} (1 − x_{p',q'}).
We have seen that the left-hand side can be upper bounded by C^(−ε|R[p,q)|) = C^(ε(p−q)). Furthermore,
any interval of length l0 intersects at most l + l0 intervals of length l. We propose
x_{p,q} = D^(−ε|R[p,q)|) = D^(ε(p−q)) for some constant D > 1. This means that it suffices to find a constant
D that for all substrings R[p, q) satisfies

C^(ε(p−q)) ≤ D^(ε(p−q)) · ∏_{l=t}^{n} (1 − D^(−εl))^(l+(q−p)),

or more clearly, for all l0 ∈ {1, · · · , n},

C^(−l0) ≤ D^(−l0) · ∏_{l=t}^{n} (1 − D^(−εl))^((l+l0)/ε),

which means that

C ≥ D / ∏_{l=t}^{n} (1 − D^(−εl))^((1+l/l0)/ε).     (1)
For D > 1, the right-hand side of Equation (1) is maximized when n = ∞ and l0 = 1, and since
we want Equation (1) to hold for all n and all l0 ∈ {1, · · · , n}, it suffices to find a D such that

C ≥ D / ∏_{l=t}^{∞} (1 − D^(−εl))^((l+1)/ε).

To this end, let

L = min_{D>1} { D / ∏_{l=t}^{∞} (1 − D^(−εl))^((l+1)/ε) }.

Then, it suffices to have Σ large enough so that

C = ε√|Σ| / e ≥ L,

which means that |Σ| ≥ e^2 L^2 / ε^2 suffices to allow us to use the Lovász local lemma. We claim that
L = Θ(1), which will complete the proof. Since t = ω(log(1/ε)/ε), for all l ≥ t we have

D^(−εl) · (l+1)/ε ≪ 1.
Therefore, we can use the fact that (1 − x)^k > 1 − xk to show that:

D / ∏_{l=t}^{∞} (1 − D^(−εl))^((l+1)/ε)
    < D / ∏_{l=t}^{∞} (1 − ((l+1)/ε) · D^(−εl))                        (2)
    < D / (1 − ∑_{l=t}^{∞} ((l+1)/ε) · D^(−εl))                        (3)
    = D / (1 − (1/ε) ∑_{l=t}^{∞} (l+1) · (D^(−ε))^l)                   (4)
    = D / (1 − (1/ε) · 2t(D^(−ε))^t / (1 − D^(−ε))^2)                  (5)
    = D / (1 − 2D^(−2/ε) / (ε^3 (1 − D^(−ε))^2)).                      (6)

Equation (3) is derived using the fact that ∏_{i=1}^{∞} (1 − xi) ≥ 1 − ∑_{i=1}^{∞} xi and Equation (5) is a
result of the following equality for x < 1:

∑_{l=t}^{∞} (l + 1)x^l = x^t (1 + t − tx) / (1 − x)^2 < 2tx^t / (1 − x)^2.

One can see that for D = 7, max_ε 2D^(−2/ε) / (ε^3 (1 − D^(−ε))^2) < 0.9, and therefore step (3) is legal and (6)
can be upper-bounded by a constant. Hence, L = Θ(1) and the proof is complete.
Remarks on the alphabet size: Theorem 5.7 shows that for any ε > 0 there exists an ε-synchronization
string over alphabets of size O(1/ε^4). A polynomial dependence on ε is also necessary. In particular,
there do not exist any ε-synchronization strings over alphabets of size smaller
than 1/ε. In fact, any consecutive substring of size 1/ε of an ε-synchronization string has to contain
completely distinct elements. This can be easily proven as follows: For the sake of contradiction let
S[i, i + 1/ε) be a substring of an ε-synchronization string where S[j] = S[j'] for i ≤ j < j' < i + 1/ε.
Then, ED(S[j, j + 1), S[j + 1, j' + 1)) = j' − j − 1 = (j' + 1 − j) − 2 ≤ (j' + 1 − j)(1 − 2ε). We believe that
using the Lovász Local Lemma together with a more sophisticated non-uniform probability space,
which avoids any repeated symbols within a small distance, allows avoiding the use of the string
T in our proof and improving the alphabet size to O(1/ε^2). It seems much harder to improve the
alphabet size to o(1/ε^2) and we are not convinced that it is possible. This work thus leaves open
the interesting question of closing the quadratic gap between O(1/ε^2) and Ω(1/ε) from either side.
Theorem 5.7 also implies an efficient randomized construction.
Lemma 5.8. There exists a randomized algorithm which for any ε > 0 constructs an ε-synchronization string of length n over an alphabet of size O(1/ε^4) in expected time O(n^5).
Proof. Using the algorithmic framework for the Lovász local lemma given by Moser and Tardos [23]
and the extensions by Haeupler et al. [16] one can get such a randomized algorithm from the proof
in Theorem 5.7. The algorithm starts with a random string over any alphabet Σ of size ε^(−C) for
some sufficiently large C. It then checks all O(n^2) intervals for a violation of the ε-synchronization
string property. For every interval this is an edit distance computation which can be done in O(n^2)
time using the classical Wagner-Fischer dynamic programming algorithm. If a violating interval
is found the symbols in this interval are assigned fresh random values. This is repeated until no
more violations are found. [16] shows that this algorithm performs only O(n) expected number of
re-samplings. This gives an expected running time of O(n^5) overall, as claimed.
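The sketch below mimics this resampling procedure in Python on a small scale (brute-force checks and parameter choices are ours; the alphabet should be large enough, e.g. polynomial in 1/ε, for quick termination). It reuses an edit-distance routine such as the one sketched in Section 2.1.

import random

def violates(S, i, j, k, eps, edit_distance):
    """True iff the triple (i, j, k) violates Definition 5.4 on the current string,
    i.e. ED(S[i:j], S[j:k]) <= (1 - eps) * (k - i)."""
    return edit_distance(S[i:j], S[j:k]) <= (1 - eps) * (k - i)

def resample_synchronization_string(n, alphabet, eps, edit_distance, rng=random):
    """Start from a uniformly random string and re-sample violating intervals
    until no violation remains (a sketch of the procedure behind Lemma 5.8)."""
    S = [rng.choice(alphabet) for _ in range(n)]
    while True:
        bad_interval = None
        for i in range(n):
            for k in range(i + 2, n + 1):
                if any(violates(S, i, j, k, eps, edit_distance) for j in range(i + 1, k)):
                    bad_interval = (i, k)
                    break
            if bad_interval:
                break
        if bad_interval is None:
            return S
        lo, hi = bad_interval
        for pos in range(lo, hi):   # assign fresh random symbols to the violating interval
            S[pos] = rng.choice(alphabet)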
Lastly, since synchronization strings can be encoded and decoded in a streaming fashion they
have many important applications in which the length of the required synchronization string is not
known in advance. In such a setting it is advantageous to have an infinite synchronization string
over a fixed alphabet. In particular, since every consecutive substring of an ε-synchronization
string is also an ε-synchronization string by definition, having an infinite ε-synchronization string
also implies the existence for every length n, i.e., Theorem 5.7. Interestingly, a simple argument
shows that the converse is true as well, i.e., the existence of an ε-synchronization string for every
length n implies the existence of an infinite ε-synchronization string over the same alphabet:
Lemma 5.9. For any ε ∈ (0, 1) there exists an infinite ε-synchronization string over an alphabet
of size Θ(1/ε^4).
Proof of Lemma 5.9. Fix any ε ∈ (0, 1). According to Theorem 5.7 there exists an alphabet Σ of
size O(1/ε^4) such that there exists at least one ε-synchronization string over Σ for every length
n ∈ N. We will define a synchronization string S = s1 · s2 · s3 . . . with si ∈ Σ for any i ∈ N for
which the ε-synchronization property holds for any i, j, k ∈ N. We define this string inductively.
In particular, we fix an ordering on Σ and define s1 ∈ Σ to be the first symbol in this ordering
such that an infinite number of ε-synchronization strings over Σ start with s1. Given that there
is an infinite number of ε-synchronization strings over Σ, such an s1 exists. Furthermore, the set of
ε-synchronization strings over Σ which start with s1 remains infinite by definition, allowing us to
define s2 ∈ Σ to be the lexicographically first symbol in Σ such that there exists an infinite number of
ε-synchronization strings over Σ starting with s1 · s2. In the same manner, we inductively define
si to be the lexicographically first symbol in Σ for which there exists an infinite number of ε-synchronization strings over Σ starting with s1 · s2 · . . . · si. To see that the infinite string defined
in this manner does indeed satisfy the edit distance requirement of the ε-synchronization property
defined in Definition 5.4, we note that for every i < j < k with i, j, k ∈ N there exists, by definition,
an ε-synchronization string, and in fact an infinite number of them, which contains S[1, k] and thus
also S[i, k] as a consecutive substring implying that indeed ED (S[i, j), S[j, k)) > (1 − ε)(k − i) as
required. Our definition thus produces the unique lexicographically first infinite ε-synchronization
string over Σ.
We remark that any string produced by the randomized construction of Lemma 5.8 is guaranteed to be a correct ε-synchronization string (not just with probability one). This randomized
synchronization string construction is furthermore only needed once as a pre-processing step. The
encoder or decoder of any resulting error correcting codes do not require any randomization. Furthermore, in Section 6 we will provide a deterministic polynomial time construction of a relaxed
version of ε-synchronization strings that can still be used as a basis for good (n, δ)-indexing algorithms thus leading to insdel codes with a deterministic polynomial time code construction as
well.
It nonetheless remains interesting to obtain fast deterministic constructions of finite and infinite
ε-synchronization strings. In a subsequent work we achieve such efficient deterministic constructions
for ε-synchronization strings. Our constructions even produce the infinite ε-synchronization string
S proven to exist by Lemma 5.9, which is much less explicit: While for any n and any ε an ε-synchronization string of length n can in principle be found using an exponential time enumeration,
there is no straightforward algorithm which follows the proof of Lemma 5.9 and, given an i ∈ N,
produces the ith symbol of such an S in a finite amount of time (bounded by some function in i).
Our constructions require significantly more work but in the end lead to an explicit deterministic
construction of an infinite ε-synchronization string for any ε > 0 for which the ith symbol can be
computed in only O(log i) time – thus satisfying one of the strongest notions of constructiveness
that can be achieved.
5.2 Decoding
We now provide an algorithm for decoding synchronization strings, i.e., an algorithm that can
form a solution to the indexing problem along with ε-synchronization strings. In the beginning of
Section 5, we introduced the notion of relative suffix distance between two strings. Lemma 5.5
stated a lower bound of 1 − ε for relative suffix distance between any two distinct codewords
associated with an ε-synchronization string, i.e., its prefixes. Hence, a natural decoding scheme for
detecting the index of a received symbol would be finding the prefix with the closest relative suffix
distance to the string received thus far. We call this algorithm the minimum relative suffix distance
decoding algorithm.
We define the notion of relative suffix error density at index i, which represents the maximal
density of errors over suffixes of S[1, i]. We will introduce a very natural decoding
approach for synchronization strings that simply works by decoding a received string by finding the
codeword of the synchronization string S (i.e., a prefix of the synchronization string) with minimum distance
to the received string. We will show that this decoding procedure works correctly as long as the
relative suffix error density is not larger than (1 − ε)/2. Then, we will show that if the adversary is allowed
to perform c many insertions or deletions, the relative suffix distance may exceed (1 − ε)/2 upon arrival
of at most 2c/(1 − ε) many successfully transmitted symbols. Finally, we will deduce that this decoding
scheme decodes the indices of received symbols correctly for all but 2c/(1 − ε) many of the successfully transmitted
symbols. Formally, we claim that:
Theorem 5.10. Any ε-synchronization string of length n along with the minimum relative suffix
distance decoding algorithm form a solution to the (n, δ)-indexing problem that guarantees 2nδ/(1 − ε) or
fewer misdecodings. This decoding algorithm is streaming and can be implemented so that it works
in O(n^4) time.
Before proceeding to the formal statement and the proofs of the claims above, we first provide
the following useful definitions.
Definition 5.11 (Error Count Function). Let S be a string sent over an insertion-deletion channel.
We denote the error count from index i to index j with E(i, j) and define it to be the number of
insdels applied to S from the moment S[i] is sent until the moment S[j] is sent. E(i, j) counts the
potential deletion of S[j]. However, it does not count the potential deletion of S[i].
Definition 5.12 (Relative Suffix Error Density). Let string S be sent over an insertion-deletion
channel and let E denote the corresponding error count function. We define the relative suffix error
density of the communication as:
max_{i≥1} E(|S| − i, |S|) / i.
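As a small illustration (our own encoding of the error count function), if errs[j] holds the number of insdels applied in the window ending when the (j+1)-st symbol of S is sent, the relative suffix error density is obtained with one backward pass:

def relative_suffix_error_density(errs):
    """errs[j] = number of insdels in the window ending when the (j+1)-st symbol is sent,
    so sum(errs[n-i:]) equals E(n-i, n). Returns max over i >= 1 of E(n-i, n)/i."""
    n = len(errs)
    best, running = 0.0, 0
    for i in range(1, n + 1):
        running += errs[n - i]   # add the error count of one more suffix position
        best = max(best, running / i)
    return best

# Example: a burst of 3 errors on the last symbol gives density 3 at suffix length 1.
assert relative_suffix_error_density([0, 0, 0, 3]) == 3.0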
The following lemma relates the suffix distance of the message being sent by sender and the
message being received by the receiver at any point of a communication over an insertion-deletion
channel to the relative suffix error density of the communication at that point.
Lemma 5.13. Let string S be sent over an insertion-deletion channel and the corrupted message
S 0 be received on the other end. The relative suffix distance RSD(S, S 0 ) between the string S that
was sent and the string S 0 which was received is at most the relative suffix error density of the
communication.
Proof. Let τ̃ = (τ̃1, τ̃2) be the string matching from S to S' that characterizes the insdels that have
turned S into S'. Then:

RSD(S, S') = max_{k>0} ED(S(|S| − k, |S|], S'(|S'| − k, |S'|]) / (2k)                              (7)
           = max_{k>0} min_{τ: S(|S|−k,|S|] → S'(|S'|−k,|S'|]} { sc(τ1) + sc(τ2) } / (2k)          (8)
           ≤ max_{k>0} 2(sc(τ'1) + sc(τ'2)) / (2k) ≤ Relative Suffix Error Density                  (9)

where τ' is τ̃ limited to its suffix corresponding to S(|S| − k, |S|]. Note that Steps (7) and (8) follow
from the definitions of edit distance and relative suffix distance. Moreover, to see Step (9), one has
to note that a single insertion or deletion on the k-element suffix of a string may result in a string
whose k-element suffix has edit distance two from the original string's k-element suffix; one unit stemming from
the inserted/deleted symbol and the other one stemming from a symbol appearing/disappearing at
the beginning of the suffix in order to keep the size of the suffix k.
A key consequence of Lemma 5.13 is that if an ε-synchronization string is being sent over
an insertion-deletion channel and at some step the relative suffix error density corresponding to
corruptions is smaller than (1 − ε)/2, the relative suffix distance of the sent string and the received
one at that point is smaller than (1 − ε)/2; therefore, as the RSD of all pairs of codewords associated with
an ε-synchronization string is greater than 1 − ε, the receiver can correctly decode the index of
the corrupted codeword he received by simply finding the codeword with minimum relative suffix
distance.
The following lemma states that such a guarantee holds most of the time during transmission
of a synchronization string:
Lemma 5.14. Let an ε-synchronization string S be sent over an insertion-deletion channel and the corrupted
string S' be received on the other end. If there are ci symbols inserted and cd symbols deleted,
then, for any integer t, the relative suffix error density is smaller than (1 − ε)/t upon arrival of all but
t(ci + cd)/(1 − ε) − cd many of the successfully transmitted symbols.
Proof. Let E denote the error count function of the communication. We define the potential function
Φ over {0, 1, · · · , n} as follows:
Φ(i) = max_{1≤s≤i} { t · E(i − s, i) / (1 − ε) − s }.
Also, set Φ(0) = 0. We prove the theorem by showing the correctness of the following claims:
1. If E(i − 1, i) = 0, i.e., the adversary does not insert or delete any symbols in the interval
starting right after the moment S[i − 1] is sent and ending at when S[i] is sent, then the value
of Φ drops by 1 or becomes/stays zero, i.e., Φ(i) = max {0, Φ(i − 1) − 1}.
2. If E(i − 1, i) = k, i.e., the adversary inserts or deletes k symbols in the interval starting right after
the moment S[i − 1] is sent and ending at when S[i] is sent, then the value of Φ increases by
at most tk/(1 − ε) − 1, i.e., Φ(i) ≤ Φ(i − 1) + tk/(1 − ε) − 1.

3. If Φ(i) = 0, then the relative suffix error density of the string that is received when S[i] arrives
at the receiving side is not larger than (1 − ε)/t.
Given the correctness of the claims made above, the lemma can be proved as follows. As the adversary
can apply at most ci + cd insertions or deletions, Φ can gain a total increase of t · (ci + cd)/(1 − ε). Therefore,
the value of Φ can be non-zero for at most t · (ci + cd)/(1 − ε) many inputs. As the value of Φ(i) is non-zero for
all i's where S[i] has been removed by the adversary, there are at most t · (ci + cd)/(1 − ε) − cd indices i where
Φ(i) is non-zero and i is successfully transmitted. Hence, at most t · (ci + cd)/(1 − ε) − cd many of the correctly
transmitted symbols can possibly be decoded incorrectly.
We now proceed to the proof of each of the above claims to finish the proof:
1. In this case, E(i − s, i) = E(i − s, i − 1). So,

   Φ(i) = max_{1≤s≤i} { t · E(i − s, i) / (1 − ε) − s }
        = max_{1≤s≤i} { t · E(i − s, i − 1) / (1 − ε) − s }
        = max { 0, max_{2≤s≤i} { t · E(i − s, i − 1) / (1 − ε) − s } }
        = max { 0, max_{1≤s≤i−1} { t · E(i − 1 − s, i − 1) / (1 − ε) − s − 1 } }
        = max {0, Φ(i − 1) − 1}
2. In this case, E(i − s, i) = E(i − s, i − 1) + k. So,

   Φ(i) = max_{1≤s≤i} { t · E(i − s, i) / (1 − ε) − s }
        = max { tk/(1 − ε) − 1, max_{2≤s≤i} { (t · E(i − s, i − 1) + tk) / (1 − ε) − s } }
        = max { tk/(1 − ε) − 1, tk/(1 − ε) + max_{1≤s≤i−1} { t · E(i − 1 − s, i − 1) / (1 − ε) − s − 1 } }
        = tk/(1 − ε) − 1 + max { 0, max_{1≤s≤i−1} { t · E(i − 1 − s, i − 1) / (1 − ε) − s } }
        = tk/(1 − ε) − 1 + max {0, Φ(i − 1)}
        = Φ(i − 1) + tk/(1 − ε) − 1
3.

   Φ(i) = max_{1≤s≤i} { t · E(i − s, i) / (1 − ε) − s } = 0
   ⇒ ∀ 1 ≤ s ≤ i : t · E(i − s, i) / (1 − ε) − s ≤ 0
   ⇒ ∀ 1 ≤ s ≤ i : t · E(i − s, i) ≤ s(1 − ε)
   ⇒ ∀ 1 ≤ s ≤ i : E(i − s, i) / s ≤ (1 − ε)/t
   ⇒ Relative Suffix Error Density = max_{1≤s≤i} E(i − s, i) / s ≤ (1 − ε)/t
These finish the proof of the lemma.
Now, we have all necessary tools to analyze the performance of the minimum relative suffix
distance decoding algorithm:
Proof of Theorem 5.10. As the adversary is allowed to insert or delete up to nδ symbols, by Lemma 5.14,
there are at most 2nδ/(1 − ε) successfully transmitted symbols upon whose arrival at the receiving
side the relative suffix error density is greater than (1 − ε)/2; hence, by Lemma 5.13, there are at most
2nδ/(1 − ε) misdecoded successfully transmitted symbols.
Further, we remark that this algorithm can be implemented in O(n^4) as follows: Using dynamic
programming, we can pre-compute the edit distance of any consecutive substring of S, like S[i, j],
to any consecutive substring of S', like S'[i', j'], in O(n^4). Then, for each symbol of the received
string, like S'[l'], we can find the codeword with minimum relative suffix distance to S'[1, l'] by
calculating the relative suffix distance of it to all n codewords. Finding the suffix distance of S'[1, l']
and a codeword like S[1, l] can also be simply done by maximizing ED(S(l − k, l], S'(l' − k, l']) / (2k) over k, which
can be done in O(n). With an O(n^4) pre-processing step and an O(n^3) computation as described above, we
have shown that the decoding process can be implemented in O(n^4).
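A direct (and deliberately unoptimized) Python rendering of the minimum relative suffix distance decoder is sketched below; it reuses the relative_suffix_distance and edit_distance helpers sketched earlier and recomputes every distance from scratch, so it is slower than the O(n^4) implementation described above.

def min_rsd_decode(S, received, relative_suffix_distance, edit_distance):
    """For each prefix of the received string, output the index l whose codeword
    S[:l] minimizes the relative suffix distance (a sketch of Theorem 5.10)."""
    n = len(S)
    decoded = []
    for j in range(1, len(received) + 1):
        prefix = received[:j]
        best_l, best_d = None, float("inf")
        for l in range(1, n + 1):
            d = relative_suffix_distance(S[:l], prefix, edit_distance)
            if d < best_d:
                best_l, best_d = l, d
        # the decision for position j only depends on received[:j], so this is streaming
        decoded.append(best_l)
    return decoded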
We remark that by taking ε = o(1), one can obtain a solution to the (n, δ)-indexing problem
with a misdecoding guarantee of 2nδ(1 + o(1)) which, using Theorem 4.1, results in a translation
of nδ insertions and deletions into nδ(5 + o(1)) half-errors. As explained in Section 1.3, such a
guarantee however falls short of giving Theorem 1.1. In Section 6.6, we show that this guarantee
of the min-distance decoder can be slightly improved to work beyond an RSD distance of (1 − ε)/2,
at the cost of some simplicity, by considering an alternative distance measure. In particular, the
relative suffix pseudo distance RSPD, which was introduced in [2], can act as a metric stand-in for
the minimum-distance decoder and lead to slightly improved decoding guarantees, despite neither
being symmetric nor satisfying the triangle inequality. For any set of k = ki + kd insdel errors
consisting of ki insertions and kd deletions the RSPD based indexing solution leads to at most
(1 + ε)(3ki + kd ) half-errors which does imply “near-MDS” codes for deletion-only channels but still
falls short for general insdel errors.
This leaves open the intriguing question whether a further improved (pseudo) distance definition
can achieve an indexing solution with negligible number of misdecodings for the minimum-distance
decoder.
6 More Advanced Global Decoding Algorithms
Thus far, we have introduced ε-synchronization strings as fitting solutions to the indexing problem.
In Section 5.2, we provided an algorithm to solve the indexing problem along with synchronization
strings with an asymptotic guarantee of 2nδ misdecodings. As explained in Section 1.3, such a
guarantee falls short of giving Theorem 1.1. In this section, we thus provide a variety of more
advanced decoding algorithms that provide better decoding guarantees, in particular achieving a
misdecoding fraction which goes to zero as ε goes to zero.
We start by pointing out a very useful property of ε-synchronization strings in Section 6.1.
We define a monotone matching between two strings as a common subsequence of them. We will
next show that in a monotone matching between an ε-synchronization string and itself, the number
of matches that both correspond to the same element of the string is fairly large. We will refer
to this property as ε-self-matching property. We show that one can very formally think of this
ε-self-matching property as a robust global guarantee in contrast to the factor-closed strong local
requirements of the ε-synchronization property. One advantage of this relaxed notion of ε-self-matching is that one can show that a random string over alphabets polynomially large in 1/ε
satisfies this property (Section 6.2). This leads to a particularly simple generation process for S.
Finally, showing that this property even holds for approximately log n-wise independent strings
directly leads to a deterministic polynomial time algorithm generating such strings as well.
In Section 6.3, we propose a decoding algorithm for insdel errors that basically works by finding
monotone matchings between the received string and the synchronization string. Using the ε-self-matching
property we show that this algorithm guarantees O(n√ε) misdecodings. This algorithm
works in time O(n^2/√ε) and is exactly what we need to prove our main theorem.
Lastly, in Sections 6.4 and 6.5 we provide two simpler linear time algorithms that solve the
indexing problem under the assumptions that the adversary can only delete symbols or only insert
new symbols. These algorithms not only guarantee asymptotically optimal ε/(1 − ε) · nδ misdecodings
but are also error-free. Table 1 provides a breakdown of the decoding schemes presented in this
paper, describing the type of error they work under, the number of misdecodings they guarantee,
whether they are error-free or streaming, and their decoding complexity.
6.1 Monotone Matchings and the ε-Self-Matching Property
Before proceeding to the main results of this section, we start by defining monotone matchings, which provide a formal way to refer to common subsequences of two strings:

Definition 6.1 (Monotone Matchings). A monotone matching between S and S′ is a set of pairs of indices

M = {(a1, b1), · · · , (am, bm)}

where a1 < · · · < am, b1 < · · · < bm, and S[ai] = S′[bi].
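For concreteness, a matching can be represented simply as a list of index pairs; the following minimal Python sketch (the function name is ours, and it uses 0-based indices rather than the paper's 1-based ones) checks the two conditions of Definition 6.1:

```python
def is_monotone_matching(S, T, M):
    """Check that M = [(a_1, b_1), ..., (a_m, b_m)] is a monotone matching
    between S and T: indices strictly increase on both sides and matched
    symbols agree. Uses 0-based indices, unlike the 1-based ones in the text."""
    for (a, b), (a2, b2) in zip(M, M[1:]):
        if not (a < a2 and b < b2):
            return False
    return all(S[a] == T[b] for a, b in M)
```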
We now point out a key property of synchronization strings that will be broadly used
in our decoding algorithms. Basically, Theorem 6.2 states that two similar subsequences of
an ε-synchronization string cannot disagree on many positions. More formally, let M =
{(a1 , b1 ), · · · , (am , bm )} be a monotone matching between S and itself. We call the pair (ai , bi )
a good pair if ai = bi and a bad pair otherwise. Then:
Theorem 6.2. Let S be an ε-synchronization string of size n and M = {(a1 , b1 ), · · · , (am , bm )} be
a monotone matching of size m from S to itself containing g good pairs and b bad pairs. Then,
b ≤ ε(n − g)
Proof. Let (a′_1, b′_1), · · · , (a′_{m′}, b′_{m′}) denote the bad pairs in M, indexed so that a′_1 < · · · < a′_{m′} and b′_1 < · · · < b′_{m′}. Without loss of generality, assume that a′_1 < b′_1. Let k_1 be the largest integer such that a′_{k_1} < b′_1. Then the pairs (a′_1, b′_1), · · · , (a′_{k_1}, b′_{k_1}) form a common subsequence of size k_1 between T_1 = S[a′_1, b′_1) and T′_1 = S[b′_1, b′_{k_1}]. Now, the synchronization string guarantee implies that:

k_1 ≤ LCS(T_1, T′_1) ≤ (|T_1| + |T′_1| − ED(T_1, T′_1))/2 ≤ ε(|T_1| + |T′_1|)/2
Note that the monotonicity of the matching guarantees that there are no good matches occurring on indices covered by T_1 and T′_1, i.e., a′_1, · · · , b′_{k_1}. One can repeat the very same argument for the remaining bad matches to rule out bad matches (a′_{k_1+1}, b′_{k_1+1}), · · · , (a′_{k_1+k_2}, b′_{k_1+k_2}) for some k_2, having the following inequality guaranteed:

k_2 ≤ ε(|T_2| + |T′_2|)/2    (10)

where

T_2 = S[a′_{k_1+1}, b′_{k_1+1}) and T′_2 = S[b′_{k_1+1}, b′_{k_1+k_2}]   if a′_{k_1+1} < b′_{k_1+1},
T_2 = S[b′_{k_1+1}, a′_{k_1+1}) and T′_2 = S[a′_{k_1+1}, a′_{k_1+k_2}]   if a′_{k_1+1} > b′_{k_1+1}.
Figure 1: Pictorial representation of T_2 and T′_2 in the two cases (a) a′_{k_1+1} < b′_{k_1+1} and (b) a′_{k_1+1} > b′_{k_1+1}.
For a pictorial representation see Figure 1.
Continuing the same procedure, one can find k_1, · · · , k_l, T_1, · · · , T_l, and T′_1, · · · , T′_l for some l. Summing up all inequalities of the form (10), we get:

Σ_{i=1}^{l} k_i ≤ (ε/2) · ( Σ_{i=1}^{l} |T_i| + Σ_{i=1}^{l} |T′_i| )    (11)

Note that Σ_{i=1}^{l} k_i equals the number of bad pairs b, and the T_i's are mutually exclusive and contain no indices where a good pair occurs. The same holds for the T′_i's. Hence, Σ_{i=1}^{l} |T_i| ≤ n − g and Σ_{i=1}^{l} |T′_i| ≤ n − g. All these along with (11) give that:

b ≤ (ε/2) · 2(n − g) = ε(n − g)
We define the ε-self-matching property as follows:
Definition 6.3 (ε-self-matching property). A string S satisfies the ε-self-matching property if any monotone matching between S and itself contains fewer than ε|S| bad pairs.
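As an illustration of Definition 6.3, the following hedged Python sketch computes the maximum number of bad pairs over all monotone self-matchings by a quadratic dynamic program; since good pairs can always be dropped from a matching, it suffices to maximize over matchings consisting of bad pairs only. The function names are ours, and this brute-force checker is only meant to make the definition concrete, not to stand in for any construction in the text.

```python
def max_bad_pairs(S):
    """Maximum number of bad pairs (a, b) with a != b over all monotone matchings
    of S with itself, computed by an O(n^2) dynamic program over suffixes."""
    n = len(S)
    # best[i][j]: max bad pairs using indices >= i on one side and >= j on the other
    best = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            take = best[i + 1][j + 1] + 1 if (S[i] == S[j] and i != j) else 0
            best[i][j] = max(best[i + 1][j], best[i][j + 1], take)
    return best[0][0]

def satisfies_self_matching(S, eps):
    """True iff S has the eps-self-matching property of Definition 6.3."""
    return max_bad_pairs(S) < eps * len(S)
```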
Note that the ε-synchronization property concerns all substrings of a string, while the ε-self-matching property only concerns the string itself. With that in mind, we now show that the ε-synchronization property and the property that all substrings satisfy the ε-self-matching property are equivalent up to a factor of two:
Theorem 6.4. The ε-synchronization and ε-self-matching properties are related as follows:
a) If S is an ε-synchronization string, then all substrings of S satisfy the ε-self-matching property.
b) If all substrings of a string S satisfy the (ε/2)-self-matching property, then S is an ε-synchronization string.
Proof of Theorem 6.4 (a). This part is a straightforward consequence of Theorem 6.2.
Proof of Theorem 6.4 (b). Assume by contradiction that there are i < j < k such that ED(S[i, j), S[j, k)) ≤ (1 − ε)(k − i). Then,

LCS(S[i, j), S[j, k)) ≥ ((k − i) − (1 − ε)(k − i)) / 2 = (ε/2)(k − i).

The corresponding pairs of such a longest common subsequence form a monotone matching of size (ε/2)(k − i), which contradicts the (ε/2)-self-matching property of the substring S[i, k).
As a matter of fact, the decoding algorithms we will propose for ε-synchronization strings in
Sections 6.3, 6.4, and 6.5 only make use of the ε-self-matching property of the ε-synchronization
string.
We now proceed to the definition of ε-bad indices, which will enable us to show that the ε-self-matching property, as opposed to the ε-synchronization property, is robust against local changes.
Definition 6.5 (ε-bad-index). We call index k of string S an ε-bad-index if there exists a factor
S[i, j] of S with i ≤ k ≤ j where S[i, j] does not satisfy the ε-self-matching property. In this case,
we also say that index k blames interval [i, j].
Using the notion of ε-bad indices, we now present Lemma 6.6. This lemma suggests that a string containing a limited fraction of ε-bad indices is still an ε′-self-matching string for some ε′ > ε. An important consequence of this result is that if one changes a limited number of elements in a given ε-self-matching string, the self-matching property will be essentially preserved, to a slightly lesser extent. Note that the ε-synchronization property does not satisfy any such robustness quality.
Lemma 6.6. If the fraction of ε-bad indices in string S is less than γ, then S satisfies the (ε + 2γ)-self-matching property.
Proof. Consider a matching from S to itself. The number of bad matches whose both ends refer to
non-ε-bad indices of S is at most |S|(1 − γ)ε by definition. Further, each ε-bad index can appear
at most once in each end of bad pairs. Therefore, the number of bad pairs in S can be at most:
|S|(1 − γ)ε + 2|S|γ ≤ |S|(ε + 2γ)
which, by definition, implies that S satisfies the (ε + 2γ)-self-matching property.
On the other hand, in the following lemma, we show that within a given ε-self-matching string, there can only be a limited number of ε′-bad indices for sufficiently large ε′ > ε.

Lemma 6.7. Let S be an ε-self-matching string of length n. Then, for any 3ε < ε′ < 1, at most 3nε/ε′ of the indices of S can be ε′-bad.
Proof. Let s_1, s_2, · · · , s_k be the ε′-bad indices of S and let ε′-bad index s_i blame the substring S[a_i, b_i). As the intervals S[a_i, b_i) are supposed to be bad, there has to be an ε′-self-matching M_i within each S[a_i, b_i) with |M_i| ≥ ε′ · |[a_i, b_i)|. We claim that one can choose a subset I of [1..k] for which

• the intervals corresponding to the indices in I are mutually exclusive, i.e., for any i, j ∈ I with i ≠ j, [a_i, b_i) ∩ [a_j, b_j) = ∅;
• Σ_{i∈I} |[a_i, b_i)| ≥ k/3.

If such an I exists, one can take ∪_{i∈I} M_i as a self-matching in S whose size is larger than kε′/3. As S is an ε-self-matching string,

kε′/3 ≤ nε  ⇒  k ≤ 3nε/ε′

which finishes the proof. The only remaining piece is proving the claim. Note that any index in ∪_{i∈[1..k]} [a_i, b_i) is an ε′-bad index, as it by definition belongs to an interval with an ε′-self-matching. Therefore, |∪_{i∈[1..k]} [a_i, b_i)| = k. In order to find the set I, we greedily choose the largest substring [a_i, b_i), put its corresponding index into I, and then remove any interval intersecting [a_i, b_i). We continue repeating this procedure until all substrings are removed. The set I obtained by this procedure clearly satisfies the first claimed property. Moreover, note that if l_i = |[a_i, b_i)|, any interval intersecting [a_i, b_i) falls into [a_i − l_i, b_i + l_i), which is an interval of length 3l_i. This certifies the second property and finishes the proof.
As a final remark on the ε-self-matching property and its relation to the stricter ε-synchronization property, we show that using the minimum RSD decoder for indexing together with an ε-self-matching string leads to guarantees on the misdecoding performance which are only slightly weaker than the guarantee obtained by ε-synchronization strings. In order to do so, we first show in Theorem 6.8 that the (1 − ε) RSD distance property of prefixes holds for any non-ε-bad index in any arbitrary string. Then, using Theorem 6.8 and Lemma 6.7, we upper-bound the number of misdecodings that may happen using a minimum RSD decoder along with an ε-self-matching string in Theorem 6.9.
Theorem 6.8. Let S be an arbitrary string of length n and 1 ≤ i ≤ n be such that the i-th index of S is not an ε-bad index. Then, for any j ≠ i, RSD(S[1, i], S[1, j]) > 1 − ε.

Proof. Without loss of generality assume that j < i. Consider the interval [2j − i + 1, i]. As i is not an ε-bad index, there is no self-matching of size 2ε(i − j) within S[2j − i + 1, i]. In particular, the edit distance of S[2j − i + 1, j] and S[j + 1, i] has to be larger than (1 − ε) · 2(i − j), which equivalently means RSD(S[1, i], S[1, j]) > 1 − ε. Note that if 2j − i + 1 < 0, the proof goes through by simply replacing 2j − i + 1 with zero.
Theorem 6.9. Using any ε-self-matching string along with the minimum RSD algorithm, one can solve the (n, δ)-indexing problem with a guarantee of n(4δ + 6ε) misdecodings.

Proof. Note that applying Lemma 6.7 for ε′ gives that there are at most 3nε/ε′ indices in S that are ε′-bad. Further, using Theorems 5.10 and 6.8, at most 2nδ/(1 − ε′) of the other indices might be decoded incorrectly upon their arrivals. Therefore, this solution for the (n, δ)-indexing problem can contain at most n(2δ/(1 − ε′) + 3ε/ε′) incorrectly decoded indices. Setting ε′ = 3ε/(3ε + 2δ) gives an upper bound of n(4δ + 6ε) on the number of misdecodings.
6.2 Efficient Polynomial Construction of ε-Self-Matching Strings
In this section, we will use Lemma 6.6 to show that there is a deterministic polynomial time construction of a string of length n with the ε-self-matching property, which can then, for example, be used to obtain a deterministic code construction. We start by showing that even random strings satisfy the ε-self-matching property for an alphabet size polynomial in ε⁻¹:

Theorem 6.10. A random string over an alphabet of size O(ε⁻³) satisfies the ε-self-matching property with constant probability.
Proof. Let S be a random string over an alphabet Σ of size |Σ| = ε⁻³. We are going to bound the expected number of ε-bad indices in S. We first count the expected number of ε-bad indices that blame intervals of length 2ε⁻¹ or smaller. If index k blames an interval S[i, j] where j − i < 2ε⁻¹, there have to be two identical symbols appearing in S[i, j], which gives that there are two identical elements in a 4ε⁻¹-neighborhood of index k in S. Therefore, the probability of index k being ε-bad blaming some S[i, j] with j − i < 2ε⁻¹ can be upper-bounded by (4ε⁻¹ choose 2) · (1/|Σ|) ≤ 8ε. Thus, the expected fraction of ε-bad indices that blame intervals of length 2ε⁻¹ or smaller is less than 8ε.
We now proceed to finding the expected fraction of ε-bad indices in S blaming intervals of length 2ε⁻¹ or more. Since every interval of length l which does not satisfy the ε-self-matching property causes at most l ε-bad indices, we get that the expected fraction of such indices, i.e., γ′, is at most:

E[γ′] = (1/n) · Σ_{i=1}^{n} Σ_{l=2ε⁻¹}^{n} l · Pr[S[i, i + l) does not satisfy the ε-self-matching property]
      = Σ_{l=2ε⁻¹}^{n} l · Pr[S[i, i + l) does not satisfy the ε-self-matching property]
      ≤ Σ_{l=2ε⁻¹}^{n} l · (l choose lε)² · (1/|Σ|^{lε})    (12)
2
Last inequality holds because the number of possible matchings is at most lεl . Further, fixing the
matching edges, the probability of the elements corresponding to pair (a, b) of the matching being
identical is independent from all pairs (a0 , b0 ) where a0 < a and b0 < b. Hence, the probability of
the set of pairs being a matching between random string S and itself is |Σ|1lε . Then,
E[γ′] ≤ Σ_{l=2ε⁻¹}^{n} l · ( le/(lε) )^{2lε} · (1/|Σ|^{lε})
      ≤ Σ_{l=2ε⁻¹}^{n} l · ( e/(ε√|Σ|) )^{2εl}
      ≤ Σ_{l=2ε⁻¹}^{∞} l · ( e/(ε√|Σ|) )^{2εl}

Note that the series Σ_{l=l₀}^{∞} l·xˡ = ( l₀·x^{l₀} − (l₀ − 1)·x^{l₀+1} ) / (1 − x)² for |x| < 1. Therefore, for 0 < x < 1/2 and l₀ = 2ε⁻¹, Σ_{l=l₀}^{∞} l·xˡ < 8ε⁻¹·x^{2ε⁻¹}. So,

E[γ′] ≤ 8ε⁻¹ · ( e/(2ε√|Σ|) )^{4ε·ε⁻¹} = (e⁴/2) · ε⁻⁵ · (1/|Σ|²) ≤ (e⁴/2) · ε
Using Lemma 6.6, this random string has to satisfy the (ε + 2γ)-self-matching property where

E[ε + 2γ] ≤ ε + 16ε + e⁴ε = O(ε).

Therefore, using Markov's inequality, a randomly generated string over an alphabet of size O(ε⁻³) satisfies the ε-self-matching property with constant probability. The constant probability can be made as high as one wishes by applying a higher constant factor in the alphabet size.
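To make this "particularly simple generation process" concrete, the following sketch rejection-samples random strings over an alphabet of size on the order of ε⁻³ and keeps the first one that passes the brute-force checker sketched after Definition 6.3. The function name and the constant in the alphabet size are our own illustrative choices; Theorem 6.10 only guarantees success with constant probability per attempt, so the expected number of attempts is constant.

```python
import random

def random_self_matching_string(n, eps, alphabet_factor=1.0):
    """Sample strings of length n over an alphabet of size ~ alphabet_factor / eps^3
    until one satisfies the eps-self-matching property, checked with the
    quadratic-time satisfies_self_matching sketch given earlier."""
    sigma = max(2, int(alphabet_factor / eps ** 3))
    while True:
        S = [random.randrange(sigma) for _ in range(n)]
        if satisfies_self_matching(S, eps):
            return S
```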
As the next step, we prove a similar claim for strings of length n whose symbols are chosen from a Θ(log n / log(1/ε))-wise independent [24] distribution over a larger, yet still ε^{−O(1)}-size, alphabet. This is the key step in allowing for a derandomization using the small sample spaces of Naor and Naor [24]. The proof of Theorem 6.11 follows a similar strategy as was used in [3] to derandomize the constructive Lovász local lemma. In particular, the crucial idea, given by Claim 6.12, is to show that for any large obstruction there has to exist a smaller yet not too small obstruction. This allows one to prove that in the absence of any small and medium size obstructions, no large obstructions exist either.
Theorem 6.11. A (c log n / log(1/ε))-wise independent random string of length n over an alphabet of size O(ε⁻⁶) satisfies the ε-self-matching property with a non-zero constant probability, where c is a sufficiently large constant.

Proof. Let S be a pseudo-random string of length n with (c log n / log(1/ε))-wise independent symbols. Then, Step (12) is invalid, as the proposed upper bound does not work for l > c log n / (ε log(1/ε)). To bound the probability of intervals of size Ω(c log n / (ε log(1/ε))) not satisfying the ε-self-matching property, we claim that:
Claim 6.12. Any string of size l > 100m which contains an ε-self-matching contains two sub-intervals I₁ and I₂ of size m between which there is a matching of size 0.99·mε/2.
Using Claim 6.12, one can conclude that any string of size l > 100·c log n/(ε log(1/ε)) which contains an ε-self-matching contains two sub-intervals I₁ and I₂ of size c log n/(ε log(1/ε)) between which there is a matching of size c log n/(2 log(1/ε)). Then, Step (12) can be revised by upper-bounding the probability of a long interval having an ε-self-matching by a union bound over the probability of pairs of its subintervals having a dense matching. Namely, for l > 100·c log n/(ε log(1/ε)), let us denote the event of S[i, i + l) containing an ε-self-matching by A_{i,l}. Then,
Pr[A_{i,l}] ≤ Pr[ S contains I₁, I₂ : |I₁| = |I₂| = c log n/(ε log(1/ε)) and LCS(I₁, I₂) ≥ 0.99·c log n/(2 log(1/ε)) ]
 ≤ n² · ( ε⁻¹c log n/log(1/ε) choose c log n/(2 log(1/ε)) )² · (1/|Σ|)^{0.99c log n/(2 log(1/ε))}
 ≤ n² · (2.04eε⁻¹)^{2·0.99c log n/(2 log(1/ε))} · ε^{6c log n/(2 log(1/ε))}
 = n² · (2.04e)^{0.99c log n/log(1/ε)} · ε^{4.02c log n/(2 log(1/ε))}
 = n^{2 + 0.99c·ln(2.04e)/log(1/ε) − 2.01c}
 < n^{2−c/4} = O(n^{−c′}),

where the first inequality follows from the fact that there can be at most n² pairs of intervals of size c log n/(ε log(1/ε)) in S, and the number of all possible matchings of size c log n/(2 log(1/ε)) between them is at most ( ε⁻¹c log n/log(1/ε) choose c log n/(2 log(1/ε)) )². Further, for small enough ε, the constant c′ can be made as large as one desires by setting the constant c large enough. Thus, Step (12) can be revised as:
E[γ′] ≤ Σ_{l=ε⁻¹}^{100c log n/(ε log(1/ε))} l · ( e/(ε√|Σ|) )^{2εl} + Σ_{l=100c log n/(ε log(1/ε))}^{n} l · Pr[A_{i,l}]
      ≤ Σ_{l=ε⁻¹}^{∞} l · ( e/(ε√|Σ|) )^{2εl} + n² · O(n^{−c′}) ≤ O(ε) + O(n^{2−c′})

For an appropriately chosen c, 2 − c′ < 0; hence, the latter term vanishes as n grows. Therefore, the conclusion E[γ] ≤ O(ε) holds for the limited (log n/log(1/ε))-wise independent string as well.
Proof of Claim 6.12. Let M be a self-matching of size lε or more between S and itself containing only bad edges. We chop S into l/m intervals of size m. On the one hand, the size of M is greater than lε and, on the other hand, the size of M is exactly Σ_{i,j} |E_{i,j}|, where E_{i,j} denotes the set of edges between interval i and interval j. Thus:

lε ≤ Σ_{i,j} |E_{i,j}|  ⇒  ε/2 ≤ ( Σ_{i,j} |E_{i,j}|/m ) / (2l/m)

Note that |E_{i,j}|/m represents the density of edges between interval i and interval j. Further, since M is monotone, there are at most 2l/m pairs of intervals for which |E_{i,j}| ≠ 0 and subsequently |E_{i,j}|/m ≠ 0. Hence, on the right-hand side we have the average of 2l/m non-zero terms, which is greater than ε/2. So, there have to be some i′ and j′ for which:

ε/2 ≤ |E_{i′,j′}|/m  ⇒  mε/2 ≤ |E_{i′,j′}|

To be more accurate, if l is not divisible by m, we simply throw out up to m last elements of the string. This may decrease ε by m/l < ε/100.
Note that using the polynomial-size sample spaces of [24], Theorem 6.11 directly leads to a deterministic algorithm for finding a string of size n with the ε-self-matching property. For this, one simply checks all possible points in the sample space of the (c log n/log(1/ε))-wise independent strings and finds a string S with γ_S ≤ E[γ] = O(ε). In other words, using brute force, one can find a string satisfying the O(ε)-self-matching property in time O(|Σ|^{c log n/log(1/ε)}) = n^{O(1)}.
Theorem 6.13. There is a deterministic algorithm running in time n^{O(1)} that finds a string of length n satisfying the ε-self-matching property over an alphabet of size O(ε⁻⁶).
6.3 Insdel Errors
Now, we provide an alternative indexing algorithm to be used along with ε-synchronization strings. Throughout the following sections, we let the ε-synchronization string S be sent as the synchronization string in an instance of the (n, δ)-indexing problem, and let the string S′ received at the receiving end be affected by up to nδ insertions or deletions. Furthermore, let di symbols be inserted into the communication and dr symbols be deleted from it.
The algorithm works as follows. On the first round, the algorithm finds the longest common subsequence between S and S′. Note that this common subsequence corresponds to a monotone matching M₁ between S and S′. On the next round, the algorithm finds the longest common subsequence between S and the subsequence of unmatched elements of S′ (those that have not appeared in M₁). This common subsequence corresponds to a monotone matching between S and the elements of S′ that do not appear in M₁. The algorithm repeats this procedure 1/β times to obtain M₁, · · · , M_{1/β}, where β is a parameter that we will fix later.

In the output of this algorithm, S′[tᵢ] is decoded as S[i] if and only if S[i] is matched only to S′[tᵢ] in all of M₁, · · · , M_{1/β}. Note that the longest common subsequence of two strings of length O(n) can be found in O(n²) using dynamic programming. Therefore, the whole algorithm runs in O(n²/β) time.
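A minimal Python sketch of this repeated-LCS decoder follows; the function names (lcs_matching, repeated_lcs_decode) and the dictionary-based bookkeeping are our own illustrative choices, and indices are 0-based.

```python
def lcs_matching(S, T):
    """Longest common subsequence of S and T as a monotone matching
    (list of index pairs), via the standard O(|S|*|T|) dynamic program."""
    n, m = len(S), len(T)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n - 1, -1, -1):
        for j in range(m - 1, -1, -1):
            if S[i] == T[j]:
                dp[i][j] = dp[i + 1][j + 1] + 1
            else:
                dp[i][j] = max(dp[i + 1][j], dp[i][j + 1])
    match, i, j = [], 0, 0
    while i < n and j < m:
        if S[i] == T[j] and dp[i][j] == dp[i + 1][j + 1] + 1:
            match.append((i, j)); i += 1; j += 1
        elif dp[i + 1][j] >= dp[i][j + 1]:
            i += 1
        else:
            j += 1
    return match

def repeated_lcs_decode(S, S_received, beta):
    """Run 1/beta rounds of LCS matching between S and the still-unmatched
    positions of S_received; decode received position t as index i of S only
    if every round that matches i matches it to t."""
    rounds = max(1, int(round(1 / beta)))
    unmatched = list(range(len(S_received)))   # positions of S' not matched yet
    matched_to = {}                            # index i of S -> set of matched positions of S'
    for _ in range(rounds):
        sub = [S_received[p] for p in unmatched]
        m = lcs_matching(S, sub)
        for i, j in m:
            matched_to.setdefault(i, set()).add(unmatched[j])
        used = {unmatched[j] for _, j in m}
        unmatched = [p for p in unmatched if p not in used]
    # keep only indices of S that were always matched to the same received position
    return {ps.pop(): i for i, ps in matched_to.items() if len(ps) == 1}
```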
Now we proceed to analyzing the performance of the algorithm by bounding the number of
misdecodings.
Theorem 6.14. This decoding algorithm guarantees a maximum misdecoding count of (n + di − dr)β + (ε/β)·n. More specifically, for β = √ε, the number of misdecodings will be O(n√ε) and the running time will be O(n²/√ε).

Proof. First, we claim that at most (n + di − dr)β of the symbols that have been successfully transmitted are not matched in any of M₁, · · · , M_{1/β}. Assume by contradiction that more than (n + di − dr)β of the symbols that pass through the channel successfully are not matched in any of M₁, · · · , M_{1/β}. Then, there exists a monotone matching of size greater than (n + di − dr)β between the unmatched elements of S′ and S after 1/β rounds of finding longest common subsequences. Hence, the size of each of the Mᵢ's is at least (n + di − dr)β. So, the summation of their sizes exceeds (n + di − dr)β × (1/β) = n + di − dr = |S′|, which brings us to a contradiction.

Furthermore, as a result of Theorem 6.4, each of the Mᵢ's contains at most εn incorrectly matched elements. Hence, at most (ε/β)·n of the matched symbols are matched to an incorrect index.

Hence, the total number of misdecodings can be bounded by (n + di − dr)β + (ε/β)·n.
6.4 Deletion Errors Only
We now introduce a very simple linear time streaming algorithm that decodes a received synchronization string of length n which may have been affected by up to nδ deletions. Our scheme is guaranteed to produce fewer than ε/(1 − ε) · nδ misdecodings.
Before proceeding to the algorithm description, let dr denote the number of symbols removed by the adversary. As the adversary is restricted to symbol deletions, each symbol received at the receiver corresponds to a symbol sent by the sender. Hence, there exists a monotone matching of size |S′| = n′ = n − dr, namely M = {(t₁, 1), (t₂, 2), · · · , (t_{n−dr}, n − dr)}, between S and S′ which matches each of the received symbols to its actual index.

Our simple streaming algorithm greedily matches S′ to the left-most possible subsequence of S. To put it another way, the algorithm matches S′[1] to S[t′₁] where S[t′₁] = S′[1] and t′₁ is as small as possible, then matches S′[2] to the smallest t′₂ > t′₁ where S[t′₂] = S′[2], and constructs the whole matching M′ by repeating this procedure. Note that as there is a matching of size |S′| between S and S′, the size of M′ will be |S′| too.
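A minimal sketch of this greedy left-most matching in Python (the function name is ours, and indices are 0-based):

```python
def leftmost_matching(S, received):
    """Greedy streaming decoder for the deletion-only case: match each received
    symbol to the left-most still-unused position of S carrying the same symbol.
    Returns t' where t'[j] is the position of S assigned to received[j]."""
    assignment, i = [], 0          # i: next candidate position in S
    for symbol in received:
        while i < len(S) and S[i] != symbol:
            i += 1
        if i == len(S):
            raise ValueError("second argument is not a subsequence of the first")
        assignment.append(i)
        i += 1
    return assignment
```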
This algorithm clearly works in a streaming manner and runs in linear time. To analyze its performance, we make use of the fact that M and M′ are both monotone matchings of size |S′| between S and S′. Therefore, M̄ = {(t₁, t′₁), (t₂, t′₂), · · · , (t_{n−dr}, t′_{n−dr})} is a monotone matching between S and itself. Note that if tᵢ ≠ t′ᵢ, then the algorithm has decoded the index tᵢ incorrectly. Let p be the number of indices i where tᵢ ≠ t′ᵢ. Then the matching M̄ consists of n − dr − p good pairs and p bad pairs. Therefore, using Theorem 6.2,

n − (n − dr − p) − p ≥ (1 − ε)(n − (n − dr − p))  ⇒  dr ≥ (1 − ε)(dr + p)  ⇒  p ≤ ε/(1 − ε) · dr
This proves the following theorem:
Theorem 6.15. Any ε-synchronization string along with the algorithm described in Section 6.4 forms a linear-time streaming solution for the deletion-only (n, δ)-indexing problem guaranteeing ε/(1 − ε) · nδ misdecodings.
6.5 Insertion Errors Only
We now turn to another simplified case where the adversary is restricted to only inserting symbols. We propose a decoding algorithm whose output is guaranteed to be error-free and to contain fewer than nδ/(1 − ε) misdecodings.

Assume that di symbols are inserted into the string S to turn it into S′ of size n + di on the receiving side. Again, it is clear that there exists a monotone matching M of size n, namely M = {(1, t₁), (2, t₂), · · · , (n, tₙ)}, between S and S′ that matches each symbol in S to its actual position when it arrives at the receiver.
The decoding algorithm we present matches S[i] to S′[t′ᵢ] in its output M′ if and only if, in all possible monotone matchings between S and S′ that saturate S (i.e., are of size |S| = n), S[i] is matched to S′[t′ᵢ]. Note that any symbol S[i] that is matched to S′[t′ᵢ] in M′ has to be matched to the same element in M; therefore, the output of this algorithm does not contain any incorrectly decoded indices, i.e., the algorithm is error-free.
Now, we first provide a linear time approach to implement this algorithm and then prove an upper bound of di/(1 − ε) on the number of misdecodings. To this end, we make use of the following lemma:
Lemma 6.16. Let M_L = {(1, l₁), (2, l₂), · · · , (n, lₙ)} be a monotone matching between S and S′ such that l₁, · · · , lₙ has the smallest possible value lexicographically. We call M_L the left-most matching between S and S′. Similarly, let M_R = {(1, r₁), · · · , (n, rₙ)} be the monotone matching such that rₙ, · · · , r₁ has the largest possible lexicographical value. Then S[i] is matched to S′[t′ᵢ] in all possible monotone matchings of size n between S and S′ if and only if (i, t′ᵢ) ∈ M_R ∩ M_L.
This lemma can be proved by a simple contradiction argument. Our algorithm starts by computing the left-most and right-most monotone matchings between S and S′ using the trivial greedy algorithm introduced in Section 6.4, applied to (S, S′) and to their reversals. It then outputs the intersection of these two matchings as the answer. This algorithm clearly runs in linear time.
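The following sketch (again with illustrative names, reusing the hypothetical leftmost_matching function from the deletion-only sketch with the roles of the two strings swapped) computes the intersection of the left-most and right-most matchings:

```python
def insertion_only_decode(S, S_received):
    """Insertion-only decoder: embed S into S_received left-most and right-most,
    and decode a received position only when the two embeddings agree on it."""
    n, m = len(S), len(S_received)
    left = leftmost_matching(S_received, S)            # left[i]: position of S[i] in S'
    rev = leftmost_matching(S_received[::-1], S[::-1]) # left-most embedding of the reversals
    right = [m - 1 - rev[n - 1 - i] for i in range(n)] # mirror back to get the right-most one
    return {left[i]: i for i in range(n) if left[i] == right[i]}
```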
To analyze this algorithm, we bound the number of successfully transmitted symbols that the algorithm refuses to decode, denoted by p. To bound the number of such indices, we make use of the fact that n − p elements of S′ are matched to the same element of S in both M_L and M_R. As there are p elements in S that are matched to different elements in S′ and there is a total of n + di elements in S′, there have to be at least 2p − [(n + di) − (n − p)] = p − di elements in S′ that are matched to different elements of S in M_L and M_R.

Consider the following monotone matching from S to itself:

M̄ = {(i, i) : S[i] is matched to the same position of S′ in both M_L and M_R}
    ∪ {(i, j) : i ≠ j and ∃k s.t. (i, k) ∈ M_L and (j, k) ∈ M_R}

Note that the monotonicity follows from the fact that both M_L and M_R are monotone matchings between S and S′. We have shown that the size of the second set is at least p − di and the size of the first set is by definition n − p. Also, all pairs in the first set are good pairs and all pairs in the second one are bad pairs. Therefore, by Theorem 6.2:

n − (n − p) − (p − di) ≥ (1 − ε)(n − (n − p))  ⇒  p ≤ di/(1 − ε)
which proves the efficiency claim and gives the following theorem.
Theorem 6.17. Any ε-synchronization string along with the algorithm described in Section 6.5 forms a linear-time error-free solution for the insertion-only (n, δ)-indexing problem guaranteeing 1/(1 − ε) · nδ misdecodings.
Finally, we remark that a similar non-streaming algorithm can be applied to the case of deletion-only errors. Namely, one can compute the left-most and right-most matchings between the received string and the string that was supposed to be received, and output the common edges. By a similar argument as above, one can prove the following:
Theorem 6.18. Any ε-synchronization string along with the algorithm described in Section 6.5 forms a linear-time error-free solution for the deletion-only (n, δ)-indexing problem guaranteeing ε/(1 − ε) · nδ misdecodings.
In the same manner as Theorem 1.1, we can derive the following theorem concerning deletion-only and insertion-only codes based on Theorems 6.17 and 6.18.
Theorem 6.19. For any ε > 0 and δ ∈ (0, 1):
• There exists an encoding map E : Σᵏ → Σⁿ and a decoding map D : Σ* → Σᵏ such that if x is a subsequence of E(m) where |x| ≥ n − nδ, then D(x) = m. Further, k/n > 1 − δ − ε, |Σ| = f(ε), and E and D are explicit and have linear running times in n.
• There exists an encoding map E : Σᵏ → Σⁿ and a decoding map D : Σ* → Σᵏ such that if E(m) is a subsequence of x where |x| ≤ n + nδ, then D(x) = m. Further, k/n > 1 − δ − ε, |Σ| = f(ε), and E and D are explicit and have linear running times in n.
Finally, we remark that since indexing solutions offered in Theorems 6.17 and 6.18 are error free,
it suffices to use good erasure codes along with synchronization strings to obtain Theorem 6.19.
6.6 Decoding Using the Relative Suffix Pseudo-Distance (RSPD)
In this section, we show how one can slightly improve the constants in the results obtained in Section 5.2 by replacing RSD with a related notion of "distance" between two strings introduced in [2]. We call this notion the relative suffix pseudo-distance, or RSPD, both to distinguish it from our relative suffix distance RSD and because RSPD is not a metric per se: it is neither symmetric nor satisfies the triangle inequality.
Definition 6.20 (Relative Suffix Pseudo-Distance (RSPD)). Given any two strings c, c̃ ∈ Σ*, the relative suffix pseudo-distance between c and c̃ is

RSPD(c, c̃) = min_{τ : c→c̃} max_{i=1}^{|τ₁|} ( sc(τ₁[i, |τ₁|]) + sc(τ₂[i, |τ₂|]) ) / ( |τ₁| − i + 1 − sc(τ₁[i, |τ₁|]) )
We derive our algorithms by proving a useful property of synchronization strings:

Lemma 6.21. Let S ∈ Σⁿ be an ε-synchronization string and c̃ ∈ Σᵐ. Then there exists at most one prefix c ∈ ∪_{i=1}^{n} {S[1..i]} such that RSPD(c, c̃) ≤ 1 − ε.
Before proceeding to the proof of Lemma 6.21, we prove the following lemma:
Lemma 6.22. If RSPD(S, T) ≤ 1 − ε, then:
1. For every 1 ≤ s ≤ |S|, there exists t such that ED (S[s, |S|], T [t, |T |]) ≤ (1 − ε)(|S| − s + 1).
2. For every 1 ≤ t ≤ |T |, there exists s such that ED (S[s, |S|], T [t, |T |]) ≤ (1 − ε)(|S| − s + 1).
Proof.
Part 1. Let τ be the string matching chosen in RSPD(S, T). There exists some r such that del(τ₁[r, |τ₁|]) = S[s, |S|]. Note that del(τ₂[r, |τ₂|]) is a suffix of T. Therefore, there exists some t such that T[t, |T|] = del(τ₂[r, |τ₂|]). Now,

ED(S[s, |S|], T[t, |T|]) ≤ sc(del(τ₁[r, |τ₁|])) + sc(del(τ₂[r, |τ₂|]))
 = ( (sc(del(τ₁[r, |τ₁|])) + sc(del(τ₂[r, |τ₂|]))) / (|τ₁| − r + 1 − sc(τ₁[r, |τ₁|])) ) · (|τ₁| − r + 1 − sc(τ₁[r, |τ₁|]))
 ≤ RSPD(S, T) · (|S| − s + 1)
 ≤ (1 − ε) · (|S| − s + 1)    (13)
Figure 2: Pictorial representation of the notation used in Lemma 6.22
Part 2 Similarly, let τ be the string matching chosen in RSP D(S, T ). There exists some r such
that del(τ2 [r, |τ2 |]) = T [t, |T |]. Now, del(τ1 [r, |τ1 |]) is a suffix of S. Therefore, there exists some s
such that S[s, |S|] = del(τ1 [r, |τ1 |]). Now, all the steps we took to prove equation (13) hold and the
proof is complete.
Algorithm 3 Synchronization string decoding
Input: A received message c̃ ∈ Σᵐ and an ε-synchronization string S ∈ Σⁿ
1: ans ← ∅
2: for every prefix c = S[1, i] of S do
3:   d[i][j][l] ← min_{τ : c(i)→c̃(j), sc(τ₁)=l} max_{k=1}^{|τ₁|} ( sc(τ₁[k..|τ₁|]) + sc(τ₂[k..|τ₂|]) ) / ( |τ₁| − k + 1 + sc(τ₁[k..|τ₁|]) )
4:   RSPD(c, c̃) ← min_{l′=0}^{|c̃|} d[i][|c̃|][l′]
5:   if RSPD(c, c̃) ≤ 1 − ε then
6:     ans ← c
Output: ans
Proof of Lemma 6.21. For the sake of contradiction, suppose that there exist c̃, l and l′ with l < l′ such that RSPD(S[1, l], c̃) ≤ 1 − ε and RSPD(S[1, l′], c̃) ≤ 1 − ε. Now, using part 1 of Lemma 6.22, there exists k such that ED(S[l + 1, l′], c̃[k, |c̃|]) ≤ (1 − ε)(l′ − l). Further, part 2 of Lemma 6.22 gives that there exists l′′ such that ED(S[l′′ + 1, l], c̃[k, |c̃|]) ≤ (1 − ε)(l − l′′). Hence,

ED(S[l + 1, l′], S[l′′ + 1, l]) ≤ ED(S[l + 1, l′], c̃[k, |c̃|]) + ED(S[l′′ + 1, l], c̃[k, |c̃|]) ≤ (1 − ε)(l′ − l′′)

which contradicts the fact that S is an ε-synchronization string.
Lemma 6.21 implies a natural algorithm for decoding c̃: simply search over all prefixes of S for the one with small enough suffix distance from c̃. We prove that this is possible in O(n⁵) via dynamic programming.

Theorem 6.23. Let S ∈ Σⁿ be an ε-synchronization string, and c̃ ∈ Σᵐ. Then Algorithm 3, given input S and c̃, either returns the unique prefix c of S such that RSPD(c, c̃) ≤ 1 − ε or returns ∅ if no such prefix exists. Moreover, Algorithm 3 runs in time O(n⁵), spending O(n⁴) per received symbol.
Proof. To find c, we calculate the RSPD of c̃ and all prefixes of S one by one. We only need to show that the RSPD of two strings of length at most n can be found in O(n³). We do this using dynamic programming. Suppose we want to find RSPD(s, t). Let s(i) denote the suffix of s of length i and t(j) the suffix of t of length j. Now, let d[i][j][l] be the minimum, over all string matchings (τ₁, τ₂) from s(i) to t(j) with sc(τ₁) = l, of the quantity being maximized. In other words,

d[i][j][l] = min_{τ : s(i)→t(j), sc(τ₁)=l} max_{k=1}^{|τ₁|} ( sc(τ₁[k..|τ₁|]) + sc(τ₂[k..|τ₂|]) ) / ( |τ₁| − k + 1 + sc(τ₁[k..|τ₁|]) ),

where τ is a string matching from s(i) to t(j). Note that for any τ : s(i) → t(j), one of the following three scenarios might happen:
1. τ1 (1) = τ2 (1) = s (|s| − (i − 1)) = t(|t| − (j − 1)): In this case, removing the first elements of
τ1 and τ2 gives a valid string matching from s(i − 1) to t(j − 1).
2. τ1 (1) = ∗ and τ2 (1) = t(|t| − (j − 1)): In this case, removing the first element of τ1 and τ2
gives a valid string matching from s(i) to t(j − 1).
3. τ2 (1) = ∗ and τ1 (1) = s(|s| − (i − 1)): In this case, removing the first element of τ1 and τ2
gives a valid string matching from s(i − 1) to t(j).
This implies that

d[i][j][l] = min{ d[i − 1][j − 1][l]  (only if s(i) = t(j)),
                  max( d[i][j − 1][l − 1], (l + (j − (i − l))) / ((i + l) + l) ),
                  max( d[i − 1][j][l], (l + (j − (i − l))) / ((i + l) + l) ) }.
Hence, one can find RSPD(s, t) by minimizing d[|s|][|t|][l] over all possible values of l, as Algorithm 3 does in Step 4 for all prefixes of S. Finally, Algorithm 3 returns the prefix c such that RSPD(c, c̃) ≤ 1 − ε if one exists, and otherwise it returns ∅.
We conclude by showing that if an ε-synchronization string of length n is used along with the minimum-RSPD algorithm, the number of misdecodings will be at most nδ/(1 − ε).

Theorem 6.24. Suppose that S is an ε-synchronization string of length n over alphabet Σ that is sent over an insertion-deletion channel with ci insertions and cd deletions. Using Algorithm 3 for decoding the indices, the outcome will contain less than ci/(1 − ε) + cd·ε/(1 − ε) misdecodings.
Proof. The proof of this theorem is similar to the proof of Theorem 5.10. Let prefix S[1, i] be sent through the channel and Sτ[1, j] be received on the other end as the result of the adversary's set of actions τ. Further, assume that Sτ[j] is successfully transmitted and is actually S[i] sent by the other end.

We first show that RSPD(S[1, i], S′[1, j]) is at most the relative suffix error density:

RSPD(S[1, i], S′[1, j]) = min_{τ̃ : c→c̃} max_{k=1}^{|τ̃₁|} ( sc(τ̃₁[k, |τ̃₁|]) + sc(τ̃₂[k, |τ̃₂|]) ) / ( |τ̃₁| − k + 1 − sc(τ̃₁[k, |τ̃₁|]) )
 ≤ max_{k=1}^{|τ₁|} ( sc(τ₁[k, |τ₁|]) + sc(τ₂[k, |τ₂|]) ) / ( |τ₁| − k + 1 − sc(τ₁[k, |τ₁|]) )
 = max_{j′≤i} E(j′, i)/(i − j′) = relative suffix error density.

Now, using Theorem 5.14, we know that the relative suffix error density is smaller than 1 − ε upon the arrival of all but at most (ci + cd)/(1 − ε) − cd of the successfully transmitted symbols. Along with Lemma 6.21, this results in the conclusion that minimum-RSPD decoding guarantees ci · 1/(1 − ε) + cd · (1/(1 − ε) − 1) misdecodings. This finishes the proof of the theorem.
Acknowledgements
The authors thank Ellen Vitercik and Allison Bishop for valuable discussions in the early stages of
this work.
References
[1] Arturs Backurs and Piotr Indyk. Edit distance cannot be computed in strongly subquadratic
time (unless seth is false). In Proceedings of the Forty-Seventh Annual ACM on Symposium
on Theory of Computing, pages 51–58. ACM, 2015.
[2] Mark Braverman, Ran Gelles, Jieming Mao, and Rafail Ostrovsky. Coding for interactive communication correcting insertions and deletions. In Proceedings of the International Conference
on Automata, Languages, and Programming (ICALP), 2016.
[3] Karthekeyan Chandrasekaran, Navin Goyal, and Bernhard Haeupler. Deterministic algorithms
for the Lovász local lemma. SIAM Journal on Computing (SICOMP), pages 2132–2155, 2013.
[4] Ran Gelles. Coding for interactive communication: A survey, 2015.
[5] Ran Gelles and Bernhard Haeupler. Capacity of interactive communication over erasure channels and channels with feedback. Proceeding of the ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1296–1311, 2015.
[6] Mohsen Ghaffari and Bernhard Haeupler. Optimal error rates for interactive coding II: Efficiency and list decoding. Proceeding of the IEEE Symposium on Foundations of Computer
Science (FOCS), pages 394–403, 2014.
[7] Mohsen Ghaffari, Bernhard Haeupler, and Madhu Sudan. Optimal error rates for interactive
coding I: Adaptivity and other settings. Proceeding of the ACM Symposium on Theory of
Computing (STOC), pages 794–803, 2014.
[8] SW Golomb, J Davey, I Reed, H Van Trees, and J Stiffler. Synchronization. IEEE Transactions
on Communications Systems, 11(4):481–491, 1963.
[9] Venkatesan Guruswami and Piotr Indyk. Linear-time encodable/decodable codes with nearoptimal rate. IEEE Transactions on Information Theory, 51(10):3393–3400, 2005.
[10] Venkatesan Guruswami and Ray Li. Efficiently decodable insertion/deletion codes for highnoise and high-rate regimes. In Proceedings of the 2016 IEEE International Symposium on
Information Theory, 2016.
[11] Venkatesan Guruswami and Atri Rudra. Explicit codes achieving list decoding capacity: Errorcorrection with optimal redundancy. IEEE Transactions on Information Theory, 54(1):135–
150, 2008.
[12] Venkatesan Guruswami and Ameya Velingker. An entropy sumset inequality and polynomially
fast convergence to shannon capacity over all alphabets. Proceedings of the 30th Conference
on Computational Complexity, pages 42–57, 2015.
[13] Venkatesan Guruswami and Carol Wang. Deletion codes in the high-noise and high-rate
regimes. In Proceedings of the 19th International Workshop on Randomization and Computation (RANDOM), pages 867–880, 2015.
[14] Venkatesan Guruswami and Patrick Xia. Polar codes: Speed of polarization and polynomial
gap to capacity. IEEE Transactions on Information Theory, 61(1):3–16, 2015.
[15] Bernhard Haeupler. Interactive channel capacity revisited. Proceeding of the IEEE Symposium
on Foundations of Computer Science (FOCS), pages 226–235, 2014.
36
[16] Bernhard Haeupler, Barna Saha, and Aravind Srinivasan. New constructive aspects of the
Lovász local lemma. Journal of the ACM (JACM), pages 28:1–28:28, 2012.
[17] Gillat Kol and Ran Raz. Interactive channel capacity. In Proceedings of the Annual Symposium on Theory of Computing (STOC), pages 715–724, 2013.
[18] Vladimir Levenshtein. Binary codes capable of correcting deletions, insertions, and reversals.
Doklady Akademii Nauk SSSR 163, 4:845–848, 1965.
[19] S-YR Li, Raymond W Yeung, and Ning Cai. Linear network coding. IEEE transactions on
information theory, 49(2):371–381, 2003.
[20] Michael Luby. LT codes. Proceedings of the IEEE Symposium on Foundations of Computer
Science (FOCS), pages 271–282, 2002.
[21] Hugues Mercier, Vijay K Bhargava, and Vahid Tarokh. A survey of error-correcting codes
for channels with symbol synchronization errors. IEEE Communications Surveys & Tutorials,
1(12):87–96, 2010.
[22] Michael Mitzenmacher. A survey of results for deletion channels and related synchronization
channels. Probability Surveys, 6:1–33, 2009.
[23] Robin A. Moser and Gábor Tardos. A constructive proof of the general Lovász local lemma.
Journal of the ACM (JACM), 57(2):11, 2010.
[24] Joseph Naor and Moni Naor. Small-bias probability spaces: Efficient constructions and applications. SIAM journal on computing, 22(4):838–856, 1993.
[25] Leonard J. Schulman and David Zuckerman. Asymptotically good codes correcting insertions, deletions, and transpositions. IEEE Transactions on Information Theory (TransInf ),
45(7):2552–2557, 1999.
[26] Neil JA Sloane. On single-deletion-correcting codes. Codes and Designs, de Gruyter, Berlin,
pages 273–291, 2002.
[27] Daniel A Spielman. Linear-time encodable and decodable error-correcting codes. Proceedings
of the ACM Symposium on Theory of Computing (STOC), pages 388–397, 1995.
[28] A. Thue. Über die gegenseitige Lage gleicher Teile gewisser Zeichenreihen (1912). Selected
mathematical papers of Axel Thue, Universitetsforlaget, 1977.
[29] Michael Tsfasman and Serge G Vladut. Algebraic-geometric codes, volume 58. Springer Science
& Business Media, 2013.
arXiv:1709.05061v1 [] 15 Sep 2017
Accelerating Dynamic Graph
Analytics on GPUs
Technical Report
Version 2.0
Mo Sha, Yuchen Li, Bingsheng He and Kian-Lee Tan
July 15, 2017
Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
  2.1 Graph Stream Processing . . . . . . . . . . . . . . . . . . . . . . . . . 3
  2.2 Graph Analytics on GPUs . . . . . . . . . . . . . . . . . . . . . . . . . 3
  2.3 Storage Formats on GPUs . . . . . . . . . . . . . . . . . . . . . . . . . 4
3 A dynamic framework on GPUs . . . . . . . . . . . . . . . . . . . . . . . . 5
4 GPMA Dynamic Graph Processing . . . . . . . . . . . . . . . . . . . . . . 6
  4.1 GPMA Graph Storage on GPUs . . . . . . . . . . . . . . . . . . . . . . 7
  4.2 Adapting Graph Algorithms to GPMA . . . . . . . . . . . . . . . . . . 9
5 GPMA+: GPMA Optimization . . . . . . . . . . . . . . . . . . . . . . . . 11
  5.1 Bottleneck Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
  5.2 Lock-Free Segment-Oriented Updates . . . . . . . . . . . . . . . . . . . 12
6 Experimental Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
  6.1 Experimental Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
  6.2 The Performance of Handling Updates . . . . . . . . . . . . . . . . . . 18
  6.3 Application Performance . . . . . . . . . . . . . . . . . . . . . . . . . . 19
  6.4 Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
  6.5 Overall Findings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
7 Conclusion & Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Appendices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
  A TryInsert+ Optimizations . . . . . . . . . . . . . . . . . . . . . . . . . 23
  B Additional Experimental Results For Data Transfer . . . . . . . . . . . . 27
  C The Performance of Handling Updates on Sorted Graphs . . . . . . . . . 27
  D Additional Experimental Results For Graph Streams with Explicit Deletions . . 28
Abstract
As graph analytics often involves compute-intensive operations, GPUs have been
extensively used to accelerate the processing. However, in many applications such
as social networks, cyber security, and fraud detection, their representative graphs
evolve frequently and one has to perform a rebuild of the graph structure on GPUs
to incorporate the updates. Hence, rebuilding the graphs becomes the bottleneck of
processing high-speed graph streams. In this paper, we propose a GPU-based dynamic
graph storage scheme to support existing graph algorithms easily. Furthermore, we
propose parallel update algorithms to support efficient stream updates so that the
maintained graph is immediately available for high-speed analytic processing on GPUs.
Our extensive experiments with three streaming applications on large-scale real and
synthetic datasets demonstrate the superior performance of our proposed approach.
1 Introduction
Due to the rising complexity of data generated in the big data era, graph representations are used ubiquitously. Massive graph processing has emerged as the de facto standard of analytics on web graphs, social networks (e.g., Facebook and Twitter), sensor
networks (e.g., Internet of Things) and many other application domains which involve
high-dimen-sional data (e.g., recommendation systems). These graphs are often highly
dynamic: network traffic data averages 109 packets/hour/router for large ISPs [23]; Twitter has 500 million tweets per day [40]. Since real-time analytics is fast becoming the norm
[26, 12, 35, 42], it is thus critical for operations on dynamic massive graphs to be processed
efficiently.
Dynamic graph analytics has a wide range of applications. Twitter can recommend information based on the up-to-date TunkRank (similar to PageRank) computed based on
a dynamic attention graph [14] and cellular network operators can fix traffic hotspots in
their networks as they are detected [27]. To achieve real-time performance, there is a growing interest to offload the graph analytics to GPUs due to its much stronger arithmetical
power and higher memory bandwidth compared with CPUs [43]. Although existing solutions, e.g. Medusa [57] and Gunrock [48], have explored GPU graph processing, we are
aware that the only one work [29] has considered a dynamic graph scenario which is a
major gap for running analytics on GPUs. In fact, a delay in updating a dynamic graph
may lead to undesirable consequences. For instance, consider an online travel insurance
system that detects potential frauds by running ring analysis on profile graphs built from
active insurance contracts [5]. Analytics on an outdated profile graph may fail to detect
frauds which can cost millions of dollars. However, updating the graph will be too slow
for issuing contracts and processing claims in real time, which will severely influence legitimate customers’ user experience. This motivates us to develop an update-efficient graph
structure on GPUs to support dynamic graph analytics.
There are two major concerns when designing a GPU-based dynamic graph storage scheme.
First, the proposed storage scheme should handle both insertion and deletion operations
efficiently. Though processing updates against insertion-only graph stream could be handled by reserving extra spaces to accommodate the updates, this naı̈ve approach fails to
preserve the locality of the graph entries and cannot support deletions efficiently. Considering a common sliding window model on a graph edge stream, each element in the stream
is an edge in a graph and analytic tasks are performed on the graph induced by all edges
in the up-to-date window [49, 15, 17]. A naı̈ve approach needs to access the entire graph in
1
the sliding window to process deletions. This is obviously undesirable against high-speed
streams. Second, the proposed storage scheme should be general enough for supporting
existing graph formats on GPUs so that we can easily reuse existing static GPU graph
processing solutions for graph analytics. Most large graphs are inherently sparse. To maximize the efficiency, existing works [6, 32, 31, 29, 51] on GPU sparse graph processing rely
on optimized data formats and arrange the graph entries in certain sorted order, e.g. CSR
[32, 6] sorts the entries by their row-column ids. However, to the best of our knowledge,
no schemes on GPUs can support efficient updates and maintain a sorted graph format
at the same time, other than a rebuild. This motivates us to design an update-efficient
sparse graph storage scheme on GPUs while keeping the locality of the graph entries for
processing massive analytics instantly.
In this paper, we introduce a GPU-based dynamic graph analytic framework followed by
proposing the dynamic graph storage scheme on GPUs. Our preliminary study shows that
a cache-oblivious data structure, i.e., Packed Memory Array (PMA [10, 11]), can potentially
be employed for maintaining dynamic graphs on GPUs. PMA, originally designed for
CPUs [10, 11], maintains sorted elements in a partially contiguous fashion by leaving gaps
to accommodate fast updates with a constant bounded gap ratio. The simultaneously
sorted and contiguous characteristic of PMA nicely fits the scenario of GPU streaming
graph maintenance. However, the performance of PMA degrades when updates occur in
locations which are close to each other, due to the unbalanced utilization of reserved spaces.
Furthermore, as streaming updates often come in batches rather than one single update
at a time, PMA does not support parallel insertions and it is non-trivial to apply PMA to
GPUs due to its intricate update patterns which may cause serious thread divergence and
uncoalesced memory access issues on GPUs.
We thus propose two GPU-oriented algorithms, i.e. GPMA and GPMA+, to support efficient
parallel batch updates. GPMA explores a lock-based approach which becomes increasingly
popular due to the recent GPU architectural evolution for supporting atomic operations
[18, 28]. While GPMA works efficiently for the case where few concurrent updates conflict, e.g., small-size update batches with random updating edges in each batch, there are
scenarios where massive conflicts occur and hence, we propose a lock-free approach, i.e.
GPMA+. Intuitively, GPMA+ is a bottom-up approach by prioritizing updates that occur
in similar positions. The update optimizations of our proposed GPMA+ are able to maximize coalesced memory access and achieve linear performance scaling w.r.t the number of
computation units on GPUs, regardless of the update patterns.
In summary, the key contributions of this paper are the following:
• We introduce a framework for GPU dynamic graph analytics and propose, the first of
its kind, a GPU dynamic graph storage scheme to pave the way for real-time dynamic
graph analytics on GPUs.
• We devise two GPU-oriented parallel algorithms: GPMA and GPMA+, to support efficient
updates against high-speed graph streams.
• We conduct extensive experiments to show the performance superiority of GPMA and
GPMA+. In particular, we design different update patterns on real and synthetic graph
streams to validate the update efficiency of our proposed algorithms against their CPU
counterparts as well as the GPU rebuild baseline. In addition, we implement three real
world graph analytic applications on the graph streams to demonstrate the efficiency
and broad applicability of our proposed solutions. In order to support larger graphs, we
extend our proposed formats to multiple GPUs and demonstrate the scalability of our
2
approach with multi-GPU systems.
The remainder of this paper is organized as follows. The related work is discussed in
Section 2. Section 3 presents a general workflow of dynamic graph processing on GPUs.
Subsequently, we describe GPMA and GPMA+ in Sections 4-5 respectively. Section 6 reports
results of a comprehensive experimental evaluation. We conclude the paper and discuss
some future works in Section 7.
2
Related Work
In this section, we review related works in three different categories as follows.
2.1
Graph Stream Processing
Over the last decade, there has been an immense interest in designing efficient algorithms
for processing massive graphs in the data stream model (see [35] for a detailed survey).
This includes the problems of PageRank-styled scores [38], connectivity [21], spanners
[20], counting subgraphs e.g. triangles [46] and summarization [44]. However, these works
mainly focus on the theoretical study to achieve the best approximation solution with linear
bounded space. Our proposed methods can incorporate existing graph stream algorithms
with ease as our storage scheme can support most graph representations used in existing
algorithms.
Many systems have been proposed for streaming data processing, e.g. Storm [45], Spark
Streaming [54], Flink [1]. Attracted by its massively parallel performance, several attempts have successfully demonstrated the advantages of using GPUs to accelerate data
stream processing [47, 56]. However, the aforementioned systems focus on general stream
processing and lack support for graph stream processing. Stinger [19] is a parallel solution
to support dynamic graph analytics on a single machine. More recently, Kineograph [14],
CellIQ [27] and GraphTau [26] are proposed to address the need for general time-evolving
graph processing under the distributed settings. However, to our best knowledge, existing
works focusing on CPU-based time-evolving graph processing will be inefficient on GPUs,
because CPU and GPU are two architectures with different design principles and performance concerns in the parallel execution. We are aware that only one work [29] explores
the direction of using GPUs to process real-time analytics on dynamic graphs. However,
it only supports insertions and lacks an efficient indexing mechanism.
2.2
Graph Analytics on GPUs
Graph analytic processing is inherently data- and compute-intensive. Massively parallel GPU accelerators are powerful to achieve supreme performance of many applications.
Compared with CPU, which is a general-purpose processor featuring large cache size and
high single core processing capability, GPU devotes most of its die area to a large number
of simple Arithmetic Logic Units (ALUs), and executes code in a SIMT (Single Instruction Multiple Threads) fashion. With the massive amount of ALUs, GPU offers orders of
magnitude higher computational throughput than CPU in applications with ample parallelism. This leads to a spectrum of works which explore the usage of GPUs to accelerate
graph analytics and demonstrate immense potentials. Examples include breadth-first search
Figure 1: The dynamic graph analytic framework
(BFS) [32], subgraph query [31], PageRank [6] and many others. The success of deploying
specific graph algorithms on GPUs motivates the design of general GPU graph processing
systems like Medusa [57] and Gunrock [48]. However, the aforementioned GPU-oriented
graph algorithms and systems assume static graphs. To handle dynamic graph scenario,
existing works have to perform a rebuild on GPUs against each single update. DCSR [29]
is the only solution, to the best of our knowledge, which is designed for insertion-only
scenarios as it is based on linked edge block and rear appending technique. However, it
does not support deletions or efficient searches. We propose GPMA to enable efficient dynamic graph updates (i.e. insertions and deletions) on GPUs in a fine-grained manner. In
addition, existing GPU-optimized graph analytics and systems can replace their storage
layers directly with ease since the fundamental graph storage schemes used in existing
works can be directly implemented on top of our proposed storage scheme.
2.3
Storage Formats on GPUs
Sparse matrix representation is a popular choice for storing large graphs on GPUs [3, 2, 57,
48] The Coordinate Format [16] (COO) is the simplest format which only stores non-zero
matrix entries by their coordinates with values. COO sorts all the non-zero entries by the
entries’ row-column key for fast entry accesses. CSR [32, 6] compresses COO’s row indices
into an offset array to reduce the memory bandwidth when accessing the sparse matrix. To
optimize matrices with different non-zero distribution patterns, many customized storage
formats were proposed, e.g., Block COO [50] (BCCOO), Blocked Row-Column [7] (BRC)
and Tiled COO [52] (TCOO). Existing formats require to maintain a certain sorted order
of their storage base units according to the unit’s position in the matrix, e.g. entries for
COO and blocks for BCCOO, and still ensure the locality of the units. As mentioned
previously, few prior schemes can handle efficient sparse matrix updates on GPUs. To
the best of our knowledge, PMA [10, 11] is a common structure which maintains a sorted
array in a contiguous manner and supports efficient insertions/deletions. However, PMA is
designed for CPU and no concurrent updating algorithm is ever proposed. Thus, we are
motivated to propose GPMA and GPMA+ for supporting efficient concurrent updates on all
existing storage formats.
Figure 2: Asynchronous streams
3 A dynamic framework on GPUs
To address the need for real-time dynamic graph analytics, we offload the tasks of concurrent dynamic graph maintenance and its corresponding analytic processing to GPUs. In
this section, we introduce a general GPU dynamic graph analytic framework. The design
of the framework takes into account two major concerns: the framework should not only
handle graph updates efficiently but also support existing GPU-oriented graph analytic
algorithms without forfeiting their performance.
Model: We adopt a common sliding window graph stream model [35, 27, 44]. The sliding
window model consists of an unbounded sequence of elements (u, v)t 1 which indicates
the edge (u, v) arrives at time t, and a sliding window which keeps track of the most
recent edges. As the sliding window moves with time, new edges in the stream keep being
inserted into the window and expiring edges are deleted. In real world applications, the
sliding window of a graph stream can be used to monitor and analyze fresh social actions
that appearing on Twitter [49] or the call graph formed by the most recent CDR data
[27]. In this paper, we focus on presenting how to handle edge streams but our proposed
scheme can also handle the dynamic hyper graph scenario with hyper edge streams.
Apart from the sliding window model, the graph stream model which involves explicit
insertions and deletions (e.g., a user requests to add or delete a friend in the social network) is also supported by our scheme as the proposed dynamic graph storage structure
is designed to handle random update operations. That is, our system supports two kinds
of updates, implicit ones generated from the sliding window mechanism and explicit ones
generated from upper level applications or users.
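As a host-side illustration of this model, the following hedged Python sketch turns a timestamped edge stream into update batches in which edges leaving the sliding window become implicit deletions; all names (sliding_window_batches, window_size, batch_size) are our own illustrative choices and this is not part of the proposed GPU data structure itself.

```python
from collections import deque

def sliding_window_batches(edge_stream, window_size, batch_size):
    """Convert an edge stream into update batches: every arriving edge is an
    insertion, and edges that fall out of the window become deletions."""
    window, batch = deque(), []          # window holds (timestamp, u, v), oldest first
    for t, (u, v) in enumerate(edge_stream):
        batch.append(("insert", u, v))
        window.append((t, u, v))
        while window and window[0][0] <= t - window_size:
            _, du, dv = window.popleft()
            batch.append(("delete", du, dv))
        if len(batch) >= batch_size:
            yield batch
            batch = []
    if batch:
        yield batch
```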
The overview of the dynamic graph analytic framework is presented in Figure 1. Given
a graph stream, there are two types of streaming tasks supported by our framework.
The first type is the ad-hoc queries such as neighborhood and reachability queries on the
graph which is constantly changing. The second type is the monitoring tasks like tracking
PageRank scores. We present the framework by illustrating how to handle the graph
streams and the corresponding queries while hiding data transfer between CPU and GPU,
as follows:
Graph Streams: The graph stream buffer module batches the incoming graph streams
on the CPU side (host) and periodically sends the updating batches to the graph update
module located on GPU (device). The graph update module updates the “active” graph
stored on the device by using the batch received. The “active” graph is stored in the format
of our proposed GPU dynamic graph storage structure. The details of the graph storage structure and how to update the graph efficiently on GPUs will be discussed extensively in later sections.
¹ Our framework handles both directed and undirected edges.
Figure 3: PMA insertion example (Left: PMA for insertion; Right: predefined thresholds). The predefined thresholds are:
Height    segment size    density lower bound ρ    density upper bound τ    min # of entries    max # of entries
Leaf           4                 0.08                    0.92                      1                   3
Level 1        8                 0.19                    0.88                      2                   6
Level 2       16                 0.29                    0.84                      4                  12
Level 3       32                 0.40                    0.80                      8                  24
Queries: Like the graph stream buffer, the dynamic query buffer module batches ad-hoc queries submitted against the stored active graph, e.g., queries to check the dynamic
reachability between pairs of vertices. The tracking tasks will also be registered in the
continuous monitoring module, e.g., tracking up-to-date PageRank. All ad-hoc queries
and monitoring tasks will be transferred to the graph analytic module for GPU accelerated
processing. The analytic module interacts with the active graph to process the queries and
the tracking tasks. Subsequently, the query results will be transferred back to the host. As
most existing GPU graph algorithms use optimized array formats like CSR to accelerate
the performance [18, 28, 34, 52], our proposed storage scheme provides an interface for
storing the array formats. In this way, existing algorithms can be integrated into the
analytic module with ease. We describe the details of the integration in Section 4.2.
Hiding Costly PCIe Transfer: Another critical issue on designing GPU-oriented systems is to minimize the data transfer between the host and the device through PCIe. Our
proposed batching approach allows overlapping data transfer by concurrently running analytic tasks on the device. Figure 2 shows a simplified schedule with two asynchronous
streams: graph streams and query streams respectively. The system is initialized at Step 1
where the batch containing incoming graph stream elements is sent to the device. At Step
2, while PCIe handles bidirectional data transfer for previous query results (device to host)
and freshly submitted query batch (host to device), the graph update module updates the
active graph stored on the device. At Step 3, the analytic module processes the received
query batch on the device and a new graph stream batch is concurrently transferred from
the host to the device. It is clear that, by repeating the aforementioned process, all
data transfers are overlapped with concurrent device computations.
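The following is a minimal CUDA sketch of this double-buffered overlap (the process_batch kernel, the buffer sizes and the use of exactly two streams are illustrative assumptions, not our framework's actual interface). Within one stream the copy and the kernel are ordered, while across the two streams the copy engine and the compute engine run concurrently:

#include <cuda_runtime.h>

// Placeholder kernel standing in for the graph update / analytic work on a batch.
__global__ void process_batch(const int* batch, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) { (void)batch[i]; /* real work would go here */ }
}

int main() {
    const int n = 1 << 20;
    int* h_batch[2];
    int* d_batch[2];
    cudaStream_t stream[2];
    for (int s = 0; s < 2; ++s) {
        cudaMallocHost(&h_batch[s], n * sizeof(int));   // pinned memory enables async copies
        cudaMalloc(&d_batch[s], n * sizeof(int));
        cudaStreamCreate(&stream[s]);
    }
    for (int step = 0; step < 8; ++step) {
        int s = step % 2;
        // The copy of batch s overlaps with the kernel still running on stream 1 - s.
        cudaMemcpyAsync(d_batch[s], h_batch[s], n * sizeof(int),
                        cudaMemcpyHostToDevice, stream[s]);
        process_batch<<<(n + 255) / 256, 256, 0, stream[s]>>>(d_batch[s], n);
    }
    cudaDeviceSynchronize();
    for (int s = 0; s < 2; ++s) {
        cudaFree(d_batch[s]);
        cudaFreeHost(h_batch[s]);
        cudaStreamDestroy(stream[s]);
    }
    return 0;
}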
4 GPMA Dynamic Graph Processing
To support dynamic graph analytics on GPUs, there are two major challenges discussed
in the introduction. The first challenge is to maintain the dynamic graph storage in the
device memory of GPUs for efficient update as well as compute. The second challenge is that the storage strategy should be compatible with existing graph analytic algorithms on GPUs.
In this section, we discuss how to address the challenges with our proposed scheme. First,
we introduce GPMA for GPU resident graph storage to simultaneously achieve update and
compute efficiency (Section 4.1). Subsequently, we illustrate GPMA’s generality in terms of
deploying existing GPU based graph analytic algorithms (Section 4.2).
4.1 GPMA Graph Storage on GPUs
In this subsection, we first discuss the design principles our proposed dynamic graph
storage should follow. Then we introduce how to implement our proposal.
Design Principles. The proposed graph storage on GPUs should take into account the
following principles:
• The proposed dynamic graph storage should efficiently support a broad range of updating operations, including insertions, deletions and modifications. Furthermore, it should
have a good locality to accommodate the highly parallel memory access characteristic
of GPUs, in order to achieve high memory efficiency.
• The physical storage strategy should support common logical storage formats and the
existing graph analytic solutions on GPUs based on such formats can be adapted easily.
Background of PMA. GPMA is primarily motivated by a novel structure, Packed Memory Array (PMA [10, 11]), which is proposed to maintain sorted elements in a partially
continuous fashion by leaving gaps to accommodate fast updates with a bounded gap
ratio. PMA is a self-balancing binary tree structure. Given an array of N entries, PMA
separates the whole memory space into leaf segments with O(log N ) length and defines
non-leaf segments as the space occupied by their descendant segments. For any segment
located at height i (leaf height is 0), PMA assigns lower and upper bound density thresholds ρ_i and τ_i to the segment so as to achieve O(log² N) amortized update complexity. Once an insertion/deletion causes the density of a segment to fall out of the range defined by (ρ_i, τ_i), PMA tries to adjust the density by re-allocating
all elements stored in the segment’s parent. The adjustment process is invoked recursively
and will only be terminated if all segments’ densities fall back into the range defined by
PMA’s density thresholds. For an ordered array, modifications are trivial. Therefore, we
mainly discuss insertions because deletions are the dual operation of insertions in PMA.
Example 1. Figure 3 presents an example of a PMA insertion. Each segment is uniquely identified by an interval (starting and ending position of the array) displayed in the corresponding tree node, e.g., the root segment is segment-[0,31] as it covers all 32 spaces. All values stored in the PMA are displayed in the array. The table in the figure shows the predefined parameters, including the segment size, the assignment of density thresholds (ρ_i, τ_i) and the corresponding minimum and maximum entry counts at different heights of the tree. We use these setups as a running example throughout the paper. To insert an entry, i.e., 48, into the PMA, the corresponding leaf segment is first identified by a binary search, and the new entry is placed at the rear of the leaf segment. The insertion brings the number of entries in the leaf segment to 4, exceeding the threshold of 3. Thus, we need to identify the nearest ancestor segment which can accommodate the insertion without violating the thresholds, i.e., segment-[16,31]. Finally, the insertion is completed by re-dispatching all entries in segment-[16,31] evenly.
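For illustration, the following compact sequential sketch (plain C++ with a fixed leaf size, a linear search instead of binary search, and simplified density thresholds loosely mirroring Figure 3) captures the insert-then-rebalance behaviour described above; it models the mechanism only and is not the PMA implementation of [10, 11]:

#include <algorithm>
#include <climits>
#include <cstddef>
#include <vector>

// A minimal sequential PMA-style sorted array with gaps (illustrative only).
class SimplePMA {
public:
    SimplePMA() : data_(kLeaf, kEmpty) {}

    void insert(int key) {
        // 1. Locate the slot of the greatest stored key not larger than the new key.
        std::size_t pos = 0;
        for (std::size_t i = 0; i < data_.size(); ++i)
            if (data_[i] != kEmpty && data_[i] <= key) pos = i;
        std::size_t seg_begin = (pos / kLeaf) * kLeaf;
        std::size_t seg_len = kLeaf;
        int level = 0;

        // 2. Climb until a segment can absorb the key within its density threshold.
        while (count(seg_begin, seg_len) + 1 > max_entries(seg_len, level)) {
            if (seg_len == data_.size()) {                  // even the root is too dense
                data_.resize(data_.size() * 2, kEmpty);     // double the space of the root
                seg_begin = 0;
                seg_len = data_.size();
                break;
            }
            seg_len *= 2;                                   // move to the parent segment
            seg_begin = (seg_begin / seg_len) * seg_len;
            ++level;
        }

        // 3. Gather the segment's keys plus the new one and re-dispatch them evenly.
        std::vector<int> keys;
        for (std::size_t i = seg_begin; i < seg_begin + seg_len; ++i)
            if (data_[i] != kEmpty) keys.push_back(data_[i]);
        keys.insert(std::upper_bound(keys.begin(), keys.end(), key), key);
        std::fill(data_.begin() + seg_begin, data_.begin() + seg_begin + seg_len, kEmpty);
        double step = double(seg_len) / double(keys.size());
        for (std::size_t k = 0; k < keys.size(); ++k)
            data_[seg_begin + std::size_t(double(k) * step)] = keys[k];
    }

private:
    static constexpr int kEmpty = INT_MIN;                  // sentinel marking a gap
    static constexpr std::size_t kLeaf = 4;                 // fixed leaf segment size

    std::size_t count(std::size_t b, std::size_t len) const {
        std::size_t c = 0;
        for (std::size_t i = b; i < b + len; ++i) c += (data_[i] != kEmpty);
        return c;
    }
    // Maximum number of entries a segment of the given length may hold; the
    // upper density threshold shrinks as we move towards the root (illustrative values).
    std::size_t max_entries(std::size_t len, int level) const {
        double tau = std::max(0.70, 0.92 - 0.04 * level);
        return std::size_t(tau * double(len));
    }

    std::vector<int> data_;
};

Repeatedly calling insert on this sketch reproduces the behaviour of Example 1: once a leaf exceeds its threshold, the smallest enclosing segment that can absorb the new key is rebalanced.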
Lemma 1 ([10, 11]). The amortized update complexity of PMA is proved to be O(log² N) in the worst case and O(log N) in the average case.
It is evident that PMA could be employed for dynamic graph maintenance as it maintains
sorted elements efficiently with high locality on CPU. However, the update procedure described in [11] is inherently sequential and no concurrent algorithms have been proposed.
Figure 4: GPMA concurrent insertions (concurrent threads from a thread pool take the insertions {1, 4, 9, 35, 48} from the insertion buffer; segments are marked balanced, unbalanced, rebalanced, or trylock failed as the insertions proceed).
To support batch updates of edge insertions and deletions for efficient graph stream analytic processing, we devise GPMA to support concurrent PMA updates on GPUs. Note that
we focus on the insertion process for a concise presentation because the deletion process
is a dual process w.r.t. the insertion process in PMA.
Concurrent Insertions in GPMA. Motivated by PMA on CPUs, we propose GPMA to handle a batch of insertions concurrently on GPUs. Intuitively, GPMA assigns each insertion to a thread and concurrently executes the PMA algorithm for each thread with a lock-based
approach to ensure consistency. More specifically, all leaf segments of insertions are identified in advance, and then each thread checks whether the inserted segments still satisfy
their thresholds from bottom to top. For each particular segment, it is accessed in a mutually exclusive fashion. Moreover, all threads are synchronized after updating all segments
located at the same tree height to avoid possible conflicts as segments at a lower height
are fully contained in the segments at a higher level.
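As a rough host-side model of this locking discipline (one mutex per segment, segments indexed as a binary heap with the root at index 1; the density check and the merge are stubbed out, and the per-level synchronization of the GPU version is omitted), a single insertion attempt can be sketched as follows:

#include <cstddef>
#include <mutex>
#include <vector>

// Rough CPU model of GPMA's per-segment locking; density_ok and merge_into
// stand in for the real storage logic and are placeholders only.
struct SegmentTree {
    std::vector<std::mutex> locks;                                               // one lock per segment
    bool density_ok(std::size_t /*seg*/, int /*extra*/) const { return true; }   // placeholder
    void merge_into(std::size_t /*seg*/, int /*key*/) {}                         // placeholder
};

// One bottom-up attempt for a single insertion; returns false if the thread
// lost a lock competition or no ancestor accepted the key, in which case the
// caller retries in the next iteration (or grows the array at the root).
bool try_insert_once(SegmentTree& t, std::size_t leaf_seg, int key) {
    for (std::size_t seg = leaf_seg; seg >= 1; seg /= 2) {   // climb towards the root
        if (!t.locks[seg].try_lock())                        // lost the competition: abort
            return false;
        if (t.density_ok(seg, 1)) {
            t.merge_into(seg, key);                          // this segment absorbs the key
            t.locks[seg].unlock();
            return true;
        }
        t.locks[seg].unlock();                               // too dense: try the parent
    }
    return false;                                            // even the root was too dense
}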
Algorithm 1 presents the pseudocode for GPMA concurrent insertions. We highlight the
lines added to the original PMA update algorithm in order to achieve concurrent update
of GPMA. As shown in line 2, all entries in the insertion set are iteratively tried until all
of them take effect. For each iteration shown in line 9, all threads start at leaf segments
and attempt the insertions in a bottom-up fashion. If a particular thread fails the mutex
competition in line 11, it aborts immediately and waits for the next attempt. Otherwise,
it inspects the density of the current segment. If the current segment does not satisfy the
density requirement, it will try the parent segment in the next loop iteration (lines 13-14).
Once an ancestor segment is able to accommodate the insertion, it merges the new entry
in line 16 and the entry is removed from the insertion set. Subsequently, the updated
segment will re-dispatch all its entries evenly and the process is terminated.
Example 2. Figure 4 illustrates an example with five insertions, i.e., {1, 4, 9, 35, 48}, for concurrent GPMA insertion. The initial structure is the same as in Example 1. After identifying the leaf segments for insertion, the threads responsible for Insertion-1 and Insertion-4 compete for the same leaf segment. Assuming Insertion-1 succeeds in getting the mutex, Insertion-4 is aborted. Since the segment has enough free space, Insertion-1 is successfully inserted. Even though there is no leaf segment competition for Insertions-9,35,48, they must continue to inspect the corresponding parent segments because none of their leaf segments satisfies the density requirement after the insertions. Insertions-35,48 still compete for the same level-1 segment and Insertion-48 wins. For this example, three of the insertions are successful and the results are shown at the bottom of Figure 4. Insertions-4,35 are aborted in this iteration and will wait for the next attempt.
Algorithm 1 GPMA Concurrent Insertion
1:  procedure GPMAInsert(Insertions I)
2:    while I is not empty do
3:      parallel for i in I
4:        Seg s ← BinarySearchLeafSegment(i)
5:        TryInsert(s, i, I)
6:      synchronize
7:      release locks on all segments
8:  function TryInsert(Seg s, Insertion i, Insertions I)
9:    while s ≠ root do
10:     synchronize
11:     if fails to lock s then
12:       return                              ▷ insertion aborts
13:     if (|s| + 1)/capacity(s) < τ then
14:       s ← parent segment of s
15:     else
16:       Merge(s, i)
17:       re-dispatch entries in s evenly
18:       remove i from I
19:       return                              ▷ insertion succeeds
20:   double the space of the root segment
4.2 Adapting Graph Algorithms to GPMA
Existing graph algorithms often use sparse matrix formats to store the graph entries since most large graphs are naturally sparse [5]. Although many different sparse storage formats have been proposed, most of the formats assume a specific order to organize the non-zero entries. These formats enforce the order of the graph entries to optimize their specific access patterns, e.g., row-oriented (COO²), diagonal-oriented (JAD), and block-/tile-based
(BCCOO, BRC and TCOO). It is natural that the ordered graph entries can be projected
into an array and these similar formats can be supported by GPMA easily. Among all
formats, we choose CSR as an example to illustrate how to adapt the format to GPMA.
CSR as a case study. CSR is the format most widely used by existing algorithms on sparse matrices or graphs. CSR compresses COO's row indices into an offset array, which reduces the memory bandwidth needed when accessing the sparse matrix, and achieves a better
workload estimation for skewed graph distribution (e.g., power-law distribution). The
following example demonstrates how to implement CSR on GPMA.
Example 3. In Figure 5, we have a graph of three vertices and six edges. The number on each edge denotes the weight of the corresponding edge. The graph is represented as a sparse matrix and is further transformed into the CSR format shown in the upper right. CSR sorts all non-zero entries in row-oriented order, and compresses the row indices into intervals stored as a row offset array. The lower part denotes the GPMA representation of this
² Generally, COO means ordered COO and it can also be column-oriented.
Algorithm 2 Breadth-First Search
1:  procedure BFS(Graph G, Vertex s)
2:    for each vertex u ∈ G.V − {s} do
3:      u.visited ← false
4:    Q ← ∅
5:    s.visited ← true
6:    ENQUEUE(Q, s)
7:    while Q ≠ ∅ do
8:      u ← DEQUEUE(Q)
9:      for each v ∈ G.Adj[u] do
10:       if IsEntryExist(v) then
11:         if v.visited = false then
12:           v.visited ← true
13:           ENQUEUE(Q, v)

Algorithm 3 GPU-based BFS Neighbour Gathering
1:  procedure Gather(Vertex frontier, Int csrOffset)
2:    {r, rEnd} ← csrOffset[frontier, frontier + 1]
3:    for (i ← r + threadId; i < rEnd; i += threadNum) do
4:      if IsEntryExist(i) then ParallelGather(i)
graph. In order to maintain the row offset array without synchronization among threads, we add a guard entry whose column index is ∞ during concurrent insertions. That is to say, when a guard is moved, the corresponding element in the row offset array is changed accordingly.
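The following host-side sketch (with hypothetical names Entry and kGuard) illustrates the guard-entry idea: scanning the interleaved entries for the per-row guards is enough to recover the CSR row-offset array, which for the graph of Figure 5 yields [0, 2, 3, 6]. In the actual structure the offsets are maintained incrementally as guards move, rather than recomputed by a scan:

#include <cstddef>
#include <cstdint>
#include <vector>

// The interleaved entry list stores, for every row, a trailing guard whose
// column index plays the role of "infinity"; the positions of the guards
// determine the CSR row offsets.
constexpr uint32_t kGuard = UINT32_MAX;

struct Entry { uint32_t row, col; };

std::vector<std::size_t> row_offsets(const std::vector<Entry>& entries, uint32_t num_rows) {
    std::vector<std::size_t> offsets(num_rows + 1, 0);
    std::size_t logical_pos = 0;                 // counts only real (non-guard) entries
    for (const Entry& e : entries) {
        if (e.col == kGuard)
            offsets[e.row + 1] = logical_pos;    // the guard of row r closes rows [0, r]
        else
            ++logical_pos;
    }
    return offsets;                              // e.g. {0, 2, 3, 6} for the graph in Figure 5
}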
Given a graph stored on GPMA, the next step is to adapt existing graph algorithms to
GPMA. In particular, how existing algorithms access the graph entries stored on GPMA
is of vital importance. As for the CSR example, most algorithms access the entries by
navigating through CSR's ordered array [18, 28, 34, 52]. We note that a CSR stored on
GPMA is also an array which has bounded gaps interleaved with the graph entries. Thus,
we are able to efficiently replace the operations of arrays with the operations of GPMA. We
will demonstrate how we can do this replacement as follows.
Algorithm 2 illustrates the pseudocode of the classic BFS algorithm. Attention should be paid to line 10, which is highlighted. Compared with a raw adjacency list, applications based on GPMA need to guarantee that the vertex currently being traversed is a valid neighbour rather than an invalid slot in one of GPMA's gaps.
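A minimal sketch of this validity check (kEmptySlot is an illustrative sentinel for a gap; the real structure stores full entries rather than bare column indices) is:

#include <cstddef>
#include <cstdint>
#include <vector>

// The column array now contains gap slots, so a neighbour is only visited if
// its slot holds a real entry.
constexpr uint32_t kEmptySlot = UINT32_MAX;

inline bool IsEntryExist(uint32_t slot) { return slot != kEmptySlot; }

void visit_neighbours(const std::vector<uint32_t>& columns,
                      std::size_t row_begin, std::size_t row_end,
                      std::vector<bool>& visited, std::vector<uint32_t>& next_frontier) {
    for (std::size_t i = row_begin; i < row_end; ++i) {
        if (!IsEntryExist(columns[i])) continue;   // skip the gaps introduced by GPMA
        uint32_t v = columns[i];
        if (!visited[v]) { visited[v] = true; next_frontier.push_back(v); }
    }
}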
Algorithm 2 provides a high-level view of the GPMA adaptation. Furthermore, we present how GPMA is adapted to the parallel GPU environment with some low-level details. Algorithm 3 is the pseudocode of the Neighbour Gathering parallel procedure, which is a general primitive for most GPU-based vertex-centric graph processing models [36, 18, 22]. This primitive plays a role similar to line 10 of Algorithm 2, but accesses the neighbours of a particular vertex in a parallel fashion. When traversing all neighbours of the frontiers, Neighbour Gathering follows the SIMT manner, which means that a group of threadNum threads is assigned to one vertex frontier and the procedure in Algorithm 3 is executed in parallel.
Figure 5: GPMA based on CSR (the example graph with three vertices and six edges is stored as Row Offset [0 2 3 6], Column Index [0 2 2 0 1 2], Value [1 2 3 4 5 6]; the GPMA layout interleaves the entries (0,0) (0,2) (0,∞) (1,2) (1,∞) (2,0) (2,1) (2,2) (2,∞) with gaps, where the ∞-column guard entries mark the row boundaries).
For the index range (in the CSR on GPMA) of the current frontier given by
csrOffset (shown in line 2), each thread will handle the corresponding tasks according to
its threadId. For GPU-based BFS, the visited labels of the neighbours of all frontiers are not checked immediately after the neighbours are accessed. Instead, the neighbours are first compacted into contiguous memory for higher memory efficiency.
Similarly, we can also carry out the entry existence check for other graph applications to adapt them to GPMA. To summarize, GPMA can be adapted to common graph analytic
applications which are implemented in different representation and execution models, including matrix-based (e.g., PageRank), vertex-centric (e.g., BFS) and edge-centric (e.g.,
Connected Component).
5 GPMA+: GPMA Optimization
Although GPMA can support concurrent graph updates on GPUs, the update algorithm is basically a lock-based approach and can suffer from serious performance issues when different threads compete for the same lock. In this section, we propose a lock-free approach, i.e., GPMA+, which makes full utilization of the GPU's massive multiprocessors. We carefully
examine the performance bottleneck of GPMA in Section 5.1. Based on the issues identified,
we propose GPMA+ for optimizing concurrent GPU updates with a lock-free approach in
Section 5.2.
5.1 Bottleneck Analysis
The following four critical performance issues are identified for GPMA:
• Uncoalesced Memory Accesses: Each thread has to traverse the tree from the root
segment to identify the corresponding leaf segment to be updated. For a group of GPU
threads which share the same memory controller (including access pipelines and caches),
memory accesses are uncoalesced and thus, cause additional IO overheads.
• Atomic Operations for Acquiring Lock: Each thread needs to acquire the lock
before it can perform the update. Frequently invoking atomic operations for acquiring
locks will bring huge overheads, especially for GPUs.
• Possible Thread Conflicts: When two threads conflict on a segment, one of them
has to abort and wait for the next attempt. In the case where the updates occur on
segments which are located proximately, GPMA will end up with low parallelism. As
most real world large graphs have the power law property, the effect of thread conflicts
can be exacerbated.
• Unpredictable Thread Workload: Workload balancing is another major concern
for optimizing concurrent algorithms [43]. The workload for each thread in GPMA is
unpredictable because: (1) It is impossible to obtain the last non-leaf segment traversed
by each thread in advance; (2) The result of lock competition is random. The unpredictable nature triggers the imbalanced workload issue for GPMA. In addition, threads are grouped as warps on GPUs. If a thread has a heavy workload, the remaining threads of the same warp are idle and cannot be re-scheduled.
Figure 6: GPMA+ concurrent insertions (best viewed in color; left: snapshots of GPMA+ in different rounds of this batch of insertions; right: the update segments, successful flags, update offsets and update keys after each round).
5.2 Lock-Free Segment-Oriented Updates
Based on the discussion above, we propose GPMA+ to lift all of the bottlenecks identified. The proposed GPMA+ does not rely on a lock mechanism and achieves high thread utilization
simultaneously. Existing graph algorithms can be adapted to GPMA+ in the same manner
as GPMA.
Compared with GPMA, which handles each update separately, GPMA+ concurrently processes updates based on the segments involved. It breaks the complex update pattern
into existing concurrent GPU primitives to achieve maximum parallelism. There are three
major components in the GPMA+ update algorithm:
(1) The updates are first sorted by their keys and then dispatched to GPU threads for
locating their corresponding leaf segments according to the sorted order.
(2) The updates belonging to the same leaf segment are grouped for processing and GPMA+
processes the updates level by level in a bottom-up manner.
(3) In any particular level, we leverage GPU primitives to invoke all computing resources
for segment updates.
We note that the issue of uncoalesced memory access in GPMA is resolved by component (1), as the updating threads are sorted in advance to achieve similar traversal paths.
Component (2) completely avoids the use of locks, which solves the problem of atomic
operations and thread conflicts. Finally, component (3) makes use of GPU primitives to
achieve workload balancing among all GPU threads.
We present the pseudocode for GPMA+’s segment-oriented insertion in the procedure
GpmaPlusInsertion of Algorithm 4. Note that, similar to Section 4 (GPMA), we focus
on presenting the insertions for GPMA+ and the deletions could be naturally inferred. The
inserting entries are first sorted by their keys in line 2 and the corresponding segments are
then identified in line 3. Given the update set U , GPMA+ processes updating segments level
by level in lines 4-15 until all updates are executed successfully (line 11). In each iteration, UniqueSegments in line 7 groups update entries belonging to the same segments into unique segments, i.e., S*, and produces the corresponding index set I for quick access to the update entries located in each segment of S*. As shown in lines 19-20, UniqueSegments only utilizes standard GPU primitives, i.e., RunLengthEncoding and ExclusiveScan. RunLengthEncoding compresses an input array by merging runs of an element into a
Algorithm 4 GPMA+ Segment-Oriented Insertion
1:  procedure GpmaPlusInsertion(Updates U)
2:    Sort(U)
3:    Segs S ← BinarySearchLeafSegments(U)
4:    while root segment is not reached do
5:      Indices I ← ∅
6:      Segs S* ← ∅
7:      (S*, I) ← UniqueSegments(S)
8:      parallel for s ∈ S*
9:        TryInsert+(s, I, U)
10:     if U = ∅ then
11:       return
12:     parallel for s ∈ S
13:       if s does not contain any update then
14:         remove s from S
15:       s ← parent segment of s
16:   r ← double the space of the old root segment
17:   TryInsert+(r, ∅, U)
18: function UniqueSegments(Segs S)
19:   (S*, Counts) ← RunLengthEncoding(S)
20:   Indices I ← ExclusiveScan(Counts)
21:   return (S*, I)
22: function TryInsert+(Seg s, Indices I, Updates U)
23:   ns ← CountSegment(s)
24:   Us ← CountUpdatesInSegment(s, I, U)
25:   if (ns + |Us|)/capacity(s) < τ then
26:     Merge(s, Us)
27:     re-dispatch entries in s evenly
28:     remove Us from U
single element. It also outputs a count array denoting the length of each run. ExclusiveScan calculates, for each entry e in an array, the sum of all entries before e. Both primitives
have very efficient parallelized GPU-based implementations which make full utilization of the massive number of GPU cores. In our implementation, we use the NVIDIA CUB library [4] for
these primitives. Given a set of unique updating segments, TryInsert+ first checks if a
segment s has enough space for accommodating the updates by summing the valid entries
in s (CountSegment) and the number of updates in s (CountUpdatesInSegment).
If the density threshold is satisfied, the updates will be materialized by merging the inserting entries with existing entries in the segment (as shown in line 26). Subsequently, all
entries in the segment will be re-dispatched to balance the densities. After TryInsert+,
the algorithm will terminate if there are no entries to be updated. Otherwise, GPMA+
will advance to higher levels by setting all remaining segments to their parent segments
(lines 12-15). The following example illustrates GPMA+’s segment-oriented updates.
Example 4. Figure 6 illustrates an example for GPMA+ insertions with the same setup as
in Example 2. The left part shows GPMA+'s snapshots in different rounds during this batch of insertions. The right part denotes the corresponding array information after the execution of each round. The five insertions are grouped into four corresponding leaf segments (denoted in different colours).
For the first iteration at the leaf level, Insertions-1,4 of the first segment (denoted as red) are merged into the corresponding leaf segment; its success flag is then marked and it will not be considered in the next round. The remaining insertions fail in this iteration and their corresponding segments are upgraded to their parent segments. It should be noted that the blue and the green grids belong to the same parent segment and will therefore be merged and then dispatched to their shared parent segment (as shown in Round 1). In this round, neither of the two segments (denoted as yellow and blue) can satisfy the density threshold, so their successful flags are not set. In Round 2, both update segments can absorb the corresponding insertions and no update segments will be considered in the next round since all of them are flagged.
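The following sequential host-side model of UniqueSegments (lines 18-21 of Algorithm 4) spells out what the two primitives compute; on the GPU both loops are replaced by the corresponding CUB primitives:

#include <cstddef>
#include <vector>

// Input: the (sorted) leaf segment id of every update. Output: the unique
// segments and, via an exclusive scan over the run lengths, the index of the
// first update belonging to each unique segment.
void unique_segments(const std::vector<std::size_t>& segs_of_updates,
                     std::vector<std::size_t>& unique_segs,
                     std::vector<std::size_t>& first_update_index) {
    std::vector<std::size_t> counts;
    for (std::size_t i = 0; i < segs_of_updates.size(); ++i) {      // run-length encoding
        if (i == 0 || segs_of_updates[i] != segs_of_updates[i - 1]) {
            unique_segs.push_back(segs_of_updates[i]);
            counts.push_back(1);
        } else {
            ++counts.back();
        }
    }
    std::size_t running = 0;
    for (std::size_t c : counts) {                                   // exclusive scan
        first_update_index.push_back(running);
        running += c;
    }
}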
In Algorithm 4, TryInsert+ is the most important function as it handles all the corresponding insertions with no conflicts. Moreover, it achieves a balanced workload for
each concurrent task. This is because GPMA+ handles the updates level by level and each
segment to be updated in a particular level has exactly the same capacity. However, segments in different levels have different capacities. Intuitively, the probability of updating
a segment with a larger size (a segment closer to the root) is much lower than that of
a segment with a smaller size (a segment closer to the leaf). To optimize towards the
GPU architecture, we propose the following optimization strategies for TryInsert+ for
segments with different sizes.
• Warp-Based: For a segment with entries not larger than the warp size, the segment
will be handled by a warp. Since all threads in the same warp are tied together and
warp-based data is held by registers, updating a segment by a warp does not require
explicit synchronization and will obtain superior efficiency.
• Block-Based: For a segment of which the data can be loaded in GPU’s shared memory,
block-based approach is chosen. Block-based approach executes all updates in the shared
memory. As shared memory has much larger size than warp registers, block-based
approach can handle large segments efficiently.
• Device-Based: For a segment whose size is larger than the size of the shared memory, we handle it via global memory and rely on kernel synchronization. The device-based
approach is slower than the two approaches above, but it has much less restriction on
memory size (less than device memory amount) and is not invoked frequently.
We refer interested readers to Appendix A for the detailed algorithm of the optimizations
above.
Theorem 1. Given there are K computation units in the GPU, the amortized update performance of GPMA+ is O(1 + log² N / K), where N is the maximum number of edges in the dynamic graph.
Proof. Let X denote the set of updating entries contained in a batch. We consider the case where |X| ≥ K as it is rare to see |X| < K in real world scenarios. In fact, our analysis works for cases where |X| = O(K). The total update complexity consists of three parts: (1) sorting the updating entries; (2) searching the position of the entries in GPMA; (3) inserting the entries. We study these three parts separately.
Table 1: Experimented Graph Algorithms and the Compared Approaches
CPU Approaches:
  AdjLists, PMA [10, 11] — Standard Single Thread Algorithms (BFS, ConnectedComponent, PageRank)
  Stinger [19] — Stinger built-in Parallel Algorithms
GPU Approaches:
  cuSparseCSR [3], GPMA/GPMA+ — BFS: D. Merrill et al. [36]; ConnectedComponent: J. Soman et al. [41]; PageRank: CUSP SpMV [2]
For part (1), the sorting complexity of |X| entries on the GPU is O(|X|/K) since parallel radix sort is used (keys in GPMA are integers for storing edges). Then, the amortized sorting complexity is O(|X|/K)/|X| = O(1).
For part (2), the complexity of concurrently searching |X| entries on GPMA is O(|X| · log N / K) since each entry is assigned to one thread and the depth of traversal is the same for every thread (GPMA is a balanced tree). Thus, the amortized searching complexity is O(|X| · log N / K)/|X| = O(log N / K).
For part (3), we need to conduct a slightly more involved analysis. We denote the total insertion complexity of X with GPMA+ as c^X_GPMA+. As GPMA+ is updated level by level, c^X_GPMA+ can be decomposed into c^X_GPMA+ = c_0 + c_1 + ... + c_h, where h is the height of the PMA tree. Given any level i, let z_i denote the number of segments to be updated by GPMA+. Since all segments at level i have the same size, we denote p_i as the sequential complexity to update any segment s_{i,j} at level i (TryInsert+ in Algorithm 4). GPMA+ evenly distributes the computing resources to each segment. As processing each segment only requires a constant number of scans on the segment by GPU primitives, the complexity for GPMA+ to process level i is c_i = p_i · z_i / K. Thus we have:
c^X_GPMA+ = Σ_{i=0,..,h} p_i · z_i / K ≤ (1/K) Σ_{x∈X} c^x_PMA
where c^x_PMA is the sequential complexity for PMA to process the update of a particular entry x ∈ X. The inequality holds because each segment updated by GPMA+ must be updated at least once by a sequential PMA process. With Lemma 1, we have c^x_PMA = O(log² N) and thus c^X_GPMA+ = O(|X| · log² N / K). Then the amortized complexity to update one single entry under the GPMA+ scheme naturally follows as O(1 + log² N / K).
Finally, we conclude the proof by combining the complexities from all three parts.
Theorem 1 proves that the speedup of GPMA+ over sequential PMA is linear in the number of processing units available on GPUs, which showcases the theoretical scalability of
GPMA+.
6 Experimental Evaluation
In this section, we present the experimental evaluation of our proposed methods. First,
we present the setup of the experiments. Second, we examine the update costs of different schemes for maintaining dynamic graphs. Finally, we implement three different
applications to show the performance and the scalability of the proposed solutions.
Figure 7: Performance comparison for updates with different batch sizes on the Uniform Random, Graph500, Reddit and Pokec datasets (x-axis: sliding size; y-axis: time in ms; compared approaches: AdjLists, PMA, Stinger, cuSparseCSR, GPMA, GPMA+). The dashed lines represent CPU-based solutions whereas the solid lines represent GPU-based solutions.
Table 2: Statistics of Datasets
Datasets     |V|      |E|      |E|/|V|    |Es|     |Es|/|V|
Reddit       2.61M    34.4M    13.2       17.2M    6.6
Pokec        1.60M    30.6M    19.1       15.3M    9.6
Graph500     1.00M    200M     200        100M     100
Random       1.00M    200M     200        100M     100

6.1 Experimental Setup
Datasets. We collect two real world graphs (Reddit and Pokec) and synthesize two
random graphs (Random and Graph500) to test the proposed methods. The datasets are
described as follows and their statistics are summarized in Table 2.
• Reddit is an online forum where user actions include posts and comments. We collect all comment actions from a public resource³. Each comment of a user b to a post from
another user a is associated with an edge from a to b, and the edge indicates an action
of a has triggered an action of b. As each comment is labeled with a timestamp, it
naturally forms a dynamic influence graph.
• Pokec is the most popular online social network in Slovakia. We retrieve the dataset
from SNAP [30]. Unlike other online datasets, Pokec contains the whole network over
a span of more than 10 years. Each edge corresponds to a friendship between two users.
• Graph500 is a synthetic dataset obtained by using the Graph500 RMAT generator [37]
to synthesize a large power law graph.
• Random is a random graph generated by the Erdős–Rényi model. Specifically, given a graph with n vertices, the random graph is generated by including each edge with probability p. In our experiments, we generate an Erdős–Rényi random graph with 0.02% of the entries of a full clique being non-zero.
Stream Setup. In our datasets, Reddit has a timestamp on every edge whereas the other
datasets do not possess timestamps. As commonly used in existing graph stream algorithms [55, 53, 38], we randomly set the timestamps of all edges in the Pokec, Graph500
and Random datasets. Then, the graph stream of each dataset receives the edges with
increasing timestamps.
For each dataset, a dynamic graph stream is initialized with a subgraph consisting of
the dataset’s first half of its total edges according to the timestamps, i.e., Es in Table 2
denotes the initial edge set of a dynamic graph before the stream starts. To demonstrate
the update performance of both insertions and deletions, we adopt a sliding window setup
where the window contains a fixed number of edges. Whenever the window slides, we need
³ https://www.kaggle.com/reddit/reddit-comments-may-2015
to update the graph by deleting expired edges and inserting arrived edges until there are
no new edges left in the stream.
Applications. We conduct experiments on three most widely used graph applications to
showcase the applicability and the efficiency of GPMA+.
• BFS is a key graph operation which is extensively studied in previous works on GPU
graph processing [24, 33, 13]. It begins with a given vertex (or root) of an unweighted
graph and iteratively explores all connected vertices. The algorithm will assign a minimum distance away from the root vertex to every visited vertex after it terminates. In
the streaming scenario, after each graph update, we select a random root vertex and
perform BFS from the root to explore the entire graph.
• Connected Component is another fundamental algorithm which has been extensively
studied under both CPU [25] and GPU [41] environments. It partitions the graph in the
way that all vertices in a partition can reach the others in the same partition and cannot
reach vertices from other partitions. In the streaming context, after each graph update,
we run the ConnectedComponent algorithm to maintain the up-to-date partitions.
• PageRank is another popular benchmarking application for large scale graph processing. Power iteration method is a standard method to evaluate the PageRank where the
Sparse Matrix Vector Multiplication (SpMV) kernel is recursively executed between the
graph’s adjacency matrix and the PageRank vector. In the streaming scenario, whenever the graph is updated, the power iteration is invoked and it obtains the up-to-date
PageRank vector by operating on the updated graph adjacency matrix and the PageRank vector obtained in the previous iteration. In our experiments, we follow the standard setup by setting the damping factor to 0.85, and we terminate the power iteration once the 1-norm error is less than 10^-3 (a simplified sketch of this loop is given after this list).
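The following sequential sketch of the PageRank power iteration mirrors this setup (damping factor d = 0.85, stop once the 1-norm change is below 1e-3); the CSR arrays are assumed gap-free here and dangling vertices are skipped for brevity, while on GPMA+ the inner loop would additionally test each slot with the entry-existence check of Section 4.2:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

std::vector<double> pagerank(const std::vector<std::size_t>& row_offsets,
                             const std::vector<std::size_t>& columns,
                             std::size_t n, double d = 0.85, double eps = 1e-3) {
    std::vector<double> rank(n, 1.0 / n), next(n);
    double err = 1.0;
    while (err > eps) {
        std::fill(next.begin(), next.end(), (1.0 - d) / n);
        for (std::size_t u = 0; u < n; ++u) {                    // SpMV-like accumulation
            std::size_t deg = row_offsets[u + 1] - row_offsets[u];
            if (deg == 0) continue;                              // dangling vertex: ignored for brevity
            for (std::size_t i = row_offsets[u]; i < row_offsets[u + 1]; ++i)
                next[columns[i]] += d * rank[u] / double(deg);
        }
        err = 0.0;
        for (std::size_t u = 0; u < n; ++u) err += std::fabs(next[u] - rank[u]);
        rank.swap(next);                                         // 1-norm convergence check above
    }
    return rank;
}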
These three applications have different memory and computation requirements. BFS requires little computation but performs frequent random memory accesses, and PageRank
using SpMV accesses the memory sequentially and it is the most compute-intensive task
among all three applications.
Maintaining Dynamic Graph. We adopt the CSR [32, 6] format to represent the dynamic graph maintained. Note that all approaches proposed in the paper are not restricted
to CSR but general enough to incorporate any popular representation formats like COO
[16], JAD [39], HYB [9, 34] and many others. To evaluate the update performance of our
proposed methods, we compare different graph data structures and respective approaches
on both CPUs and GPUs.
• AdjLists (CPU). AdjLists is a basic approach for CSR graph representation. As the CSR format sorts all entries according to their row-column indices, we implement AdjLists with a vector of |V| entries for |V| vertices, where each entry is an RB-Tree that stores all (out-)neighbors of the corresponding vertex. Insertions/deletions are handled by TreeSet insertions/deletions.
• PMA (CPU). We implement the original CPU-based PMA and adopt it for the CSR
format. The insertions/deletions are operated by PMA insertions/deletions.
• Stinger (CPU). We compare the graph container structure used in the state-of-the-art
CPU-based parallel dynamic graph analytic system, Stinger [19]. The updates are
handled by the internal logic of Stinger.
• cuSparseCSR (GPU). We also compare with the GPU-based CSR format used in the
NVIDIA cuSparse library [3]. The updates are executed by calling the rebuild function
in the cuSparse library.
• GPMA/GPMA+. These are our proposed approaches. Although insertions and deletions could be handled similarly, in the sliding window model the numbers of insertions and deletions are often equal, so lazy deletions can be performed by marking the locations as deleted without triggering the density maintenance; the marked slots are then recycled for new insertions.
Note that we do not compare with DCSR [29] because, as discussed in Section 2.2, the
scheme can neither handle deletions nor support efficient searches, which makes it incomparable to all schemes proposed in this paper.
To validate if using the dynamic graph format proposed in this paper affects the performance of graph algorithms, we implement the state-of-the-art GPU-based algorithms on
the CSR format maintained by GPMA/GPMA+ as well as cuSparseCSR. Meanwhile, we
invoke Stinger’s built-in APIs to handle the same workloads of the graph algorithms,
which are considered as the counterpart of GPU-based approaches in highly parallel CPU
environment. Finally, we implement the standard single-threaded algorithms for each application in AdjLists and PMA as baselines for thorough evaluation. The details of all
compared solutions for each application is summarized in Table 1.
Experimental Environment. All algorithms mentioned in the remaining part of this
section are implemented with CUDA 7.5 and GCC 4.8.4 with -O3 optimization. All
experiments except Stinger run on a CentOS server which has Intel(R) Core i7-5820k
(6-cores, 3.30GHz) with 64GB main memory and three GeForce TITAN X GPUs (each has
12GB device memory), connected with PCIe v3.0. Stinger baselines run on a multi-core server equipped with 4-way Intel(R) Xeon(R) CPU E7-4820 v3 (40 cores, 1.90GHz) and 128GB main memory.
6.2 The Performance of Handling Updates
In this subsection, we compare the update costs of the different update approaches. As previously mentioned, we start with the initial subgraph consisting of each dataset's first half of total edges. We measure the average update time as the sliding window iteratively shifts by a batch of edges. To evaluate the impact of update batch sizes, the batch size ranges from one edge and grows exponentially, with base two, up to one million edges.
Figure 7 shows the average latency for all approaches with different sliding batch sizes.
Note that the x-axis and y-axis are plotted in log scales. We have also tested sorted graph
streams to evaluate extreme cases. We omit the detailed results and we refer interested
readers to Appendix C.
We observe that, PMA-based approaches are very efficient in handling updates when the
batch size is small. As batch size becomes larger, the performance of PMA and GPMA
quickly degrades to the performance of simple rebuild. Although GPMA achieves better performance than GPMA+ for small batches since the concurrent updating entries are unlikely
to conflict, thread conflicts become serious for larger batches. Due to its lock-free characteristic, GPMA+ shows superior performance over PMA and GPMA. In particular, GPMA+
has speedups of up to 20.42x and 18.30x against PMA and GPMA respectively. Stinger
shows impressive update performance in most cases as Stinger efficiently updates its
dynamic graph structure in a parallel fashion and the code runs on a powerful multi-core
CPU system. For now, a multi-core CPU system is considered more powerful than GPUs for pure random data structure maintenance, but it costs more (in our experimental setup, our CPU server costs more than 5 times as much as the GPU server).
Figure 8: Streaming BFS (average time per sliding-window shift for slide sizes 1%, 0.1% and 0.01% of the edges on Uniform Random, Graph500, Reddit and Pokec; BFS time shown patterned, update time unpatterned; compared approaches: AdjLists, PMA, Stinger, cuSparseCSR, GPMA+).
Figure 9: Streaming Connected Component (average time per sliding-window shift for slide sizes 1%, 0.1% and 0.01% of the edges on Uniform Random, Graph500, Reddit and Pokec; ConnectedComponent time shown patterned, update time unpatterned).
Figure 10: Streaming PageRank (average time per sliding-window shift for slide sizes 1%, 0.1% and 0.01% of the edges on Uniform Random, Graph500, Reddit and Pokec; PageRank time shown patterned, update time unpatterned).
Moreover, we also note
that, Stinger shows extremely poor performance in the Graph500 dataset. According
to the previous study [8], the phenomenon is due to the fact that Stinger holds a fixed
size of each edge block. Since Graph500 is a heavily skewed graph as the graph follows
the power law model, the skewness causes severe performance deficiency on the utilization
of memory for Stinger.
We observe that the sharp increase in the GPMA+ performance curves occurs when the batch size is 512. This is because the multi-level strategy is used in GPMA+ (as mentioned in Section 5.2) and the shared-memory constraint cannot support batch sizes of more than 512 on our hardware. Finally, the experiments show that GPMA is faster than GPMA+ when the update batch is small and leads to few thread conflicts, because the GPMA+ logic is more complicated and incurs overheads from a number of kernel calls. However, GPMA is only beneficial when the update batch is extremely small, and the performance gain in such extreme cases is negligible compared with GPMA+. Hence, we can conclude
that GPMA+ shows its stability and efficiency across different update patterns compared
with GPMA, and we will only show the results of GPMA+ in the remaining experiments.
6.3 Application Performance
As previously mentioned, all compared application-specific approaches are summarized in
Table 1. We find that integrating GPMA+ into an existing GPU-based implementation
requires little modification. The main change is transforming the array operations in the original implementation into operations on GPMA+, as presented in Section 4.2.
Figure 11: Concurrent data transfer and BFS computation with asynchronous streams (GPMA+ update overlaps with sending the graph updates, and BFS overlaps with fetching the BFS distance vector, for slide sizes 1%, 0.1% and 0.01% on Uniform Random, Graph500, Reddit and Pokec).
Figure 12: Multi-GPU performance on different sizes of Graph500 datasets (update and application throughput for PageRank, BFS and Connected Component on graphs with 600M, 1.2B and 1.8B edges, using 1, 2 and 3 GPUs).
The
intentions of this subsection are two-fold. First, we test if using the PMA-like data structure
to represent the graph brings significant overheads for the graph algorithms. Second, we
demonstrate how the update performance affects the overall efficiency of dynamic graph
processing.
In the remaining part of this section, we present the performance of different approaches
by showing their average elapsed time to process a shift of the sliding window with three
different batch sizes, i.e., the batches contain 0.01%, 0.1% and 1% edges of the respective
dataset. We have also tested the graph stream with explicit random insertions and deletions for all applications as an extended experiment. We omit the detailed results here
since they are similar to the results of the sliding window model and we refer interested
readers to Appendix D. We distinguish the time spent on updates and analytics with
different patterns among all figures.
BFS Results: Figure 8 presents the results for BFS. Although processing BFS only accesses each edge in the graph once, it is still an expensive operation because BFS can
potentially scan the entire graph. This leads to the observation that CPU-based approaches take a significant amount of time for BFS computation whereas the update time is
comparatively negligible. Thanks to the massive parallelism and high memory bandwidth
of GPUs, GPU-based approaches are much more efficient than CPU-based approaches for
BFS computation as well as the overall performance. For the cuSparseCSR approach,
the rebuild process is the bottleneck as the update needs to scan the entire graph multiple times. In contrast, GPMA+ takes a much shorter time for the update and has nearly
identical BFS performance compared with cuSparseCSR. Thus, GPMA+ dominates the
comparisons in terms of the overall processing efficiency.
We have also tested our framework in terms of hiding data transfer over PCIe by using
asynchronous streams to concurrently perform GPU computation and PCIe transfer. In
Figure 11, we show the results when running concurrent execution by using the GPMA+
approach. The data transfer consists of two parts: sending graph updates and fetching
updated distance vector (from the query vertex to all other vertices). It is clear from
the figure that, under any circumstances, sending graph updates is overlapped by GPMA+
update processing and fetching the distance vector is overlapped by BFS computation.
Thus, the data transfer is completely hidden in the concurrent streaming scenario. As the
observations remain similar in other applications, we omit their results and explanations,
and the details can be found in Appendix B.
Connected Component Results: Figure 9 presents the results for running Connected
Component on the dynamic graphs. The results show different performance patterns compared with BFS as ConnectedComponent takes more time in processing which is caused
by a number of graph traversal passes to extract the partitions. Meanwhile, the update
cost remains the same. Thus, GPU-based solutions enhance their performance superiority over CPU-based solutions. Nevertheless, the update process of cuSparseCSR is still
expensive compared with the time spent on Connected- Component. GPMA+ is very
efficient in processing the updates. Although we have observed that, in the Reddit and
the Pokec datasets, GPMA+ shows some discrepancies for running the graph algorithm
against cuSparseCSR due to the “holes” introduced in the graph structure, the discrepancies are insignificant considering the huge performance boosts for updates. Thus, GPMA+
still dominates the rebuild approach for overall performance.
PageRank Results: Figure 10 presents the results for PageRank. PageRank is a
compute-intensive task where the SpMV kernel is iteratively invoked on the entire graph
until the PageRank vector converges. The pattern follows from previous results: CPUbased solutions are dominated by GPU-based approaches because iterative SpMV is a
more expensive process than BFS and ConnectedComponent, and GPU is designed to
handle massively parallel computation like SpMV. Although cuSparseCSR shows inferior
performance compared with GPMA+, the improvement brought by GPMA+’s efficient update
is not as significant as that in previous applications since the update costs are small
compared with the cost of iterative SpMV kernel calls. Nevertheless, the dynamic structure
of GPMA+ does not affect the efficiency of the SpMV kernel and GPMA+ outperforms other
approaches in all experiments.
6.4 Scalability
GPMA and GPMA+ can also be extended to multiple GPUs to support graphs with size
larger than the device memory of one GPU. To showcase the scalability of our proposed
framework, we implement the multi-GPU version of GPMA+ and carry out experiments of
the aforementioned graph applications.
We generate three large datasets using Graph500 with increasing numbers of edges (600 million, 1.2 billion and 1.8 billion) and conduct the same performance experiments as in Section 6.3 with a 1% slide size, on 1, 2 and 3 GPUs respectively. We evenly partition the graphs according to the vertex index and synchronize all devices after each iteration. For a fair comparison among different datasets, we use throughput as our performance metric. The
experimental results of GPMA+ updates and application performance are illustrated in
Figure 12. We do not compare with Stinger because in this subsection, we focus on the
evaluation on the scalability of GPMA+. The memory consumption of Stinger exceeds
our machine’s 128GB main memory based on its default configuration in the standalone
mode.
Multiple GPUs can extend the memory capacity so that analytics on larger graphs can be
executed. According to Figure 12, the improvement in terms of throughput for multiple
GPUs behaves differently in various applications. For GPMA+ update and PageRank,
we achieve a significant improvement with more GPUs, because their workloads between
communications are relatively compute-intensive. For BFS and ConnectedComponent,
the experimental results demonstrate a tradeoff between overall computing power and
communication cost with increasing number of GPUs, as these two applications incur
larger communication cost. Nevertheless, multi-GPU graph processing is an emerging
research area and more effectiveness optimizations are left as future work. Overall, this
set of preliminary experiments shows that our proposed scheme is capable of supporting
large scale dynamic graph analytics.
6.5 Overall Findings
We summarize our findings in this subsection. First, GPU-based approaches (cuSparseCSR
and GPMA+) outperform CPU-based approaches thanks to our optimizations in taking advantage of the superior hardware of the GPUs, even compared with Stinger running on a
40-core CPU server. One of the key reasons is that GPMA+ and graph analytics can exploit the high memory bandwidth and massive parallelism of the GPU, as many graph applications are data- and compute-intensive. Second, GPMA+ is much more efficient than cuSparseCSR: maintaining the dynamic updates is often the bottleneck of continuous graph analytic processing, and GPMA+ avoids the costly process of rebuilding the graph by applying incremental updates while bringing minimal overheads for existing graph algorithms running on its graph structure.
7 Conclusion & Future Work
In this paper, we address how to dynamically update the graph structure on GPUs in an
efficient manner. First, we introduce a GPU dynamic graph analytic framework, which enables existing static GPU-oriented graph algorithms to support high-performance evolving
graph analytics. Second, to avoid the rebuild of the graph structure which is a bottleneck
for processing dynamic graphs on GPUs, we propose GPMA and GPMA+ to support incremental dynamic graph maintenance in parallel. We prove the scalability and complexity
of GPMA+ theoretically and evaluate its efficiency through extensive experiments. As future work, we would like to explore hybrid CPU-GPU support for dynamic graph processing and further optimizations for the applications involved.
Appendices
A TryInsert+ Optimizations
Based on different segment sizes, we propose three optimizations of the function TryInsert+ in Algorithm 4. The motivation is to obtain better memory access efficiency and a lower cost of synchronization by balancing the problem scale against the hardware hierarchy of the GPU. The key computation logic of TryInsert+ is to merge two sorted arrays, i.e., the existing segment entries and the entries to be inserted. The standard approach for parallel merging needs to identify each position in the merged array by binary search and then execute a parallel map, which requires heavy and uncoalesced memory accesses. Thus, depending on the size of the merge, we wish to employ different hardware hierarchies on the GPU (i.e., warp, block and device) to minimize the cost of memory accesses.
Before presenting the details of our optimizations, Algorithm 5 illustrates how to group
threads according to their positions in different hierarchies of GPU architecture and how
to target the groups to their assigned segments. In particular, each thread is assigned with
a lane id, a block id and a global thread id to indicate the position of the thread in the
corresponding warp, block and device work group. Each thread is assigned for one GPMA+
segment and the thread will ask other threads in the same work group to cooperate for
its task. This means that each thread tries to drive a group of threads to deal with the assigned segment. Such a strategy avoids the thread divergence caused by different execution
branches. Note that this assignment policy will be used in our warp and block based
optimizations as an initialization function.
Algorithm 6 shows the Warp-Based optimization for any segments with entries no larger
than the warp size. This implementation has high efficiency because explicit synchronization is not needed and all data is stored in registers. For each iteration, all threads
of a particular warp will compete for the control of the warp as shown in line 11. The
winner will drive the other threads in this warp to handle its required computation steps
of the corresponding segment. As an example, line 27 counts valid entries in the segment
concurrently. Lines 32-34 omit the remaining computation steps in TryInsert+, such as
merging insertions and redistributing entries of segments, as their computation paradigm
is similar to counting entries.
Algorithm 7 shows the Block-Based optimization. It utilizes the shared memory, which has a larger capacity than the registers, to store data. Even though explicit synchronization is needed in line 12 and line 32 to guarantee consistency, synchronization within a block is highly optimized in GPU hardware and thus has little effect on the overall performance. Both
Warp-Based and Block-Based optimizations explicitly accommodate GPU features. As
discussed in Section 5.2, although these two methods have limited memory for efficient
access, they can handle most of the update requests.
Algorithm 8 shows the Device-Based implementation. The implementation is different from the ones in the Warp and Block based approaches, because it is designed for segments whose size is larger than the shared memory size. Under this scenario, we have to handle them in the GPU's global memory. One possible approach is to invoke an independent kernel for each large segment, but it would take considerable cost to initialize and schedule multiple kernels. Hence, we propose an approach that handles a large number of segments by invoking only a few kernel calls.
We illustrate the idea by showing, as an example, how to count the segments which are valid for
insertions. As shown in lines 5 and 12, all valid entries stored in GPMA+
segments are first marked, and then all valid entry counts are calculated by SumReduce in
one kernel call. Line 16 generates valid indexes for segments which have enough free space
to receive their corresponding insertions, and these indexes are used by the remaining computation steps.
Simply speaking, our approach executes the computation logic in horizontal steps across all segments, in order
to avoid the load imbalance caused by branch divergence. Finally, merging and redistributing segment
entries use the same mechanism and are thus omitted.
Algorithm 5 TryInsert+ Initialization
inline function ConstInit(void) {
    // CUDA protocol variables
    WARPS       = blockDim / 32;
    warp_id     = threadIdx / 32;
    lane_id     = threadIdx % 32;
    thread_id   = threadIdx;
    block_width = blockDim;
    grid_width  = gridDim * blockDim;
    global_id   = block_width * blockIdx + threadIdx;
    // info for the assigned segment
    seg_beg = segments[global_id];
    seg_end = seg_beg + segment_width;
    // info for the insertions belonging to the current segment
    ins_beg = offsets[global_id];
    ins_end = offsets[global_id + 1];
    insert_size = ins_end - ins_beg;
    // the upper number of entries the current segment can hold
    upper_size = tau * segment_width;
}
Algorithm 6 TryInsert+ Warp-Based Optimization
kernel TryInsert+(int segments[m], int offsets[m],
                  int insertions[n], int segment_width) {
    ConstInit();
    volatile shared comm[WARPS][5];
    warp_shared_register pma_buf[32];
    while (WarpAny(seg_end - seg_beg)) {
        // compete for control of the warp
        comm[warp_id][0] = lane_id;
        // the winner controls the warp in this iteration
        if (comm[warp_id][0] == lane_id) {
            seg_beg = seg_end;
            comm[warp_id][1] = seg_beg;
            comm[warp_id][2] = seg_end;
            comm[warp_id][3] = ins_beg;
            comm[warp_id][4] = ins_end;
        }
        memcpy(pma_buf, pma[seg_beg], segment_width);
        // count valid entries in this segment
        entry_num = 0;
        if (lane_id < segment_width) {
            valid = (pma_buf[lane_id] == NULL) ? 0 : 1;
            entry_num = WarpReduce(valid);
        }
        // check the upper density threshold before inserting
        if ((entry_num + insert_size) < upper_size) {
            // merge insertions with pma_buf
            // evenly redistribute pma_buf
            // mark all insertions successful
            memcpy(pma[seg_beg], pma_buf, segment_width);
        }
    }
}
Algorithm 7 TryInsert+ Block-Based Optimization
kernel TryInsert+(int segments[m], int offsets[m],
                  int insertions[n], int segment_width) {
    ConstInit();
    volatile shared comm[5];
    volatile shared pma_buf[segment_width];
    while (BlockAny(seg_end - seg_beg)) {
        // compete for control of the block
        comm[0] = thread_id;
        BlockSynchronize();
        // the winner controls the block in this iteration
        if (comm[0] == thread_id) {
            seg_beg = seg_end;
            comm[1] = seg_beg;
            comm[2] = seg_end;
            comm[3] = ins_beg;
            comm[4] = ins_end;
        }
        memcpy(pma_buf, pma[seg_beg], segment_width);
        // count valid entries in this segment
        entry_num = 0;
        ptr = thread_id;
        while (ptr < segment_width) {
            valid = (pma_buf[ptr] == NULL) ? 0 : 1;
            entry_num += BlockReduce(valid);
            ptr += block_width;
        }
        BlockSynchronize();
        // same as lines 30-37 in Algorithm 6
    }
}
Algorithm 8 TryInsert+ Device-Based Optimization
function TryInsert+(int segments[m], int offsets[m],
                    int insertions[n], int segment_width) {
    int valid_flags[m * segment_width];
    parallel foreach i in range(m):
        parallel foreach j in range(segment_width):
            if (pma[segments[i] + j] != NULL) {
                valid_flags[i * segment_width + j] = 1;
            }
    DeviceSynchronize();
    int entry_nums[m];
    DeviceSegmentedReduce(valid_flags, m, segment_width, entry_nums);
    DeviceSynchronize();
    int valid_indexes[m];
    parallel foreach i in range(m):
        if (entry_nums[i] + insert_size < upper_size) {
            valid_indexes[i] = i;
        }
    DeviceSynchronize();
    RemoveIfTrue(valid_indexes);
    DeviceSynchronize();
    // according to valid_indexes, segment by segment:
    //   merge insertions into segments
    //   evenly redistribute segments
    //   mark all insertions successful
}
B  Additional Experimental Results For Data Transfer
We show the experimental results for using asynchronous streams to concurrently transmit data over PCIe and run computations on the GPU. We only show the results
for GPMA+.
In ConnectedComponent, the data transferred over PCIe consists of two parts: the graph
updates and the vector of component labels of all vertices computed by ConnectedComponent.
In PageRank, the result vector to be fetched contains the PageRank scores and has the
same size as ConnectedComponent's. The results in Figures 13 and 14 show that
the data transfer is completely hidden by the analytic processing on the GPU and the GPMA+ update.
[Figure 13 plots, for each dataset (Uniform Random, Graph500, Reddit, Pokec) and slide size (1%, 0.1%, 0.01%), the time in ms for GPMA+ Update, ConnectedComponent, Fetch Component Label Vector, and Send Updates.]
Figure 13: Concurrent data transfer and Connected Component computation with asynchronous stream
[Figure 14 plots, for each dataset (Uniform Random, Graph500, Reddit, Pokec) and slide size (1%, 0.1%, 0.01%), the time in ms for GPMA+ Update, PageRank, Fetch PageRank Vector, and Send Updates.]
Figure 14: Concurrent data transfer and PageRank computation with asynchronous stream
C  The Performance of Handling Updates on Sorted Graphs
[Figure 15 plots, for each dataset (Uniform Random, Graph500, Reddit, Pokec), the update time in ms (log scale) against the sliding size for AdjLists, PMA, Stinger, cuSparseCSR, GPMA and GPMA+.]
Figure 15: Performance comparison for updates with different batch sizes. The dashed
lines represent CPU-based solutions whereas the solid lines represent GPU-based solutions.
For the update results with sorted streaming orders, AdjLists performs the best among
all approaches due to its efficient balanced binary tree update mechanism. Meanwhile,
a batch of sorted updates makes GPMA very inefficient as all updating threads within
the batch conflict. Thanks to the non-locking optimization introduced, the update performance of GPMA+ is still significantly faster than that of the rebuild approach (cuSparseCSR)
with orders of magnitude speedups for small batch sizes.
D  Additional Experimental Results For Graph Streams with Explicit Deletions
We present the experimental results for graph streams which involve explicit deletions. In
this section, we use the same stream setup as mentioned in Section 6.1. However, for
each iteration of sliding, we randomly pick a set of edges belonging to the current sliding
window, instead of the head part, as the edges to be deleted.
[Figure 16 reports, for each dataset (Uniform Random, Graph500, Reddit, Pokec) and slide size (1%, 0.1%, 0.01%), the BFS time (patterned) and update time (unpatterned) in seconds for AdjLists, PMA, Stinger, cuSparseCSR and GPMA+.]
Figure 16: Streaming BFS with explicit deletions
[Figure 17 reports, for each dataset (Uniform Random, Graph500, Reddit, Pokec) and slide size (1%, 0.1%, 0.01%), the ConnectedComponent time (patterned) and update time (unpatterned) in seconds for AdjLists, PMA, Stinger, cuSparseCSR and GPMA+.]
Figure 17: Streaming Connected Component with explicit deletions
[Figure 18 reports, for each dataset (Uniform Random, Graph500, Reddit, Pokec) and slide size (1%, 0.1%, 0.01%), the PageRank time (patterned) and update time (unpatterned) in seconds for AdjLists, PMA, Stinger, cuSparseCSR and GPMA+.]
Figure 18: Streaming PageRank with explicit deletions
Figures 16, 17 and 18 illustrate the results of the three streaming applications, respectively.
Note that we pick the sets of edges to be deleted in advance, which means that each
independent baseline handles the same workload at all times. Since there is no intrinsic
difference between expiry and explicit deletions, the results are similar to those of the sliding-window experiments.
The subtle differences in the results are mainly due to the different deleted edges, which lead to
variations in the applications' running times.
Vers 02 Nov 2016
A State Space Approach for Piecewise‐Linear Recurrent Neural Networks for Reconstructing
Nonlinear Dynamics from Neural Measurements
Short title: Nonlinear State Space Model for Reconstructing Computational Dynamics
Daniel Durstewitz
Dept. of Theoretical Neuroscience, Bernstein Center for Computational Neuroscience Heidelberg‐
Mannheim, Central Institute of Mental Health, Medical Faculty Mannheim/ Heidelberg University
daniel.durstewitz@zi‐mannheim.de
Abstract
The computational and cognitive properties of neural systems are often thought to be implemented
in terms of their network dynamics. Hence, recovering the system dynamics from experimentally
observed neuronal time series, like multiple single‐unit recordings or neuroimaging data, is an
important step toward understanding its computations. Ideally, one would not only seek a (lower‐
dimensional) state space representation of the dynamics, but would wish to have access to its
(computational) governing equations for in‐depth analysis. Recurrent neural networks (RNNs) are a
computationally powerful and dynamically universal formal framework which has been extensively
studied from both the computational and the dynamical systems perspective. Here we develop a
semi‐analytical maximum‐likelihood estimation scheme for piecewise‐linear RNNs (PLRNNs) within
the statistical framework of state space models, which accounts for noise in both the underlying
latent dynamics and the observation process. The Expectation‐Maximization algorithm is used to
infer the latent state distribution, through a global Laplace approximation, and the PLRNN
parameters iteratively. After validating the procedure on toy examples, and using inference through
particle filters for comparison, the approach is applied to multiple single‐unit recordings from the
rodent anterior cingulate cortex (ACC) obtained during performance of a classical working memory
task, delayed alternation. A model with 5 states turned out to be sufficient to capture the essential
computational dynamics underlying task performance, including stimulus‐selective delay activity. The
estimated models were rarely multi‐stable, however, but rather were tuned to exhibit slow dynamics
in the vicinity of a bifurcation point. In summary, the present work advances a semi‐analytical (thus
reasonably fast) maximum-likelihood estimation framework for PLRNNs that may enable recovering
the computationally relevant dynamics underlying observed neuronal time series and directly linking
them to computational properties.
Author Summary
Neuronal dynamics mediate between the physiological and anatomical properties of a neural system
and the computations it performs, in fact may be seen as the ‘computational language’ of the brain.
It is therefore of great interest to recover from experimentally recorded time series, like multiple
single‐unit or neuroimaging data, the underlying network dynamics and, ideally, even its governing
equations. This is not at all a trivial enterprise, however, since neural systems are very high‐
dimensional, come with considerable levels of intrinsic (process) noise, are usually only partially
observable, and these observations may be further corrupted by noise from measurement and
preprocessing steps. The present article embeds piecewise‐linear recurrent neural networks
(PLRNNs) within a state space approach, a statistical estimation framework that deals with both
process and observation noise. PLRNNs are computationally and dynamically powerful model
systems. Their statistically principled estimation from multivariate neuronal time series thus may
provide access to some essential features of the neuronal dynamics, like attractor states, their
governing equations, and their computational implications. The approach is exemplified on multiple
single‐unit recordings from the rat prefrontal cortex during working memory.
Introduction
Neural dynamics mediate between the underlying biophysical and physiological properties of a
neural system and its computational and cognitive properties (e.g. [1‐4]). Hence, from a
computational perspective, we are often interested in recovering the neural network dynamics of a
given brain region or neural system from experimental measurements. Yet, experimentally, we
commonly have access only to noisy recordings from a relatively small proportion of neurons
(compared to the size of the brain area of interest), or to lumped surface signals like local field
potentials or the EEG. Inferring from these the computationally relevant underlying dynamics is
therefore not trivial, especially since both the neural system itself (e.g., stochastic synaptic release;
[5]) as well as the recorded signals (e.g., spike sorting errors; [6]) come with a good deal of noise.
Speaking in statistical terms, 'model‐free' techniques which combine state space
reconstruction methods (delay embeddings) from nonlinear dynamics with nonlinear basis
expansions and kernel techniques have been one approach to the problem [7, 8]. These techniques
provide informative lower‐dimensional visualizations of population trajectories and approximations
to the neural flow field, but they highlight only certain, salient aspects of the dynamics and do not
return its governing equations (e.g. [9]) or underlying computations. Alternatively, state space
models, a statistical framework particularly popular in engineering and ecology (e.g. [10]), have been
adapted to extract lower‐dimensional neural trajectory flows from higher‐dimensional recordings
[11‐21]. State space models link a process model of the unobserved (latent) underlying dynamics to
the experimentally observed time series via observation equations, and differentiate between
process noise and observation noise (e.g. [22]). So far, with few exceptions (e.g. [19, 23]), these
models assumed linear latent dynamics, however. Although this may be sufficient to yield smoothed
trajectories and reduced state space representations, it implies that the recovered dynamical model
by itself is not powerful enough to reproduce a range of important dynamical and computational
phenomena in the nervous system, among them multi‐stability which has been proposed to underlie
neural activity during working memory [24‐28].
Here we derive a new state space algorithm based on piecewise‐linear (PL) recurrent neural
networks (RNN). It has been shown that RNNs with nonlinear activation functions can, in principle,
approximate any dynamical system's trajectory or, in fact, dynamical system itself (given some
general conditions; [29‐31]). Thus, in theory, they are powerful enough to recover whatever
dynamical system is underlying the experimentally observed time series. Piecewise linear activation
functions, in particular, are by now the most popular choice in deep learning algorithms [32, 33], and
considerably simplify some of the derivations within the state space framework (as shown later).
They may also be more apt for producing working memory‐type activity with longer delays if for
some units the transfer function happens to coincide with the bisectrix (cf. [34]), and ease the
analysis of fixed points and stability. We then apply this newly derived algorithm to multiple single‐
unit recordings from the rat prefrontal cortex obtained during a classical delayed alternation working
memory task [35].
Results
State space model
This article considers simple discrete-time piecewise-linear (PL) recurrent neural networks (RNN) of the form

(1)  $\mathbf{z}_t = \mathbf{A}\mathbf{z}_{t-1} + \mathbf{W}\max\{0, \mathbf{z}_{t-1} - \boldsymbol{\theta}\} + \mathbf{s}_t + \boldsymbol{\varepsilon}_t, \qquad \boldsymbol{\varepsilon}_t \sim N(\mathbf{0}, \boldsymbol{\Sigma}),$

where $\mathbf{z}_t = (z_{1t} \ldots z_{Mt})^T$ is the $(M \times 1)$-dimensional (latent) neural state vector at time $t = 1 \ldots T$, $\mathbf{A} = \mathrm{diag}([a_{11} \ldots a_{MM}])$ is an $M \times M$ diagonal matrix of auto-regression weights, $\mathbf{W} = (0\ w_{12} \ldots w_{1M},\ w_{21}\ 0\ w_{23} \ldots w_{2M},\ w_{31}\ w_{32}\ 0\ w_{34} \ldots w_{3M}, \ldots)$ is an $M \times M$ off-diagonal matrix of connection weights, $\boldsymbol{\theta} = (\theta_1 \ldots \theta_M)^T$ is a set of (constant) activation thresholds, $\mathbf{s}_t$ is a sequence of (known) external inputs, and $\boldsymbol{\varepsilon}_t$ denotes a Gaussian white noise process with diagonal covariance matrix $\boldsymbol{\Sigma} = \mathrm{diag}([\sigma_{11}^2 \ldots \sigma_{MM}^2])$. The max-operator is assumed to work element-wise.
Before proceeding further, two things are worth pointing out: First, more complicated PL functions may, in principle, be constructed from (1) by properly connecting simple PL units, combined with an appropriate choice of activation thresholds (and acknowledging the activation lags among units). Second, all fixed points (in the absence of external input) of the PLRNN (1) could be obtained by solving the $2^M$ linear equations

(2)  $\mathbf{z}^* = (\mathbf{A} + \mathbf{W}_{\Omega} - \mathbf{I})^{-1} \mathbf{W}_{\Omega}\,\boldsymbol{\theta},$

where $\Omega$ denotes the set of indices of units for which we assume $z_m \le \theta_m$, and $\mathbf{W}_{\Omega}$ the respective connectivity matrix in which all columns of $\mathbf{W}$ corresponding to units $m \in \Omega$ are set to 0. Obviously, to make $\mathbf{z}^*$ a true fixed point of (1), the solution to (2) has to be consistent with the defined set $\Omega$, that is, $z^*_m \le \theta_m$ has to hold for all $m \in \Omega$ and $z^*_m > \theta_m$ for all $m \notin \Omega$. For networks of moderate size (say M < 30) it is thus computationally feasible to explicitly check for all fixed points and their stability.
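To make the enumeration explicit, the following is a small numpy sketch (my own code and naming, not taken from the paper) that loops over all 2^M threshold configurations, solves the corresponding linear system (2), keeps only the self-consistent solutions, and assesses their stability from the eigenvalues of the Jacobian of the respective linear region:

    import itertools
    import numpy as np

    def plrnn_fixed_points(A, W, theta):
        # A: (M, M) diagonal matrix of auto-regression weights, W: (M, M) off-diagonal weights,
        # theta: (M,) thresholds. Returns candidate fixed points of z = A z + W max{0, z - theta}.
        M = len(theta)
        results = []
        for config in itertools.product([0, 1], repeat=M):
            d = np.array(config, dtype=float)            # 1 = unit assumed to be above threshold
            WD = W * d[np.newaxis, :]                     # zero the columns of below-threshold units
            try:
                z_star = np.linalg.solve(A + WD - np.eye(M), WD @ theta)   # eq. (2)
            except np.linalg.LinAlgError:
                continue                                  # singular system: no isolated fixed point here
            if np.all((z_star > theta) == (d > 0.5)):     # consistency with the assumed configuration
                max_abs_eig = np.max(np.abs(np.linalg.eigvals(A + WD)))    # Jacobian of this linear region
                results.append((z_star, max_abs_eig < 1.0, max_abs_eig))
        return results

The maximum absolute eigenvalue returned for each fixed point is the quantity examined later for the estimated models (cf. Fig. 9).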
Here, latent state model (1) is then connected to some N-dimensional observed vector time series $\mathbf{X} = \{\mathbf{x}_t\}$ via a simple linear-Gaussian model,

(3)  $\mathbf{x}_t = \mathbf{B}\,\phi(\mathbf{z}_t) + \boldsymbol{\eta}_t, \qquad \boldsymbol{\eta}_t \sim N(\mathbf{0}, \boldsymbol{\Gamma}),$

where $\phi(\mathbf{z}_t) := \max\{0, \mathbf{z}_t - \boldsymbol{\theta}\}$, $\{\boldsymbol{\eta}_t\}$ is the (white Gaussian) observation noise series with diagonal covariance matrix $\boldsymbol{\Gamma} = \mathrm{diag}([\gamma_{11}^2 \ldots \gamma_{NN}^2])$, and $\mathbf{B}$ an $N \times M$ matrix of regression weights. Thus, the idea is that only the PL-transformed activation $\phi(\mathbf{z}_t)$ reaches the ‘observation surface’ (as, e.g., with spiking activity when the underlying membrane dynamics itself is not visible). We further assume for the initial state,

(4)  $\mathbf{z}_1 \sim N(\boldsymbol{\mu}_0 + \mathbf{s}_1, \boldsymbol{\Sigma}),$

with, for simplicity, the same covariance matrix as for the process noise in general (reducing the number of parameters to be estimated). In the case of multiple, temporally separated trials, we allow each one to have its own individual initial condition $\boldsymbol{\mu}_k$, $k = 1 \ldots K$.
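For concreteness, a minimal numpy sketch of this generative model is given below (my own code; all function and variable names are illustrative). Given an input series S of shape (T, M), it draws one latent path according to eqs. (1) and (4) and the corresponding observations according to eq. (3):

    import numpy as np

    def simulate_plrnn(A, W, theta, Sigma, B, Gamma, mu0, S, rng=None):
        # A, W: (M, M); theta, mu0: (M,); Sigma: (M, M); B: (N, M); Gamma: (N, N); S: (T, M) inputs.
        rng = np.random.default_rng() if rng is None else rng
        T, M = S.shape
        N = B.shape[0]
        Z = np.zeros((T, M))
        Z[0] = rng.multivariate_normal(mu0 + S[0], Sigma)              # eq. (4)
        for t in range(1, T):
            phi = np.maximum(0.0, Z[t - 1] - theta)                    # element-wise PL transformation
            Z[t] = (A @ Z[t - 1] + W @ phi + S[t]
                    + rng.multivariate_normal(np.zeros(M), Sigma))     # eq. (1)
        X = (np.maximum(0.0, Z - theta) @ B.T
             + rng.multivariate_normal(np.zeros(N), Gamma, size=T))    # eq. (3)
        return Z, X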
The general goal here is to determine both the model’s unknown parameters
$\boldsymbol{\Xi} = \{\boldsymbol{\mu}_0, \mathbf{A}, \mathbf{W}, \boldsymbol{\Sigma}, \mathbf{B}, \boldsymbol{\Gamma}\}$ (assuming fixed thresholds for now) as well as the unobserved, latent
state path $\mathbf{Z} := \{\mathbf{z}_t\}$ (and its second-order moments) from the experimentally observed time series
{xt}. These could be, for instance, properly transformed multivariate spike time series or
neuroimaging data. This is accomplished here by the Expectation‐Maximization (EM) algorithm which
iterates state (E) and parameter (M) estimation steps and is developed in detail for model (1), (3), in
the Methods. In the following I will first discuss state and parameter estimation separately for the
PLRNN, before describing results from the full EM algorithm in subsequent sections. This will be done
along two toy problems, a higher-order nonlinear oscillation (stable limit cycle), and a simple
'working memory' paradigm in which one of two discrete stimuli had to be retained across a
temporal interval. Finally, the application of the validated PLRNN EM algorithm will be demonstrated
on multiple single‐unit recordings obtained from rats on a standard working memory task (delayed
alternation; data from [35], kindly provided by Dr. James Hyman, University of Nevada, Las Vegas).
State estimation
The latent state distribution, as explained in Methods, is a high-dimensional (piecewise) Gaussian mixture with the number of components growing as $2^{MT}$ with sequence length T and number of latent states M. Here a semi-analytical, approximate approach was developed that treats state estimation as a combinatorial problem by first searching for the mode of the full distribution (cf. [36, 37]; in contrast, e.g., to a recursive filtering-smoothing scheme that makes local [linear-Gaussian] approximations, e.g. [11], cf. [22]). This approach amounts to solving a high ($M \cdot T$)-dimensional piecewise linear problem (due to the log-likelihood, eqs. 6, 7, being piecewise quadratic in the states Z). Here this was accomplished by alternating between (1) solving the linear set of equations implied by a given set of linear constraints $\Omega := \{(m, t)\,|\,z_{mt} \le \theta_m\}$ (cf. eq. 7 in Methods) and (2) flipping the sign of the constraints violated by the current solution $\mathbf{z}^*(\Omega)$ to the linear equations, thus following a path through the ($M \cdot T$)-dimensional binary space of linear constraints using Newton-type iterations (similarly as in [38], see Methods). Given the mode and state covariance matrix (evaluated at the mode from the negative inverse Hessian), all other expectations needed for the EM algorithm were then derived analytically, with one exception that was approximated (see Methods for full details).
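The following Python sketch illustrates the alternating scheme conceptually (it is my own simplified code, not the paper's implementation): for a fixed binary configuration D of above-threshold units the objective is an unconstrained quadratic in Z, which is minimized here by a generic solver in place of the explicit block-banded Newton-type solve of the Methods; the configuration is then recomputed from the solution and the two steps are iterated until no constraints flip.

    import numpy as np
    from scipy.optimize import minimize

    def map_latent_path(X, S, A, W, theta, B, Sigma, Gamma, mu0, n_outer=50, rng=None):
        # Approximate mode of p(Z|X) by alternating (1) a quadratic subproblem for a fixed
        # constraint configuration D and (2) recomputation of D from the solution.
        rng = np.random.default_rng() if rng is None else rng
        T, M = S.shape
        Si, Gi = np.linalg.inv(Sigma), np.linalg.inv(Gamma)

        def neg_log_joint(zflat, D):
            Z = zflat.reshape(T, M)
            phi = D * (Z - theta)                                # PL part linearized by the fixed D
            r1 = Z[0] - mu0 - S[0]
            val = 0.5 * r1 @ Si @ r1
            rt = Z[1:] - Z[:-1] @ A.T - phi[:-1] @ W.T - S[1:]   # transition residuals
            val += 0.5 * np.einsum('ti,ij,tj->', rt, Si, rt)
            ro = X - phi @ B.T                                   # observation residuals
            val += 0.5 * np.einsum('ti,ij,tj->', ro, Gi, ro)
            return val

        Z = rng.standard_normal((T, M))                          # random state initialization
        for _ in range(n_outer):
            D = (Z > theta).astype(float)                        # current constraint configuration
            res = minimize(neg_log_joint, Z.ravel(), args=(D,), method='L-BFGS-B')
            Z = res.x.reshape(T, M)
            if np.array_equal(Z > theta, D > 0.5):               # no constraint flipped: mode candidate
                break
        return Z

As noted in the text, only locally optimal solutions can be guaranteed, so in practice the procedure would be restarted from several state initializations.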
The toy problems introduced above were used to assess the quality of these approximations. For the
first toy problem, an order‐15 limit cycle was produced with a PLRNN consisting of three recurrently
coupled units, inputs to units #1 and #2, and parameter settings as indicated in Fig. 1 and in the provided
Matlab file ‘PLRNNoscParam.mat’. The limit cycle was repeated for 50 full cycles (giving 750 data
points) and corrupted by process noise (cf. Fig. 1). These noisy states (arranged in a (3 x 750) matrix
Z) were then transformed into a (3 x 750) output matrix X, to which observation noise was added,
through a randomly drawn (3 x 3) regression weight matrix B. State estimation was started from a
random initial condition. True (but noise‐corrupted) and estimated states for this particular problem
are illustrated in Fig. 1A, indicating a tight fit (although some fraction of the linear constraints were
still violated, 0.27% in the present example and <2.3% in the working memory example below; see
Methods on this issue).
To examine more systematically the quality of the approximate‐analytical estimates of the first and
second order moments of the joint distribution across states z and their piecewise linear
transformations φ(z), samples from p(Z|X) were simulated using bootstrap particle filtering (see
Methods). Although these simulated samples are based only on the filtering (not the smoothing)
steps (and (re‐)sampling schemes may have issues of their own; e.g. [22]), analytical and sampling
estimates were in tight agreement, with correlations of almost 1 for this example, as shown in Fig. 2.
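For readers who wish to reproduce such a comparison, a bootstrap particle filter for model (1), (3), (4) can be sketched in a few lines of numpy (again my own simplified code, sampling from the transition prior with multinomial resampling; it is not meant to reproduce the exact implementation described in the Methods):

    import numpy as np

    def bootstrap_particle_filter(X, S, A, W, theta, Sigma, B, Gamma, mu0,
                                  n_particles=1000, rng=None):
        # Filtering estimates E[z_t | x_1..t] for the PLRNN state space model.
        rng = np.random.default_rng() if rng is None else rng
        T, M = S.shape
        Gi = np.linalg.inv(Gamma)
        Z = rng.multivariate_normal(mu0 + S[0], Sigma, size=n_particles)     # eq. (4)
        filtered_means = np.zeros((T, M))
        for t in range(T):
            if t > 0:                                                        # propagate through eq. (1)
                phi = np.maximum(0.0, Z - theta)
                Z = (Z @ A.T + phi @ W.T + S[t]
                     + rng.multivariate_normal(np.zeros(M), Sigma, size=n_particles))
            resid = X[t] - np.maximum(0.0, Z - theta) @ B.T                  # eq. (3) residuals
            logw = -0.5 * np.einsum('pi,ij,pj->p', resid, Gi, resid)
            w = np.exp(logw - logw.max())
            w /= w.sum()
            filtered_means[t] = w @ Z
            Z = Z[rng.choice(n_particles, size=n_particles, p=w)]            # multinomial resampling
        return filtered_means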
Fig. 3A illustrates the setup of the ‘two‐cue working memory task’, chosen for later comparability
with the experimental setup. A 5‐unit PLRNN was first trained by conventional gradient descent
(‘real‐time recurrent learning’ (RTRL), see [39, 40]) to produce a series of six 1’s on unit #3 and six 0’s
on unit #4 five time steps after an input (of 1) occurred on unit #1, and the reverse pattern (six 0’s on
unit #3 and six 1’s on unit #4) five time steps after an input occurred on unit #2. A stable PLRNN with
a reasonable solution to this problem was then chosen for further testing the present algorithm (cf.
Fig. 3C). (While the RTRL approach was chosen to derive a working memory circuit with reasonably
‘realistic’ characteristics like a wider distribution of weights, it is noted that a multi‐stable network is
relatively straightforward to construct explicitly given the analytical accessibility of fixed points [see
Methods]; for instance, choosing θ (0.5 0.5 0.5 0.5 2) , A (0.9 0.9 0.9 0.9 0.5) , and
W (0 , 0 , 0 , 0 , 11 11 0) with
0.2 , yields a tri‐stable system.) Like for the limit cycle problem before, the number of
observations was taken to be equal to the number of latent states, and process and observation
noise were added (see Fig. 4 and Matlab file ‘PLRNNwmParam.mat’ for specification of parameters).
The system was simulated for 20 repetitions of each trial type (i.e., cue‐1 or cue‐2 presentations)
with different noise realizations, and each ‘trial’ started from its own initial condition $\boldsymbol{\mu}_k$ (see
Methods), resulting in a total series length of T = 2·20·20 = 800 (although, importantly, in this case the
time series consisted of distinct, temporally segregated trials, instead of one continuous series, and
was treated as such an ensemble of series by the algorithm). As before, state estimation started from
random initial conditions and was provided with the correct parameters, as well as with the
observation matrix X. While Fig. 3B illustrates the correlation between true (i.e., simulated) and
estimated states across all trials and units, Fig. 3C shows true and estimated states for a
representative cue‐1 (left) and cue‐2 (right) trial, respectively. Again, our procedure for obtaining the
maximum a‐posteriori (MAP) estimate of the state distribution appears to work quite well (in
general, only locally optimal solutions can be guaranteed, however, and the algorithm may have to
be repeated with different state initializations; see Methods).
Parameter estimation
Given the true states, how well would the algorithm retrieve the parameters of the PLRNN? To assess
this, the actual model states (which generated the observations X) from simulation runs of the
oscillation and the working memory task described above were provided as initialization for the E‐
step. Based on these, the algorithm first estimated the state covariances for z and φ(z) (see above),
and then the parameters in a second step (i.e., the M‐step). Note that the parameters can all be
computed analytically given the state distribution (see Methods), and, provided the state covariance
matrices (summed across time) as required in eqn. 17a,d,f are non‐singular, have a unique solution.
Hence, in this case, any misalignment with the true model parameters can only come from one of
two sources: i) estimation was based on one finite‐length noisy realization of the PLRNN process, ii)
all second order moments of the state distribution were still estimated based on the true state
vectors. However, as can be appreciated from Fig. 1B (oscillation) and Fig. 4 (working memory), for
the two example scenarios studied here, all parameter estimates still agreed tightly with those
describing the true underlying model.
In the more general case where both the states and the parameters are unknown and only the
observations are given, note that the model as stated in eqns. 1 & 3 is over‐specified as, for instance,
at the level of the observations, additional variance placed into $\boldsymbol{\Sigma}$ can be compensated for by
adjusting $\mathbf{B}$ accordingly (cf. [41, 42]). In the following we therefore always arbitrarily fixed
$\boldsymbol{\Sigma} = 10^{-2}\,\mathbf{I}$, as common in many latent variable models (like factor analysis), including state space
models (e.g. [23, 43]).
Joint estimation of states and parameters by EM
The observations above confirm that our algorithm finds satisfactory approximations to the
underlying state path and state covariances when started with the right parameters, and ‐ vice versa
‐ identifies the correct parameters when provided with the true states. Indeed, the M‐step, since it is
exact, can only increase the expected log‐likelihood eq. 5 with the present state expectancies fixed.
However, due to the system's piecewise‐defined discrete nature, modifying the parameters may lead
to a new set of constraint violations, that is, it may throw the system into a completely different linear
subspace, which may imply a decrease in the likelihood in the E-step. It is thus not guaranteed that a
straightforward EM algorithm converges (cf. [44, 45]), or that the likelihood would even
monotonically increase with each EM iteration.
To examine this issue, full EM estimation of the WM model (as specified in Fig. 4, using N=20 outputs
in this case) was run 240 times, starting from different random, uniformly distributed initializations
for the parameters. Fig. 5B gives, for the maximum likelihood solution across all 240 runs (Fig. 5A),
the correlations between true and estimated states for all five state variables of the WM model. Note
that estimated and true model states may not be in the same order, as any permutation of the latent
state indices together with the respective columns of observation matrix B will be equally consistent
with the data X (see also [23]). For the WM model examined here, however, partial order
information is implicitly provided to the EM algorithm through the definition of unit‐specific inputs
$s_{it}$. For the present example, true and estimated states were nicely linearly correlated for all 5 latent
variables (Fig. 5B), but some of the regression slopes significantly differed from 1, indicating a degree
of freedom in the scaling of the states. More generally, there may not even be a clear linear
relationship with a single latent state, although, if the estimation was successful, a linear
transformation $\mathbf{V}\hat{\mathbf{Z}}$ may usually map the estimated onto the true states. This is because, if the
observation eq. were strictly linear, any linear transformation of the latent states by some matrix V
could essentially be reversed at the level of the outputs by back-multiplying B with $V^{-1}$ (cf. [23]; note
that here the piecewise linearity through $\phi(\mathbf{z})$ in eq. 1, 3, ignored in the argument above,
complicates matters, however). This implies that the model is (at most) identifiable only up to this
linear transformation, which might not be a serious issue, however, if one is interested primarily in
the latent dynamics (rather than in the exact parameters).
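In practice, such a mapping can be obtained by a simple least-squares fit (a sketch of my own; it ignores the additional restrictions that the Procrustes analysis used for Fig. 6 below places on V):

    import numpy as np

    def align_states(Z_est, Z_true):
        # Least-squares matrix V such that Z_est @ V best approximates Z_true (both of shape (T, M)).
        V, *_ = np.linalg.lstsq(Z_est, Z_true, rcond=None)
        return Z_est @ V, V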
Fig. 6 illustrates the distribution of initial and final parameter estimates around their true values
across all 240 runs (before and after reordering the estimated latent states based on the rotation
that would be required for achieving the optimal mapping onto the true states, as determined
through Procrustes analysis). Fig. 6 reveals that a) the EM algorithm does clearly improve the
estimates and b) these final estimates seemed to be largely unbiased (deviations centered around 0).
Application to experimental recordings
I next was interested in what kind of structure the present PLRNN approach would retrieve from
experimental multiple (N=19) single‐unit recordings obtained while rats were performing a simple
and well‐examined working memory task, namely spatial delayed alternation [35] (see Methods).
(Note that in the present context this analysis is mainly meant as an exemplification of the current
model approach, not as a detailed examination of the working memory issue itself.) The delay was
always initiated by a nose poke of the animal into a port located on the side opposite from the
response levers, and had a minimum length of 10 s. Spike trains were first transformed into kernel
density estimates by convolution with a Gaussian kernel (see Methods), as done previously (e.g. [8,
46, 47]), and binned with 500 ms resolution. This also renders the observed data more suitable to the
Gaussian noise assumptions of the present observation model, eq. 3. Models with 5 and 8 latent
states were estimated, but only results from the former will be reported here (as 8 states did not
appear to yield any additional insight). Periods of cue presentation were indicated to the model by
setting external inputs $s_{it} = 1$ for units i=1 (left lever) or i=2 (right lever) for the three 500 ms time bins
surrounding the event (and $s_{it} = 0$ otherwise). The EM algorithm was started from 36 different
initializations of the parameters (including the thresholds $\boldsymbol{\theta}$), and the 5 highest-likelihood solutions were
considered further.
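As an illustration of this preprocessing, a minimal numpy sketch is given below (my own code; the Gaussian kernel width is an arbitrary placeholder, as the value actually used is specified in the Methods, while the bin width corresponds to the 500 ms mentioned above):

    import numpy as np

    def smoothed_rates(spike_times, t_max, kernel_sd=0.1, bin_width=0.5, dt=0.001):
        # Convolve a spike train (spike times in seconds) with a Gaussian kernel and
        # average the resulting rate estimate within bins of width bin_width.
        grid = np.arange(0.0, t_max, dt)
        train = np.zeros_like(grid)
        idx = np.clip(np.searchsorted(grid, spike_times), 0, len(grid) - 1)
        np.add.at(train, idx, 1.0 / dt)                          # delta train expressed as a rate
        k_t = np.arange(-4 * kernel_sd, 4 * kernel_sd + dt, dt)
        kernel = np.exp(-0.5 * (k_t / kernel_sd) ** 2)
        kernel /= kernel.sum()
        rate = np.convolve(train, kernel, mode='same')           # kernel density estimate of the rate
        samples_per_bin = int(bin_width / dt)
        n_bins = len(grid) // samples_per_bin
        return rate[: n_bins * samples_per_bin].reshape(n_bins, samples_per_bin).mean(axis=1)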
Fig. 7A gives the model log‐likelihoods across EM iterations for these 5 highest‐likelihood solutions.
Interestingly, there were single neurons whose responses were predicted very well by the estimated
model despite large trial‐to‐trial fluctuations (Fig. 7B, top row), while there were others with similar
trial‐to‐trial fluctuations for which the model only captured the general trend (Fig. 7B, bottom row).
This could potentially suggest that trial-to-trial fluctuations in single neurons may arise for very
different reasons: In those cases where strongly varying single unit responses are nevertheless highly
predictable, at least a considerable proportion of their trial‐to‐trial fluctuations must have been
captured by the deterministic part of the model’s latent state dynamics, hence may be due to
different (trial‐unique) initializations of the states (recall that the states are not free to vary in
accounting for the observations, but are tightly constrained by the model’s temporal consistency
requirements). In contrast, when only the average trend is captured, the neuron’s trial‐to‐trial
fluctuations likely represent true intrinsic (or measurement) noise sources that the model’s
deterministic part cannot account for. This observation highlights that (nonlinear) state space models
could potentially also provide new insights into other long‐standing questions in neurophysiology.
Fig. 8 shows the five trial‐averaged latent states for both left‐ and right‐lever trials for one of the
highest likelihood solutions. Not surprisingly, the first two state variables (receiving external input)
exhibit a strong cue response for the left vs. right lever, respectively. The third latent variable
appears to reflect the (end‐of‐trial) motor response, while the fourth and fifth state variable clearly
distinguish between the left and right lever options throughout the delay period of the task, in this
sense carrying a memory of the cue (previous response) within the delay. Hence, for this particular
data set, the extracted latent states appear to summarize quite well the most salient computational
features of this simple working memory task.
Further insight might be gained by examining the system’s fixed points and their eigenvalue
spectrum. For this purpose, the EM algorithm was started from 200 different initial conditions (that
is, initial parameter estimates and threshold settings $\boldsymbol{\theta}$) with maximum absolute eigenvalues (of the
corresponding fixed points) drawn from a relatively uniform distribution within the interval [0, 3].
Although the estimation process rarely returned truly multi‐stable solutions (just 2% of all cases),
there was a clear trend for the final maximum eigenvalues to aggregate around 1 (Fig. 9), that is to
produce models with very slow dynamics. Indeed, effectively slow dynamics is all that is needed to
bridge the delays (see also [1]), while true multi‐stability may perhaps even be the physiologically
less likely scenario (e.g. [48, 49]). (Reducing the bin width from 500 ms to 100 ms appeared to
produce solutions with eigenvalues even closer to 1 while retaining stimulus selectivity across the
delay, but this observation was not followed up more systematically here).
Discussion
Reconstructing neuronal dynamics parametrically and non‐parametrically
In the present work, a semi‐analytical, maximum‐likelihood (ML) approach for estimating piecewise‐
linear recurrent neural networks (PLRNN) from brain recordings was developed. The idea is that such
models would provide 1) a representation of neural trajectories and computationally relevant
dynamical features underlying high‐dimensional experimental time series in a much lower‐
dimensional latent variable space (cf. [16, 19]), and 2) more direct access to the neural system’s
computational properties. Specifically, once estimated to reproduce the data (in the ML sense), such
models may allow for more detailed analysis and in depth insight into the system’s computational
dynamics, e.g. through an analysis of fixed points and their linear stability (e.g. [24, 26, 28, 50‐56]),
which is not directly accessible from the experimental time series.
Model‐free (non‐parametric) techniques, usually based on Takens’ delay embedding theorem
[57] and extensions thereof [58, 59], have also frequently been applied to gain insight into neuronal
dynamics and its essential features, like attracting states associated with different task phases from
in‐vivo multiple single‐unit recordings [7, 8] or unstable periodic orbits extracted from relatively low‐
noise slice recordings [60]. In neuroscience, however, one commonly deals with high‐dimensional
observations, as provided by current multiple single‐unit or neuroimaging techniques (which still
usually constitute just a minor subset of all the system’s dynamical variables). In addition, there is a
large variety of both process and measurement noise sources. The former include potential thermal
noise sources and the probabilistic behavior of single ion channel gating [61], probabilistic synaptic
release [5], fluctuations in neuromodulatory background and hormone levels, and a large variety of
uncontrollable external noise sources via the sensory surfaces, including somatosensory and visceral
feedback from within the body. Measurement noise may come from direct physical sources like, for
instance, instabilities and movement in the tissue surrounding the recording electrodes, noise
properties of the recording devices themselves, the mere fact that only a fraction of all system
variables is experimentally accessed (‘sampling noise’), or may result from preprocessing steps like
spike sorting (e.g. [62, 63]). This is therefore a quite different scenario from the comparatively low‐
dimensional and low‐noise situations in, e.g., laser physics [64], and delay‐embedding‐based
approaches to the reconstruction of neural dynamics may have to be augmented by machine
learning techniques to retrieve at least some of its most salient features [7, 8].
Of course, model‐based approaches like the one developed here are also plagued by the high
dimensionality and high noise levels inherent in neural data, but perhaps to a lesser extent than
approaches like delay embeddings that aim to directly construct the state space from the
observations (see also [65]). This is because models as pursued in the statistical state space
framework explicitly incorporate both process and measurement noise into the system’s description.
Also, as long as the latent variable space itself is relatively small and related to the observations by
simple linear equations, as here, the high dimensionality of the observations themselves does not
constitute a too serious issue for estimation. More importantly, however, it is of clear advantage to
have access to the governing equations themselves, as only this allows for an in depth analysis of the
system’s dynamics and its relation to neural computation (e.g. [2, 24, 26, 53, 54, 56, 66]). For
instance, recurrent network models have been trained in the past to perform behavioral tasks or
reproduce behavioral data to infer the dynamical mechanisms potentially underlying working
memory [67] or context‐dependent decision making [54], although commonly, as in the cited cases,
not within a statistical framework (or not even by direct estimation from experimental data). There
are also approaches which are somewhat in between, attempting to account for the observations
directly, without reference to an underlying latent variable model, through differential equations
expressed in terms of nonlinear basis expansions in the observations, estimated through strongly
regularized (penalized) regression methods ([7], see also [9]). It remains to be investigated how well
such methods, which go without a noise model and face high data dimensionality directly, transfer to
neuroscience problems.
Estimation of neural state space models
State space models are a popular statistical tool in many fields of science (e.g. [10, 68]), although
their applications in neuroscience are of more recent origin [11, 12, 14, 17‐19, 21]. The Dynamic
Causal Modeling (DCM) framework advanced in the human fMRI literature to infer the functional
connectivity of brain networks and their dependence on task conditions [68, 69] may be seen as a
state space approach, although these models usually do not contain process noise (except for the
recently proposed ‘stochastic DCM’ [69]) and are commonly estimated through Bayesian inference,
which imposes more constraints (via computational burden) on the complexity of the models that
can reasonably be dealt with in this framework. In neurophysiology, Smith & Brown [11] were among
the first to suggest a state space model for multivariate spike count data by coupling a linear‐
Gaussian transition model with Poisson observations, with state estimation achieved by making
locally Gaussian approximations to eq. 18. Similar models have variously been used subsequently to
infer local circuit coding properties [14] or, e.g., biophysical parameters of neurons or synaptic inputs
from postsynaptic voltage recordings [70, 12]. Yu et al. [21] proposed Gaussian Process Factor
Analysis (GPFA) for retrieving lower‐dimensional, smooth latent neural trajectories from multiple
spike train recordings. In GPFA, the correlation structure among the latent variables is specified
(parameterized) explicitly rather than being given through a transition model. Buesing et al. [16]
discuss regularized forms of neural state space models to enforce their stability, while Macke et al.
[18] review different estimation methods for such models like the Laplace approximation or
variational inference methods.
By far most of the models discussed above are linear in their latent dynamics, however
(although observations may be non‐Gaussian). Although this may often be sufficient to uncover
important properties of underlying latent processes or structures, like connectivity or
synaptic/neuronal parameters, or to obtain lower‐dimensional representations of the observed
process, it is not suitable for retrieving the system dynamics or computations, as linear systems are
strongly limited in the repertoire of dynamics (and computations) they can produce (e.g. [50, 71]).
There are a few exceptions, however, the current work builds on: Yu et al. [19] suggested a RNN with
sigmoid‐type activation function (using the error function), coupled to Poisson spike count outputs,
and used it to reconstruct the latent neural dynamics underlying motor preparation and planning in
non‐human primates. In their work, they combined the Gaussian approximation suggested by Smith
& Brown [11] with the Extended Kalman Filter (EKF) for estimation within the EM framework. These
various approximations in conjunction with the iterative EKF estimation scheme may be quite prone
to numerical instabilities and accumulating errors, however (cf. [22]). Earlier work by Roweis &
Ghahramani [23] used radial basis function (RBF) networks as a partly analytically tractable
approach. Nonlinear extensions to DCM, incorporating quadratic terms, have been proposed as well
recently [72]. State and parameter estimation has also been attempted in (noisy) nonlinear
biophysical models [73,74], but these approaches are usually computationally expensive, especially
when based on numerical sampling [74], while at the same time pursuing objectives somewhat
different from those targeted here (i.e., less focused on computational properties).
Nevertheless, nonlinear neural state space models remain an under‐researched topic in
theoretical neuroscience. In the present work, PLRNNs were therefore chosen as a mathematically
comparatively tractable, yet computationally powerful nonlinear recurrent network approach that
can reproduce a wide range of nonlinear dynamical phenomena [75‐78]. Given its semi‐analytical
nature, the present algorithm runs reasonably fast (and starting it from a number of different
initializations to approach a globally optimal solution is computationally very feasible). However, its
mathematical properties (among them, issues of convergence/monotonicity, local
maxima/uniqueness and existence of solutions, and identifiability [of dynamics]) certainly require
further illumination and may lead to further algorithmic improvements.
Mechanisms of working memory
Although the primary focus of this work was to develop and evaluate a state space framework for
PLRNNs, some discussion of the applicational example chosen here, working memory, is in order.
Working memory is generally defined as the ability to actively hold an item in memory, in the
absence of guiding external input, for short‐term reference in subsequent choice situations [79].
Various neural mechanisms have been proposed to underlie this cognitive capacity, most
prominently multi‐stable neural networks which retain short‐term memory items by switching into
one of several stimulus‐selective attractor states (e.g. [24, 25, 28]). These attractors usually represent
fixed points in the firing rates, with assemblies of recurrently coupled stimulus‐selective cells
exhibiting high rates while those cells not coding for the present stimulus in short‐term memory
remaining at a spontaneous low‐rate base level. These models were inspired by the physiological
observation of ‘delay‐active’ cells [80‐82], that is cells that switch into a high‐rate state during the
delay periods of working memory tasks, and back to a low‐rate state after completion of a trial,
similar to the ‘delay‐active’ latent states observed in Fig. 8. Nakahara & Doya [83] were among the
first to point out, however, that, for working memory, it may be completely sufficient (or even
advantageous) to tune the system close to a bifurcation point where the dynamics becomes very
slow (see also [1]), and true multi‐stability may not be required. This is supported by the present
observation that most of the estimated PLRNN models had fixed points with eigenvalues close to 1
but were not truly bi‐ or multi‐stable (cf. Fig. 9), yet this was sufficient to account for maintenance of
stimulus‐selectivity throughout the 10 s delay of the present task (cf. Fig. 8) and for experimental
observations (cf. Fig. 7). Recently, however, a number of other mechanisms for supporting working
memory have been discussed, including sequential activation of cell populations [84] and synaptic
mechanisms [85]. Thus, the neural mechanisms of working memory remain an active research area to
which statistical model estimation approaches as developed here may significantly contribute, but
too broad a topic in its own right to be covered in more depth by this mainly methodological work.
Models and Methods
Expectation‐Maximization Algorithm: State estimation
As with most previous work on estimation in (neural) state space models [16, 18, 19, 22], we use the
Expectation‐Maximization (EM) framework for obtaining estimates of both the model parameters
and the underlying latent state path. Due to the piecewise‐linear nature of model (1), however, the
conditional latent state path density p(Z|X) is a high‐dimensional ‘mixture’ of partial Gaussians, with
the number of integrations required to obtain moments of p(Z|X) scaling as 2^(TM). Although
analytically accessible, this will be computationally prohibitive for almost all cases of interest. Our
approach therefore focuses on a computationally reasonably efficient way of searching for the mode
(maximum a‐posteriori, MAP estimate) of p(Z|X) which was found to be in good agreement with
E(Z|X) in most cases. Covariances were then approximated locally around the MAP estimate.
More specifically, the EM algorithm maximizes the expected log‐likelihood of the joint
distribution p(X,Z) as a lower bound on log p(X | Ξ) [23], where Ξ := {μ_0, A, W, Σ, B, Γ} denotes
the set of to‐be‐optimized‐for parameters (note that we dropped the thresholds from this for
now):
Q(Ξ, Z) := E[log p(Z, X | Ξ)]
  = −½ E[(z_1 − μ_0 − s_1)^T Σ^{−1} (z_1 − μ_0 − s_1)]
    − ½ ∑_{t=2}^{T} E[(z_t − A z_{t−1} − W φ(z_{t−1}) − s_t)^T Σ^{−1} (z_t − A z_{t−1} − W φ(z_{t−1}) − s_t)]
    − ½ ∑_{t=1}^{T} E[(x_t − B φ(z_t))^T Γ^{−1} (x_t − B φ(z_t))] − (T/2) (log|Σ| + log|Γ|).    (5)
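For concreteness, the generative model scored by eq. 5 can be simulated directly. The following is a minimal NumPy sketch of the PLRNN forward model implied by eq. 5, with φ(z) = max(0, z − θ) applied elementwise (the reference implementation accompanying this work is the provided MatLab code; parameter values here are arbitrary placeholders):

```python
import numpy as np

def simulate_plrnn(A, W, B, Sigma, Gamma, mu0, theta, s, rng=None):
    """Simulate latent states z_t and observations x_t of a PLRNN.

    Latent dynamics: z_t = A z_{t-1} + W phi(z_{t-1}) + s_t + eps_t,
    observations:    x_t = B phi(z_t) + eta_t,
    with phi(z) = max(0, z - theta) elementwise (cf. eq. 5).
    """
    rng = np.random.default_rng() if rng is None else rng
    T, M = s.shape
    N = B.shape[0]
    phi = lambda z: np.maximum(0.0, z - theta)
    Z = np.zeros((T, M))
    X = np.zeros((T, N))
    Z[0] = mu0 + s[0] + rng.multivariate_normal(np.zeros(M), Sigma)
    X[0] = B @ phi(Z[0]) + rng.multivariate_normal(np.zeros(N), Gamma)
    for t in range(1, T):
        Z[t] = (A @ Z[t - 1] + W @ phi(Z[t - 1]) + s[t]
                + rng.multivariate_normal(np.zeros(M), Sigma))
        X[t] = B @ phi(Z[t]) + rng.multivariate_normal(np.zeros(N), Gamma)
    return Z, X
```

Such a forward simulation is also what was used to generate the toy data sets for the evaluation examples (Figs. 1-6), though with specific parameter settings provided in the accompanying .mat files.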
For state estimation (E-step), if φ were a linear function, obtaining E(Z | X, Ξ) would be equivalent
to maximizing the argument of the expectancy in (5) w.r.t. Z, i.e.,
E[Z | X, Ξ] = arg max_Z log p(Z, X | Ξ) (see [12]; see also [86]). This is because for a Gaussian,
mean and mode coincide. In our case, p(X,Z) is piecewise Gaussian, and we still take the approach
(suggested in [12]) of maximizing log p(Z, X | Ξ) directly w.r.t. Z (essentially a Laplace
approximation of p(X | Ξ) where we neglect the Hessian which is constant around the maximizer;
cf. [12,37]).
Let Ω(t) ⊆ {1...M} be the set of all indices of the units for which we have z_mt ≤ θ_m at time
t, and W^{Ω(t)} and B^{Ω(t)} the matrices W and B, respectively, with all columns with indices ∈ Ω(t)
set to 0. The state estimation problem can then be formulated as
maximize
Q*_Ω(Z) := −½ (z_1 − μ_0 − s_1)^T Σ^{−1} (z_1 − μ_0 − s_1)
  − ½ ∑_{t=2}^{T} [z_t − (A + W^{Ω(t−1)}) z_{t−1} + W^{Ω(t−1)} θ − s_t]^T Σ^{−1} [z_t − (A + W^{Ω(t−1)}) z_{t−1} + W^{Ω(t−1)} θ − s_t]
  − ½ ∑_{t=1}^{T} (x_t − B^{Ω(t)} z_t + B^{Ω(t)} θ)^T Γ^{−1} (x_t − B^{Ω(t)} z_t + B^{Ω(t)} θ)    (6)
w.r.t. (Ω, Z), subject to z_mt > θ_m ∀ t, m ∉ Ω(t) AND z_mt ≤ θ_m ∀ t, m ∈ Ω(t).
Let us concatenate all state variables into one long column vector, z := (z_1^T, ..., z_T^T)^T = (z_11 ... z_mt ... z_MT)^T,
and unwrap the sums across time into large, block-banded MT×MT matrices (see [12, 71]) in which
we combine all terms quadratic or linear in z, or φ(z), respectively. Further, define d_Ω as the binary
(MT×1) indicator vector which has 1s everywhere except for the entries with indices
∈ Ω ⊆ {1...MT}, which are set to 0, and let D_Ω := diag(d_Ω) be the MT×MT diagonal matrix formed
from d_Ω. Let Θ := (θ^T, θ^T, ..., θ^T)^T ∈ R^(MT×1), and Θ_M the same vector shifted downward by M positions,
with the first M entries set to 0. One may then rewrite Q*_Ω(Z) in the form
Q*_Ω(Z) = −½ z^T (U_0 + D_Ω U_1 + U_1^T D_Ω + D_Ω U_2 D_Ω) z
  + ½ [z^T (v_0 + D_Ω v_1 + V_2 diag(d_{Ω,M}) Θ_M + V_3 D_Ω Θ + D_Ω V_4 D_Ω Θ)
  + (v_0 + D_Ω v_1 + V_2 diag(d_{Ω,M}) Θ_M + V_3 D_Ω Θ + D_Ω V_4 D_Ω Θ)^T z] + const.    (7)
The MT×MT matrices U_{0...2} separate product terms that do not involve φ(z) (U_0), involve
multiplication by φ(z) only from the left-hand or right-hand side (U_1), or from both sides (U_2).
Likewise, for the terms linear in z, vector and matrix terms were separated that involved z_mt or θ_m
conditional on z_mt ≤ θ_m (please see the provided MatLab code for the exact composition of these
matrices). For now, the important point is that we have 2^(MT) different quadratic equations,
depending on the bits on and off in the binary vector d_Ω. Consequently, to obtain the MAP
estimator for z, in theory, one may consider all 2^(MT) different settings for d_Ω, for each solve the
linear equations implied by ∂Q*_Ω(Z)/∂Z = 0, and select among those for which the solution z* is
consistent with the considered set Ω the one which produces the largest value Q*_Ω(z*).
In practice, this is generally not feasible. Various solution methods for piecewise linear
equations have been suggested in the mathematical programming literature in the past [87, 88]. For
instance, some piecewise linear problems may be recast as a linear complementarity problem [89],
but the pivoting methods often used to solve it work (numerically) well only for smaller scale settings
[38]. Here we therefore settled on a similar, simple Newton‐type iteration scheme as proposed in
Brugnano & Casulli [38]. Specifically, if we denote by z * () the solution to eq. 7 obtained with the
set of constraints active, the present scheme initializes with a random drawing of the {z mt } , sets
the components of d for which z mt m to 1 and all others to 0, and then keeps on alternating
between (1) solving ∂Q*_Ω(Z)/∂Z = 0 for z*(Ω) and (2) flipping the bits in d_Ω for which
sgn[2d_Ω(k) − 1] ≠ sgn[z*_k(Ω) − θ_k], that is, for which the components of the vector

c_Ω := (2d_Ω − 1) ∘ (Θ − z*(Ω))    (8)

are positive, until the solution to ∂Q*_Ω(Z)/∂Z = 0 is consistent with set Ω (i.e., c_Ω ≤ 0).
For the problem as formulated in Brugnano & Casulli [38], these authors proved that such a
solution always exists, and that the algorithm will always terminate after a finite (usually low)
number of steps, given certain assumptions and provided the matrix that multiplies with the states z
in ∂Q*_Ω(Z)/∂Z = 0 (i.e., the Hessian of Q*_Ω(z*)) fulfills certain conditions (Stieltjes-type; see [38]
for details). This will usually not be the case for the present system; although the Hessian of Q*_Ω(z*)
will be symmetric and positive-definite (with proper parameter settings), its off-diagonal elements
may be either larger or smaller than 0. Moreover, for the problem considered here, all elements of
the Hessian in (7) depend on Ω, while in [38] this is only the case for the on-diagonal elements (i.e.,
in [38] D_Ω enters the Hessian only in additive, not multiplicative form as here). For these reasons,
the Newton-type algorithm outlined above may not always converge to an exact solution (if one
exists in this case) but may eventually cycle among non-solution configurations, or may not even
always increase Q(Z) (i.e., eq. 5). To bypass this, the algorithm was always terminated if one of the
following three conditions was met: (i) a solution to ∂Q*_Ω(Z)/∂Z = 0 consistent with Ω was
encountered; (ii) a previously probed set Ω was revisited; (iii) the constraint violation error defined
by the l1 norm of the positive part of c_Ω defined in eq. 8 went up beyond a pre-specified
tolerance level. With these modifications, we found that the algorithm would usually terminate after
only a few iterations (<10 for the examined toy examples) and yield approximate solutions with only
a few constraints still violated (<3% for the toy examples). For those elements k of z for which the
constraints are still violated, that is for which c_k > 0 in eq. 8, one may explicitly enforce the
constraints by setting the violating states z_k = θ_k, but either way it was found that even these
approximate (and potentially only locally optimal) solutions were generally (for the problems
studied) in sufficiently good agreement with E(Z|X).
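A minimal NumPy sketch of this bit-flipping iteration is given below, for illustration only. The construction of the Hessian and linear term of Q*_Ω from the U, v, V matrices is left as caller-supplied placeholders (build_H, build_h; see the provided MatLab code for the actual composition), and all violated bits are flipped at once, as in the state-estimation scheme described above (the single-bit variant used within full EM is not shown):

```python
import numpy as np

def map_states_piecewise(build_H, build_h, Theta, d0, max_iter=50, tol_up=1e-3):
    """Newton-type iteration for the piecewise-linear MAP problem (sketch).

    build_H(d), build_h(d): callables returning the matrix H and vector h such
    that dQ*_Omega/dZ = 0 reduces to H z = h for the current indicator vector d
    (their construction from U_0..U_2, v_0, v_1, V_2..V_4 is omitted here).
    Theta: stacked threshold vector; d0: initial binary indicator vector.
    """
    d = d0.copy()
    visited = {d.tobytes()}
    best_err = np.inf
    z = None
    for _ in range(max_iter):
        z = np.linalg.solve(build_H(d), build_h(d))   # (1) solve dQ*/dZ = 0
        c = (2 * d - 1) * (Theta - z)                 # eq. 8
        err = np.sum(c[c > 0])                        # l1 norm of positive part
        if err == 0:                                  # (i) consistent solution found
            break
        if err > best_err + tol_up:                   # (iii) violation error went up
            break
        best_err = min(best_err, err)
        d = d.copy()
        d[c > 0] = 1 - d[c > 0]                       # (2) flip all violated bits
        key = d.tobytes()
        if key in visited:                            # (ii) set Omega revisited
            break
        visited.add(key)
    return z, d
```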
In the case of full EM iterations (with the parameters unknown as well), it appeared that
flipping violated constraints in d one by one may often (for the scenarios studied here) improve
overall performance, in the sense of yielding higher‐likelihood solutions and less numerical problems
(although it may leave more constraints violated in the end). Hence, this scheme was adopted here
for the full EM, that is only the single bit k* corresponding to the maximum element of vector c in
eq. 8 was inverted on each iteration (the one with the largest wrong‐side deviation from θ ). In
general, however, the resultant slow‐down in the algorithm may not always be worth the
performance gains; or a mixture of methods, with d_{Ω,k*}^(l+1) = 1 − d_{Ω,k*}^(l), k* := arg max_k {c_k > 0}, early
on, and d_{Ω,k}^(l+1) = 1 − d_{Ω,k}^(l) ∀ k : c_k > 0 during later iterations, may be considered.
Once a (local) maximum z^max has been obtained, the (local) covariances may be read off
from the inverse negative Hessian at z^max, i.e. the elements of
V := (U_0 + D_Ω U_1 + U_1^T D_Ω + D_Ω U_2 D_Ω)^{−1}.    (9)
We then use these covariance estimates to obtain (estimates of) E[φ(z)], E[z φ(z)^T], and
E[φ(z) φ(z)^T], as required for the maximization step. Denoting by F(θ; μ, σ²) := ∫_θ^∞ N(x; μ, σ²) dx
the complementary cumulative Gaussian, to ease subsequent derivations, let us introduce the
following notation:

N_k := N(θ_k; z_k^max, σ_k²),   F_k := F(θ_k; z_k^max, σ_k²),   σ_kl² := cov(z_k, z_l) = v_kl.    (10)

The elements of the expectancy vectors and matrices above are computed as

E[φ(z_k)] = σ_k² N_k + (z_k^max − θ_k) F_k,
E[φ(z_k)²] = ([z_k^max]² + σ_k² + θ_k² − 2 θ_k z_k^max) F_k + (z_k^max − θ_k) σ_k² N_k,    (11)
E[z_k φ(z_l)] = (σ_kl² − θ_l z_k^max + z_k^max z_l^max) F_l + z_k^max σ_l² N_l.
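These closed-form truncated-Gaussian moments are easy to sanity-check against Monte Carlo estimates. A standalone NumPy/SciPy sketch with arbitrary example values (not part of the estimation code itself):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu = np.array([0.3, -0.2])            # z_k^max, z_l^max
cov = np.array([[1.0, 0.4],
                [0.4, 0.8]])          # sigma_k^2, sigma_kl^2, sigma_l^2
theta = np.array([0.1, 0.0])          # thresholds theta_k, theta_l

# closed-form pieces (eqs. 10-11)
sd = np.sqrt(np.diag(cov))
N_ = norm.pdf(theta, mu, sd)          # N_k, N_l
F_ = norm.sf(theta, mu, sd)           # F_k, F_l (complementary CDF)
E_phi_k = cov[0, 0] * N_[0] + (mu[0] - theta[0]) * F_[0]
E_zk_phil = (cov[0, 1] - theta[1] * mu[0] + mu[0] * mu[1]) * F_[1] + mu[0] * cov[1, 1] * N_[1]

# Monte Carlo reference
z = rng.multivariate_normal(mu, cov, size=1_000_000)
phi = np.maximum(0.0, z - theta)
print(E_phi_k, phi[:, 0].mean())                  # should agree closely
print(E_zk_phil, (z[:, 0] * phi[:, 1]).mean())    # should agree closely
```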
The terms E[φ(z_k) φ(z_l)], for k ≠ l, are more tedious, and cannot be (to my knowledge and insight)
computed exactly (analytically), so we develop them in a bit more detail here:

E[φ(z_k) φ(z_l)] = ∫_{θ_k}^∞ ∫_{θ_l}^∞ p(z_k, z_l) (z_k − θ_k)(z_l − θ_l) dz_k dz_l
  = ∫_{θ_k}^∞ ∫_{θ_l}^∞ p(z_k, z_l) z_k z_l dz_k dz_l − θ_k ∫_{θ_k}^∞ ∫_{θ_l}^∞ p(z_k, z_l) z_l dz_k dz_l    (12)
    − θ_l ∫_{θ_k}^∞ ∫_{θ_l}^∞ p(z_k, z_l) z_k dz_k dz_l + θ_k θ_l ∫_{θ_k}^∞ ∫_{θ_l}^∞ p(z_k, z_l) dz_k dz_l.

The last term is just a (complementary) cumulative bivariate Gaussian evaluated with parameters
specified through the MAP solution (z^max, V) (and multiplied by the thresholds). The first term we
may rewrite as follows:
∫_{θ_k}^∞ ∫_{θ_l}^∞ p(z_k, z_l) z_k z_l dz_k dz_l = ∫_{θ_k}^∞ p(z_k) z_k ∫_{θ_l}^∞ p(z_l | z_k) z_l dz_l dz_k
  = ∫_{θ_k}^∞ p(z_k) z_k [ λ_l^{−1} N(θ_l; μ_{l|k}, λ_l^{−1}) + μ_{l|k} (1 − ∫_{−∞}^{θ_l} N(z_l; μ_{l|k}, λ_l^{−1}) dz_l) ] dz_k    (13)

where
μ_{l|k} := z_l^max + λ_l^{−1} λ_lk (z_k − z_k^max),
λ_l := σ_k² / (σ_k² σ_l² − σ_kl⁴),
λ_lk := σ_kl² / (σ_k² σ_l² − σ_kl⁴).
These are just standard results one can derive by the reverse chain rule for integration, with the λ's
the elements of the inverse bivariate (k,l)-covariance matrix. Note that if the variable z_k were
removed from the first integrand in eq. 13, i.e. as in the second term in eq. 12, all terms in eq. 13
would just come down to uni- or bivariate Gaussians (times some factor) or a univariate Gaussian
expectancy value, respectively. Noting this, one obtains for the second (and correspondingly for the
third) term in eq. 12:
θ_k ∫_{θ_k}^∞ ∫_{θ_l}^∞ p(z_k, z_l) z_l dz_k dz_l ≈ θ_k σ_kl² N_l F(θ_k; μ_lk, λ_l^{−1}) + θ_k (z_l^max F_k + σ_kl² N_k) F(θ_l; z_l^max, λ_k^{−1}),    (14)
with μ_lk := z_l^max + (σ_kl²/σ_k²)(θ_k − z_k^max).
The problematic bit is the product term ∫_{θ_k}^∞ p(z_k) z_k μ_{l|k} ∫_{θ_l}^∞ N(z_l; μ_{l|k}, λ_l^{−1}) dz_l dz_k in eq. 13, which we
resolve by making the approximation μ_{l|k} ≈ z_l^max. This way we have for the first term in eq. 12:

∫_{θ_k}^∞ ∫_{θ_l}^∞ p(z_k, z_l) z_k z_l dz_k dz_l ≈ θ_k N_l [λ_l^{−1} N(θ_k; μ_lk, λ_l^{−1}) + μ_lk F(θ_k; μ_lk, λ_l^{−1})]
  + [(σ_k² z_l^max + σ_kl² z_k^max) N_k + (z_k^max z_l^max + σ_kl²) F_k] F(θ_l; μ_lk, λ_k^{−1}).    (15)

Putting (13)-(15) together with the bivariate cumulative Gaussian yields an analytical approximation
to eq. 12 that can be computed based on the quantities obtained from the MAP estimate (z^max, V).
Expectation‐Maximization Algorithm: Parameter estimation
Once we have estimates for E[z], E[z z^T], E[φ(z)], E[z φ(z)^T], and E[φ(z) φ(z)^T], the
maximization step is standard and straightforward, so for convenience we just state the results here,
using the notation
E_{1,Δ} := ∑_{t=1}^{T−Δ} E[φ(z_t) φ(z_t)^T],   E_2 := ∑_{t=2}^{T} E[z_t z_{t−1}^T],   E_{3,Δ} := ∑_{t=1+Δ}^{T−1+Δ} E[z_t z_t^T],
E_4 := ∑_{t=1}^{T−1} E[φ(z_t) z_t^T],   E_5 := ∑_{t=2}^{T} E[z_t φ(z_{t−1})^T],    (16)
F_1 := ∑_{t=1}^{T} x_t E[φ(z_t)^T],   F_2 := ∑_{t=1}^{T} x_t x_t^T,   F_3 := ∑_{t=2}^{T} s_t E[z_{t−1}^T],
F_4 := ∑_{t=2}^{T} s_t E[φ(z_{t−1})^T],   F_5 := ∑_{t=2}^{T} E[z_t] s_t^T,   F_6 := ∑_{t=1}^{T} s_t s_t^T.
With these expectancy sums defined, one has

B = F_1 E_{1,0}^{−1},    (17a)
Γ = (1/T) (F_2 − F_1 B^T − B F_1^T + B E_{1,0} B^T) ⊙ I,    (17b)
μ_0 = E[z_1] − s_1,    (17c)
A = [(E_2 − W E_4 − F_3) ⊙ I] [E_{3,0} ⊙ I]^{−1},    (17d)
Σ = (1/T) [ var(z_1) + μ_0 s_1^T + s_1 μ_0^T + E_{3,1} − F_5 − F_5^T + F_6
    + (F_3 − E_2) A^T + A (F_3^T − E_2^T) + A E_{3,0} A^T
    + (F_4 − E_5) W^T + W (F_4^T − E_5^T) + W E_{1,1} W^T + A E_4^T W^T + W E_4 A^T ] ⊙ I.    (17e)
Note that to avoid redundancy in the parameters, here we usually fixed Σ = I · 10^(−2).
For W, since we assumed this matrix to have an off-diagonal structure (i.e., with zeros on the
diagonal), we solve for each row of W separately:

P^(0) := (E_{3,0} ⊙ I)^{−1} E_4^T,
P^(1) := E_5 − [(E_2 − F_3) ⊙ I] P^(0) − F_4,    (17f)
∀ m ∈ {1...M}:  W_{m,{1:M}\m} = P^(1)_{m,{1:M}\m} ([E_{1,1} − E_{4,·m} P^(0)_{m,·}]_{{1:M}\m,{1:M}\m})^{−1},
where the subscripts indicate the matrix elements to be pulled out, with the subscript dot denoting
all elements of the corresponding column or row (e.g., '·m' takes the mth column of that matrix).
Starting from a number of different random parameter initializations, the E- and M-steps are
alternated until the log-likelihood ratio falls below a predefined tolerance level (while still increasing)
or a preset maximum number of allowed iterations is exceeded. For reasons mentioned in the
Results, sometimes it can actually happen that the log-likelihood ratio temporarily decreases, in
which case the iterations are continued. If (N − M)² ≥ N + M, factor analysis may be used to
derive initial estimates for the latent states and observation parameters in (3) [23], although this was
not attempted here. For further implementational details see the MatLab code provided at www.zi-
mannheim.de/en/research/departments-research-groups-institutes/theor-neuroscience-e.html
[upon publication].
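Schematically, the outer estimation loop takes the following form. This is an illustrative Python sketch only; init_params, e_step, m_step and loglik_fn stand in for the routines described above (random initialization, MAP state estimation via eqs. 6-15, the closed-form updates of eqs. 16-17, and the likelihood approximation) and are passed in as callables rather than spelled out:

```python
import numpy as np

def fit_plrnn(X, S, init_params, e_step, m_step, loglik_fn,
              n_init=20, max_iter=100, tol=1e-3, rng=None):
    """Outer EM loop with multiple random restarts (illustrative sketch).

    init_params(X, S, rng) -> params; e_step(X, S, params) -> states;
    m_step(X, S, states) -> params; loglik_fn(X, S, params, states) -> float.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_ll, best = -np.inf, None
    for _ in range(n_init):
        params = init_params(X, S, rng)
        ll_old = -np.inf
        for _ in range(max_iter):
            states = e_step(X, S, params)     # MAP state path + local covariances
            params = m_step(X, S, states)     # closed-form parameter updates
            ll = loglik_fn(X, S, params, states)
            # stop once the (generally increasing) log-likelihood gain is small;
            # temporary decreases are tolerated and the iterations simply continue
            if 0 <= ll - ll_old < tol:
                break
            ll_old = ll
        if ll > best_ll:
            best_ll, best = ll, (params, states)
    return best_ll, best
```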
Particle filter
To validate the approximations from our semi‐analytical procedure developed above, a bootstrap
particle filter as given in Durbin & Koopman [22] was implemented. In bootstrap particle filtering, the
state posterior distribution at time t,

p_Ξ(z_t | x_1, ..., x_t) = p_Ξ(x_t | z_t) p_Ξ(z_t | x_1, ..., x_{t−1}) / p_Ξ(x_t | x_1, ..., x_{t−1})
  = p_Ξ(x_t | z_t) ∫_{z_{t−1}} p_Ξ(z_t | z_{t−1}) p_Ξ(z_{t−1} | x_1, ..., x_{t−1}) dz_{t−1} / p_Ξ(x_t | x_1, ..., x_{t−1}),    (18)
is numerically approximated through a set of 'particles' (samples) {z_t^(1), ..., z_t^(K)}, drawn from
p_Ξ(z_t | x_1, ..., x_{t−1}), together with a set of normalized weights {w_t^(1), ..., w_t^(K)},
w_t^(r) := p_Ξ(x_t | z_t^(r)) [∑_{k=1}^{K} p_Ξ(x_t | z_t^(k))]^{−1}. Based on this representation, moments of p_Ξ(z_t | x_{1:t}) and
p_Ξ(φ(z_t) | x_{1:t}) can be easily obtained by evaluating φ (or any other function of z) on the set of
samples {z_t^(r)} and summing the outcomes weighted with their respective normalized observation
likelihoods {w_t^(r)}. A new set of samples {z_{t+1}^(r)} for t+1 is then generated by first drawing K times
from {z_t^(k)} with replacement according to the weights {w_t^(k)}, and then drawing K new samples
according to the transition probabilities p_Ξ(z_{t+1}^(k) | z_t^(k)) (thus approximating the integral in eq. 18).
Here we used K = 10^4 samples. Note that this numerical sampling scheme, like a Kalman filter, but
unlike the procedure outlined above, only implements the filtering step (i.e., yields p_Ξ(z_t | x_{1:t}), not
p_Ξ(z_t | x_{1:T})). On the other hand, it gives (weakly) consistent (asymptotically unbiased; [90, 91])
estimates of all expectancies across this distribution, that is, it does not rely on the type of
approximations and locally optimal solutions of our semi‐analytical approach that almost inevitably
will come with some bias (since, among other factors, the mode would usually deviate from the
mean by some amount for the present model).
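For reference, a compact NumPy sketch of such a bootstrap particle filter for the PLRNN transition/observation model is given below (illustrative only, not the validation code itself; the validation runs used K = 10^4 particles):

```python
import numpy as np

def bootstrap_pf_plrnn(X, S, A, W, B, Sigma, Gamma, mu0, theta, K=10_000, rng=None):
    """Bootstrap particle filter returning filtered means E[z_t | x_{1:t}]."""
    rng = np.random.default_rng() if rng is None else rng
    T, N = X.shape
    M = A.shape[0]
    phi = lambda z: np.maximum(0.0, z - theta)
    Gamma_inv = np.linalg.inv(Gamma)
    logdet = np.linalg.slogdet(Gamma)[1]

    def obs_logw(x, Z):
        # log p(x_t | z_t) for all K particles, with x_t = B phi(z_t) + eta
        r = x - phi(Z) @ B.T
        return -0.5 * (np.einsum('kn,nm,km->k', r, Gamma_inv, r)
                       + logdet + N * np.log(2 * np.pi))

    Z = mu0 + S[0] + rng.multivariate_normal(np.zeros(M), Sigma, size=K)
    means = np.zeros((T, M))
    for t in range(T):
        if t > 0:
            # resample according to previous weights, then propagate (eq. 18 integral)
            idx = rng.choice(K, size=K, p=w)
            Z = (Z[idx] @ A.T + phi(Z[idx]) @ W.T + S[t]
                 + rng.multivariate_normal(np.zeros(M), Sigma, size=K))
        logw = obs_logw(X[t], Z)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means[t] = w @ Z
    return means
```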
Experimental data sets
Details of the experimental task and electrophysiological data sets used here can be found in
Hyman et al. [35]. Briefly, rats had to alternate between left and right lever presses in a Skinner box
to obtain a food reward dispensed on correct choices, with a 10 s delay enforced between
consecutive lever presses. While the levers were located on one side of the Skinner box, animals had
to perform a nosepoke on the opposite side of the box in between lever presses for initiating the
delay period, to discourage them from developing an external coding strategy (e.g., through
maintenance of body posture during the delay). While animals were performing the task, multiple
single units were recorded with a set of 16 tetrodes implanted bilaterally into the anterior cingulate
cortex (ACC, a subdivision of rat prefrontal cortex). For the present analyses, a data set from only one
of the four rats recorded on this task was selected for the present exemplary purposes, namely the
one where the clearest single unit traces of delay activity were observed in the first place. This data
set consisted of 30 simultaneously recorded units, of which the 19 units with spiking rates >1 Hz were
retained, on 14 correct trials (only correct response trials were analyzed). The trials had variable
length, but were all cut down to the same length of 14 s, including 2 s of pre‐nosepoke, 5 s extending
into the delay from the nosepoke, 5 s preceding the next lever press, and 2 s of post‐response phase
(note that this may imply temporal gaps in the middle of the delay on some trials, which were
ignored here for convenience). All spike trains were convolved with Gaussian kernels (see, e.g., [8,
46, 92]), with the kernel standard deviation set individually for each unit to one half of its mean
interspike‐interval. Note that this also brings the observed series into tighter agreement with the
Gaussian assumptions of the observation model, eq. 3. Finally, the spike time series were binned into
500 ms bins (corresponding roughly to the inverse of the overall [across all 30 recorded cells] average
neural firing rates of 2.2 Hz), which resulted in 14 trials of 28 time bins each submitted to the
estimation process. As indicated in the section ‘State space model’, a trial‐unique initial state mean
μ_k, k = 1...14, was assumed for each of the 14 temporally segregated trials.
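The spike-train preprocessing just described can be summarized in a short sketch (Python/NumPy for illustration only; whether the convolved rates are averaged or summed within the 500 ms bins is not specified above, so averaging is assumed here):

```python
import numpy as np

def preprocess_spike_trains(spike_times, t_start, t_end, dt=0.001, bin_width=0.5):
    """Convolve each unit's spike train with a Gaussian kernel whose SD equals
    half the unit's mean inter-spike interval, then average into 500 ms bins.
    spike_times: list of 1-D arrays of spike times in seconds, one per unit.
    Returns an array of shape (n_bins, n_units). Illustrative sketch only."""
    grid = np.arange(t_start, t_end, dt)
    n_units = len(spike_times)
    rates = np.zeros((grid.size, n_units))
    for u, st in enumerate(spike_times):
        st = st[(st >= t_start) & (st < t_end)]
        if st.size < 2:
            continue
        sd = 0.5 * np.mean(np.diff(st))          # kernel SD = half mean ISI
        d = grid[:, None] - st[None, :]          # superpose a kernel on every spike
        rates[:, u] = np.exp(-0.5 * (d / sd) ** 2).sum(axis=1) / (sd * np.sqrt(2 * np.pi))
    n_bins = int((t_end - t_start) / bin_width)
    idx = ((grid - t_start) // bin_width).astype(int)
    binned = np.array([rates[idx == b].mean(axis=0) for b in range(n_bins)])
    return binned
```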
Acknowledgements
I thank Dr. Georgia Koppe for her feedback on this manuscript, and Drs. James Hyman and Jeremy
Seamans for lending me their in‐vivo electrophysiological recordings from rat ACC as an analysis
testbed.
Funding statement
This work was funded through two grants from the German Research Foundation (DFG, Du 354/8‐1,
and within the Collaborative Research Center 1134) to the author, and by the German Ministry for
Education and Research (BMBF, 01ZX1314G) within the e:Med program.
References
1. Durstewitz D. Self‐organizing neural integrator predicts interval times through climbing activity. J
Neurosci. 2003;23: 5342‐5353.
2. Wang XJ. Probabilistic decision making by slow reverberation in cortical circuits. Neuron. 2002;36:
955‐968.
3. Izhikevich EM. Dynamical Systems in Neuroscience. MIT Press; 2007.
4. Rabinovich MI, Huerta R, Varona P, Afraimovich VS. Transient cognitive dynamics,
metastability, and decision making. PLoS Comput Biol. 2008;2: e1000072.
5. Stevens CF. Neurotransmitter release at central synapses. Neuron. 2003;40: 381‐388.
6. Pillow JW, Shlens J, Chichilnisky EJ, Simoncelli EP. A model‐based spike sorting algorithm for
removing correlation artifacts in multi‐neuron recordings. PLoS One. 2013;8: e62123
7. Balaguer‐Ballester E, Lapish CC, Seamans JK, Durstewitz D. Attractor Dynamics of Cortical
Populations During Memory‐Guided Decision‐Making. PLoS Comput Biol. 2011;7: e1002057.
8. Lapish CC, Balaguer‐Ballester E, Seamans JK, Phillips AG, Durstewitz D. Amphetamine Exerts Dose‐
Dependent Changes in Prefrontal Cortex Attractor Dynamics during Working Memory. J Neurosci.
2015;35: 10172–10187.
9. Brunton SL, Proctor JL, Kutz JN. Discovering governing equations from data by sparse identification
of nonlinear dynamical systems. Proc Natl Acad Sci U S A. 2016;113: 3932‐3937.
10. Wood SN. Statistical inference for noisy nonlinear ecological dynamic systems. Nature. 2010; 466:
1102‐1104.
11. Smith AC, Brown EN. Estimating a state‐space model from point process observations. Neural
Comput. 2003;15: 965–991.
12. Paninski L, Ahmadian Y, Ferreira DG, Koyama S, Rahnama RK, Vidne M, et al. A new look at state‐
space models for neural data. J Comput Neurosci. 2010;29: 107‐126.
13. Paninski L, Vidne M, DePasquale B, Ferreira DG. Inferring synaptic inputs given a noisy voltage
trace via sequential Monte Carlo methods. J Comput Neurosci. 2012;33: 1‐19.
14. Pillow JW, Shlens J, Paninski L, Sher A, Litke AM, Chichilnisky EJ, et al. Spatio‐temporal
correlations and visual signalling in a complete neuronal population. Nature. 2008;454: 995‐999.
15. Pillow JW, Ahmadian Y, Paninski L. Model‐based decoding, information estimation, and change‐
point detection techniques for multineuron spike trains. Neural Comput. 2011;23: 1‐45.
16. Buesing L, Macke JH, Sahani M. Learning stable, regularised latent models of neural population
dynamics. Network. 2012;23: 24–47.
17. Latimer KW, Yates JL, Meister ML, Huk AC, Pillow JW. Single-trial spike trains in parietal cortex
reveal discrete steps during decision-making. Science. 2015;349: 184-187.
18. Macke JH, Buesing L, Sahani M. Estimating State and Parameters in State Space Models of Spike
Trains. In: Chen Z editor. Advanced State Space Methods for Neural and Clinical Data. Cambridge:
University Press; 2015. in press.
19. Yu BM, Afshar A, Santhanam G, Ryu SI, Shenoy KV. Extracting Dynamical Structure Embedded in
Neural Activity. Adv Neural Inf Process Syst. 2005;18: 1545‐1552.
20. Yu BM, Kemere C, Santhanam G, Afshar A, Ryu SI Meng TH, et al. Mixture of trajectory models for
neural decoding of goal‐directed movements. J Neurophysiol. 2007;5: 3763‐3780.
21. Yu BM, Cunningham JP, Santhanam G, Ryu SI, Shenoy KV, Sahani M. Gaussian‐Process Factor
Analysis for Low‐Dimensional Single‐Trial Analysis of Neural Population Activity. J Neurophysiol.
2009;102: 614‐635.
22. Durbin J, Koopman SJ. Time Series Analysis by State Space Methods. Oxford Statistical Science;
2012.
23. Roweis ST, Ghahramani Z. An EM algorithm for identification of nonlinear dynamical systems. In:
Haykin S, editor. Kalman Filtering and Neural Networks; 2001
24. Amit DJ, Brunel N. Model of global spontaneous activity and local structured activity during delay
periods in the cerebral cortex. Cereb Cortex. 1997;7: 237‐252.
25. Durstewitz D, Seamans JK Sejnowski TJ. Neurocomputational models of working memory. Nat
Neurosci. 2000;3; Suppl: 1184‐1191.
26. Durstewitz D. Implications of synaptic biophysics for recurrent network dynamics and active
memory. Neural Netw. 2009;22: 1189‐1200.
27. Brunel N, Wang XJ. Effects of neuromodulation in a cortical network model of object working
memory dominated by recurrent inhibition. J Comput Neurosci. 2011;11: 63‐85.
28. Wang XJ. Synaptic basis of cortical persistent activity: the importance of NMDA receptors to
working memory. J Neurosci. 1999;19: 9587‐9603.
29. Funahashi KI, Nakamura Y. Approximation of Dynamical Systems by Continuous Time Recurrent
Neural Networks. Neural Netw. 1993;6: 801‐806.
30. Kimura M, Nakano R. Learning dynamical systems by recurrent neural networks from orbits.
Neural Netw. 1998;11: 1589–1599.
31. Chow TWS, Li XD. Modeling of Continuous Time Dynamical Systems with Input by Recurrent
Neural Networks. IEEE Trans Circuits Syst I Fundam Theory Appl. 2000;47: 575-578.
32. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521: 436‐444.
33. Mnih V, Kavukcuoglu K, Silver D, Rusu AA, Veness J, Bellemare MG, et al. Human‐level control
through deep reinforcement learning. Nature. 2015;518: 529‐533.
34. Hochreiter S, Schmidhuber J. Long short‐term memory. Neural Comput. 1997;9: 1735‐1780.
35. Hyman JM, Whitman J, Emberly E, Woodward TS, Seamans JK. Action and outcome activity state
patterns in the anterior cingulate cortex. Cereb Cortex. 2013;23: 1257‐1268.
36. Paninski L, Ahmadian Y, Ferreira DG, Koyama S, Rahnama RK, Vidne M, et al. A new look at state‐
space models for neural data. J Comput Neurosci. 2010;29: 107‐126.
37. Koyama S, Paninski L. Efficient computation of the maximum a posteriori path and parameter
estimation in integrate‐and‐fire and more general state‐space models. J Comput Neurosci. 2010;29:
89‐105.
38. Brugnano L, Casulli V. Iterative solution of piecewise linear systems. SIAM J Sci Comput. 2008;30:
463–472.
39. Williams RJ, Zipser D. A learning algorithm for continually running fully recurrent neural networks.
Neural Computat. 1990;1: 256‐263
40. Hertz J, Krogh AS, Palmer RG. Introduction to the theory of neural computation. 1991; Addison‐
Wesley Pub Co.
41. Zhang K, Hyvärinen A. A General Linear Non‐Gaussian State‐Space Model: Identifiability,
Identification, and Applications. JMLR: Workshop and Conference Proceedings 2011;20: 113‐128.
42. Auger‐Méthé M, Field C, Albertsen CM, Derocher AE, Lewis MA, Jonsen ID, et al. State‐space
models’ dirty little secrets: even simple linear Gaussian models can have estimation problems. Sci
Rep. 2016;6: 26677.
43. Park M, Bohner G, Macke J. Unlocking neural population non‐stationarity using a hierarchical
dynamics model In: Advances in Neural Information Processing Systems 28, Twenty‐Ninth Annual
Conference on Neural Information Processing Systems (NIPS 2015); 2016. pp.1‐9.
44. Wu CFJ. On the Convergence Properties of the EM Algorithm. Ann Statist. 1983;11: 95‐103.
45. Boutayeb M, Rafaralahy H, Darouach M. Convergence analysis of the extended Kalman filter used
as an observer for nonlinear deterministic discrete‐time systems. IEEE Trans Autom Control. 1997;42:
581‐586.
46. Durstewitz D, Balaguer‐Ballester E. Statistical approaches for reconstructing neuro‐cognitive
dynamics from high‐dimensional neural recordings. Neuroforum. 2010;1: 89–98.
47. Shimazaki H, Shinomoto S. Kernel Bandwidth Optimization in Spike Rate Estimation. J Comp
Neurosci. 2010;29: 171‐182.
48. Latham PE, Nirenberg S. Computing and stability in cortical networks. Neural Comput. 2004;16:
1385‐1412.
49. Durstewitz D, Seamans JK. Beyond bistability: biophysics and temporal dynamics of working
memory. Neuroscience. 2006;139: 119‐133.
50. Strogatz SH. Nonlinear dynamics and chaos. Addison‐Wesley Publ; 1994.
51. Durstewitz D, Kelc M, Güntürkün O. A neurocomputational theory of the dopaminergic
modulation of working memory functions. J Neurosci. 1999;19: 207‐222.
52. Brunel N. Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons.
J Comput Neurosci. 2000;8: 183–208.
53. Beer RD. Parameter Space Structure of Continuous‐Time Recurrent Neural Networks. Neural
Computation 2006;18: 3009‐3051.
54. Mante V, Sussillo D, Shenoy KV, Newsome WT. Context‐dependent computation by recurrent
dynamics in prefrontal cortex. Nature. 2013;503: 78‐84.
55. Hertäg L, Durstewitz D, Brunel N. Analytical approximations of the firing rate of an adaptive
exponential integrate‐and‐fire neuron in the presence of synaptic noise. Front Comput Neurosci.
2014;8: 116.
56. Sussillo D, Barak O. Opening the black box: low-dimensional dynamics in high-dimensional
recurrent neural networks. Neural Comput. 2013;25: 626-649.
57. Takens F. Detecting strange attractors in turbulence. Lecture Notes in Mathematics 898. Springer
Berlin;1981: 366–381.
58. Sauer T, Yorke JA, Casdagli M. Embedology. J Stat Phys. 1991;65: 579-616.
59. Sauer T. Reconstruction of dynamical systems from interspike intervals. Phys Rev Lett. 1994;72:
3811‐3814.
60. So P, Francis JT, Netoff TI, Gluckman BJ, Schiff SJ. Periodic Orbits: A New Language for Neuronal
Dynamics. Biophys J. 1998;74: 2776–2785
61. Hille B. Ion channels of excitable membranes. 3rd ed. Sinauer Assoc Inc; 2001.
62. Takahashi S, Anzai Y, Sakurai Y. A new approach to spike sorting for multi‐neuronal activities
recorded with a tetrode‐‐how ICA can be practical. Neurosci Res. 2003a;46: 265‐272.
63. Takahashi S, Anzai Y, Sakurai Y. Automatic sorting for multi‐neuronal activity recorded with
tetrodes in the presence of overlapping spikes. J Neurophysiol. 2003b;89: 2245‐2258.
64. Kantz H, Schreiber T. Nonlinear Time Series Analysis. Cambridge University Press; 2004.
65. Schreiber T, Kantz H. Observing and predicting chaotic signals: Is 2% noise too much?
In: Kravtsov YA ,Kadtke JB, editors. Predictability of Complex Dynamical Systems, Springer, New
York;1996.
66. Durstewitz D, Gabriel T. Dynamical basis of irregular spiking in NMDA‐driven prefrontal cortex
neurons. Cereb Cortex. 2007;17: 894‐908.
67. Zipser D, Kehoe B, Littlewort G, Fuster J. A spiking network model of short-term active
memory. J Neurosci. 1993;13: 3406‐3420.
68. Friston KJ, Harrison L, Penny W. Dynamic causal modelling. Neuroimage. 2003;19: 1273‐1302.
69. Daunizeau J, Stephan KE, Friston KJ. Stochastic dynamic causal modelling of fMRI data: Should we
care about neural noise? Neuroimage. 2012;2: 464–481.
70. Huys QJM, Paninski L. Smoothing of, and Parameter Estimation from, Noisy Biophysical
Recordings. PLoS Comput Biol. 2009;5: e1000379.
71. Durstewitz D. Advanced Statistical Models in Neuroscience. Heidelberg: Springer; in press.
72. Stephan KE, Kasper L, Harrison LM, Daunizeau J, den Ouden HE et al. Nonlinear dynamic causal
models for fMRI. Neuroimage. 2008;42: 649‐662.
73. Toth BA, Kostuk M, Meliza CD, Margoliash D, Abarbanel HD. Dynamical estimation of neuron and
network properties I: variational methods. Biol Cybern. 2011;105: 217‐237.
74. Kostuk M, Toth BA, Meliza CD, Margoliash D, Abarbanel HD. Dynamical estimation of neuron and
network properties II: path integral Monte Carlo methods. Biol Cybern. 2012;106: 155‐167.
75. Yi Z, Tan KK, Lee TH. Multistability analysis for recurrent neural networks with unsaturating
piecewise linear transfer functions. Neural Comput. 2003;15: 639‐662.
76. Tang HJ, Tan KC, Zhang W. Analysis of cyclic dynamics for networks of linear threshold neurons.
Neural Comput. 2005;17: 97‐114.
77. Yu J, Yi Z, Zhang L. Representations of continuous attractors of recurrent neural networks. IEEE
Trans Neural Netw. 2009;20: 368‐372.
78. Zhang L Yi Z, Yu J. Multiperiodicity and attractivity of delayed recurrent neural networks with
unsaturating piecewise linear transfer functions. IEEE Trans Neural Netw. 2008;19: 158‐167.
79. Fuster JM. Prefrontal Cortex. 5th ed. Academic Press; 2015.
80. Fuster JM. Unit activity in prefrontal cortex during delayed‐response performance: neuronal
correlates of transient memory. J Neurophysiol. 1973;36: 61‐78.
81. Funahashi S, Bruce CJ, Goldman‐Rakic PS. Mnemonic coding of visual space in the monkey's
dorsolateral prefrontal cortex. J Neurophysiol. 1989;61: 331‐349.
82. Miller EK, Erickson CA, Desimone R. Neural mechanisms of visual working memory in prefrontal
cortex of the macaque. J Neurosci. 1996;16: 5154‐5167.
83. Nakahara H, Doya K. Near‐saddle‐node bifurcation behavior as dynamics in working memory for
goal‐directed behavior. Neural Comput. 1998;10: 113‐132.
84. Baeg EH, Kim YB, Huh K, Mook‐Jung I, Kim HT, Jung MW. Dynamics of population code for
working memory in the prefrontal cortex. Neuron. 2003;40: 177‐188.
85. Mongillo G, Barak O, Tsodyks M. Synaptic theory of working memory. Science. 2008;319: 1543‐
1546.
86. Fahrmeir L, Tutz G. Multivariate Statistical Modelling Based on Generalized Linear Models.
Springer; 2010.
87. Eaves BC. Solving piecewise linear convex equations. Math Program Stud. 1974;1: 96-119.
88. Eaves BC, Scarf H. The solution of systems of piecewise linear equations. Math Oper Res. 1976;1:
1‐27.
89. Cottle RW, Dantzig GB. Complementary pivot theory of mathematical programming. Linear
Algebra Appl. 1968;1: 103‐125.
90. Crisan D, Doucet A. A Survey of Convergence Results on Particle Filtering Methods for
Practitioners. IEEE Trans Signal Process. 2002;50: 736‐746.
91. Lee A, Whiteley N. Variance estimation in the particle filter. arXiv:1509.00394v2
92. Hyman JM, Ma L, Balaguer‐Ballester E, Durstewitz D, Seamans JK. Contextual encoding by
ensembles of medial prefrontal cortex neurons. Proc Natl Acad Sci USA. 2013;109: 5086-5091.
Figure Legends
Fig. 1. State and parameter estimates for nonlinear cycle example. (A) True (solid‐circle lines) and
estimated (dashed‐star lines) states over some periods of the simulated limit cycle generated by a 3‐
state PLRNN when true parameters were provided (for this example, θ = (0.86, 0.09, 0.85); all
other parameters as in B, see also provided Matlab file ‘PLRNNoscParam.mat’). ‘True states’ refers to
the actual states from which the observations X were generated. Inputs of s_it = 1 were provided to
units i=1 and i=2 on time steps 1 and 10 of each cycle, respectively. (B) True and estimated model
parameters for (from top‐left to bottom‐right) μ 0 , A, W, Σ, B, Γ , when true states (but not their
higher‐order moments) were provided. Bisectrix lines (black) indicate identity.
Fig. 2. Agreement between simulated (x‐axes) and semi‐analytical (y‐axes) solutions for state
expectancies for the model from Fig. 1 across all three state variables and T=750 time steps. Here,
φ(z_i) := max{0, z_i − θ_i} is the PL activation function. Simulated state paths and their moments were
generated using a bootstrap particle filter with 10^4 particles. Bisectrix lines in gray indicate identity.
Fig. 3. State estimation for ‘working memory’ example when true parameters were provided. (A)
Setup of the simulated working memory task: Stimulus inputs (green bars, s_it = 1, and 0 otherwise)
and requested outputs (black = 1, light‐gray = 0, dark‐grey = no output required) across the 20 time
points of a working memory trial (with two different trial types) for the 5 PLRNN units. (B) Correlation
between estimated and true states (i.e., those from which the observations X were generated) across
all five state variables and T=800 time steps. Bisectrix in black. (C) True (circle‐solid lines) and
estimated (star-dashed lines) states for output units #3 (blue) and #4 (red) when s_15 = 1 (left) or
s_25 = 1 (right) for single example trials. Note that although working memory PLRNNs may, in
principle, be explicitly designed, here a 5‐state PLRNN was first trained by conventional gradient
descent (real‐time recurrent‐learning; [39]) to perform the task in A, to yield more ‘natural’ and less
uniform ground truth states and parameters. Here, all θ_i = 0 (implying that there can only be one
stable fixed point). See Matlab file ‘PLRNNwmParam.mat’ and Fig. 4 for details on parameters.
Fig. 4. True and estimated parameters for the working memory PLRNN (cf. Fig. 3) when true states
were provided. From top‐left to bottom‐right, estimates for: μ 0 , A, W, Σ, B, Γ . Note that most
parameter estimates were highly accurate, although all state covariance matrices still had to be
estimated as well (i.e., with the true states provided as initialization for the E‐step). Bisectrix lines in
black indicate identity.
Fig. 5. Full EM algorithm on working memory model: State estimates for ML solution. (A) Log‐
likelihood as a function of EM iteration for the highest‐likelihood run out of all 240 initializations. As
in this example, the log‐likelihood, although generally increasing, was not always monotonic (note
the little ripples; see discussion in Results). (B) In this example, true and estimated states were nicely
linearly related, although not with a regression slope of 1 (in general, as discussed in the text, the
sets of true and estimated states may be related by some linear transformation). State estimation in
this case was performed by inverting only the single constraint corresponding to the largest deviation
on each iteration (see Methods). Bisectrix lines in black indicate identity.
Fig. 6. Full EM algorithm on working memory model. (A) Parameter estimates for ML solution from
Fig. 5. True parameters (on x‐axes or as blue bars, respectively), initial (gray circles or green bars) and
final (black circles or yellow bars) parameter estimates for (from left to right) μ 0 , A, W, B, Γ .
Bisectrix lines in blue. (B) Distributions of initial (gray curves), final (black‐solid curves), and final after
reordering of states (black‐dashed curves), deviations between estimated and true parameters
across all 240 EM runs from different initial conditions. All final distributions were centered around 0,
indicating that final parameter estimates were largely unbiased. Note that partial information about
state assignments was implicitly provided to the network through the unit‐specific inputs (and, more
generally, may also come from the unit‐specific thresholds θi , although these were all set to 0 for
the present example), and hence state reordering only produced slight improvements in the
parameter estimates.
Fig. 7. Prediction of single unit responses. (A) Examples of log‐likelihood curves across EM iterations
from the 5/36 highest‐likelihood runs for a 5‐state PLRNN estimated from 19 simultaneously
recorded prefrontal neurons on a working memory task. (B) Example of an ACC unit predicted
extremely well by the estimated PLRNN despite considerable trial to trial fluctuations (3 consecutive
trials shown). (C) Example of another ACC unit on the same three trials where only the average trend
was captured by the PLRNN. Gray vertical bars in B and C indicate times of cue/ response. State
estimation in this case was performed by inverting only the single constraint corresponding to the
largest deviation on each iteration (see Methods).
Fig. 8. Example for latent states of PLRNN estimated from ACC multiple single‐unit recordings during
working memory (cf. Fig. 7). Shown are trial averages for left‐lever (black) and right‐lever (gray) trials
with SEMs computed across trials. Dashed vertical lines flank the 10 s period of the delay phase used
for model estimation. Note that latent variables z4 and z5, in particular, differentiate between left and
right lever responses throughout most of the delay period.
Fig. 9. Initial (gray) and final (black) distributions of maximum (absolute) eigenvalues associated with
all fixed points of 200 PLRNNs estimated from the experimental data (cf. Figs. 7 & 8) with different
initializations of parameters, including the (fixed) threshold parameters θi . Initial parameter
configurations were deliberately chosen to yield a rather uniform distribution of absolute
eigenvalues ≤ 3.
Published as a conference paper at ICLR 2018
SEMANTICALLY DECOMPOSING THE LATENT SPACES OF GENERATIVE ADVERSARIAL NETWORKS
Chris Donahue
Department of Music
University of California, San Diego
cdonahue@ucsd.edu
Zachary C. Lipton
Carnegie Mellon University
Amazon AI
zlipton@cmu.edu
Akshay Balsubramani
Department of Genetics
Stanford University
abalsubr@stanford.edu
Julian McAuley
Department of Computer Science
University of California, San Diego
jmcauley@eng.ucsd.edu
ABSTRACT
We propose a new algorithm for training generative adversarial networks that jointly
learns latent codes for both identities (e.g. individual humans) and observations
(e.g. specific photographs). By fixing the identity portion of the latent codes, we
can generate diverse images of the same subject, and by fixing the observation
portion, we can traverse the manifold of subjects while maintaining contingent
aspects such as lighting and pose. Our algorithm features a pairwise training
scheme in which each sample from the generator consists of two images with a
common identity code. Corresponding samples from the real dataset consist of two
distinct photographs of the same subject. In order to fool the discriminator, the
generator must produce pairs that are photorealistic, distinct, and appear to depict
the same individual. We augment both the DCGAN and BEGAN approaches with
Siamese discriminators to facilitate pairwise training. Experiments with human
judges and an off-the-shelf face verification system demonstrate our algorithm’s
ability to generate convincing, identity-matched photographs.
1 INTRODUCTION
In many domains, a suitable generative process might consist of several stages. To generate a
photograph of a product, we might wish to first sample from the space of products, and then from
the space of photographs of that product. Given such disentangled representations in a multistage
generative process, an online retailer might diversify its catalog, depicting products in a wider variety
of settings. A retailer could also flip the process, imagining new products in a fixed setting. Datasets
for such domains often contain many labeled identities with fewer observations of each (e.g. a
collection of face portraits with thousands of people and ten photos of each). While we may know the
identity of the subject in each photograph, we may not know the contingent aspects of the observation
(such as lighting, pose and background). This kind of data is ubiquitous; given a set of commonalities,
we might want to incorporate this structure into our latent representations.
Generative adversarial networks (GANs) learn mappings from latent codes z in some low-dimensional
space Z to points in the space of natural data X (Goodfellow et al., 2014). They achieve this
power through an adversarial training scheme pitting a generative model G : Z 7→ X against a
discriminative model D : X 7→ [0, 1] in a minimax game. While GANs are popular, owing to their
ability to generate high-fidelity images, they do not, in their original form, explicitly disentangle the
latent factors according to known commonalities.
In this paper, we propose Semantically Decomposed GANs (SD-GANs), which encourage a specified portion of the latent space to correspond to a known source of variation.1,2 The technique
1 Web demo: https://chrisdonahue.github.io/sdgan
2 Source code: https://github.com/chrisdonahue/sdgan
Figure 1: Generated samples from SD-BEGAN. Each of the four rows has the same identity code zI
and each of the fourteen columns has the same observation code zO .
decomposes the latent code Z into one portion ZI corresponding to identity, and the remaining
portion ZO corresponding to the other contingent aspects of observations. SD-GANs learn through a
pairwise training scheme in which each sample from the real dataset consists of two distinct images
with a common identity. Each sample from the generator consists of a pair of images with common
zI ∈ ZI but differing zO ∈ ZO . In order to fool the discriminator, the generator must not only
produce diverse and photorealistic images, but also images that depict the same identity when zI is
fixed. For SD-GANs, we modify the discriminator so that it can determine whether a pair of samples
constitutes a match.
As a case study, we experiment with a dataset of face photographs, demonstrating that SD-GANs
can generate contrasting images of the same subject (Figure 1; interactive web demo in footnote
on previous page). The generator learns that certain properties are free to vary across observations
but not identity. For example, SD-GANs learn that pose, facial expression, hirsuteness, grayscale
vs. color, and lighting can all vary across different photographs of the same individual. On the
other hand, the aspects that are more salient for facial verification remain consistent as we vary the
observation code zO . We also train SD-GANs on a dataset of product images, containing multiple
photographs of each product from various perspectives (Figure 4).
We demonstrate that SD-GANs trained on faces generate stylistically-contrasting, identity-matched
image pairs that human annotators and a state-of-the-art face verification algorithm recognize as
depicting the same subject. On measures of identity coherence and image diversity, SD-GANs
perform comparably to a recent conditional GAN method (Odena et al., 2017); SD-GANs can also
imagine new identities, while conditional GANs are limited to generating existing identities from the
training data.
2 SEMANTICALLY DECOMPOSED GENERATIVE ADVERSARIAL NETWORKS
Before introducing our algorithm, we briefly review the prerequisite concepts.
2.1 GAN PRELIMINARIES
GANs leverage the discriminative power of neural networks to learn generative models. The generative model G ingests latent codes z, sampled from some known prior PZ , and produces G(z), a
sample of an implicit distribution PG . The learning process consists of a minimax game between G,
parameterized by θG , and a discriminative model D, parameterized by θD . In the original formulation,
the discriminative model tries to maximize log likelihood, yielding
min_G max_D V(G, D) = E_{x∼P_R}[log D(x)] + E_{z∼P_Z}[log(1 − D(G(z)))].    (1)
Training proceeds as follows: For k iterations, sample one minibatch from the real distribution PR
and one from the distribution of generated images PG , updating discriminator weights θD to increase
V (G, D) by stochastic gradient ascent. Then sample a minibatch from PZ , updating θG to decrease
V (G, D) by stochastic gradient descent.
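In code, one such alternating iteration might look as follows. This is a generic PyTorch-style sketch of the minimax game in Eq. 1, assuming D outputs sigmoid probabilities; it is illustrative only and not the implementation released with this paper:

```python
import torch

def gan_step(G, D, opt_G, opt_D, x_real, z_dim, k=1):
    """One iteration of the minimax game in Eq. 1 (illustrative sketch)."""
    for _ in range(k):                                    # k discriminator updates
        z = torch.rand(x_real.size(0), z_dim) * 2 - 1     # z ~ Uniform([-1, 1]^d)
        d_real = D(x_real)
        d_fake = D(G(z).detach())
        loss_D = -(torch.log(d_real + 1e-8).mean()
                   + torch.log(1 - d_fake + 1e-8).mean())  # ascend V(G, D)
        opt_D.zero_grad()
        loss_D.backward()
        opt_D.step()

    z = torch.rand(x_real.size(0), z_dim) * 2 - 1
    loss_G = torch.log(1 - D(G(z)) + 1e-8).mean()         # descend V(G, D)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
    return loss_D.item(), loss_G.item()
```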
Algorithm 1 Semantically Decomposed GAN Training
1: for n in 1:NumberOfIterations do
2:   for m in 1:MinibatchSize do
3:     Sample one identity vector zI ∼ Uniform([−1, 1]^dI).
4:     Sample two observation vectors z1O, z2O ∼ Uniform([−1, 1]^dO).
5:     z1 ← [zI; z1O], z2 ← [zI; z2O].
6:     Generate pair of images G(z1), G(z2), adding them to the minibatch with label 0 (fake).
7:   for m in 1:MinibatchSize do
8:     Sample one identity i ∈ I uniformly at random from the real data set.
9:     Sample two images of i without replacement x1, x2 ∼ PR(x | I = i).
10:    Add the pair to the minibatch, assigning label 1 (real).
11:  Update discriminator weights by θD ← θD + ∇θD V(G, D) using its stochastic gradient.
12:  Sample another minibatch of identity-matched latent vectors z1, z2.
13:  Update generator weights by stochastic gradient descent θG ← θG − ∇θG V(G, D).
Zhao et al. (2017b) propose energy-based GANs (EBGANs), in which the discriminator can be
viewed as an energy function. Specifically, they devise a discriminator consisting of an autoencoder:
D(x) = Dd (De (x)). In the minimax game, the discriminator’s weights are updated to minimize
the reconstruction error L(x) = ||x − D(x)|| for real data, while maximizing the error L(G(z))
for the generator. More recently, Berthelot et al. (2017) extend this work, introducing Boundary
Equilibrium GANs (BEGANs), which optimize the Wasserstein distance (reminiscent of Wasserstein
GANs (Arjovsky et al., 2017)) between autoencoder loss distributions, yielding the formulation:
V_BEGAN(G, D) = L(x) − L(G(z)).    (2)
Additionally, they introduce a method for stabilizing training. Positing that training becomes unstable
when the discriminator cannot distinguish between real and generated images, they introduce a new
hyperparameter γ, updating the value function on each iteration to maintain a desired ratio between
the two reconstruction errors: E[L(G(z))] = γ E[L(x)]. The BEGAN model produces what appear
to us, subjectively, to be the sharpest images of faces yet generated by a GAN. In this work, we adapt
both the DCGAN (Radford et al., 2016) and BEGAN algorithms to the SD-GAN training scheme.
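For orientation, the BEGAN objective can be sketched as below (a PyTorch-style sketch, illustrative only). The proportional-control variable k_t that maintains E[L(G(z))] = γ E[L(x)] follows Berthelot et al. (2017) and is not spelled out in the text above, so its use here is an assumption about the standard BEGAN recipe rather than something stated in this paper:

```python
import torch

def began_losses(G, D, x_real, z, k_t, gamma=0.5, lambda_k=1e-3):
    """Illustrative BEGAN losses. D is an autoencoder; L(v) = mean |v - D(v)|
    is its reconstruction error. k_t is the control variable from Berthelot
    et al. (2017), used to keep E[L(G(z))] = gamma * E[L(x)] over training."""
    def L(v):
        return (v - D(v)).abs().mean()
    L_real = L(x_real)
    L_fake_detached = L(G(z).detach())
    loss_D = L_real - k_t * L_fake_detached     # discriminator minimizes this
    loss_G = L(G(z))                            # generator minimizes its own error
    k_next = k_t + lambda_k * (gamma * L_real - L_fake_detached).item()
    return loss_D, loss_G, min(max(k_next, 0.0), 1.0)
```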
2.2 SD-GAN FORMULATION
Consider the data’s identity as a random variable I in a discrete index set I. We seek to learn a latent
representation that conveniently decomposes the variation in the real data into two parts: 1) due to I,
and 2) due to the other factors of variation in the data, packaged as a random variable O. Ideally, the
decomposition of the variation in the data into I and O should correspond exactly to a decomposition
of the latent space Z = ZI × ZO . This would permit convenient interpolation and other operations
on the inferred subspaces ZI and ZO .
A conventional GAN samples I, O from their joint distribution. Such a GAN’s generative model
samples directly from an unstructured prior over the latent space. It does not disentangle the variation
in O and I, for instance by modeling conditional distributions PG (O | I = i), but only models their
average with respect to the prior on I.
Our SD-GAN method learns such a latent space decomposition, partitioning the coordinates of Z
into two parts representing the subspaces, so that any z ∈ Z can be written as the concatenation
[zI; zO] of its identity representation zI ∈ R^dI = ZI and its contingent aspect representation
zO ∈ R^dO = ZO. SD-GANs achieve this through a pairwise training scheme in which each sample
from the real data consists of x1 , x2 ∼ PR (x | I = i), a pair of images with a common identity i ∈ I.
Each sample from the generator consists of G(z1 ), G(z2 ) ∼ PG (z | ZI = zI ), a pair of images
generated from a common identity vector zI ∈ ZI but i.i.d. observation vectors z1O , z2O ∈ ZO . We
assign identity-matched pairs from PR the label 1 and zI -matched pairs from PG the label 0. The
discriminator can thus learn to reject pairs for either of two primary reasons: 1) not photorealistic or
2) not plausibly depicting the same subject. See Algorithm 1 for SD-GAN training pseudocode.
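The sampling scheme of Algorithm 1 amounts to the following (an illustrative NumPy sketch; images_by_identity is an assumed identity-to-images mapping used for exposition, not an interface from the released code):

```python
import numpy as np

def sample_latent_pairs(batch_size, d_i=50, d_o=50, rng=np.random):
    """Identity-matched latent pairs: a shared z_I and two independent z_O per
    pair (cf. Algorithm 1); all codes drawn from Uniform([-1, 1])."""
    z_i = rng.uniform(-1, 1, (batch_size, d_i))
    z_o1 = rng.uniform(-1, 1, (batch_size, d_o))
    z_o2 = rng.uniform(-1, 1, (batch_size, d_o))
    return np.concatenate([z_i, z_o1], axis=1), np.concatenate([z_i, z_o2], axis=1)

def sample_real_pairs(images_by_identity, batch_size, rng=np.random):
    """Identity-matched real pairs: two distinct photos of a randomly drawn
    identity. images_by_identity maps identity -> array of images (assumed)."""
    ids = list(images_by_identity)
    x1, x2 = [], []
    for i in rng.choice(len(ids), batch_size):
        imgs = images_by_identity[ids[i]]
        a, b = rng.choice(len(imgs), 2, replace=False)
        x1.append(imgs[a])
        x2.append(imgs[b])
    return np.stack(x1), np.stack(x2)
```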
(a) DCGAN   (b) SD-DCGAN   (c) BEGAN   (d) SD-BEGAN
Figure 2: SD-GAN architectures and vanilla counterparts. Our SD-GAN models incorporate a decomposed latent space and Siamese discriminators. Dashed lines indicate shared weights. Discriminators
also observe real samples in addition to those from the generator (not pictured for simplicity).
2.3 SD-GAN DISCRIMINATOR ARCHITECTURE
With SD-GANs, there is no need to alter the architecture of the generator. However, the discriminator
must now act upon two images, producing a single output. Moreover, the effects of the two input
images x1 , x2 on the output score are not independent. Two images might be otherwise photorealistic
but deserve rejection because they clearly depict different identities. To this end, we devise two
novel discriminator architectures to adapt DCGAN and BEGAN respectively. In both cases, we first
separately encode each image using the same convolutional neural network De (Figure 2). We choose
this Siamese setup (Bromley, 1994; Chopra et al., 2005) as our problem is symmetrical in the images,
and thus it’s sensible to share weights between the encoders.
To adapt DCGAN, we stack the feature maps De (x1 ) and De (x2 ) along the channel axis, applying
one additional strided convolution. This allows the network to further aggregate information from the
two images before flattening and fully connecting to a sigmoid output. For BEGAN, because the
discriminator is an autoencoder, our architecture is more complicated. After encoding each image,
we concatenate the representations [De(x1); De(x2)] ∈ R^(2(dI+dO)) and apply one fully connected
bottleneck layer R^(2(dI+dO)) → R^(dI+2dO) with linear activation. In alignment with BEGAN, the
SD-BEGAN bottleneck has the same dimensionality as the tuple of latent codes (zI, z1O, z2O) that
generated the pair of images. Following the bottleneck, we apply a second FC layer R^(dI+2dO) →
R^(2(dI+dO)), taking the first dI + dO components of its output to be the input to the first decoder and
the second dI + dO components to be the input to the second decoder. The shared intermediate layer
gives SD-BEGAN a mechanism to push apart matched and unmatched pairs. We specify our exact
architectures in full detail in Appendix E.
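As an illustration of the Siamese channel-stacking variant (SD-DCGAN), a schematic PyTorch sketch is given below; the layer widths and depths are placeholders chosen for a 64x64 input rather than the exact architecture from Appendix E or the released code:

```python
import torch
import torch.nn as nn

class PairDiscriminator(nn.Module):
    """Siamese SD-DCGAN-style discriminator sketch for 64x64 image pairs."""
    def __init__(self, nc=3, ndf=64):
        super().__init__()
        self.encode = nn.Sequential(                      # shared weights (Siamese)
            nn.Conv2d(nc, ndf, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ndf, ndf * 2, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(ndf * 2, ndf * 4, 4, 2, 1), nn.LeakyReLU(0.2),
        )                                                 # -> (ndf*4) x 8 x 8
        self.joint = nn.Sequential(                       # pair stacked along channels
            nn.Conv2d(ndf * 8, ndf * 8, 4, 2, 1), nn.LeakyReLU(0.2),
        )                                                 # -> (ndf*8) x 4 x 4
        self.out = nn.Sequential(nn.Flatten(), nn.Linear(ndf * 8 * 4 * 4, 1), nn.Sigmoid())

    def forward(self, x1, x2):
        # encode each image with the shared encoder, then aggregate the pair
        h = torch.cat([self.encode(x1), self.encode(x2)], dim=1)
        return self.out(self.joint(h))
```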
3 EXPERIMENTS
We experimentally validate SD-GANs using two datasets: 1) the MS-Celeb-1M dataset of celebrity
face images (Guo et al., 2016) and 2) a dataset of shoe images collected from Amazon (McAuley
et al., 2015). Both datasets contain a large number of identities (people and shoes, respectively) with
multiple observations of each. The “in-the-wild” nature of the celebrity face images offers a richer
test bed for our method as both identities and contingent factors are significant sources of variation.
In contrast, Amazon’s shoe images tend to vary only with camera perspective for a given product,
making this data useful for sanity-checking our approach.
Faces From the aligned face images in the MS-Celeb-1M dataset, we select 12,500 celebrities at
random and 8 associated images of each, resizing them to 64x64 pixels. We split the celebrities into
subsets of 10,000 (training), 1,250 (validation) and 1,250 (test). The dataset has a small number
of duplicate images and some label noise (images matched to the wrong celebrity). We detect and
Figure 3: Generated samples from SD-DCGAN model trained on faces.
Figure 4: Generated samples from SD-DCGAN model trained on shoes.
remove duplicates by hashing the images, but we do not rid the data of label noise. We scale the pixel
values to [−1, 1], performing no additional preprocessing or data augmentation.
Shoes Synthesizing novel product images is another promising domain for our method. In our
shoes dataset, product photographs are captured against white backgrounds and primarily differ in
orientation and distance. Accordingly, we expect that SD-GAN training will allocate the observation
latent space to capture these aspects. We choose to study shoes as a prototypical example of a
category of product images. The Amazon dataset contains around 3,000 unique products with the
category “Shoe” and multiple product images. We use the same 80%, 10%, 10% split and again hash
the images to ensure that the splits are disjoint. There are 6.2 photos of each product on average.
3.1 TRAINING DETAILS
We train SD-DCGANs on both of our datasets for 500,000 iterations using batches of 16 identity-matched pairs. To optimize SD-DCGAN, we use the Adam optimizer (Kingma & Ba, 2015) with
hyperparameters α = 2e−4, β1 = 0.5, β2 = 0.999 as recommended by Radford et al. (2016). We
also consider a non-Siamese discriminator that simply stacks the channels of the pair of real or fake
images before encoding (SD-DCGAN-SC).
As in (Radford et al., 2016), we sample latent vectors z ∼ Uniform([−1, 1]^100). For SD-GANs, we
partition the latent codes according to zI ∈ R^dI, zO ∈ R^(100−dI), using values of dI = [25, 50, 75].
Our algorithm can be trivially applied with k-wise training (vs. pairwise). To explore the effects of
using k > 2, we also experiment with an SD-DCGAN where we sample k = 4 instances each from
PG (z | ZI = zI ) for some zI ∈ ZI and from PR (x | I = i) for some i ∈ I. For all experiments,
unless otherwise stated, we use dI = 50 and k = 2.
We also train an SD-BEGAN on both of our datasets. The increased complexity of the SD-BEGAN
model significantly increases training time, limiting our ability to perform more-exhaustive hyperparameter validation (as we do for SD-DCGAN). We use the Adam optimizer with the default
hyperparameters from (Kingma & Ba, 2015) for our SD-BEGAN experiments. While results from
our SD-DCGAN k = 4 model are compelling, an experiment with a k = 4 variant of SD-BEGAN
resulted in early mode collapse (Appendix F); hence, we excluded SD-BEGAN k = 4 from our
evaluation.
We also compare to a DCGAN architecture trained using the auxiliary classifier GAN (AC-GAN)
method (Odena et al., 2017). AC-GAN differs from SD-GAN in two key ways: 1) random identity
codes zI are replaced by a one-hot embedding over all the identities in the training set (matrix of size
10000x50); 2) the AC-GAN method encourages that generated photos depict the proper identity by
tasking its discriminator with predicting the identity of the generated or real image. Unlike SD-GANs,
the AC-DCGAN model cannot imagine new identities; when generating from AC-DCGAN (for our
quantitative comparisons to SD-GANs), we must sample a random identity from those existing in the
training data.
Table 1: Evaluation of 10k pairs from MS-Celeb-1M (real data) and generative models; half have
matched identities, half do not. The identity verification metrics demonstrate that FaceNet (FN) and
human annotators on Mechanical Turk (MT) verify generated data similarly to real data. The sample
diversity metrics ensure that generated samples are statistically distinct in pixel space. Data generated
by our best model (SD-BEGAN) performs comparably to real data. * 1k pairs, † 200 pairs.
(Identity Verification: AUC, Acc., FAR; Sample Diversity: ID-Div, All-Div)

Dataset             Mem      Judge   AUC    Acc.   FAR     ID-Div   All-Div
MS-Celeb-1M         -        FN      .913   .867   .045    .621     .699
AC-DCGAN            131 MB   FN      .927   .851   .083    .497     .666
SD-DCGAN            57 MB    FN      .823   .749   .201    .521     .609
SD-DCGAN-SC         47 MB    FN      .831   .757   .180    .560     .637
SD-DCGAN k=4        75 MB    FN      .852   .776   .227    .523     .614
SD-DCGAN dI=25      57 MB    FN      .835   .764   .222    .526     .615
SD-DCGAN dI=75      57 MB    FN      .816   .743   .268    .517     .601
SD-BEGAN            68 MB    FN      .928   .857   .110    .588     .673
†MS-Celeb-1M        -        Us      -      .850   .110    .621     .699
*MS-Celeb-1M        -        MT      -      .759   .035    .621     .699
*AC-DCGAN           131 MB   MT      -      .765   .090    .497     .666
*SD-DCGAN k=4       75 MB    MT      -      .688   .147    .523     .614
*SD-BEGAN           68 MB    MT      -      .723   .096    .588     .673
3.2
E VALUATION
The evaluation of generative models is a fraught topic. Quantitative measures of sample quality can
be poorly correlated with each other (Theis et al., 2016). Accordingly, we design an evaluation to
match conceivable uses of our algorithm. Because we hope to produce diverse samples that humans
deem to depict the same person, we evaluate the identity coherence of SD-GANs and baselines using
both a pretrained face verification model and crowd-sourced human judgments obtained through
Amazon’s Mechanical Turk platform.
3.2.1
Q UANTITATIVE
Recent advancements in face verification using deep convolutional neural networks (Schroff et al.,
2015; Parkhi et al., 2015; Wen et al., 2016) have yielded accuracy rivaling humans. For our evaluation,
we procure FaceNet, a publicly-available face verifier based on the Inception-ResNet architecture
(Szegedy et al., 2017). The FaceNet model was pretrained on the CASIA-WebFace dataset (Yi et al.,
2014) and achieves 98.6% accuracy on the LFW benchmark (Huang et al., 2012).3
FaceNet ingests normalized, 160x160 color images and produces an embedding f(x) ∈ R^128. The training objective for FaceNet is to learn embeddings that minimize the L2 distance between matched pairs of faces and maximize the distance for mismatched pairs. Accordingly, the embedding space yields a function for measuring the similarity between two faces x1 and x2: D(x1, x2) = ||f(x1) − f(x2)||_2^2. Given two images, x1 and x2, we label them as a match if D(x1, x2) ≤ τv
where τv is the accuracy-maximizing threshold on a class-balanced set of pairs from MS-Celeb-1M
validation data. We use the same threshold for evaluating both real and synthetic data with FaceNet.
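The verification rule reduces to thresholding the squared L2 distance between embeddings. The sketch below, with hypothetical helper names, illustrates both the decision rule and how an accuracy-maximizing threshold can be chosen on a labeled, class-balanced validation set.

```python
import numpy as np

def facenet_distance(emb1, emb2):
    """Squared L2 distance between two FaceNet-style embeddings."""
    return np.sum((emb1 - emb2) ** 2, axis=-1)

def is_match(emb1, emb2, tau_v):
    """Label a pair as the same identity when its distance falls below the threshold tau_v."""
    return facenet_distance(emb1, emb2) <= tau_v

def accuracy_maximizing_threshold(distances, is_same):
    """Scan candidate thresholds and return the one with the highest verification accuracy."""
    candidates = np.unique(distances)
    accuracies = [np.mean((distances <= t) == is_same) for t in candidates]
    return candidates[int(np.argmax(accuracies))]
```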
We compare the performance of FaceNet on pairs of images from the MS-Celeb-1M test set against
generated samples from our trained SD-GAN models and AC-DCGAN baseline. To match FaceNet’s
training data, we preprocess all images by resizing from 64x64 to 160x160, normalizing each image
individually. We prepare 10,000 pairs from MS-Celeb-1M, half identity-matched and half unmatched.
From each generative model, we generate 5,000 pairs with z1I = z2I and 5,000 pairs with z1I ≠ z2I. For each sample, we draw observation vectors zO randomly.
We also want to ensure that identity-matched images produced by the generative models are diverse.
To this end, we propose an intra-identity sample diversity (ID-Div) metric. The multi-scale structural
similarity (MS-SSIM) (Wang et al., 2004) metric reports the similarity of two images on a scale
from 0 (no resemblance) to 1 (identical images). We report 1 minus the mean MS-SSIM for all pairs
3
“20170214-092102” pretrained model from https://github.com/davidsandberg/facenet
of identity-matched images as ID-Div. To measure the overall sample diversity (All-Div), we also
compute 1 minus the mean similarity of 10k pairs with random identities.
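Both diversity scores are one minus an average MS-SSIM. The sketch below assumes an external ms_ssim(a, b) routine returning similarities in [0, 1]; the helper name is a placeholder and is not tied to a particular library.

```python
import numpy as np

def diversity(image_pairs, ms_ssim):
    """1 minus the mean MS-SSIM over a list of image pairs.
    Applied to identity-matched pairs for ID-Div and to random pairs for All-Div."""
    similarities = [ms_ssim(a, b) for a, b in image_pairs]
    return 1.0 - float(np.mean(similarities))
```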
In Table 1, we report the area under the receiver operating characteristic curve (AUC), accuracy, and
false accept rate (FAR) of FaceNet (at threshold τv ) on the real and generated data. We also report our
proposed diversity statistics. FaceNet verifies pairs from the real data with 87% accuracy compared
to 86% on pairs from our SD-BEGAN model. Though this is comparable to the accuracy achieved
on pairs from the AC-DCGAN baseline, our model produces samples that are more diverse in pixel
space (as measured by ID-Div and All-Div). FaceNet has a higher but comparable FAR for pairs
from SD-GANs than those from AC-DCGAN; this indicates that SD-GANs may produce images that
are less semantically diverse on average than AC-DCGAN.
We also report the combined memory footprint of G and D for all methods in Table 1. For conditional
GAN approaches, the number of parameters grows linearly with the number of identities in the
training data. Especially in the case of the AC-GAN, where the discriminator computes a softmax
over all identities, linear scaling may be prohibitive. While our 10k-identity subset of MS-Celeb-1M
requires a 131MB AC-DCGAN model, an AC-DCGAN for all 1M identities would be over 8GB,
with more than 97% of the parameters devoted to the weights in the discriminator’s softmax layer. In
contrast, the complexity of SD-GAN is constant in the number of identities.
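A rough back-of-the-envelope calculation illustrates this scaling claim. Here we assume the softmax operates on a 2048-dimensional discriminator feature (as in the flattened layer of Table 4) stored in 32-bit floats; both are assumptions made only for this estimate.

```python
n_identities = 1_000_000          # all MS-Celeb-1M identities
feature_dim = 2048                # assumed dimensionality of the feature feeding the softmax
bytes_per_param = 4               # float32

softmax_weights = n_identities * feature_dim
print(softmax_weights * bytes_per_param / 1e9)   # ~8.2 GB for the softmax weights alone
```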
3.2.2
Q UALITATIVE
In addition to validating that identity-matched SD-GAN samples are verified by FaceNet, we also
demonstrate that humans are similarly convinced through experiments using Mechanical Turk. For
these experiments, we use balanced subsets of 1,000 pairs from MS-Celeb-1M and the most promising
generative methods from our FaceNet evaluation. We ask human annotators to determine if each
pair depicts the “same person” or “different people”. Annotators are presented with batches of ten
pairs at a time. Each pair is presented to three distinct annotators and predictions are determined by
majority vote. Additionally, to provide a benchmark for assessing the quality of the Mechanical Turk
ensembles, we (the authors) manually judged 200 pairs from MS-Celeb-1M. Results are in Table 1.
For all datasets, human annotators on Mechanical Turk answered “same person” less frequently than
FaceNet when the latter uses the accuracy-maximizing threshold τv . Even on real data, balanced
so that 50% of pairs are identity-matched, annotators report “same person” only 28% of the time
(compared to 41% for FaceNet). While annotators achieve higher accuracy on pairs from AC-DCGAN than pairs from SD-BEGAN, they also answer “same person” 16% more often for AC-DCGAN pairs than real data. In contrast, annotators answer “same person” at the same rate for
SD-BEGAN pairs as real data. This may be attributable to the lower sample diversity produced by
AC-DCGAN. Samples from SD-DCGAN and SD-BEGAN are shown in Figures 3 and 1 respectively.
4
R ELATED WORK
Style transfer and novel view synthesis are active research areas. Early attempts to disentangle style
and content manifolds used factored tensor representations (Tenenbaum & Freeman, 1997; Vasilescu
& Terzopoulos, 2002; Elgammal & Lee, 2004; Tang et al., 2013), applying their results to face image
synthesis. More recent work focuses on learning hierarchical feature representations using deep
convolutional neural networks to separate identity and pose manifolds for faces (Zhu et al., 2013;
Reed et al., 2014; Zhu et al., 2014; Yang et al., 2015; Kulkarni et al., 2015; Oord et al., 2016; Yan
et al., 2016) and products (Dosovitskiy et al., 2015). Gatys et al. (2016) use features of a convolutional
network, pretrained for image recognition, as a means for discovering content and style vectors.
Since their introduction (Goodfellow et al., 2014), GANs have been used to generate increasingly high-quality images (Radford et al., 2016; Zhao et al., 2017b; Berthelot et al., 2017). Conditional GANs
(cGANs), introduced by Mirza & Osindero (2014), extend GANs to generate class-conditional data.
Odena et al. (2017) propose auxiliary classifier GANs, combining cGANs with a semi-supervised
discriminator (Springenberg, 2015). Recently, cGANs have been used to ingest text (Reed et al.,
2016) and full-resolution images (Isola et al., 2017; Liu et al., 2017; Zhu et al., 2017) as conditioning
information, addressing a variety of image-to-image translation and style transfer tasks. Chen et al.
(2016) devise an information-theoretic extension to GANs in which they maximize the mutual
information between a subset of latent variables and the generated data. Their unsupervised method
Figure 5: Linear interpolation of zI (identity) and zO (observation) for three pairs using SD-BEGAN
generator. In each matrix, rows share zI and columns share zO .
appears to disentangle some intuitive factors of variation, but these factors may not correspond to
those explicitly disentangled by SD-GANs.
Several related papers use GANs for novel view synthesis of faces. Tran et al. (2017); Huang et al.
(2017); Yin et al. (2017a;b); Zhao et al. (2017a) all address synthesis of different body/facial poses
conditioned on an input image (representing identity) and a fixed number of pose labels. Antipov
et al. (2017) propose conditional GANs for synthesizing artificially-aged faces conditioned on both
a face image and an age vector. These approaches all require explicit conditioning on the relevant
factor (such as rotation, lighting and age) in addition to an identity image. In contrast, SD-GANs can
model these contingent factors implicitly (without supervision).
Mathieu et al. (2016) combine GANs with a traditional reconstruction loss to disentangle identity.
While their approach trains with an encoder-decoder generator, they enforce a variational bound
on the encoder embedding, enabling them to sample from the decoder without an input image.
Experiments with their method only address small (28x28) grayscale face images, and their training
procedure is complex to reproduce. In contrast, our work offers a simpler approach and can synthesize
higher-resolution, color photographs.
One might think of our work as offering the generative view of the Siamese networks often favored
for learning similarity metrics (Bromley, 1994; Chopra et al., 2005). Such approaches are used for
discriminative tasks like face or signature verification that share the many classes with few examples
structure that we study here. In our work, we adopt a Siamese architecture in order to enable the
discriminator to differentiate between matched and unmatched pairs. Recent work by Liu & Tuzel
(2016) proposes a GAN architecture with weight sharing across multiple generators and discriminators,
but with a different problem formulation and objective from ours.
5
D ISCUSSION
Our evaluation demonstrates that SD-GANs can disentangle those factors of variation corresponding
to identity from the rest. Moreover, with SD-GANs we can sample never-before-seen identities, a
benefit not shared by conditional GANs. In Figure 3, we demonstrate that by varying the observation
vector zO, SD-GANs can change the color of clothing, add or remove sunglasses, or change facial
pose. They can also perturb the lighting, color saturation, and contrast of an image, all while keeping
the apparent identity fixed. We note, subjectively, that samples from SD-DCGAN tend to appear
less photorealistic than those from SD-BEGAN. Given a generator trained with SD-GAN, we can
independently interpolate along the identity and observation manifolds (Figure 5).
On the shoe dataset, we find that the SD-DCGAN model produces convincing results. As desired, manipulating zI while keeping zO fixed yields distinct shoes in consistent poses (Figure 4). The identity
code zI appears to capture the broad categories of shoes (sneakers, flip-flops, boots, etc.). Surprisingly,
neither original BEGAN nor SD-BEGAN can produce diverse shoe images (Appendix G).
In this paper, we presented SD-GANs, a new algorithm capable of disentangling factors of variation
according to known commonalities. We see several promising directions for future work. One logical
extension is to disentangle latent factors corresponding to more than one known commonality. We
also plan to apply our approach in other domains such as identity-conditioned speech synthesis.
ACKNOWLEDGEMENTS
The authors would like to thank Anima Anandkumar, John Berkowitz and Miller Puckette for their
helpful feedback on this work. This work used the Extreme Science and Engineering Discovery
Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575 (Towns et al., 2014). GPUs used in this research were donated by the NVIDIA Corporation.
R EFERENCES
Grigory Antipov, Moez Baccouche, and Jean-Luc Dugelay. Face aging with conditional generative
adversarial networks. In ICIP, 2017.
Martin Arjovsky, Soumith Chintala, and Léon Bottou. Wasserstein GAN. In ICML, 2017.
David Berthelot, Tom Schumm, and Luke Metz. BEGAN: Boundary equilibrium generative adversarial networks. arXiv:1703.10717, 2017.
Jane Bromley. Signature verification using a "Siamese" time delay neural network. In NIPS, 1994.
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. InfoGAN:
Interpretable representation learning by information maximizing generative adversarial nets. In
NIPS, 2016.
Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with
application to face verification. In CVPR, 2005.
Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with
convolutional neural networks. In CVPR, 2015.
Ahmed Elgammal and Chan-Su Lee. Separating style and content on a nonlinear manifold. In CVPR,
2004.
Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional
neural networks. In CVPR, 2016.
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair,
Aaron Courville, and Yoshua Bengio. Generative adversarial networks. In NIPS, 2014.
Yandong Guo, Lei Zhang, Yuxiao Hu, Xiaodong He, and Jianfeng Gao. MS-Celeb-1M: A dataset
and benchmark for large scale face recognition. In ECCV, 2016.
Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. Neural networks for machine learning lecture
6a: Overview of mini–batch gradient descent.
Gary B. Huang, Marwan Mattar, Honglak Lee, and Erik Learned-Miller. Learning to align from
scratch. In NIPS, 2012.
Rui Huang, Shu Zhang, Tianyu Li, and Ran He. Beyond face rotation: Global and local perception
gan for photorealistic and identity preserving frontal view synthesis. In ICCV, 2017.
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with
conditional adversarial networks. In CVPR, 2017.
Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
Tejas D Kulkarni, William F Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional
inverse graphics network. In NIPS, 2015.
Zachary C Lipton and Subarna Tripathi. Precise recovery of latent vectors from generative adversarial
networks. ICLR Workshop Track, 2017.
Ming-Yu Liu and Oncel Tuzel. Coupled generative adversarial networks. In NIPS, 2016.
Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. In
ICML, 2017.
Michael F Mathieu, Junbo Jake Zhao, Junbo Zhao, Aditya Ramesh, Pablo Sprechmann, and Yann
LeCun. Disentangling factors of variation in deep representation using adversarial training. In
NIPS, 2016.
Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Van Den Hengel. Image-based
recommendations on styles and substitutes. In SIGIR, 2015.
Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv:1411.1784, 2014.
Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional image synthesis with auxiliary
classifier GANs. In ICML, 2017.
Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray
Kavukcuoglu. Conditional image generation with pixelcnn decoders. In NIPS, 2016.
Omkar M Parkhi, Andrea Vedaldi, and Andrew Zisserman. Deep face recognition. In BMVC, 2015.
Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context
encoders: Feature learning by inpainting. In CVPR, 2016.
Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep
convolutional generative adversarial networks. In ICLR, 2016.
Scott Reed, Kihyuk Sohn, Yuting Zhang, and Honglak Lee. Learning to disentangle factors of
variation with manifold interaction. In ICML, 2014.
Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee.
Generative adversarial text to image synthesis. In ICML, 2016.
Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A unified embedding for face
recognition and clustering. In CVPR, 2015.
Jost Tobias Springenberg. Unsupervised and semi-supervised learning with categorical generative
adversarial networks. In ICLR, 2015.
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alex Alemi. Inception-v4, Inception-ResNet
and the impact of residual connections on learning. In AAAI, 2017.
Yichuan Tang, Ruslan Salakhutdinov, and Geoffrey Hinton. Tensor analyzers. In ICML, 2013.
Joshua B Tenenbaum and William T Freeman. Separating style and content. In NIPS, 1997.
Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative
models. In ICLR, 2016.
John Towns, Timothy Cockerill, Maytal Dahan, Ian Foster, Kelly Gaither, Andrew Grimshaw, Victor
Hazlewood, Scott Lathrop, Dave Lifka, Gregory D Peterson, et al. XSEDE: Accelerating scientific
discovery. Computing in Science & Engineering, 2014.
Luan Tran, Xi Yin, and Xiaoming Liu. Disentangled representation learning GAN for pose-invariant
face recognition. In CVPR, 2017.
M Vasilescu and Demetri Terzopoulos. Multilinear analysis of image ensembles: Tensorfaces. In
ECCV, 2002.
Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality
assessment. In Asilomar Conference on Signals, Systems and Computers, 2004.
Yandong Wen, Kaipeng Zhang, Zhifeng Li, and Yu Qiao. A discriminative feature learning approach
for deep face recognition. In ECCV, 2016.
Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. Attribute2Image: Conditional image
generation from visual attributes. In ECCV, 2016.
Jimei Yang, Scott E Reed, Ming-Hsuan Yang, and Honglak Lee. Weakly-supervised disentangling
with recurrent transformations for 3D view synthesis. In NIPS, 2015.
Dong Yi, Zhen Lei, Shengcai Liao, and Stan Z Li. Learning face representation from scratch.
arXiv:1411.7923, 2014.
Weidong Yin, Yanwei Fu, Leonid Sigal, and Xiangyang Xue. Semi-latent GAN: Learning to generate
and modify facial images from attributes. arXiv:1704.02166, 2017a.
Xi Yin, Xiang Yu, Kihyuk Sohn, Xiaoming Liu, and Manmohan Chandraker. Towards large-pose
face frontalization in the wild. In ICCV, 2017b.
Bo Zhao, Xiao Wu, Zhi-Qi Cheng, Hao Liu, and Jiashi Feng. Multi-view image generation from a
single-view. arXiv:1704.04886, 2017a.
Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. In
ICLR, 2017b.
Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation
using cycle-consistent adversarial networks. In ICCV, 2017.
Zhenyao Zhu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning identity-preserving face
space. In ICCV, 2013.
Zhenyao Zhu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Multi-view perceptron: a deep model
for learning face identity and view representations. In NIPS, 2014.
A
E STIMATING LATENT CODES
Figure 6: Linear interpolation of both identity (vertical) and observation (horizontal) on latent codes
recovered for unseen images. All rows have the same identity vector (zI ) and all columns have the
same observation vector (zO ).
We estimate latent vectors for unseen images and demonstrate that the disentangled representations
of SD-GANs can be used to depict the estimated identity with different contingent factors. In order to
find a latent vector ẑ such that G(ẑ) (pretrained G) is similar to an unseen image x, we can minimize
the distance between x and G(ẑ): min_ẑ ||G(ẑ) − x||_2^2 (Lipton & Tripathi, 2017).
In Figure 6, we depict estimation and linear interpolation across both subspaces for two pairs of
images using SD-BEGAN. We also display the corresponding source images being estimated. For
both pairs, ẑI (identity) is consistent in each row and ẑO (observation) is consistent in each column.
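A minimal sketch of this estimation procedure is given below, assuming a differentiable pretrained generator G in PyTorch style; the clipping of ẑ back to the prior's support is one simple variant of the recovery procedure of Lipton & Tripathi (2017), not their exact algorithm.

```python
import torch

def estimate_latent(G, x, steps=1000, lr=0.01, latent_dim=100):
    """Find z_hat minimizing ||G(z_hat) - x||_2^2 by gradient descent on the latent code."""
    z_hat = torch.zeros(1, latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z_hat], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = ((G(z_hat) - x) ** 2).sum()   # reconstruction error in pixel space
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            z_hat.clamp_(-1.0, 1.0)          # keep the estimate inside the Uniform([-1, 1]) support
    return z_hat.detach()
```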
B
PAIRWISE DISCRIMINATION OF EMBEDDINGS AND ENCODINGS
In Section 3.1, we describe an AC-GAN (Odena et al., 2017) baseline which uses an embedding
matrix over real identities as latent identity codes (G : i, zO ↦ x̂). In place of random identity
vectors, we tried combining this identity representation with pairwise discrimination (in the style of
SD-GAN). In this experiment, the discriminator receives either two real images with the same
identity (x1i , x2i ), or a real image with label i and synthetic image with label i (x1i , G(i, zO )). All
other hyperparameters are the same as in our SD-DCGAN experiment (Section 3.1). We show results
in Figure 7.
Figure 7: Generator with a one-hot identity embedding trained against a pairwise discriminator. Each
row shares an identity vector and each column shares an observation vector. Random sample of 4
real images of the corresponding identity on the right.
In Appendix C, we detail a modification of the DR-GAN (Tran et al., 2017) method which uses an
encoding network Ge to transform images to identity representations (Gd : Ge(x), zO ↦ x̂). We
also tried combining this encoder-decoder approach with pairwise discrimination. The discriminator
receives either two real images with the same identity (x1i, x2i), or (x1i, Gd(Ge(x1i), zO)). We show
results in Figure 8.
Figure 8: Generator with an encoder-decoder architecture trained against a pairwise discriminator.
Each row shares an identity vector and each column shares an observation vector. Input image on the
right.
While these experiments are exploratory and not part of our principal investigation, we find the results
to be qualitatively promising. We are not the first to propose pairwise discrimination with pairs of
(real, real) or (real, fake) images in GANs (Pathak et al., 2016; Isola et al., 2017).
C
E XPLORATORY EXPERIMENT WITH DR-GAN S
Tran et al. (2017) propose Disentangled Representation learning-GAN (DR-GAN), an approach
to face frontalization with similar setup to our SD-GAN algorithm. The (single-image) DR-GAN
generator G (composition of Ge and Gd ) accepts an input image x, a pose code c, and a noise vector z.
The DR-GAN discriminator receives either x or x̂ = Gd (Ge (x), c, z). In the style of (Springenberg,
2015), the discriminator is tasked with determining not only if the image is real or fake, but also
classifying the pose c, suggesting a disentangled representation to the generator. Through their
experiments, they demonstrate that DR-GAN can explicitly disentangle pose and illumination (c)
from the rest of the latent space (Ge (x); z).
Figure 9: Generated samples from cGAN trained only to disentangle identity. Each row shares an
identity vector and each column shares an observation vector; input image on the right.
In addition to our AC-DCGAN baseline (Odena et al., 2017), we tried modifying DR-GAN to
only disentangle identity (rather than both identity and pose in the original paper). We used the
DCGAN (Radford et al., 2016) discriminator architecture (Table 4) as Ge , linearly projecting the
final convolutional layer to Ge (x) ∈ R50 (in alignment with our SD-GAN experiments). We altered
the discriminator to predict the identity of x or x̂, rather than pose information (which is unknown
in our experimental setup). With these modifications, Ge (x) is analogous to zI in the SD-GAN
generator, and z is analogous to zO . Furthermore, this setup is identical to the AC-DCGAN baseline
except that the embedding matrix is replaced by an encoding network Ge . Unfortunately, we found
that the generator quickly learned to produce a single output image x̂ for each input x regardless
of observation code z (Figure 9). Accordingly, we excluded this experiment from our evaluation
(Section 3.2).
D
I MAGINING IDENTITIES WITH AC-GAN
Figure 10: AC-DCGAN generation with random identity vectors that sum to one. Each row shares an
identity vector and each column shares an observation vector.
Figure 11: AC-DCGAN generation with one-hot identity vectors. Each row shares an identity vector
and each column shares an observation vector.
As stated in Section 3.1, AC-GANs (Odena et al., 2017) provide no obvious way to imagine new identities. For our evaluation (Section 3.2), the AC-GAN generator receives identity input zI ∈ [0, 1]^10000: a one-hot vector over all identities. One possible approach to imagining new identities would be to query a trained AC-GAN generator with a random vector zI such that Σ_{i=1}^{10000} zI[i] = 1. We found that this strategy produced little identity variety (Figure 10) compared to the normal one-hot strategy (Figure 11) and excluded it from our evaluation.
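The appendix does not specify how these random vectors were drawn; one simple way to obtain nonnegative vectors over the 10,000 training identities that sum to one is a Dirichlet draw, as in the following sketch.

```python
import numpy as np

def random_identity_vector(n_identities=10000, rng=np.random):
    """A random nonnegative identity vector that sums to one (Dirichlet draw)."""
    return rng.dirichlet(np.ones(n_identities))

def one_hot_identity(index, n_identities=10000):
    """The usual one-hot identity code used by AC-GAN."""
    v = np.zeros(n_identities)
    v[index] = 1.0
    return v
```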
E
A RCHITECTURE D ESCRIPTIONS
We list here the full architectural details for our SD-DCGAN and SD-BEGAN models. In these
descriptions, k is the number of images that the generator produces and discriminator observes per
identity (usually 2 for pairwise training), and dI is the number of dimensions in the latent space ZI
(identity). In our experiments, dimensionality of ZO is always 100 − dI . As a concrete example, the
bottleneck layer of the SD-BEGAN discriminator autoencoder (“fc2” in Table 6) with k = 2, dI = 50
has output dimensionality 150.
We emphasize that generators are parameterized by k in the tables only for clarity and symmetry with
the discriminators. Implementations need not modify the generator; instead, k can be collapsed into
the batch size.
For the stacked-channels versions of these discriminators, we simply change the number of input
image channels from 3 to 3k and set k = 1 wherever k appears in the table.
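The remark about collapsing k into the batch size, and the stacked-channels variant, can be illustrated with simple array reshapes; the NumPy sketch below uses placeholder shapes, and the generator and discriminator themselves are omitted.

```python
import numpy as np

batch_size, k, d = 16, 2, 100
z = np.random.uniform(-1, 1, size=(batch_size, k, d))

# Generator: fold the k identity-matched instances into the batch dimension,
# so the generator itself never needs to know about k.
z_flat = z.reshape(batch_size * k, d)
# images = G(z_flat).reshape(batch_size, k, 64, 64, 3)   # restore the grouping afterwards

# Stacked-channels discriminator (SD-*-SC): concatenate the k images of a group
# along the channel axis, turning 3 channels into 3k.
images = np.zeros((batch_size, k, 64, 64, 3))            # placeholder for generated/real images
stacked = np.concatenate([images[:, i] for i in range(k)], axis=-1)   # (batch_size, 64, 64, 3k)
```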
Table 2: Input abstraction for both SD-DCGAN and SD-BEGAN generators during training (where
zO is always different for every pair or set of k)
Operation    Input Shape            Kernel Size    Output Shape
[zI ; zO]    [(dI,);(k,100-dI)]     -              [(dI,);(k,100-dI)]
dup zI       [(dI,);(k,100-dI)]     -              [(k,dI);(k,100-dI)]
concat       [(k,dI);(k,100-dI)]    -              (k,100)
Table 3: SD-DCGAN generator architecture
Operation   Input Shape      Kernel Size       Output Shape
z           (k,100)          -                 (k,100)
fc1         (k,100)          (100,8192)        (k,8192)
reshape     (k,8192)         -                 (k,4,4,512)
bnorm       (k,4,4,512)      -                 (k,4,4,512)
relu        (k,4,4,512)      -                 (k,4,4,512)
upconv1     (k,4,4,512)      (5,5,512,256)     (k,8,8,256)
bnorm       (k,8,8,256)      -                 (k,8,8,256)
relu        (k,8,8,256)      -                 (k,8,8,256)
upconv2     (k,8,8,256)      (5,5,256,128)     (k,16,16,128)
bnorm       (k,16,16,128)    -                 (k,16,16,128)
relu        (k,16,16,128)    -                 (k,16,16,128)
upconv3     (k,16,16,128)    (5,5,128,64)      (k,32,32,64)
bnorm       (k,32,32,64)     -                 (k,32,32,64)
relu        (k,32,32,64)     -                 (k,32,32,64)
upconv4     (k,32,32,64)     (5,5,64,3)        (k,64,64,3)
tanh        (k,64,64,3)      -                 (k,64,64,3)
Table 4: SD-DCGAN discriminator architecture
Operation        Input Shape      Kernel Size        Output Shape
x or G(z)        (k,64,64,3)      -                  (k,64,64,3)
downconv1        (k,64,64,3)      (5,5,3,64)         (k,32,32,64)
lrelu(a=0.2)     (k,32,32,64)     -                  (k,32,32,64)
downconv2        (k,32,32,64)     (5,5,64,128)       (k,16,16,128)
bnorm            (k,16,16,128)    -                  (k,16,16,128)
lrelu(a=0.2)     (k,16,16,128)    -                  (k,16,16,128)
downconv3        (k,16,16,128)    (5,5,128,256)      (k,8,8,256)
bnorm            (k,8,8,256)      -                  (k,8,8,256)
lrelu(a=0.2)     (k,8,8,256)      -                  (k,8,8,256)
downconv4        (k,8,8,256)      (5,5,256,512)      (k,4,4,512)
stackchannels    (k,4,4,512)      -                  (4,4,512k)
downconv5        (4,4,512k)       (3,3,512k,512)     (2,2,512)
flatten          (2,2,512)        -                  (2048,)
fc1              (2048,)          (2048,1)           (1,)
sigmoid          (1,)             -                  (1,)
Table 5: SD-BEGAN generator architecture
Operation    Input Shape      Kernel Size      Output Shape
z            (k,100)          -                (k,100)
fc1          (k,100)          (100,8192)       (k,100,8192)
reshape      (k,100,8192)     -                (k,8,8,128)
conv2d       (k,8,8,128)      (3,3,128,128)    (k,8,8,128)
elu          (k,8,8,128)      -                (k,8,8,128)
conv2d       (k,8,8,128)      (3,3,128,128)    (k,8,8,128)
elu          (k,8,8,128)      -                (k,8,8,128)
upsample2    (k,8,8,128)      -                (k,16,16,128)
conv2d       (k,16,16,128)    (3,3,128,128)    (k,16,16,128)
elu          (k,16,16,128)    -                (k,16,16,128)
conv2d       (k,16,16,128)    (3,3,128,128)    (k,16,16,128)
elu          (k,16,16,128)    -                (k,16,16,128)
upsample2    (k,16,16,128)    -                (k,32,32,128)
conv2d       (k,32,32,128)    (3,3,128,128)    (k,32,32,128)
elu          (k,32,32,128)    -                (k,32,32,128)
conv2d       (k,32,32,128)    (3,3,128,128)    (k,32,32,128)
elu          (k,32,32,128)    -                (k,32,32,128)
upsample2    (k,32,32,128)    -                (k,64,64,128)
conv2d       (k,64,64,128)    (3,3,128,128)    (k,64,64,128)
elu          (k,64,64,128)    -                (k,64,64,128)
conv2d       (k,64,64,128)    (3,3,128,128)    (k,64,64,128)
elu          (k,64,64,128)    -                (k,64,64,128)
conv2d       (k,64,64,128)    (3,3,128,3)      (k,64,64,3)
Table 6: SD-BEGAN discriminator autoencoder architecture. The decoder portion is equivalent to,
but does not share weights with, the SD-BEGAN generator architecture (Table 5).
Operation      Input Shape          Kernel Size              Output Shape
x or G(z)      (k,64,64,3)          -                        (k,64,64,3)
conv2d         (k,64,64,3)          (3,3,3,128)              (k,64,64,128)
elu            (k,64,64,128)        -                        (k,64,64,128)
conv2d         (k,64,64,128)        (3,3,128,128)            (k,64,64,128)
elu            (k,64,64,128)        -                        (k,64,64,128)
conv2d         (k,64,64,128)        (3,3,128,128)            (k,64,64,128)
elu            (k,64,64,128)        -                        (k,64,64,128)
downconv2d     (k,64,64,128)        (3,3,128,256)            (k,32,32,256)
elu            (k,32,32,256)        -                        (k,32,32,256)
conv2d         (k,32,32,256)        (3,3,256,256)            (k,32,32,256)
elu            (k,32,32,256)        -                        (k,32,32,256)
conv2d         (k,32,32,256)        (3,3,256,256)            (k,32,32,256)
elu            (k,32,32,256)        -                        (k,32,32,256)
downconv2d     (k,32,32,256)        (3,3,256,384)            (k,16,16,384)
elu            (k,16,16,384)        -                        (k,16,16,384)
conv2d         (k,16,16,384)        (3,3,384,384)            (k,16,16,384)
elu            (k,16,16,384)        -                        (k,16,16,384)
conv2d         (k,16,16,384)        (3,3,384,384)            (k,16,16,384)
elu            (k,16,16,384)        -                        (k,16,16,384)
downconv2d     (k,16,16,384)        (3,3,384,512)            (k,8,8,512)
elu            (k,8,8,512)          -                        (k,8,8,512)
conv2d         (k,8,8,512)          (3,3,512,512)            (k,8,8,512)
elu            (k,8,8,512)          -                        (k,8,8,512)
conv2d         (k,8,8,512)          (3,3,512,512)            (k,8,8,512)
elu            (k,8,8,512)          -                        (k,8,8,512)
flatten        (k,8,8,512)          -                        (k,32768)
fc1            (k,32768)            (32768,100)              (k,100)
concat         (k,100)              -                        (100k,)
fc2            (100k,)              (100k,dI+(100-dI)k)      (dI+(100-dI)k,)
fc3            (dI+(100-dI)k,)      (dI+(100-dI)k,100k)      (100k,)
split          (100k,)              -                        (k,100)
G (Table 5)    (k,100)              -                        (k,64,64,3)
F
FACE S AMPLES
We present samples from each model reported in Table 1 for qualitative comparison. In each matrix,
zI is the same across all images in a row and zO is the same across all images in a column. We draw
identity and observation vectors randomly for these samples.
Figure 12: Generated samples from AC-DCGAN (four samples of real photos of the corresponding identity on the right)
Figure 13: Generated samples from SD-DCGAN
Figure 14: Generated samples from SD-DCGAN with stacked-channel discriminator
Figure 15: Generated samples from SD-DCGAN with k = 4
Figure 16: Generated samples from SD-DCGAN with dI = 25
Figure 17: Generated samples from SD-DCGAN with dI = 75
Figure 18: Generated samples from SD-DCGAN trained with the Wasserstein GAN loss (Arjovsky
et al., 2017). This model was optimized using RMS-prop (Hinton et al.) with α = 5e−5. In our
evaluation (Section 3.2), FaceNet had an AUC of .770 and an accuracy of 68.5% (at τv ) on data
generated by this model. We excluded it from Table 1 for brevity.
Figure 19: Generated samples from SD-BEGAN
Figure 20: Generated samples from SD-BEGAN with k = 4, demonstrating mode collapse
G
S HOE S AMPLES
We present samples from an SD-DCGAN and SD-BEGAN trained on our shoes dataset.
Figure 21: Generated samples from SD-DCGAN
Figure 22: Generated samples from SD-BEGAN
| 9 |
arXiv:1705.07962v2 [cs.LG] 19 Sep 2017
pix2code: Generating Code from a Graphical User
Interface Screenshot
Tony Beltramelli
UIzard Technologies
Copenhagen, Denmark
tony@uizard.io
Abstract
Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized
software, websites, and mobile applications. In this paper, we show that deep
learning methods can be leveraged to train a model end-to-end to automatically
generate code from a single input image with over 77% accuracy for three
different platforms (i.e. iOS, Android and web-based technologies).
1
Introduction
The process of implementing client-side software based on a Graphical User Interface (GUI) mockup
created by a designer is the responsibility of developers. Implementing GUI code is, however,
time-consuming and prevents developers from dedicating the majority of their time to implementing the
actual functionality and logic of the software they are building. Moreover, the computer languages
used to implement such GUIs are specific to each target runtime system; thus resulting in tedious
and repetitive work when the software being built is expected to run on multiple platforms using
native technologies. In this paper, we describe a model trained end-to-end with stochastic gradient
descent that simultaneously learns to model sequences and spatio-temporal visual features in order to generate
variable-length strings of tokens from a single GUI image as input.
Our first contribution is pix2code, a novel approach based on Convolutional and Recurrent Neural
Networks allowing the generation of computer tokens from a single GUI screenshot as input. That
is, no engineered feature extraction pipeline or expert heuristics were designed to process the input
data; our model learns from the pixel values of the input image alone. Our experiments demonstrate
the effectiveness of our method for generating computer code for various platforms (i.e. iOS and
Android native mobile interfaces, and multi-platform web-based HTML/CSS interfaces) without the
need for any change or specific tuning to the model. In fact, pix2code can be used as such to support
different target languages simply by being trained on a different dataset. A video demonstrating our
system is available online1 .
Our second contribution is the release of our synthesized datasets consisting of both GUI screenshots
and associated source code for three different platforms. Our datasets and our pix2code implementation
are publicly available2 to foster future research.
2
Related Work
The automatic generation of programs using machine learning techniques is a relatively new field of
research, and program synthesis in a human-readable format has only been addressed very recently.
1 https://uizard.io/research#pix2code
2 https://github.com/tonybeltramelli/pix2code
A recent example is DeepCoder [2], a system able to generate computer programs by leveraging
statistical predictions to augment traditional search techniques. In another work by Gaunt et al. [5],
the generation of source code is enabled by learning the relationships between input-output examples
via differentiable interpreters. Furthermore, Ling et al. [12] recently demonstrated program synthesis
from a mixed natural language and structured program specification as input. It is important to note
that most of these methods rely on Domain Specific Languages (DSLs); computer languages (e.g.
markup languages, programming languages, modeling languages) that are designed for a specialized
domain but are typically more restrictive than full-featured computer languages. Using DSLs thus
limits the complexity of the programming language that needs to be modeled and reduces the size of
the search space.
Although the generation of computer programs is an active research field as suggested by these
breakthroughs, program generation from visual inputs is still a nearly unexplored research area. The
closest related work is a method developed by Nguyen et al. [14] to reverse-engineer Android user
interfaces from screenshots. However, their method relies entirely on engineered heuristics requiring
expert knowledge of the domain to be implemented successfully. Our paper is, to the best of our
knowledge, the first work attempting to address the problem of user interface code generation from
visual inputs by leveraging machine learning to learn latent variables instead of engineering complex
heuristics.
In order to exploit the graphical nature of our input, we can borrow methods from the computer vision
literature. In fact, a substantial body of research [21, 4, 10, 22] has addressed the problem of
image captioning with impressive results; showing that deep neural networks are able to learn latent
variables describing objects in an image and their relationships with corresponding variable-length
textual descriptions. All these methods rely on two main components. First, a Convolutional Neural
Network (CNN) performing unsupervised feature learning mapping the raw input image to a learned
representation. Second, a Recurrent Neural Network (RNN) performing language modeling on the
textual description associated with the input picture. These approaches have the advantage of being
differentiable end-to-end, thus allowing the use of gradient descent for optimization.
Figure 1: Overview of the pix2code model architecture ((a) training, (b) sampling). During training, the GUI image is encoded by
a CNN-based vision model; the context (i.e. a sequence of one-hot encoded tokens corresponding to
DSL code) is encoded by a language model consisting of a stack of LSTM layers. The two resulting
feature vectors are then concatenated and fed into a second stack of LSTM layers acting as a decoder.
Finally, a softmax layer is used to sample one token at a time; the output size of the softmax layer
corresponding to the DSL vocabulary size. Given an image and a sequence of tokens, the model (i.e.
contained in the gray box) is differentiable and can thus be optimized end-to-end through gradient
descent to predict the next token in the sequence. During sampling, the input context is updated for
each prediction to contain the last predicted token. The resulting sequence of DSL tokens is compiled
to the desired target language using traditional compiler design techniques.
3
pix2code
The task of generating computer code written in a given programming language from a GUI screenshot
can be compared to the task of generating English textual descriptions from a scene photography. In
both scenarios, we want to produce variable-length strings of tokens from pixel values. We can
thus divide our problem into three sub-problems. First, a computer vision problem of understanding
the given scene (i.e. in this case, the GUI image) and inferring the objects present, their identities,
positions, and poses (i.e. buttons, labels, element containers). Second, a language modeling problem
of understanding text (i.e. in this case, computer code) and generating syntactically and semantically
correct samples. Finally, the last challenge is to use the solutions to both previous sub-problems by
exploiting the latent variables inferred from scene understanding to generate corresponding textual
descriptions (i.e. computer code rather than English) of the objects represented by these variables.
3.1
Vision Model
CNNs are currently the method of choice to solve a wide range of vision problems thanks to their
topology allowing them to learn rich latent representations from the images they are trained on
[16, 11]. We used a CNN to perform unsupervised feature learning by mapping an input image to a
learned fixed-length vector; thus acting as an encoder as shown in Figure 1.
The input images are initially re-sized to 256 × 256 pixels (the aspect ratio is not preserved) and the
pixel values are normalized before being fed to the CNN. No further pre-processing is performed. To
encode each input image to a fixed-size output vector, we exclusively used small 3 × 3 receptive
fields which are convolved with stride 1 as used by Simonyan and Zisserman for VGGNet [18].
These operations are applied twice before down-sampling with max-pooling. The width of the first
convolutional layer is 32, followed by a layer of width 64, and finally width 128. Two fully connected
layers of size 1024 applying the rectified linear unit activation complete the vision model.
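The following Keras-style sketch mirrors this description (3x3 convolutions with stride 1, applied twice per block, widths 32/64/128 with max-pooling, and two fully connected layers of size 1024); the dropout rates are those reported in Section 3.4, while the remaining details (padding, exact function names) are assumptions made for illustration.

```python
from tensorflow.keras import layers, models

def build_vision_model(input_shape=(256, 256, 3)):
    """CNN encoder mapping an input GUI image to a fixed-length vector."""
    m = models.Sequential()
    m.add(layers.Conv2D(32, (3, 3), strides=1, activation='relu', input_shape=input_shape))
    m.add(layers.Conv2D(32, (3, 3), strides=1, activation='relu'))
    m.add(layers.MaxPooling2D(pool_size=(2, 2)))
    m.add(layers.Dropout(0.25))
    m.add(layers.Conv2D(64, (3, 3), strides=1, activation='relu'))
    m.add(layers.Conv2D(64, (3, 3), strides=1, activation='relu'))
    m.add(layers.MaxPooling2D(pool_size=(2, 2)))
    m.add(layers.Dropout(0.25))
    m.add(layers.Conv2D(128, (3, 3), strides=1, activation='relu'))
    m.add(layers.Conv2D(128, (3, 3), strides=1, activation='relu'))
    m.add(layers.MaxPooling2D(pool_size=(2, 2)))
    m.add(layers.Dropout(0.25))
    m.add(layers.Flatten())
    m.add(layers.Dense(1024, activation='relu'))
    m.add(layers.Dropout(0.3))
    m.add(layers.Dense(1024, activation='relu'))
    m.add(layers.Dropout(0.3))
    return m
```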
Figure 2: An example of a native iOS GUI written in our markup-like DSL ((a) iOS GUI screenshot, (b) code describing the GUI written in our DSL).
3.2
Language Model
We designed a simple lightweight DSL to describe GUIs as illustrated in Figure 2. In this work we
are only interested in the GUI layout, the different graphical components, and their relationships;
thus the actual textual value of the labels is ignored. In addition to reducing the size of the search space, the simplicity of the DSL also reduces the size of the vocabulary (i.e. the total number of tokens
supported by the DSL). As a result, our language model can perform token-level language modeling
with a discrete input by using one-hot encoded vectors; eliminating the need for word embedding
techniques such as word2vec [13] that can result in costly computations.
In most programming languages and markup languages, an element is declared with an opening
token; if child elements or instructions are contained within a block, a closing token is usually needed for the interpreter or the compiler. In such a scenario, where the number of child elements contained in a parent element is variable, it is important to model long-term dependencies to be
able to close a block that has been opened. Traditional RNN architectures suffer from vanishing
and exploding gradients preventing them from being able to model such relationships between data
points spread out in time series (i.e. in this case tokens spread out in a sequence). Hochreiter and
Schmidhuber proposed the Long Short-Term Memory (LSTM) neural architecture in order to address
this very problem [9]. The different LSTM gate outputs can be computed as follows:
it = φ(Wix xt + Wiy ht−1 + bi)     (1)
ft = φ(Wfx xt + Wfy ht−1 + bf)     (2)
ot = φ(Wox xt + Woy ht−1 + bo)     (3)
ct = ft • ct−1 + it • σ(Wcx xt + Wcy ht−1 + bc)     (4)
ht = ot • σ(ct)     (5)
With W the matrices of weights, xt the new input vector at time t, ht−1 the previously produced
output vector, ct−1 the previously produced cell state’s output, b the biases, and φ and σ the activation functions sigmoid and hyperbolic tangent, respectively. The cell state c learns to memorize
information by using a recursive connection as done in traditional RNN cells. The input gate i is
used to control the error flow on the inputs of cell state c to avoid input weight conflicts that occur in
traditional RNN because the same weight has to be used for both storing certain inputs and ignoring
others. The output gate o controls the error flow from the outputs of the cell state c to prevent output
weight conflicts that happen in standard RNN because the same weight has to be used for both
retrieving information and not retrieving others. The LSTM memory block can thus use i to decide
when to write information in c and use o to decide when to read information from c. We used the
LSTM variant proposed by Gers and Schmidhuber [6] with a forget gate f to reset memory and help
the network model continuous sequences.
3.3
Decoder
Our model is trained in a supervised learning manner by feeding an image I and a contextual sequence
X of T tokens xt , t ∈ {0 . . . T − 1} as inputs; and the token xT as the target label. As shown on
Figure 1, a CNN-based vision model encodes the input image I into a vectorial representation p. The
input token xt is encoded by an LSTM-based language model into an intermediary representation qt
allowing the model to focus more on certain tokens and less on others [8]. This first language model
is implemented as a stack of two LSTM layers with 128 cells each. The vision-encoded vector p and
the language-encoded vector qt are concatenated into a single feature vector rt which is then fed into
a second LSTM-based model decoding the representations learned by both the vision model and the
language model. The decoder thus learns to model the relationship between objects present in the
input GUI image and the associated tokens present in the DSL code. Our decoder is implemented as
a stack of two LSTM layers with 512 cells each. This architecture can be expressed mathematically
as follows:
p = CNN(I)     (6)
qt = LSTM(xt)     (7)
rt = (qt, p)     (8)
yt = softmax(LSTM′(rt))     (9)
xt+1 = yt     (10)
This architecture allows the whole pix2code model to be optimized end-to-end with gradient descent
to predict a token at a time after it has seen both the image as well as the preceding tokens in the
sequence. The discrete nature of the output (i.e. fixed-sized vocabulary of tokens in the DSL) allows
us to reduce the task to a classification problem. That is, the output layer of our model has the same
number of cells as the vocabulary size; thus generating a probability distribution of the candidate
tokens at each time step allowing the use of a softmax layer to perform multi-class classification.
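Putting the pieces together, the architecture described above can be sketched in Keras-style pseudocode as follows; build_vision_model is the encoder sketched in Section 3.1, context_length corresponds to the sliding window of Section 3.4, and vocab_size is a placeholder for the DSL-dependent vocabulary size.

```python
from tensorflow.keras import layers, models

def build_pix2code_model(vision_model, context_length=48, vocab_size=20):
    """CNN image encoder + LSTM language encoder, concatenated and decoded by a second LSTM stack."""
    image = layers.Input(shape=(256, 256, 3))
    context = layers.Input(shape=(context_length, vocab_size))      # one-hot encoded DSL tokens

    p = vision_model(image)                                         # fixed-length image encoding
    p = layers.RepeatVector(context_length)(p)                      # pair it with every context step

    q = layers.LSTM(128, return_sequences=True)(context)            # language model: 2 x 128 cells
    q = layers.LSTM(128, return_sequences=True)(q)

    r = layers.concatenate([q, p])                                  # joint representation r_t = (q_t, p)
    r = layers.LSTM(512, return_sequences=True)(r)                  # decoder: 2 x 512 cells
    r = layers.LSTM(512, return_sequences=False)(r)
    y = layers.Dense(vocab_size, activation='softmax')(r)           # distribution over the next token

    return models.Model(inputs=[image, context], outputs=y)
```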
3.4
Training
The length T of the sequences used for training is important to model long-term dependencies;
for example to be able to close a block of code that has been opened. After conducting empirical
experiments, the DSL input files used for training were segmented with a sliding window of size
48; in other words, we unroll the recurrent neural network for 48 steps. This was found to be a
satisfactory trade-off between long-term dependencies learning and computational cost. For every
token in the input DSL file, the model is therefore fed with both an input image and a contextual
sequence of T = 48 tokens. While the context (i.e. sequence of tokens) used for training is updated
at each time step (i.e. each token) by sliding the window, the very same input image I is reused
for samples associated with the same GUI. The special tokens <START> and <END> are
used to respectively prefix and suffix the DSL files similarly to the method used by Karpathy and
Fei-Fei [10]. Training is performed by computing the partial derivatives of the loss with respect to
the network weights calculated with backpropagation to minimize the multiclass log loss:
L(I, X) = − Σ_{t=1}^{T} xt+1 log(yt)     (11)
With xt+1 the expected token, and yt the predicted token. The model is optimized end-to-end hence
the loss L is minimized with regard to all the parameters including all layers in the CNN-based vision
model and all layers in both LSTM-based models. Training with the RMSProp algorithm [20] gave
the best results with a learning rate set to 1e−4 and by clipping the output gradient to the range
[−1.0, 1.0] to cope with numerical instability [8]. To prevent overfitting, a dropout regularization
[19] set to 25% is applied to the vision model after each max-pooling operation and at 30% after
each fully-connected layer. In the LSTM-based models, dropout is set to 10% and only applied to
the non-recurrent connections [24]. Our model was trained with mini-batches of 64 image-sequence
pairs.
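In Keras-style pseudocode, the training configuration above (RMSProp with learning rate 1e−4, gradient clipping to [−1.0, 1.0], multiclass log loss, mini-batches of 64) might be expressed as follows, reusing the build_* sketches given earlier; the data pipeline yielding (image, context, next token) examples is assumed to exist.

```python
from tensorflow.keras import optimizers

vision_model = build_vision_model()                 # sketched after Section 3.1
model = build_pix2code_model(vision_model)          # sketched after Section 3.3

optimizer = optimizers.RMSprop(learning_rate=1e-4, clipvalue=1.0)   # clip gradients to [-1, 1]
model.compile(loss='categorical_crossentropy', optimizer=optimizer)

# model.fit([images, contexts], next_tokens, batch_size=64, epochs=10)
```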
Table 1: Dataset statistics.

Dataset type               Synthesizable   Training set              Test set
                                           Instances   Samples       Instances   Samples
iOS UI (Storyboard)        26 × 10^5       1500        93672         250         15984
Android UI (XML)           21 × 10^6       1500        85756         250         14265
web-based UI (HTML/CSS)    31 × 10^4       1500        143850        250         24108
3.5
Sampling
To generate DSL code, we feed the GUI image I and a contextual sequence X of T = 48 tokens
where tokens xt . . . xT −1 are initially set empty and the last token of the sequence xT is set to the
special <START> token. The predicted token yt is then used to update the next sequence of contextual tokens. That is, xt . . . xT−1 are set to xt+1 . . . xT (xt is thus discarded), with xT set to yt. The process is repeated until the token <END> is generated by the model. The generated
DSL token sequence can then be compiled with traditional compilation methods to the desired target
language.
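The sampling loop can be sketched as follows (Python pseudocode; token_to_id and id_to_token are assumed vocabulary mappings, the <START>/<END> strings are placeholders for the special tokens, and max_len simply bounds the loop).

```python
import numpy as np

def greedy_sample(model, image, token_to_id, id_to_token, context_length=48, max_len=150):
    """Greedy decoding: start from <START>, repeatedly predict the next token, stop at <END>."""
    vocab_size = len(token_to_id)
    context = np.zeros((1, context_length, vocab_size))
    context[0, -1, token_to_id['<START>']] = 1.0        # the last slot of the window holds <START>
    generated = []
    for _ in range(max_len):
        probs = model.predict([image, context], verbose=0)[0]
        next_id = int(np.argmax(probs))                  # greedy choice of the next DSL token
        token = id_to_token[next_id]
        if token == '<END>':
            break
        generated.append(token)
        context[0] = np.roll(context[0], -1, axis=0)     # slide the window by one step
        context[0, -1, :] = 0.0
        context[0, -1, next_id] = 1.0                    # append the predicted token to the context
    return generated
```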
Figure 3: Training loss on different datasets and ROC curves calculated during sampling with the model trained for 10 epochs ((a) pix2code training loss, (b) micro-average ROC curves).
4
Experiments
Access to sufficiently large datasets is a typical bottleneck when training deep neural networks. To the best of our knowledge, no dataset consisting of both GUI screenshots and source code was available at the time this paper was written. As a consequence, we synthesized our own data, resulting in the three datasets described in Table 1. The column Synthesizable refers to the maximum number of unique GUI configurations that can be synthesized using our stochastic user interface generator. The column Instances refers to the number of synthesized (GUI screenshot, GUI code) file pairs. The column Samples refers to the number of distinct image-sequence pairs. In fact, training and sampling are
done one token at a time by feeding the model with an image and a sequence of tokens obtained with
a sliding window of fixed size T . The total number of training samples thus depends on the total
number of tokens written in the DSL files and the size of the sliding window. Our stochastic user
interface generator is designed to synthesize GUIs written in our DSL which is then compiled to
the desired target language to be rendered. Using data synthesis also allows us to demonstrate the
capability of our model to generate computer code for three different platforms.
Table 2: Experiments results reported for the test sets described in Table 1.
                           Error (%)
Dataset type               greedy search   beam search 3   beam search 5
iOS UI (Storyboard)        22.73           25.22           23.94
Android UI (XML)           22.34           23.58           40.93
web-based UI (HTML/CSS)    12.14           11.01           22.35
Our model has around 109 × 10^6 parameters to optimize and all experiments are performed with the
same model with no specific tuning; only the training datasets differ as shown on Figure 3. Code
generation is performed with both greedy search and beam search to find the tokens that maximize
the classification probability. To evaluate the quality of the generated output, the classification error
is computed for each sampled DSL token and averaged over the whole test dataset. The length
difference between the generated and the expected token sequences is also counted as error. The
results can be seen on Table 2.
Figures 4, 5, and 6 show samples consisting of input GUIs (i.e. ground truth), and output GUIs
generated by a trained pix2code model. It is important to remember that the actual textual value of the
labels is ignored and that both our data synthesis algorithm and our DSL compiler assign randomly
generated text to the labels. Despite occasional problems in selecting the right color or the right style
for specific GUI elements and some difficulties modelling GUIs consisting of long lists of graphical
components, our model is generally able to learn the GUI layout in a satisfying manner and can
preserve the hierarchical structure of the graphical elements.
Figure 4: Experiment samples for the iOS GUI dataset ((a) groundtruth GUI 1, (b) generated GUI 1, (c) groundtruth GUI 2, (d) generated GUI 2).
5
Conclusion and Discussions
In this paper, we presented pix2code, a novel method to generate computer code given a single GUI
image as input. While our work demonstrates the potential of such a system to automate the process
of implementing GUIs, we only scratched the surface of what is possible. Our model consists of
relatively few parameters and was trained on a relatively small dataset. The quality of the generated
code could be drastically improved by training a bigger model on significantly more data for an
extended number of epochs. Implementing a now-standard attention mechanism [1, 22] could further
improve the quality of the generated code.
Using one-hot encoding does not provide any useful information about the relationships between the
tokens since the method simply assigns an arbitrary vectorial representation to each token. Therefore,
pre-training the language model to learn vectorial representations would allow the relationships
between tokens in the DSL to be inferred (i.e. learning word embeddings such as word2vec [13]) and
as a result alleviate semantic errors in the generated code. Furthermore, one-hot encoding does not scale to very large vocabularies and thus restricts the number of symbols that the DSL can support.
(a) Groundtruth GUI 5
(b) Generated GUI 5
(c) Groundtruth GUI 6
(d) Generated GUI 6
Figure 5: Experiment samples from the web-based GUI dataset.
Generative Adversarial Networks (GANs) [7] have been shown to be extremely powerful at generating
images and sequences [23, 15, 25, 17, 3]. Applying such techniques to the problem of generating
computer code from an input image is so far an unexplored research area. GANs could potentially be
used as a standalone method to generate code or could be used in combination with our pix2code
model to fine-tune results.
A major drawback of deep neural networks is the need for a lot of training data for the resulting
model to generalize well on new unseen examples. One of the significant advantages of the method
we described in this paper is that there is no need for human-labelled data. In fact, the network can
model the relationships between graphical components and associated tokens by simply being trained
on image-sequence pairs. Although we used data synthesis in our paper partly to demonstrate the
capability of our method to generate GUI code for various platforms; data synthesis might not be
needed at all if one wants to focus only on web-based GUIs. In fact, one could imagine crawling the
World Wide Web to collect a dataset of HTML/CSS code associated with screenshots of rendered
websites. Considering the large number of web pages already available online and the fact that new
websites are created every day, the web could theoretically supply a virtually unlimited amount of
training data; potentially allowing deep learning methods to fully automate the implementation of
web-based GUIs.
(a) Groundtruth GUI 3
(b) Generated GUI 3
(c) Groundtruth GUI 4
(d) Generated GUI 4
Figure 6: Experiment samples from the Android GUI dataset.
References
[1] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align
and translate. arXiv preprint arXiv:1409.0473, 2014.
[2] M. Balog, A. L. Gaunt, M. Brockschmidt, S. Nowozin, and D. Tarlow. Deepcoder: Learning to
write programs. arXiv preprint arXiv:1611.01989, 2016.
[3] B. Dai, D. Lin, R. Urtasun, and S. Fidler. Towards diverse and natural image descriptions via a
conditional gan. arXiv preprint arXiv:1703.06029, 2017.
[4] J. Donahue, L. Anne Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and
T. Darrell. Long-term recurrent convolutional networks for visual recognition and description.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages
2625–2634, 2015.
[5] A. L. Gaunt, M. Brockschmidt, R. Singh, N. Kushman, P. Kohli, J. Taylor, and D. Tarlow. Terpret:
A probabilistic programming language for program induction. arXiv preprint arXiv:1608.04428,
2016.
[6] F. A. Gers, J. Schmidhuber, and F. Cummins. Learning to forget: Continual prediction with
lstm. Neural computation, 12(10):2451–2471, 2000.
[7] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and
Y. Bengio. Generative adversarial nets. In Advances in neural information processing systems,
pages 2672–2680, 2014.
[8] A. Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[9] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–
1780, 1997.
[10] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages
3128–3137, 2015.
[11] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional
neural networks. In Advances in neural information processing systems, pages 1097–1105,
2012.
[12] W. Ling, E. Grefenstette, K. M. Hermann, T. Kočiskỳ, A. Senior, F. Wang, and P. Blunsom.
Latent predictor networks for code generation. arXiv preprint arXiv:1603.06744, 2016.
[13] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of
words and phrases and their compositionality. In Advances in neural information processing
systems, pages 3111–3119, 2013.
[14] T. A. Nguyen and C. Csallner. Reverse engineering mobile application user interfaces with
remaui (t). In Automated Software Engineering (ASE), 2015 30th IEEE/ACM International
Conference on, pages 248–259. IEEE, 2015.
[15] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text
to image synthesis. In Proceedings of The 33rd International Conference on Machine Learning,
volume 3, 2016.
[16] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint
arXiv:1312.6229, 2013.
[17] R. Shetty, M. Rohrbach, L. A. Hendricks, M. Fritz, and B. Schiele. Speaking the same language:
Matching machine to human captions by adversarial training. arXiv preprint arXiv:1703.10476,
2017.
[18] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image
recognition. arXiv preprint arXiv:1409.1556, 2014.
[19] N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: a
simple way to prevent neural networks from overfitting. Journal of Machine Learning Research,
15(1):1929–1958, 2014.
[20] T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of
its recent magnitude. COURSERA: Neural networks for machine learning, 4(2), 2012.
[21] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption
generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition,
pages 3156–3164, 2015.
[22] K. Xu, J. Ba, R. Kiros, K. Cho, A. C. Courville, R. Salakhutdinov, R. S. Zemel, and Y. Bengio.
Show, attend and tell: Neural image caption generation with visual attention. In ICML,
volume 14, pages 77–81, 2015.
[23] L. Yu, W. Zhang, J. Wang, and Y. Yu. Seqgan: sequence generative adversarial nets with policy
gradient. arXiv preprint arXiv:1609.05473, 2016.
[24] W. Zaremba, I. Sutskever, and O. Vinyals. Recurrent neural network regularization. arXiv
preprint arXiv:1409.2329, 2014.
[25] H. Zhang, T. Xu, H. Li, S. Zhang, X. Huang, X. Wang, and D. Metaxas. Stackgan: Text to
photo-realistic image synthesis with stacked generative adversarial networks. arXiv preprint
arXiv:1612.03242, 2016.
A Statistical Perspective on Inverse and Inverse Regression Problems

Debashis Chatterjee^a, Sourabh Bhattacharya^a,*

^a Interdisciplinary Statistical Research Unit, Indian Statistical Institute, 203 B. T. Road, Kolkata - 700108, India

* Corresponding author. Email addresses: debashis1chatterjee@gmail.com (Debashis Chatterjee), bhsourabh@gmail.com, sourabh@isical.ac.in (Sourabh Bhattacharya)

arXiv:1707.06852v1 [stat.ME] 21 Jul 2017
Abstract
Inverse problems, where in a broad sense the task is to learn from the noisy response about some unknown function, usually represented as the argument of some known functional form, have received wide attention in the general scientific disciplines. However, in mainstream statistics such an inverse problem paradigm does not seem to be as popular. In this article we provide a brief overview of such problems from a statistical, particularly Bayesian, perspective.
We also compare and contrast the above class of problems with the perhaps more statistically familiar inverse regression problems, arguing that this class of problems contains the traditional class of inverse problems. In the course of our review we point out that the statistical literature is very scarce with respect to both the inverse paradigms, and substantial research work is still necessary to develop the fields.
Keywords: Bayesian analysis, Inverse problems, Inverse regression problems,
Regularization, Reproducing Kernel Hilbert Space (RKHS), Palaeoclimate
reconstruction
1. Introduction
The similarities and dissimilarities between inverse problems and the more traditional forward problems are usually not clearly explained in the literature, and often
“ill-posed” is the term used to loosely characterize inverse problems. We point out that
these two problems may have the same goal or different goals, while both consider the
same model given the data. We first elucidate using the traditional case of deterministic
differential equations, that the goals of the two problems may be the same. Consider a
dynamical system
dx_t/dt = G(t, x_t, θ),     (1.1)
where G is a known function and θ is a parameter. In the forward problem the goal
is to obtain the solution xt ≡ xt (θ), given θ and the initial conditions, whereas, in
the inverse problem, the aim is to obtain θ given the solution process xt . Realistically,
the differential equation would be perturbed by noise, and so one observes the data y = (y_1, …, y_T)^T, where

y_t = x_t(θ) + ε_t,     (1.2)

for noise variables ε_t having some suitable independent and identically distributed (iid) error distribution q, which we assume to be known for simplicity of illustration. A typical method of estimating θ, employed by the scientific community, is the method of calibration, where the solution of (1.1) would be obtained for each θ-value on a proposed grid of plausible values, and a set ỹ(θ) = (ỹ_1(θ), …, ỹ_T(θ))^T is generated from the model (1.2) for every such θ after simulating, for t = 1, …, T, ε̃_t ∼ q (iid); then forming ỹ_t(θ) = x_t(θ) + ε̃_t, and finally reporting as an estimate of the true value that θ in the grid for which ‖y − ỹ(θ)‖ is minimized, given some distance measure ‖·‖; maximization of the correlation between y and ỹ(θ) is also considered. In other words, the
calibration method makes use of the forward technique to estimate the desired quantities of the model. On the other hand, the inverse problem paradigm attempts to directly
estimate θ from the observed data y usually by minimizing some discrepancy measure
between y and x(θ), where x(θ) = (x1 (θ), . . . , xT (θ))T . Hence, from this perspective the goals of both forward and inverse approaches are the same, that is, estimation
of θ. However, the forward approach is well-posed, whereas, the inverse approach is
often ill-posed. To clarify, note that within a grid, there always exists some θ̂ that minimizes ky − ỹ(θ)k among all the grid-values. In this sense the forward problem may be
thought of as well-posed. However, direct minimization of the discrepancy between y
and x(θ) with respect to θ is usually difficult and for high-dimensional θ, the solution
to the minimization problem is usually not unique, and small perturbations of the data
causes large changes in the possible set of solutions, so that the inverse approach is
usually ill-posed. Of course, if the minimization is sought over a set of grid values of θ
only, then the inverse problem becomes well-posed.
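As a toy illustration of the two approaches, the following is a minimal sketch (with the specific, assumed choice G(t, x, θ) = −θx, so that x_t(θ) = x_0 e^{−θt}; all names and values are hypothetical) contrasting the calibration estimate with direct discrepancy minimization over the same grid.

```python
# A minimal sketch (not from the paper) contrasting the calibration (forward) approach
# with direct discrepancy minimization, for the illustrative choice G(t, x, theta) = -theta * x.
import numpy as np

rng = np.random.default_rng(0)
T, x0, theta_true, sigma = 50, 1.0, 0.7, 0.05
t = np.linspace(0.1, 5.0, T)
x = lambda theta: x0 * np.exp(-theta * t)          # solution of the forward problem (1.1)
y = x(theta_true) + sigma * rng.normal(size=T)     # noisy data as in (1.2)

# Calibration: simulate noisy forward runs on a grid and keep the best-matching theta.
grid = np.linspace(0.1, 2.0, 200)
disc = [np.linalg.norm(y - (x(th) + sigma * rng.normal(size=T))) for th in grid]
theta_calib = grid[int(np.argmin(disc))]

# "Inverse" approach: minimize the discrepancy ||y - x(theta)|| directly over the same grid.
theta_inv = grid[int(np.argmin([np.linalg.norm(y - x(th)) for th in grid]))]
print(theta_calib, theta_inv)
```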
From the statistical perspective, the unknown parameter θ of the model needs to be
learned, in either classical or Bayesian way, and hence, in this sense there is no real
distinction between forward and inverse problems. Indeed, statistically, since the data
are modeled conditionally on the parameters, all problems where learning the model
parameter given the data is the goal, are inverse problems. We remark that the literature
usually considers learning unknown functions from the data in the realm of inverse
problems, but a function is nothing but an infinite-dimensional parameter, which is a
very common learning problem in statistics.
We now explain when forward and inverse problems can differ in their aims, and are
significantly different even from the statistical perspective. To give an example, consider the palaeoclimate reconstruction problem discussed in Haslett et al. [18] where
the reconstruction of prehistoric climate at Glendalough in Ireland from fossil pollen
is of interest. The model is built on the realistic assumption that pollen abundance depends upon climate, not the other way around. The compositional pollen data with the
modern climates are available at many modern sites but the climate values associated
with the fossil pollen data are missing. The inverse nature of the problem is associated
with the fact that it is of interest to predict the fossil climate values, given the pollen
assemblages. The forward problem would result, if given the fossil climate values (if
known), the fossil pollen abundances (if unknown), were to be predicted.
Technically, given a data set y that depends upon covariates x, with a probability
distribution f (y|x, θ) where θ is the model parameter, we call the problem ‘inverse’
if it is of interest to predict the corresponding unknown x̃ given a new observed ỹ
(see Bhattacharya and Haslett [9]), after eliminating θ. On the other hand, the more
conventional forward problem considers the prediction of ỹ for given x̃ with the same
probability distribution, again, after eliminating the unknown parameter θ. This perspective clearly distinguishes the forward and inverse problems, as opposed to the other
parameter-learning perspective discussed above, which is much more widely considered in the literature. In fact, with respect to predicting unknown covariates from the
responses, mostly inverse linear regression, particularly in the classical set-up, has been
considered in the literature. To distinguish the traditional inverse problems from the
covariate-prediction perspective, we use the phrase ‘inverse regression’ to refer to the
latter. Other examples of inverse regression are given in Section 7.
Our discussion shows that statistically, there is nothing special about the existing
literature on inverse problems that considers estimation of unknown (perhaps, infinite-dimensional) parameters, and the only class of problems that can be truly regarded
as inverse problems as distinguished from forward problems are those which consider
prediction of unknown covariates from the dependent response data. However, for the
sake of completeness, the traditional inverse problems related to learning of unknown
functions shall occupy a significant portion of our review.
The rest of the paper is structured as follows. In Section 2 we discuss the general
inverse model, providing several examples. In Section 3 we focus on linear inverse
problems, which constitute the most popular class of inverse problems, and review the
links between the Bayesian approach based on simple finite difference priors and the
deterministic Tikhonov regularization. Connections between Gaussian process based
Bayesian inverse problems and deterministic regularizations are reviewed in Section
4. In Section 5 we provide an overview of the connections between the Gaussian process based Bayesian approach and regularization using differential operators, which
generalizes the discussion of Section 3 on the connection between finite difference priors and the Tikhonov regularization. The Bayesian approach to inverse problems in
Hilbert spaces is discussed in Section 6. We then turn attention to inverse regression
problems, providing an overview of such problems and discussing the links with traditional inverse problems in Section 7. Finally, we make concluding remarks in Section
8.
2. Traditional inverse problem
Suppose that one is interested in learning about the function θ given the noisy observed responses y^n = (y_1, …, y_n)^T, where the relationship between θ and y^n is governed by the following equation (2.1):

y_i = G(x_i, θ) + ε_i,     (2.1)

for i = 1, …, n, where the x_i are known covariates or design points, the ε_i are errors associated with the i-th observation and G is a forward operator defined appropriately, which
is usually allowed to be non-injective.
Note that since ε^n = (ε_1, …, ε_n)^T is unknown, the noisy observation vector y^n itself may not be in the image set of G. If θ is a p-dimensional parameter, then there will often be situations when the number of equations is smaller than the number of unknowns, in the sense that p > n (see, for example, Dashti and Stuart [14]). Modern statistical research is increasingly coming across such inverse problems, termed “ill-posed”, which are not in the exact domain of statistical estimation procedures (O’Sullivan [27]), where the maximum likelihood solution or classical least squares may not be uniquely defined, and where the classical solution has very bad perturbation sensitivity. However, although such problematic issues are said to characterize inverse problems, the problems in fact fall in the so-called “large p small n” paradigm and have received wide attention in statistics; see, for example, Bühlmann and van de Geer [10], Giraud [17]. A key concept involved in handling such problems is inclusion of some appropriate penalty term in the discrepancy to be minimized with respect to θ. Such regularization methods were initiated by Tikhonov [34] and Tikhonov and Arsenin [35]. Under this method, usually a criterion of the following form is chosen for the minimization purpose:

(1/n) Σ_{i=1}^{n} [y_i − G(x_i, θ)]² + λ J(θ),   λ > 0.     (2.2)
The functional J is chosen such that highly implausible or irregular values of θ receive large values of J (O’Sullivan [27]). Thus, depending on the problem at hand, J(θ) can be
used to induce “sparsity” in an appropriate sense so that the minimization problem may
be well-defined. We next present several examples of classical inverse problems based
on Aster et al. [3].
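For concreteness, the following is a minimal sketch of the penalized criterion (2.2), under assumed choices of the forward map G (a small sinusoidal expansion) and the ridge-type penalty J(θ) = ‖θ‖²; it is an illustration, not the paper's own example.

```python
# A minimal sketch (illustrative assumptions) of minimizing criterion (2.2):
# a nonlinear-looking forward map G(x, theta) with penalty J(theta) = ||theta||^2.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p = 30, 5
x = np.linspace(0.0, 1.0, n)
theta_true = rng.normal(size=p)

def G(x, theta):
    # illustrative smooth forward operator: a small Fourier-type expansion
    basis = np.column_stack([np.sin((j + 1) * np.pi * x) for j in range(len(theta))])
    return basis @ theta

y = G(x, theta_true) + 0.1 * rng.normal(size=n)

def criterion(theta, lam=0.1):
    # (1/n) sum_i [y_i - G(x_i, theta)]^2 + lam * J(theta), with J(theta) = ||theta||^2
    return np.mean((y - G(x, theta)) ** 2) + lam * np.sum(theta ** 2)

theta_hat = minimize(criterion, x0=np.zeros(p)).x
print(np.round(theta_hat, 3))
```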
2.1. Examples of inverse problems
2.1.1. Vertical seismic profiling
In this scientific field, one wishes to learn about the vertical seismic velocity of the
material surrounding a borehole. A source generates downward-propagating seismic
wavefront at the surface, and in the borehole, a string of seismometers sense these seismic waves. The arrival times of the seismic wavefront at each instrument are measured
from the recorded seismograms. These times provide information on the seismic velocity for vertically traveling waves as a function of depth. The problem is nonlinear if
it is expressed in terms of seismic velocities. However, we can linearize this problem
via a simple change of variables, as follows. Letting z denote the depth, it is possible to
parameterize the seismic structure in terms of slowness, s(z), which is the reciprocal
of the velocity v(z). The observed travel time at depth z can then be expressed as:
t(z) = ∫_0^z s(u) du = ∫_0^∞ s(u) H(z − u) du,     (2.3)
where H is the Heaviside step function. The interest is to learn about s(z) given observed t(z). Theoretically, s(z) = dt(z)/dz, but in practice, simply differentiating the
observations need not lead to useful solutions because noise is generally present in the
observed times t(z), and naive differentiation may lead to unrealistic features of the
solution.
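A small numerical sketch of this instability, under an assumed slowness profile and noise level, is the following; discretizing (2.3) gives a lower-triangular system, and naive differencing of the noisy times amplifies the noise.

```python
# A minimal sketch (not from the paper) of why naive differentiation of noisy travel
# times t(z) is unstable: t = A s with A a discretized integral operator of (2.3).
import numpy as np

rng = np.random.default_rng(2)
n = 100
z = np.linspace(0.01, 1.0, n)
dz = z[1] - z[0]
s_true = 1.0 + 0.5 * np.sin(2 * np.pi * z)      # illustrative true slowness profile

A = dz * np.tril(np.ones((n, n)))               # discretized integral operator of (2.3)
t_noisy = A @ s_true + 0.005 * rng.normal(size=n)

s_naive = np.gradient(t_noisy, dz)              # s(z) = dt/dz applied to noisy data
print("relative error of naive differentiation:",
      np.linalg.norm(s_naive - s_true) / np.linalg.norm(s_true))
```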
2.1.2. Estimation of buried line mass density from vertical gravity anomaly
Here the problem is to estimate an unknown buried line mass density m(x) from
data on vertical gravity anomaly, d(x), observed at some height, h. The mathematical
relationship between d(x) and m(x) is given by
d(x) = ∫_{−∞}^{∞} ( h / [(u − x)² + h²]^{3/2} ) m(u) du.

As before, noise in the data renders the above linear inverse problem difficult. Variations of the above example have been considered in Aster et al. [3].
2.1.3. Estimation of incident light intensity from diffracted light intensity
Consider an experiment in which an angular distribution of illumination passes
through a thin slit and produces a diffraction pattern, for which the intensity is observed. The data, d(s), are measurements of diffracted light intensity as a function
of the outgoing angle −π/2 ≤ s ≤ π/2. The goal here is to obtain the intensity of
incident light on the slit, m(θ), as a function of the incoming angle −π/2 ≤ θ ≤ π/2,
using the following mathematical relationship:
d(s) = ∫_{−π/2}^{π/2} (cos(s) + cos(θ))² [ sin(π(sin(s) + sin(θ))) / (π(sin(s) + sin(θ))) ]² m(θ) dθ.
2.1.4. Groundwater pollution source history reconstruction problem
Consider the problem of recovering the history of groundwater pollution at a source
site from later measurements of the contamination at downstream wells to which the
contaminant plume has been transported by advection and diffusion. The mathematical model for contamination transport is given by the following advection-diffusion
equation with respect to t and transported site x:
∂C/∂t = D ∂²C/∂x² − ν ∂C/∂x,
C(0, t) = C_in(t),
C(x, t) → 0 as x → ∞.

In the above, D is the diffusion coefficient, ν is the velocity of the groundwater flow, and C_in(t) is the time history of contaminant injection at x = 0. The solution to the above advection-diffusion equation is given by

C(x, T) = ∫_0^T C_in(t) f(x, T − t) dt,

where

f(x, T − t) = x / ( 2 √(π D (T − t)³) ) · exp( −(x − ν(T − t))² / (4D(T − t)) ).

It is of interest to learn about C_in(t) from data observed on C(x, T).
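The following is a minimal sketch of the forward map by simple quadrature, using the kernel f as written above and a hypothetical injection history C_in; the diffusion and velocity values are assumptions chosen purely for illustration.

```python
# A minimal sketch (illustrative assumptions) of evaluating the forward map
# C(x, T) = int_0^T C_in(t) f(x, T - t) dt by a simple Riemann sum.
import numpy as np

D, nu = 0.8, 1.0                                   # assumed diffusion coefficient and flow velocity
C_in = lambda t: np.exp(-((t - 1.0) ** 2) / 0.1)   # hypothetical injection history

def f(x, tau):
    # transport kernel as written above (tau = T - t), evaluated for tau > 0
    return x / (2.0 * np.sqrt(np.pi * D * tau ** 3)) * np.exp(-(x - nu * tau) ** 2 / (4.0 * D * tau))

def C(x, T, m=5000):
    t = np.linspace(1e-6, T - 1e-6, m)             # avoid the endpoints
    dt = t[1] - t[0]
    return np.sum(C_in(t) * f(x, T - t)) * dt

print(C(x=2.0, T=5.0))
```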
2.1.5. Transmission tomography
The most basic physical model for tomography assumes that wave energy traveling
between a source and receiver can be considered to be propagating along infinitesimally
narrow ray paths. In seismic tomography, if the slowness at a point x is s(x), and the
ray path is known, then the travel time for seismic energy transiting along that ray path
is given by the line integral along the ray path ℓ:

t = ∫_ℓ s(x(l)) dl.     (2.4)
Learning of s(x) from t is required. Note that (2.4) is a high-dimensional generalization of (2.3). In reality, seismic ray paths will be bent due to refraction and/or reflection,
resulting in nonlinear inverse problem.
The above examples demonstrate the ubiquity of linear inverse problems. As a
result, in the next section we take up the case of linear inverse problems and illustrate
the Bayesian approach in details, also investigating connections with the deterministic
approach employed by the general scientific community.
3. Linear inverse problem
The motivating examples and discussions in this section are based on Bui-Thanh
[11].
Let us consider the following one-dimensional integral equation on a finite interval
as in equation (3.1):
G(x, θ) = ∫ K(x, t) θ(t) dt,     (3.1)

where K(x, ·) is some appropriate, known, real-valued function given x. Now, let the dataset be y^n = (y_1, y_2, …, y_n)^T. Then for a known system response K(x_i, t) for the dataset, the equation can be written as follows:

y_i = G(x_i, θ) + ε_i;   i ∈ {1, 2, …, n}.     (3.2)

As a particular example, let G(x, θ) = ∫_0^1 K(x, t) θ(t) dt, where K(x, t) = (1/√(2πψ²)) exp(−(x − t)²/(2ψ²)) is the Gaussian kernel and θ : [0, 1] → R is to be
learned given the data y n and xn = (x1 , . . . , xn )T . We first illustrate the Bayesian
approach and draw connections with the traditional approach of Tikhonov’s regularization when the integral in G is discretized. In this regard, let xi = (i − 1)/n, for
i = 1, . . . , n. Letting θ = (θ(x1 ), . . . , θ(xn ))T and K be the n × n matrix with the
(i, j)-th element K(x_i, x_j)/n, and ε^n = (ε_1, …, ε_n)^T, the discretized version of (3.2) can be represented as

y^n = Kθ + ε^n.     (3.3)

We assume that ε^n ∼ N_n(0_n, σ²I_n), that is, an n-variate normal with mean 0_n, an n-dimensional vector with all components zero, and covariance σ²I_n, where I_n is the n-th order identity matrix.
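The following minimal sketch (with assumed values of n, ψ and σ, and an assumed test function θ) builds the matrix K of (3.3) and simulates data from the discretized model.

```python
# A minimal sketch (assumed values) of the discretization (3.3): build the n x n matrix K
# from the Gaussian kernel, pick a test function theta, and simulate y^n = K theta + eps^n.
import numpy as np

rng = np.random.default_rng(3)
n, psi, sigma = 100, 0.05, 0.01
x = np.arange(n) / n                                     # x_i = (i - 1)/n
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * psi ** 2)) / (np.sqrt(2 * np.pi * psi ** 2) * n)
theta = np.sin(2 * np.pi * x) + 0.5 * x                  # illustrative "true" theta
y = K @ theta + sigma * rng.normal(size=n)               # model (3.3)
print(K.shape, y[:5])
```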
3.1. Smooth prior on θ
To reflect the belief that the function θ is smooth, one may presume that

θ(x_i) = ( θ(x_{i−1}) + θ(x_{i+1}) ) / 2 + ε̃_i,     (3.4)

where, for i = 1, …, n, the ε̃_i are iid N(0, σ̃²). Thus, a priori, θ(x_i) is assumed to be an average of its nearest neighbors to quantify smoothness, with an additive random perturbation term. Letting

L = (1/2) ×
[ −1  2 −1  0  ···  ···  0 ]
[  0 −1  2 −1   0  ···  0 ]
[  ⋮    ⋱    ⋱    ⋱      ⋮ ]
[  0  ···  ···  0 −1  2 −1 ]     (3.5)

and ε̃ = (ε̃_1, …, ε̃_n)^T, it follows from (3.4) that

Lθ = ε̃.     (3.6)
Now, noting that the Laplacian of a twice-differentiable real-valued function f with independent arguments z_1, …, z_k is given by ∆f = Σ_{i=1}^{k} ∂²f/∂z_i², we have

∆θ(x_j) ≈ n² (Lθ)_j,     (3.7)

where (Lθ)_j is the j-th element of Lθ.
However, the rank of L is n − 1, and boundary conditions on the Laplacian operator are necessary to ensure positive definiteness of the operator. In our case, we assume that θ ≡ 0 outside [0, 1], so that we now assume θ(0) = θ(x_1)/2 + ε̃_0 and θ(x_n) = θ(x_{n−1})/2 + ε̃_n, where ε̃_0 and ε̃_n are iid N(0, σ̃²). With this modification, the prior on θ is given by

π(θ) ∝ exp( −(1/(2σ̃²)) ‖L̃θ‖² ),     (3.8)
where ‖·‖ is the Euclidean norm and

L̃ = (1/2) ×
[  2 −1  0  0  ···  ···  0 ]
[ −1  2 −1  0  ···  ···  0 ]
[  0 −1  2 −1   0  ···  0 ]
[  ⋮    ⋱    ⋱    ⋱      ⋮ ]
[  0  ···  ···  0 −1  2 −1 ]
[  0  ···  ···  ···  0 −1  2 ]     (3.9)
Rather than assuming zero boundary conditions, more generally one may assume that θ(0) and θ(x_n) are distributed as N(0, σ̃²/δ_0²) and N(0, σ̃²/δ_n²), respectively. The resulting modified matrix is then given by

L̂ = (1/2) ×
[ 2δ_0  0   0   0  ···  ···   0  ]
[ −1    2  −1   0  ···  ···   0  ]
[  0   −1   2  −1   0  ···   0  ]
[  ⋮          ⋱    ⋱    ⋱     ⋮  ]
[  0   ···  ···  −1   2  −1   0  ]
[  0   ···  ···   0   0   0  2δ_n ]     (3.10)

To choose δ_0 and δ_n, one may assume that

Var[θ(0)] = σ̃²/δ_0² = Var[θ(x_n)] = σ̃²/δ_n² = Var[θ(x_{[n/2]})] = σ̃² e_{[n/2]}^T (L̂^T L̂)^{−1} e_{[n/2]},

where [n/2] is the largest integer not exceeding n/2, and e_{[n/2]} is the [n/2]-th canonical basis vector in R^{n+1}. It follows that

δ_0² = δ_n² = 1 / ( e_{[n/2]}^T (L̂^T L̂)^{−1} e_{[n/2]} ).

Since this requires solving a non-linear equation (since L̂ contains δ_0 and δ_n), for avoiding computational complexity one may simply employ the approximation

δ_0² = δ_n² = 1 / ( e_{[n/2]}^T (L̃^T L̃)^{−1} e_{[n/2]} ),

where L̃ is given by (3.9).
3.2. Non-smooth prior on θ
To begin with, let us assume that θ has several points of discontinuity on the grid of points {x_0, …, x_n}. To reflect this information in the prior, one may assume that θ(0) = 0 and, for i = 1, …, n, θ(x_i) = θ(x_{i−1}) + ε̃_i, where, as before, the ε̃_i are iid N(0, σ̃²). Then, with

L* = (1/2) ×
[  1   0   0   0  ···  0 ]
[ −1   1   0   0  ···  0 ]
[  0  −1   1   0  ···  0 ]
[  ⋮        ⋱    ⋱     ⋮ ]
[  0   ···  ···  −1   1  0 ]
[  0   ···  ···   0  −1  1 ]     (3.11)

the prior is given by

π(θ) ∝ exp( −(1/(2σ̃²)) ‖L*θ‖² ).     (3.12)
One may also flexibly account for any particular big jump. For instance, if for some ℓ ∈ {0, …, n} the jump θ(x_ℓ) − θ(x_{ℓ−1}) is particularly large compared to the other jumps, then it can be assumed that θ(x_ℓ) = θ(x_{ℓ−1}) + ε*_ℓ, with ε*_ℓ ∼ N(0, σ̃²/ξ²), where ξ < 1. Letting D_ℓ be the diagonal matrix with ξ² as the ℓ-th diagonal element and 1 as the other diagonal elements, the prior is then given by

π(θ) ∝ exp( −(1/(2σ̃²)) ‖D_ℓ L*θ‖² ).     (3.13)

A more general prior can be envisaged where the number and location of the jump discontinuities are unknown. Then we may consider a diagonal matrix D = diag{ξ_1, …, ξ_n}, so that, conditionally on the hyperparameters ξ_1, …, ξ_n, the prior on θ is given by

π(θ | ξ_1, …, ξ_n) ∝ exp( −(1/(2σ̃²)) ‖D L*θ‖² ).     (3.14)

Priors on ξ_1, …, ξ_n may be considered to complete the specification. These may also be estimated by maximizing the marginal likelihood obtained by integrating out θ, which is known as the ML-II method; see Berger [6]. Calvetti and Somersalo [12] also advocate likelihood based methods.
3.3. Posterior distribution
For convenience, let us generically denote the matrices L, L̃, L̂, L*, D_ℓ L*, DL* by Γ^{−1/2}. Then it can be easily verified that the posterior of θ admits the following generic form:

π(θ | y^n, x^n) ∝ exp( −(1/(2σ²)) ‖y^n − Kθ‖² − (1/(2σ̃²)) ‖Γ^{−1/2}θ‖² ).     (3.15)

Note that the exponent of the posterior is of the form of the Tikhonov functional, which we denote by T(θ). The maximizer of the posterior, commonly known as the maximum a posteriori (MAP) estimator, is given by

θ̂_MAP = argmax_θ π(θ | y^n, x^n) = argmin_θ T(θ).     (3.16)

In other words, the deterministic solution to the inverse problem obtained by Tikhonov’s regularization is nothing but the Bayesian MAP estimator in our context.
Writing H = (1/σ²) K^T K + (1/σ̃²) Γ^{−1}, which is the Hessian of the Tikhonov functional (regularized misfit), and writing ‖·‖_H = ‖H^{1/2} ·‖, it is clear that (3.15) can be simplified to the Gaussian form, given by

π(θ | y^n, x^n) ∝ exp( −(1/2) ‖θ − (1/σ²) H^{−1} K^T y^n‖²_H ).     (3.17)
It follows from (3.17) that the inverse of the Hessian of the regularized misfit is the
posterior covariance itself. From the above posterior it also trivially follows that
θ̂_MAP = (1/σ²) H^{−1} K^T y^n = ( (1/σ²) K^T K + (1/σ̃²) Γ^{−1} )^{−1} (1/σ²) K^T y^n,     (3.18)
which coincides with the Tikhonov solution for linear inverse problems. The connection between the traditional deterministic Tikhonov regularization approach and Bayesian analysis continues to hold even if the likelihood is non-Gaussian.
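The following minimal sketch computes the MAP/Tikhonov solution (3.18), using the second-difference matrix L̃ of (3.9) as Γ^{−1/2}; the kernel, data and variance values repeat the assumed set-up of the earlier sketch and are illustrative only.

```python
# A minimal sketch (illustrative assumptions) of the MAP / Tikhonov solution (3.18)
# for the discretized linear model (3.3), with Gamma^{-1/2} = L-tilde from (3.9).
import numpy as np

rng = np.random.default_rng(3)
n, psi, sigma, sigma_t = 100, 0.05, 0.01, 1.0
x = np.arange(n) / n
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * psi ** 2)) / (np.sqrt(2 * np.pi * psi ** 2) * n)
theta_true = np.sin(2 * np.pi * x) + 0.5 * x
y = K @ theta_true + sigma * rng.normal(size=n)

Lt = 0.5 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))   # second-difference matrix (3.9)
H = K.T @ K / sigma ** 2 + Lt.T @ Lt / sigma_t ** 2             # Hessian of the Tikhonov functional
theta_map = np.linalg.solve(H, K.T @ y / sigma ** 2)            # equation (3.18)
print(np.linalg.norm(theta_map - theta_true) / np.linalg.norm(theta_true))
```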
3.4. Exploration of the smoothness conditions
For deeper investigation of the smoothness conditions, let us write

θ̂_MAP = argmin_θ T(θ),  where  σ² T(θ) = (1/2) ‖y^n − ỹ^n‖² + (ϱ/2) ‖Γ̃^{1/2} θ‖²,     (3.19)

with ỹ^n = Kθ, ϱ = σ²/σ̃² and Γ̃^{1/2} = Γ^{−1/2}. Now, from (3.7) it follows that for the smooth priors with the zero boundary conditions, our Tikhonov functional discretizes

T_∞(θ) = (1/2) ‖y^n − ỹ^n‖² + (ϱ/2) ‖∆θ‖²_{L²(0,1)},     (3.20)

where ‖·‖²_{L²(0,1)} = ∫_0^1 (·)² dt.
On the other hand, for the non-smooth prior (3.12), rather than discretizing ∆θ, ∇θ, that is, the gradient of θ, is discretized. In other words, for non-smooth priors, our Tikhonov functional discretizes

T_∞(θ) = (1/2) ‖y^n − ỹ^n‖² + (ϱ/2) ‖∇θ‖²_{L²(0,1)}.     (3.21)

Hence, realizations of the prior (3.12) are less smooth compared to those of our smooth priors. However, the realizations of (3.12) must be continuous. The priors given by (3.13) and (3.14) also support continuous functions as long as the hyperparameters are bounded away from zero. These facts, although clear, can be rigorously justified by functional analysis arguments, in particular, using the Sobolev imbedding theorem (see, for example, Arbogast and Bona [1]).
4. Links between Bayesian inverse problems based on Gaussian process prior and
deterministic regularizations
In this section, based on Rasmussen and Williams [31], we illustrate the connections between deterministic regularizations such as those obtained from differential
operators as above, and Bayesian inverse problems based on the very popular Gaussian
process prior on the unknown function. A key tool for investigating such relationship
is the reproducing kernel Hilbert space (RKHS).
4.1. RKHS
We adopt the following definition of RKHS provided in Rasmussen and Williams
[31]:
Definition 4.1 (RKHS). Let H be a Hilbert space of real functions θ defined on an
index set X. Then H is called an RKHS endowed with an inner product ⟨·, ·⟩_H (and norm ‖θ‖_H = ⟨θ, θ⟩_H^{1/2}) if there exists a function K : X × X → R with the following properties:
(a) for every x, K(·, x) ∈ H, and
(b) K has the reproducing property ⟨θ(·), K(·, x)⟩_H = θ(x).
Observe that since K(·, x), K(·, x′) ∈ H, it follows that ⟨K(·, x), K(·, x′)⟩_H = K(x, x′). The Moore-Aronszajn theorem asserts that the RKHS uniquely determines K, and vice versa. Formally,

Theorem 1 (Aronszajn [2]). Let X be an index set. Then for every positive definite function K(·, ·) on X × X there exists a unique RKHS, and vice versa.

Here, by a positive definite function K(·, ·) on X × X we mean that ∫∫ K(x, x′) g(x) g(x′) dν(x) dν(x′) > 0 for all non-zero functions g ∈ L²(X, ν), where L²(X, ν) denotes the space of functions square-integrable on X with respect to the measure ν.
Indeed, the subspace H0 of H spanned by the functions {K(·, xi ); i = 1, 2, . . .}
is dense in H in the sense that every function in H is a pointwise limit of a Cauchy
sequence from H0 .
To proceed, we require the concepts of eigenvalues and eigenfunctions associated
with kernels. In the following section we provide a briefing on these.
4.2. Eigenvalues and eigenfunctions of kernels
We borrow the statements of the following definition of eigenvalue and eigenfunction, and the subsequent statement of Mercer’s theorem from Rasmussen and Williams
[31].
Definition 4.2. A function ψ(·) that obeys the integral equation
∫_X C(x, x′) ψ(x) dν(x) = λ ψ(x′),     (4.1)
is called an eigenfunction of the kernel C with eigenvalue λ with respect to the measure
ν.
We assume that the ordering is chosen such that λ1 ≥ λ2 ≥ · · · . The eigenfunctions
are orthogonal with respect to ν and can be chosen to be normalized so that ∫_X ψ_i(x) ψ_j(x) dν(x) = δ_ij, where δ_ij = 1 if i = j and 0 otherwise.
The following well-known theorem (see, for example, König [21]) expresses the
positive definite kernel C in terms of its eigenvalues and eigenfunctions.
11
Theorem 2 (Mercer’s theorem). Let (X, ν) be a finite measure space and C ∈ L^∞(X², ν²) be a positive definite kernel. By L^∞(X², ν²) we mean the set of all measurable functions C : X² → R which are essentially bounded, that is, bounded up to a set of ν²-measure zero. For any function C in this set, its essential supremum, given by inf{C̄ ≥ 0 : |C(x_1, x_2)| < C̄ for almost all (x_1, x_2) ∈ X × X}, serves as the norm ‖C‖.
Let ψ_j ∈ L²(X, ν) be the normalized eigenfunctions of C associated with the eigenvalues λ_j(C) > 0. Then
(a) the eigenvalues {λ_j(C)}_{j=1}^{∞} are absolutely summable;
(b) C(x, x′) = Σ_{j=1}^{∞} λ_j(C) ψ_j(x) ψ̄_j(x′) holds ν²-almost everywhere, where the series converges absolutely and uniformly ν²-almost everywhere. In the above, ψ̄_j denotes the complex conjugate of ψ_j.
It is important to note the difference between the eigenvalue λj (C) associated with
the kernel C and λj (Σn ) where Σn denotes the n × n Gram matrix with (i, j)-th
element C(x_i, x_j). Observe that (see Rasmussen and Williams [31]):

λ_j(C) ψ_j(x′) = ∫_X C(x, x′) ψ_j(x) dν(x) ≈ (1/n) Σ_{i=1}^{n} C(x_i, x′) ψ_j(x_i),     (4.2)

where, for i = 1, …, n, x_i ∼ ν, assuming that ν is a probability measure. Now substituting x′ = x_i, i = 1, …, n, in (4.2) yields the following approximate eigen system for the matrix Σ_n:

Σ_n u_j ≈ n λ_j(C) u_j,     (4.3)

where the i-th component of u_j is given by

u_{ij} = ψ_j(x_i)/√n.     (4.4)

Since the ψ_j are normalized to have unit norm, it holds that

u_j^T u_j = (1/n) Σ_{i=1}^{n} ψ_j²(x_i) ≈ ∫_X ψ_j²(x) dν(x) = 1.     (4.5)

From (4.5) it follows that

λ_j(Σ_n) ≈ n λ_j(C).     (4.6)

Indeed, Theorem 3.4 of Baker [5] shows that n^{−1} λ_j(Σ_n) → λ_j(C), as n → ∞.
For our purposes the main usefulness of the RKHS framework is that ‖θ‖²_H can be perceived as a generalization of θ^T K^{−1} θ, where θ = (θ(x_1), …, θ(x_n))^T and K = (K(x_i, x_j))_{i,j=1,…,n} is the n × n matrix with (i, j)-th element K(x_i, x_j).
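The approximation (4.6) is easy to check numerically; the following minimal sketch (Gaussian kernel with an assumed bandwidth, ν uniform on (0, 1)) prints the scaled Gram-matrix eigenvalues λ_j(Σ_n)/n for increasing n, which stabilize as predicted by Baker's theorem.

```python
# A minimal numerical sketch (not from the paper) of the approximation (4.6).
import numpy as np

rng = np.random.default_rng(4)
kernel = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * 0.1 ** 2))

for n in (100, 400, 1600):
    x = rng.uniform(0.0, 1.0, size=n)           # x_i ~ nu
    Sigma_n = kernel(x, x)                      # Gram matrix with entries C(x_i, x_j)
    eig = np.sort(np.linalg.eigvalsh(Sigma_n))[::-1]
    print(n, np.round(eig[:4] / n, 4))          # lambda_j(Sigma_n)/n for j = 1..4
```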
4.3. Inner product
Consider a real positive semidefinite kernel K(x, x′) with an eigenfunction expansion K(x, x′) = Σ_{i=1}^{N} λ_i φ_i(x) φ_i(x′) relative to a measure μ. Mercer’s theorem ensures that the eigenfunctions are orthonormal with respect to μ, that is, we have ∫ φ_i(x) φ_j(x) dμ(x) = δ_ij. Consider a Hilbert space of linear combinations of the eigenfunctions, that is, θ(x) = Σ_{i=1}^{N} θ_i φ_i(x) with Σ_{i=1}^{N} θ_i²/λ_i < ∞. Then the inner product ⟨θ_1, θ_2⟩_H between θ_1 = Σ_{i=1}^{N} θ_{1i} φ_i(x) and θ_2 = Σ_{i=1}^{N} θ_{2i} φ_i(x) is of the form

⟨θ_1, θ_2⟩_H = Σ_{i=1}^{N} θ_{1i} θ_{2i} / λ_i.     (4.7)

This induces the norm ‖·‖_H, where ‖θ‖²_H = Σ_{i=1}^{N} θ_i²/λ_i. A smoothness condition on the space is immediately imposed by requiring the norm to be finite – the eigenvalues must decay sufficiently fast.
The Hilbert space defined above is the unique RKHS with respect to K, in that it satisfies the following reproducing property:

⟨θ, K(·, x)⟩ = Σ_{i=1}^{N} θ_i λ_i φ_i(x) / λ_i = θ(x).     (4.8)

Further, the kernel satisfies the following:

⟨K(x, ·), K(x′, ·)⟩ = Σ_{i=1}^{N} λ_i² φ_i(x) φ_i(x′) / λ_i = K(x, x′).     (4.9)

Now, with reference to (4.6), observe that the square norm ‖θ‖²_H = Σ_{i=1}^{N} θ_i²/λ_i and the quadratic form θ^T K^{−1} θ have the same form if the latter is expressed in terms of the eigenvectors of K, albeit the latter has n terms, while the square norm has N terms.
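The reproducing property (4.8) can be verified directly with a small finite eigen-expansion; the following sketch uses an assumed set of N = 5 eigenfunctions and eigenvalues, chosen purely for illustration.

```python
# A minimal sketch (illustrative, finite N) of the RKHS construction of Section 4.3:
# represent theta by its coefficients and check <theta, K(., x)>_H = theta(x).
import numpy as np

N = 5
lam = 1.0 / (np.arange(1, N + 1) ** 2)                       # decaying eigenvalues
phi = lambda x: np.array([np.sqrt(2.0) * np.sin((i + 1) * np.pi * x) for i in range(N)])

theta_coef = np.array([0.7, -0.3, 0.2, 0.05, -0.01])         # theta(x) = sum_i theta_i phi_i(x)
theta = lambda x: theta_coef @ phi(x)

x0 = 0.37
k_coef = lam * phi(x0)                                       # coefficients of K(., x0): lambda_i phi_i(x0)
inner = np.sum(theta_coef * k_coef / lam)                    # <theta, K(., x0)>_H as in (4.7)
print(inner, theta(x0))                                      # the two values agree
```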
4.4. Regularization
The ill-posedness of inverse problems can be understood from the fact that for any given data set y^n, all functions that pass through the data set minimize any given measure of discrepancy D(y^n, θ) between the data y^n and θ. To combat this, one considers minimization of the following regularized functional:

R(θ) = D(y^n, θ) + (τ/2) ‖θ‖²_H,     (4.10)
where the second term, which is the regularizer, controls smoothness of the function
and τ is the appropriate Lagrange multiplier.
The well-known representer theorem (see, for example, Kimeldorf and Wahba [20],
O’Sullivan et al. [28], Wahba [38], Schölkopf and Smola [32]) guarantees that each minimizer θ ∈ H can be represented as θ(x) = Σ_{i=1}^{n} c_i K(x, x_i), where K is the corresponding reproducing kernel. If D(y^n, θ) is convex, then there is a unique minimizer
θ̂.
4.5. Gaussian process modeling of the unknown function θ
For simplicity, let us consider the model

y_i = θ(x_i) + ε_i,     (4.11)

for i = 1, …, n, where ε_i ∼ N(0, σ²) iid, and where we assume σ to be known for simplicity of illustration. Let θ(x) be modeled by a Gaussian process with mean function μ(x) and covariance kernel K associated with the RKHS. In other words, for any x ∈ X, E[θ(x)] = μ(x), and for any x_1, x_2 ∈ X, Cov(θ(x_1), θ(x_2)) = K(x_1, x_2).
Assuming for convenience that μ(x) = 0 for all x ∈ X, it follows that the posterior distribution of θ(x*) for any x* ∈ X is given by

π(θ(x*) | y^n, x^n) ≡ N(μ̂(x*), σ̂²(x*)),     (4.12)

where, for any x* ∈ X,

μ̂(x*) = s^T(x*) (K + σ²I_n)^{−1} y^n;     (4.13)

σ̂²(x*) = K(x*, x*) − s^T(x*) (K + σ²I_n)^{−1} s(x*),     (4.14)

with s(x*) = (K(x*, x_1), …, K(x*, x_n))^T.
Observe that the posterior mean admits the following representation:

μ̂(x*) = Σ_{i=1}^{n} c̃_i K(x*, x_i),     (4.15)

where c̃_i is the i-th element of (K + σ²I_n)^{−1} y^n.
In other words, the posterior mean of the Gaussian process based model is consistent with the representer theorem.
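The following is a minimal sketch of the Gaussian process posterior (4.13)–(4.14), with an assumed squared-exponential kernel, simulated data, and illustrative noise level; it is not the paper's own example.

```python
# A minimal sketch (illustrative kernel and data) of the GP posterior (4.12)-(4.14).
import numpy as np

rng = np.random.default_rng(5)
kern = lambda a, b: np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * 0.1 ** 2))

n, sigma = 30, 0.1
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + sigma * rng.normal(size=n)

xs = np.linspace(0, 1, 200)                       # prediction points x*
K = kern(x, x)
S = kern(xs, x)                                   # rows are s(x*)^T
A = np.linalg.inv(K + sigma ** 2 * np.eye(n))
mu_hat = S @ A @ y                                # (4.13)
var_hat = kern(xs, xs).diagonal() - np.einsum('ij,jk,ik->i', S, A, S)   # (4.14)
print(mu_hat[:3], var_hat[:3])
```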
5. Regularization using differential operators and connection with Gaussian process
For x = (x_1, …, x_d)^T ∈ R^d, let

‖L^m θ‖² = ∫ Σ_{j_1+···+j_d=m} ( ∂^m θ(x) / (∂x_1^{j_1} ··· ∂x_d^{j_d}) )² dx,     (5.1)

and

‖Pθ‖² = Σ_{m=0}^{M} b_m ‖L^m θ‖²,     (5.2)

for some M > 0, where the coefficients b_m ≥ 0. In particular, we assume for our purpose that b_0 > 0. It is clear that ‖Pθ‖² is translation and rotation invariant. This norm penalizes θ in terms of its derivatives up to order M.
5.1. Relation to RKHS
It can be shown, using the fact that the complex exponentials exp(2πi s^T x) are eigenfunctions of the differential operator, that

‖Pθ‖² = ∫ Σ_{m=0}^{M} b_m (4π² s^T s)^m |θ̃(s)|² ds,     (5.3)

where θ̃(s) is the Fourier transform of θ(x). Comparison of (5.3) with (4.7) yields a power spectrum of the form [ Σ_{m=0}^{M} b_m (4π² s^T s)^m ]^{−1}, which yields the following kernel by Fourier inversion:

K(x, x′) = K(x − x′) = ∫ exp(2πi s^T (x − x′)) / Σ_{m=0}^{M} b_m (4π² s^T s)^m ds.     (5.4)
Calculus of variations can also be used to minimize R(θ) with respect to θ, which
yields (using the Euler-Lagrange equation)
θ(x) = Σ_{i=1}^{n} b_i G(x − x_i),     (5.5)

with

Σ_{m=0}^{M} (−1)^m b_m ∇^m G = δ_{x−x′},     (5.6)

where G is known as the Green’s function. Using the Fourier transform on (5.6) it can be shown that the Green’s function is nothing but the kernel K given by (5.4). Moreover, it follows from (5.6) that Σ_{m=0}^{M} (−1)^m b_m ∇^m and K are inverses of each other.
Examples of kernels derived from differential operators are as follows. For d = 1, setting b_0 = b², b_1 = 1 and b_m = 0 for m ≥ 2, one obtains K(x, x′) = K(x − x′) = (1/(2b)) exp(−b|x − x′|), which is the covariance of the Ornstein-Uhlenbeck process. For general d dimensions, setting b_m = b^{2m}/(m! 2^m) yields K(x, x′) = K(x − x′) = (1/(2πb²)^{d/2}) exp( −(x − x′)^T (x − x′)/(2b²) ).
Considering a grid x^n, note that

‖Pθ‖² ≈ Σ_{m=0}^{M} b_m (D_m θ)^T (D_m θ) = θ^T ( Σ_{m=0}^{M} b_m D_m^T D_m ) θ,     (5.7)
where Dm is a suitable finite-difference approximation of the differential operator.
Note that such finite-difference approximation has been explored in Section 3, which
we now investigate in a rigorous setting. Also, since (5.7) is quadratic in θ, assuming a prior for θ, the logarithm of which has this form, and further assuming that
log [D(y n , θ)] is a log-likelihood quadratic in θ, a Gaussian posterior results.
5.2. Spline models and connection with Gaussian process
Let us consider the penalty function to be ‖L^m θ‖². Then polynomials up to degree m − 1 are not penalized and so are in the null space of the regularization operator. In this case, it can be shown that a minimizer of R(θ) is of the form

θ(x) = Σ_{j=1}^{k} d_j ψ_j(x) + Σ_{i=1}^{n} c_i G(x, x_i),     (5.8)

where {ψ_1, …, ψ_k} are polynomials that span the null space and the Green’s function G is given by (see Duchon [15], Meinguet [23])

G(x, x′) = G(x − x′) = { c_{m,d} |x − x′|^{2m−d} log|x − x′|   if 2m > d and d is even;
                         c_{m,d} |x − x′|^{2m−d}              otherwise,     (5.9)

where the c_{m,d} are constants (see Wahba [38] for the explicit form).
We now specialize the above arguments to the spline set-up. As before, let us consider the model y_i = θ(x_i) + ε_i, where, for i = 1, …, n, ε_i ∼ N(0, σ²) iid. For simplicity, we consider the one-dimensional set-up, and consider the cubic spline smoothing problem that minimizes

R(θ) = Σ_{i=1}^{n} (y_i − θ(x_i))² + τ ∫_0^1 [θ″(x)]² dx,     (5.10)

where 0 < x_1 < ··· < x_n < 1. The solution to this minimization problem is given by

θ(x) = Σ_{j=0}^{1} d_j x^j + Σ_{i=1}^{n} c_i (x − x_i)³_+,     (5.11)
where, for any x, (x)+ = x if x > 0 and zero otherwise.
Following Wahba [37], let us consider

f(x) = Σ_{j=0}^{1} β_j x^j + θ(x),     (5.12)

where β = (β_0, β_1)^T ∼ N(0, σ_β² I_2), and θ is a zero mean Gaussian process with covariance

σ_θ² K(x, x′) = σ_θ² ∫_0^1 (x − u)_+ (x′ − u)_+ du = σ_θ² ( |x − x′| v²/2 + v³/3 ),     (5.13)

where v = min{x, x′}.
Taking σ_β² → ∞ makes the prior of β vague, so that the penalty on the polynomial terms in the null space is effectively washed out. It follows that

E[θ(x*) | y^n, x^n] = h(x*)^T β̂ + s(x*)^T K̂^{−1} ( y^n − H^T β̂ ),     (5.14)

where, for any x, h(x) = (1, x)^T, H = (h(x_1), …, h(x_n)), K̂ is the covariance matrix with (i, j)-th element σ_θ² K(x_i, x_j) + σ² δ_ij, and β̂ = ( H K̂^{−1} H^T )^{−1} H K̂^{−1} y^n.
Since the elements of s(x∗ ) are piecewise cubic polynomials, it is easy to see that
the posterior mean (5.14) is also a piecewise cubic polynomial. It is also clear that
(5.14) is a first order polynomial on [0, x1 ] and [xn , 1].
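The following minimal sketch (with illustrative data and assumed variances σ², σ_θ²) computes the posterior mean (5.14) using the covariance (5.13) and the generalized least squares estimate β̂; it is a sketch of the construction, not the paper's own computation.

```python
# A minimal sketch (illustrative assumptions) of the spline-type posterior mean (5.14).
import numpy as np

rng = np.random.default_rng(6)
n, sigma, sigma_th = 40, 0.1, 1.0
x = np.sort(rng.uniform(0, 1, n))
y = 0.3 + 0.5 * x + 0.3 * np.sin(3 * np.pi * x) + sigma * rng.normal(size=n)

def K13(a, b):
    # kernel of (5.13): |a - b| v^2 / 2 + v^3 / 3, with v = min(a, b)
    A, B = np.meshgrid(a, b, indexing='ij')
    v = np.minimum(A, B)
    return np.abs(A - B) * v ** 2 / 2 + v ** 3 / 3

H = np.vstack([np.ones(n), x])                              # columns are h(x_i) = (1, x_i)^T
K_hat = sigma_th ** 2 * K13(x, x) + sigma ** 2 * np.eye(n)  # entries sigma_th^2 K(x_i,x_j) + sigma^2 delta_ij
Ki = np.linalg.inv(K_hat)
beta_hat = np.linalg.solve(H @ Ki @ H.T, H @ Ki @ y)        # beta-hat as in Section 5.2

xs = np.linspace(0, 1, 200)                                 # prediction points x*
Hs = np.vstack([np.ones(xs.size), xs])                      # h(x*) stacked
s = sigma_th ** 2 * K13(xs, x)                              # rows are s(x*)^T
post_mean = Hs.T @ beta_hat + s @ Ki @ (y - H.T @ beta_hat) # equation (5.14)
print(post_mean[:5])
```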
5.2.1. Connection with the ℓ-fold integrated Wiener process
Shepp [33] considered the ℓ-fold integrated Wiener process, for ℓ = 0, 1, 2, …, as follows:

W_ℓ(x) = ∫_0^1 ( (x − u)^ℓ_+ / ℓ! ) Z(u) du,     (5.15)

where Z is a Gaussian white noise process with covariance δ(u − u′). As a special case, note that W_0 is the standard Wiener process. In our case, note that

K(x, x′) = Cov( W_1(x), W_1(x′) ).     (5.16)

The above ideas can be easily extended to the case of the regularizer ∫ [f^{(m)}(x)]² dx, for m ≥ 1, by replacing (x − u)_+ with (x − u)^{m−1}_+/(m − 1)! and letting h(x) = (1, x, …, x^{m−1})^T.
6. The Bayesian approach to inverse problems in Hilbert spaces
We assume the following model
y = G(θ) + ε,     (6.1)

where y, θ and ε are in Banach or Hilbert spaces.
6.1. Bayes theorem for general inverse problems
We will consider the model stated by equation (6.1). Let Y and Θ denote the sample spaces for y and θ, respectively. Let us first assume that both are separable Banach spaces. Assume μ_0 to be the prior measure for θ. Assuming a well-defined joint distribution for (y, θ), let us denote the posterior of θ given y as μ^y. Let ε ∼ Q_0, where ε and θ are independent, so that Q_0 is the distribution of ε. Let us denote the conditional distribution of y given θ by Q_θ, obtained from a translation of Q_0 by G(θ). Assume that Q_θ ≪ Q_0. Thus, for some potential Φ : Θ × Y → R,

dQ_θ/dQ_0 = exp(−Φ(θ, y)).     (6.2)
Thus, for fixed θ, Φ(θ, ·) : Y → R is measurable and E_{Q_0}[exp(−Φ(θ, y))] = 1. Note that −Φ(·, y) is nothing but the log-likelihood.
Let ν_0 denote the product measure

ν_0(dθ, dy) = μ_0(dθ) Q_0(dy),     (6.3)

and let us assume that Φ is ν_0-measurable. Then (θ, y) ∈ Θ × Y is distributed according to the measure ν(dθ, dy) = μ_0(dθ) Q_θ(dy). It then also follows that ν ≪ ν_0, with

(dν/dν_0)(θ, y) = exp(−Φ(θ, y)).     (6.4)
Then we have the following statement of Bayes’ theorem for general inverse problems:
Theorem 3 (Bayes theorem for general inverse problems). Assume that Φ : Θ × Y → R is ν_0-measurable and

C = ∫_Θ exp(−Φ(θ, y)) μ_0(dθ) > 0,     (6.5)

for Q_0-almost all y. Then the posterior of θ given y, which we denote by μ^y, exists under ν. Also, μ^y ≪ μ_0 and, for all y, ν_0-almost surely,

(dμ^y/dμ_0)(θ) = (1/C) exp(−Φ(θ, y)).     (6.6)
Now assume that Θ and Y are Hilbert spaces. Suppose ε ∼ N(0, Γ). Then the following theorem holds:

Theorem 4 (Vollmer [36]).

dμ^y/dμ_0 ∝ exp( −(1/2) ‖G(θ)‖²_Γ + ⟨y, G(θ)⟩_Γ ),     (6.7)

where ⟨·, ·⟩_Γ = ⟨Γ^{−1}·, ·⟩, and ‖·‖_Γ is the norm induced by ⟨·, ·⟩_Γ.
For the model y_i = θ(x_i) + ε_i, for i = 1, …, n, with ε_i ∼ N(0, σ²) iid, the posterior is of the form

dμ^y/dμ_0 ∝ exp( −Σ_{i=1}^{n} (y_i − θ(x_i))² / (2σ²) ).     (6.8)
6.2. Connection with regularization methods
It is not immediately clear if the Bayesian approach in the Hilbert space setting
has connection with the deterministic regularization methods, but Vollmer [36] prove
consistency of the posterior assuming certain stability results which are used to prove
convergence of regularization methods; see Engl et al. [16].
We next turn to inverse regression.
7. Inverse regression
We first provide some examples of inverse regression, mostly based on Avenhaus
et al. [4].
7.1. Examples of inverse regression
7.1.1. Example 1: Measurement of nuclear materials
Measurement of the amount of nuclear materials such as plutonium by direct chemical means is an extremely difficult exercise. This motivates model-based methods.
For instance, there are physical laws relating heat production or the number of neutrons emitted (the dependent response variable y) to the amount of material present,
the latter being the independent variable x. But any measurement instrument based on
the physical laws first needs to be calibrated. In other words, the unknown parameters
of the model need to be learned, using known inputs and outputs. However, the independent variables are usually subject to measurement errors, motivating a statistical
model. Thus, conditionally on x and parameter(s) θ, y ∼ P (·|x, θ), where P (·|x, θ)
denotes some appropriate probability model. Given y n and xn , and some specific ỹ,
the corresponding x̃ needs to be predicted.
7.1.2. Example 2: Estimation of family incomes
Suppose that it is of interest to estimate the family incomes in a certain city through
public opinion poll. Most of the population, however, will be unwilling to provide
reliable answers to the questionnaires. One way to extract relatively reliable figures is
to consider some dependent variable, say, housing expenses (y), which is supposed to
strongly depend on family income (x); see Muth [26], and such that the population is
less reluctant to divulge the correct figures related to y. From past survey data on xn
and y n , and using current data from families who may provide reliable answers related
to both x and y, a statistical model may be built, using which the unknown family incomes may be predicted, given the corresponding housing expenses.
7.1.3. Example 3: Missing variables
In regression problems where some of the covariate values xi are missing, they may
be estimated from the remaining data and the model. In this context, Press and Scott
[29] considered a simple linear regression problem in a Bayesian framework. Under
special assumptions about the error and prior distributions, they showed that an optimal
procedure for estimating the linear parameters is to first estimate the missing xi from
an inverse regression based only on the complete data pairs.
7.1.4. Example 4: Bioassay
It is usual to investigate the effects of substances (y) given in several dosages on
organisms (x) using bioassay methods. In this context it may be of interest to determine the dosage necessary to obtain some interesting effect, making inverse regression
relevant (see, for example, Rasch et al. [30]).
7.1.5. Example 5: Learning the Milky Way
The modelling of the Milky Way galaxy is an integral step in the study of galactic
dynamics; this is because knowledge of model parameters that define the Milky Way
directly influences our understanding of the evolution of our galaxy. Since the nature
of the Galaxy’s phase space, in the neighbourhood of the Sun, is affected by distinct
Milky Way features, measurements of phase space coordinates of individual stars that
live in this neighbourhood of the Sun, will bear information about the influence of such
features. Then, inversion of such measurements can help us learn the parameters that
describe such Milky Way features. In this regard, learning about the location of the Sun
with respect to the center of the galaxy, given the two-component velocities of the stars
in the vicinity of the Sun, is an important problem. For k such stars, Chakrabarty et al.
[13] model the k × 2-dimensional velocity matrix V as a function of the galactocentric
location (S) of the Sun, denoted by V = ξ(S). For a given observed value V ∗ of V ,
it is then of interest to obtain the corresponding S ∗ . Since ξ is unknown, Chakrabarty
et al. [13] model ξ as a matrix-variate Gaussian process, and consider the Bayesian
approach to learning about S ∗ , given data {(S i , V i ) : i = 1, . . . , n} simulated from
established astrophysical models, and the observed velocity matrix V ∗ .
We now provide a brief overview of the methods of inverse linear regression,
which is the most popular among inverse regression problems. Our discussion is generally based on Hoadley [19] and Avenhaus et al. [4].
7.2. Inverse linear regression
Let us consider the following simple linear regression model: for i = 1, . . . , n,
y_i = α + βx_i + σε_i,     (7.1)

where ε_i ∼ N(0, 1) iid.
For simplicity, let us consider a single unknown x̃, associated with a further set of m responses {ỹ_1, …, ỹ_m}, related by

ỹ_i = α + βx̃ + τε̃_i,     (7.2)

for i = 1, …, m, where ε̃_i ∼ N(0, 1) iid and are independent of the ε_i’s associated with (7.1).
The interest in the above problem is inference regarding the unknown x. Based on
(7.1), first least squares estimates of α and β are obtained as

β̂ = Σ_{i=1}^{n} (y_i − ȳ)(x_i − x̄) / Σ_{i=1}^{n} (x_i − x̄)²;     (7.3)

α̂ = ȳ − β̂ x̄,     (7.4)

where ȳ = Σ_{i=1}^{n} y_i/n and x̄ = Σ_{i=1}^{n} x_i/n. Then, letting ȳ̃ = Σ_{i=1}^{m} ỹ_i/m, a ‘classical’ estimator of x is given by

x̂_C = (ȳ̃ − α̂) / β̂,     (7.5)

which is also the maximum likelihood estimator for the likelihood associated with (7.1) and (7.2), assuming known σ and τ. However,

E[ (x̂_C − x)² | α, β, σ, τ, x ] = ∞,     (7.6)

which prompted Krutchkoff [22] to propose the following ‘inverse’ estimator:

x̂_I = γ̂ + δ̂ ȳ̃,     (7.7)
where

δ̂ = Σ_{i=1}^{n} (y_i − ȳ)(x_i − x̄) / Σ_{i=1}^{n} (y_i − ȳ)²;     (7.8)

γ̂ = x̄ − δ̂ ȳ,     (7.9)
are the least squares estimators of the slope and intercept when the xi are regressed on
the yi . It can be shown that the mean square error of this inverse estimator is finite.
However, Williams [39] showed that if σ 2 = τ 2 and if the sign of β is known, then
the unique unbiased estimator of x has infinite variance. Williams advocated the use of
confidence limits instead of point estimators.
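The following minimal simulation sketch (with assumed parameter values) compares the classical estimator (7.5) and the inverse estimator (7.7) over repeated data sets for m = 1; it illustrates the heavy-tailed behaviour of x̂_C relative to x̂_I.

```python
# A minimal simulation sketch (illustrative values) comparing x_C of (7.5) with x_I of (7.7).
import numpy as np

rng = np.random.default_rng(7)
alpha, beta, sigma, tau = 1.0, 2.0, 0.5, 0.5
n, x_new, reps = 20, 1.3, 5000
x = np.linspace(0, 2, n)

est_C, est_I = [], []
for _ in range(reps):
    y = alpha + beta * x + sigma * rng.normal(size=n)
    y_new = alpha + beta * x_new + tau * rng.normal()
    b = np.sum((y - y.mean()) * (x - x.mean())) / np.sum((x - x.mean()) ** 2)   # (7.3)
    a = y.mean() - b * x.mean()                                                 # (7.4)
    d = np.sum((y - y.mean()) * (x - x.mean())) / np.sum((y - y.mean()) ** 2)   # (7.8)
    g = x.mean() - d * y.mean()                                                 # (7.9)
    est_C.append((y_new - a) / b)                                               # (7.5)
    est_I.append(g + d * y_new)                                                 # (7.7)

print("classical: mean %.3f, MSE %.3f" % (np.mean(est_C), np.mean((np.array(est_C) - x_new) ** 2)))
print("inverse  : mean %.3f, MSE %.3f" % (np.mean(est_I), np.mean((np.array(est_I) - x_new) ** 2)))
```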
Hoadley [19] derive confidence limits setting σ = τ and assuming without loss of generality that Σ_{i=1}^{n} x_i = 0. Under these assumptions, the maximum likelihood estimators of σ² based on (x^n, y^n) only, on ỹ^m = (ỹ_1, …, ỹ_m)^T only, and on the entire available data set are, respectively,

σ̂_1² = (1/(n − 2)) Σ_{i=1}^{n} ( y_i − α̂ − β̂x_i )²;     (7.10)

σ̂_2² = (1/(m − 1)) Σ_{i=1}^{m} ( ỹ_i − ȳ̃ )²;     (7.11)

σ̂² = ( (n − 2)σ̂_1² + (m − 1)σ̂_2² ) / (n − 2 + m − 1).     (7.12)

Now consider the F-statistic F = nβ̂²/σ̂² for testing the hypothesis β = 0. Note that under the null hypothesis this statistic has the F distribution with 1 and n + m − 3 degrees of freedom. For m = 1,
β̂ (x̂_C − x) √( n / ( σ̂² (n + 1 + x²) ) )

has a t distribution with n − 2 degrees of freedom. Letting F_{α;1,ν} denote the upper α point of the F distribution with 1 and ν degrees of freedom, a confidence set S can be derived as follows:

S = { x : x_L ≤ x ≤ x_U }                  if F > F_{α;1,n−2};
S = { x : x ≤ x_L } ∪ { x : x ≥ x_U }      if ((n+1)/(n+1+x̂_C²)) F_{α;1,n−2} ≤ F < F_{α;1,n−2};
S = (−∞, ∞)                                if F < ((n+1)/(n+1+x̂_C²)) F_{α;1,n−2},     (7.13)

where x_L and x_U are given by

F x̂_C / (F − F_{α;1,n−2}) ± [ F_{α;1,n−2} { (n + 1)(F − F_{α;1,n−2}) + F x̂_C² } ]^{1/2} / (F − F_{α;1,n−2}).

Hence, if F < ((n+1)/(n+1+x̂_C²)) F_{α;1,n−2}, then the associated confidence set is S = (−∞, ∞), which is of course useless.
Hoadley [19] present a Bayesian analysis of this problem, summarized in the following two theorems.
Theorem 5 (Hoadley [19]). Assume that σ = τ, and let x be independent of (α, β, σ²) a priori. With any prior π(x) on x and the prior

π(α, β, σ²) ∝ 1/σ²

on (α, β, σ²), the posterior density of x is given by

π(x | y^n, x^n, ỹ^m) ∝ π(x) L(x),

where

L(x) = ( 1 + n/m + x² )^{(m+n−3)/2} / [ 1 + n/m + Rx̂_C² + ( F/(m+n−3) + 1 ) (x − Rx̂_C)² ]^{(m+n−2)/2},

with

R = F / (F + m + n − 3).
For m = 1, Hoadley [19] present the following result characterizing the inverse
estimator x̂I :
Theorem 6 (Hoadley [19]). Consider the following informative prior on x:

x ∼ t_{n−3}( (n + 1)/(n − 3) ),

where t_ν denotes the t distribution with ν degrees of freedom. Then the posterior distribution of x given y^n, x^n and ỹ has the same distribution as

x̂_I + t_{n−2} √( ( n + 1 + x̂_I²/R ) / ( F + n − 2 ) ).
In particular, it follows from Theorem 6 that the posterior mean of x is x̂I when
m = 1. In other words, the inverse estimator x̂I is Bayes with respect to the squared
error loss and a particular informative prior distribution for x.
Since the goal of Hoadley [19] was to provide a theoretical justification of the
inverse estimator, he had to choose a somewhat unusual prior so that it leads to x̂I as
the posterior mean. In general it is not necessary to confine ourselves to any specific
prior for Bayesian analysis of inverse regression. It is also clear that the Bayesian
framework is appropriate for any inverse regression problem, not just linear inverse
regression; indeed, the palaeoclimate reconstruction problem (Haslett et al. [18]) and
the Milky Way problem (Chakrabarty et al. [13]) are examples of very highly non-linear
inverse regression problems.
7.3. Connection between inverse regression problems and traditional inverse problems
Note that the class of inverse regression problems includes the class of traditional
inverse problems. The Milky Way problem is an example where learning the unknown,
matrix-variate function ξ (inverse problem) was required, even though learning about
S, the galactocentric location of the sun (inverse regression problem) was the primary
goal. The Bayesian approach allowed learning both S and ξ simultaneously and coherently.
In the palaeoclimate models proposed in Haslett et al. [18], Bhattacharya [7] and
Mukhopadhyay and Bhattacharya [25], although species assemblages are modeled conditionally on climate variables, the functional relationship between species and climate
are not even approximately known. In all these works, it is of interest to learn about the
functional relationship as well as to predict the unobserved climate values, the latter
being the main aim. Again, the Bayesian approach facilitated appropriate learning of
both the unknown quantities.
7.4. Consistency of inverse regression problems
In the above linear inverse regression, notice that if τ > 0, then the variance of the
estimator of x cannot tend to zero, even as the data size tends to infinity. This shows
that no estimator of x can be consistent. The same argument applies even to Bayesian
approaches; for any sensible prior on x that does not give point mass to the true value
of x, the posterior of x will not converge to the point mass at the true value of x as the
data size increases indefinitely. The arguments remain valid for any inverse regression
problem where the response variable y probabilistically depends upon the independent
variable x. Not only in inverse regression problems, but even in forward regression problems where the interest is in prediction of y given x, any estimate of y or any posterior predictive distribution of y will be inconsistent.
To give an example of inconsistency in a non-linear and non-normal inverse problem, consider the following set-up: y_i ∼ Poisson(θx_i) independently, for i = 1, …, n, where θ > 0 and x_i > 0 for each i. Let us consider the prior π(θ) ≡ 1 for all θ > 0. For some i* ∈ {1, …, n} let us assume the leave-one-out cross-validation set-up, in that we wish to learn x = x_{i*}, assuming it is unknown, from the rest of the data. Putting the prior π(x) ≡ 1 for x > 0, the posterior of x is given by (see Bhattacharya and Haslett [9], Bhattacharya [8])

π(x | x^n ∖ x_{i*}, y^n) ∝ x^{y_{i*}} / ( x + Σ_{j≠i*} x_j )^{( Σ_{j=1}^{n} y_j + 1 )}.     (7.14)
Figure 7.1 displays the posterior of x when i∗ = 10, for increasing sample size. Observe that the variance of the posterior does not decrease even with sample size as large
as 100, 000, clearly demonstrating inconsistency. Hence, special, innovative priors are
necessary for consistency in such cases.
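The non-shrinking spread of (7.14) can be reproduced in a few lines; the following sketch (with assumed θ and x_i, illustrative only) prints the posterior mean and standard deviation of x for increasing n, mirroring the behaviour shown in Figure 7.1.

```python
# A minimal sketch (illustrative values) of the posterior (7.14): its spread does not
# shrink as n grows, which is the inconsistency illustrated in Figure 7.1.
import numpy as np

rng = np.random.default_rng(8)
theta_true, i_star = 2.0, 10

for n in (10, 100, 10000):
    x = rng.uniform(0.5, 1.5, size=n)
    y = rng.poisson(theta_true * x)
    grid = np.linspace(0.01, 20.0, 2000)
    dg = grid[1] - grid[0]
    log_post = y[i_star - 1] * np.log(grid) - (y.sum() + 1) * np.log(grid + np.delete(x, i_star - 1).sum())
    post = np.exp(log_post - log_post.max())
    post /= post.sum() * dg                                  # normalize on the grid
    mean = np.sum(grid * post) * dg
    sd = np.sqrt(np.sum((grid - mean) ** 2 * post) * dg)
    print(n, round(mean, 2), round(sd, 2))
```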
8. Conclusion
In this review article, we have clarified the similarities and dissimilarities between
the traditional inverse problems and the inverse regression problems. In particular, we
have argued that only the latter class of problems qualify as authentic inverse problems
in that they have significantly different goals compared to the corresponding forward problems.

Figure 7.1: Demonstration of posterior inconsistency in inverse regression problems (posterior density of x for n = 10, 100, 1000, 10000, 100000). The vertical line denotes the true value.

Moreover, they include the traditional inverse problems on learning unknown
functions as a special case, as exemplified by our palaeoclimate and Milky Way examples. We advocate the Bayesian paradigm for both classes of problems, not only
because of its inherent flexibility, coherency and posterior uncertainty quantification,
but also because the prior acts as a natural penalty which is very important to regularize
the so-called ill-posed inverse problems. The well-known Tikhonov regularizer is just
a special case from this perspective.
It is important to remark that the literature on inverse function learning problems
and inverse regression problems is still very young and a lot of research is necessary to
develop the fields. Specifically, there is hardly any well-developed, consistent model
adequacy test or model comparison methodology in either of the two fields, although
Mohammad-Djafari [24] deal with some specific inverse problems in this context, and
Bhattacharya [8] propose a test for model adequacy in the case of inverse regression
problems. Moreover, as we have demonstrated, inverse regression problems are inconsistent in general. The general development in these respects will be provided in the
PhD thesis of the first author.
References
[1] Arbogast, T., and Bona, J. L. [2008], “Methods of Applied Mathematics,”. University of Texas at Austin.
[2] Aronszajn, N. [1950], “Theory of Reproducing Kernels,” Transactions of the
American Mathematical Society, 68, 337–404.
[3] Aster, R. C., Borchers, B., and Thurber, C. H. [2013], Parameter Estimation and
Inverse Problems, Oxford, UK: Academic Press.
[4] Avenhaus, R., Höpfinger, E., and Jewell, W. S. [1980], “Approaches
to Inverse Linear Regression,”.
Technical Report. Available at
https://publikationen.bibliothek.kit.edu/270015256/3812158.
[5] Baker, C. T. H. [1977], The Numerical Treatment of Integral Equations, Oxford:
Clarendon Press.
[6] Berger, J. O. [1985], Statistical Decision Theory and Bayesian Analysis, New
York: Springer-Verlag.
[7] Bhattacharya, S. [2006], “A Bayesian Semiparametric Model for Organism Based
Environmental Reconstruction,” Environmetrics, 17(7), 763–776.
[8] Bhattacharya, S. [2013], “A Fully Bayesian Approach to Assessment of Model
Adequacy in Inverse Problems,” Statistical Methodology, 12, 71–83.
[9] Bhattacharya, S., and Haslett, J. [2007], “Importance Resampling MCMC for
Cross-Validation in Inverse Problems,” Bayesian Analysis, 2, 385–408.
[10] Bühlmann, P., and van de Geer, S. [2011], Statistics for High-Dimensional Data,
New York: Springer.
[11] Bui-Thanh, T. [2012], “A Gentle Tutorial on Statistical Inversion Using the Bayesian Paradigm,” ICES Report 12-18. Available at http://users.ices.utexas.edu/ tanbui/PublishedPapers/BayesianTutorial.pdf.
[12] Calvetti, D., and Somersalo, E. [2007], Introduction to Bayesian Scientific Computing: Ten Lectures on Subjective Computing, New York: Springer.
[13] Chakrabarty, D., Biswas, M., and Bhattacharya, S. [2015], “Bayesian Nonparametric Estimation of Milky Way Parameters Using Matrix-Variate Data, in a New
Gaussian Process Based Method,” Electronic Journal of Statistics, 9, 1378–1403.
[14] Dashti, M., and Stuart, A. M. [2015], “The Bayesian Approach to Inverse Problems,”. eprint: arXiv:1302.6989.
[15] Duchon, J. [1977], Splines Minimizing Rotation-Invariant Semi-norms in
Sobolev Spaces,, in Constructive Theory of Functions of Several Variables, eds.
W. Schempp, and K. Zellner, Springer-Verlag, New York, pp. 85–100.
25
[16] Engl, H. W., Hanke, M., and Neubauer, A. [1996], Regularization of Inverse
Problems, Dordrecht: Kluwer Academic Publishers Group. Volume 375 of Mathematics and its Applications.
[17] Giraud, C. [2015], Introduction to High-Dimensional Statistics, New York: Chapman and Hall.
[18] Haslett, J., Whiley, M., Bhattacharya, S., Salter-Townshend, M., Wilson, S. P.,
Allen, J. R. M., Huntley, B., and Mitchell, F. J. G. [2006], “Bayesian Palaeoclimate Reconstruction (with discussion),” Journal of the Royal Statistical Society:
Series A (Statistics in Society), 169, 395–438.
[19] Hoadley, B. [1970], “A Bayesian Look at Inverse Linear Regression,” Journal of
the American Statistical Association, 65, 356–369.
[20] Kimeldorf, G., and Wahba, G. [1971], “Some Results on Tchebycheffian Spline
Functions,” Journal of Mathematical Analysis and Applications, 33, 82–95.
[21] König, H. [1986], Eigenvalue Distribution of Compact Operators, Basel: Birkhäuser.
[22] Krutchkoff, R. G. [1967], “Classical and Inverse Regression Methods of Calibration,” Technometrics, 9, 425–435.
[23] Meinguet, J. [1979], “Multivariate Interpolation at Arbitrary Points Made Simple,” Journal of the Applied Mathematics and Physics, 30, 292–304.
[24] Mohammad-Djafari, A. [2000], “Model Selection for Inverse Problems: Best Choice of Basis Function and Model Order Selection,” Available at https://arxiv.org/abs/math-ph/0008026.
[25] Mukhopadhyay, S., and Bhattacharya, S. [2013], “Cross-Validation Based Assessment of a New Bayesian Palaeoclimate Model,” Environmetrics, 24, 550–
568.
[26] Muth, R. F. [1960], The Demand for Non-Farm Housing,, in The Demand for
Durable Goods, ed. A. C. Harberger. The University of Chicago.
[27] O’Sullivan, F. [1986], “A Statistical Perspective on Ill-Posed Inverse Problems,”
Statistical Science, 1, 502–512.
[28] O’Sullivan, F., Yandell, B. S., and Raynor, W. J. [1986], “Automatic Smoothing
of Regression Functions in Generalized Linear Models,” Journal of the American
Statistical Association, 81, 96–103.
[29] Press, S. J., and Scott, A. [1975], Missing Variables in Bayesian Regression,, in
Studies in Bayesian Econometrics and Statistics, eds. S. E. Fienberg, and A. Zellner, North-Holland, Amsterdam.
[30] Rasch, D., Enderlein, G., and Herrendörfer, G. [1973], “Biometrie,”. Deutscher
Landwirtschaftsverlag, Berlin.
26
[31] Rasmussen, C. E., and Williams, C. K. I. [2006], Gaussian Processes for Machine
Learning, Cambridge, Massachusetts: The MIT Press.
[32] Schölkopf, B., and Smola, A. J. [2002], Learning with Kernels, USA: MIT Press.
[33] Shepp, L. A. [1966], “Radon-Nikodym Derivatives of Gaussian Measures,” Annals of Mathematical Statistics, 37, 321–354.
[34] Tikhonov, A. [1963], “Solution of Incorrectly Formulated Problems and the Regularization Method,” Soviet Math. Dokl., 5, 1035–1038.
[35] Tikhonov, A., and Arsenin, V. [1977], Solution of Ill-Posed Problems, New York:
Wiley.
[36] Vollmer, S. [2013], “Posterior Consistency for Bayesian Inverse Problems
Through Stability and Regression Results,” Inverse Problems, 29. Article number
125011.
[37] Wahba, G. [1978], “Improper Priors, Spline Smoothing and the Problem of
Guarding Against Model Errors in Regression,” Journal of the Royal Statistical
Society B, 40, 364–372.
[38] Wahba, G. [1990], “Spline Functions for Observational Data,”. CBMS-NSF Regional Conference series, SIAM. Philadelphia.
[39] Williams, E. J. [1969], “A Note on Regression Methods in Calibration,” Technometrics, 11, 189–192.
arXiv:1705.09335v1 [] 25 May 2017
Overcommitment in Cloud Services – Bin packing with Chance Constraints
Maxime C. Cohen
Google Research, New York, NY 10011, maxccohen@google.com
Philipp W. Keller
Google, Mountain View, CA 94043, pkeller@google.com
Vahab Mirrokni
Google Research, New York, NY 10011, mirrokni@google.com
Morteza Zadimoghaddam
Google Research, New York, NY 10011, zadim@google.com
This paper considers a traditional problem of resource allocation, scheduling jobs on machines. One such
recent application is cloud computing, where jobs arrive in an online fashion with capacity requirements
and need to be immediately scheduled on physical machines in data centers. It is often observed that the
requested capacities are not fully utilized, hence offering an opportunity to employ an overcommitment
policy, i.e., selling resources beyond capacity. Setting the right overcommitment level can induce a significant
cost reduction for the cloud provider, while incurring only a very low risk of violating capacity constraints.
We introduce and study a model that quantifies the value of overcommitment by modeling the problem as
a bin packing with chance constraints. We then propose an alternative formulation that transforms each
chance constraint into a submodular function. We show that our model captures the risk pooling effect and
can guide scheduling and overcommitment decisions. We also develop a family of online algorithms that are
intuitive, easy to implement and provide a constant factor guarantee from optimal. Finally, we calibrate
our model using realistic workload data, and test our approach in a practical setting. Our analysis and
experiments illustrate the benefit of overcommitment in cloud services, and suggest a cost reduction of 1.5%
to 17% depending on the provider’s risk tolerance.
Key words : Bin packing, Approximation algorithms, Cloud computing, Overcommitment
1. Introduction
Bin packing is an important problem with numerous applications such as hospitals, call centers,
filling up containers, loading trucks with weight capacity constraints, creating file backups and
more recently, cloud computing. A cloud provider needs to decide how many physical machines to
purchase in order to accommodate the incoming jobs efficiently. This is typically modeled as a bin
packing optimization problem, where one minimizes the cost of acquiring the physical machines
subject to a capacity constraint for each machine. The jobs are assumed to arrive in an online
fashion according to some vaguely specified arrival process. In addition, the jobs come with a
specific requirement, but the effective job size and duration are not exactly known until after the
actual scheduling has occurred. In practice, job size and duration can be estimated from historical
data. One straightforward way to schedule jobs is to assume that each job will fully utilize its
requirement (e.g., if a job requests 32 CPU cores, the cloud provider allocates this exact amount for
the job). However, there is empirical evidence that most virtual machines do not use the full
requested capacity. This offers an opportunity for the cloud provider to employ an overcommitment
policy, i.e., to schedule sets of jobs with the total requirement exceeding the respective capacities
of physical machines. On one hand, the provider faces the risk that usage exceeds the physical
capacity, which can result in severe penalties (e.g., acquiring or reallocating machines on the fly,
canceling and rescheduling running jobs, mitigating interventions, etc.). On the other hand, if
many jobs do not fully utilize their requested resources, the provider can potentially reduce the
costs significantly. This becomes even more impactful in the cloud computing market, which has
become increasingly competitive in recent years as Google, Amazon, and Microsoft aim to replace
private data centers. “The race to zero price” is a commonly used term for this industry, where
cloud providers have cut their prices very aggressively. According to an article in Business Insider
in January 2015: “Amazon Web Services (AWS), for example, has cut its price 44 times during
2009-2015, while Microsoft and Google have both decreased prices multiple times to keep up with
AWS”. In January 2015, RBC Capital’s Mark Mahaney published a chart that perfectly captures
this trend and shows that the average monthly cost per gigabyte of RAM, for a set of various
workloads, has dropped significantly: AWS dropped prices 8% from Oct. 2013 to Dec. 2014, while
both Google and Microsoft cut prices by 6% and 5%, respectively, in the same period. Other
companies who charge more, like Rackspace and AT&T, dropped prices even more significantly.
As a result, designing the right overcommitment policy for servers has a clear potential to increase
the cloud provider profit. The goal of this paper is to study this question, and propose a model
that helps guiding this type of decisions. In particular, we explicitly model job size uncertainty to
motivate new algorithms, and evaluate them on realistic workloads.
Our model and approaches are not limited to cloud computing and can be applied to several
resource allocation problems. However, we will illustrate most of the discussions and applications
using examples borrowed from the cloud computing world. Note that describing the cloud infrastructure and hardware is beyond the scope of this paper. For surveys on cloud computing, see, for
example Dinh et al. (2013) and Fox et al. (2009).
We propose to model the problem as a bin packing with chance constraints, i.e., the total load
assigned to each machine should be below physical capacity with a high pre-specified probability.
Chance constraints are a commonly used modeling tool to capture risks and constraints on random variables (Charnes and Cooper (1963)). Introducing chance constraints to several continuous
optimization problems was extensively studied in the literature (see, e.g., Calafiore and El Ghaoui
(2006) and Delage and Ye (2010)). This paper is the first to incorporate capacity chance constraints
in the bin packing problem, and to propose efficient algorithms to solve the problem. Using some
results from distributionally robust optimization (Calafiore and El Ghaoui (2006)), we reformulate
the problem as a bin packing with submodular capacity constraints. Our reformulations are exact
under the assumption of independent Gaussian resource usages for the jobs. More generally, they
provide an upper bound and a good practical approximation in the realistic case where the jobs’
usages are arbitrarily distributed but bounded.
Using some machinery from previous work (see Goemans et al. (2009), and Svitkina and Fleischer
(2011)), we show that for the bin packing problem with general monotone submodular constraints,
it is impossible to find a solution within any reasonable factor from optimal (more precisely, within a factor of √N / ln(N), where N is the number of jobs). In this paper, we show that our problem can be solved using a class
of simple online algorithms that guarantee a constant factor of 8/3 from optimal (Theorem 2). This
class of algorithms includes the commonly used Best-Fit and First-Fit heuristics. We also develop
an improved constant guarantee of 9/4 for the online problem (Theorem 4), and a 2-approximation
for the offline version (Theorem 6). We further refine our results to the case where a large number
of jobs can be scheduled on each machine (i.e., each job has a small size relative to the machine
capacity). In this regime, our approach asymptotically converges to a 4/3 approximation. More
importantly, our model and algorithms allow us to draw interesting insights on how one should
schedule jobs. In particular, our approach (i) translates to a transparent recipe on how to assign
jobs to machines; (ii) explicitly exploits the risk pooling effect; and (iii) can be used to guide an
overcommitment strategy that significantly reduces the cost of purchasing machines.
We apply our algorithm to a synthetic but realistic workload inspired by historical production workloads in Google data centers, and show that it yields good performance. In particular,
our method reduces the necessary number of physical machines, while limiting the risk borne by
the provider. Our analysis also formalizes intuitions and provides insights regarding effective job
scheduling strategies in practical settings.
1.1. Contributions
Scheduling jobs on machines can be modeled as a bin packing problem. Jobs arrive online with
some requirements, and the scheduler decides how many machines to purchase and how to schedule
the jobs. Assuming random job sizes and limited machine capacities, one can formulate the problem
as a 0/1 integer program. The objective is to minimize the number of machines required, subject
to constraints on the capacity of each machine. In this paper, we model the capacity constraints as
chance constraints, and study the potential benefit of overcommitment. The contributions of the
paper can be summarized as follows.
• Formulating the overcommitment bin packing problem.
We present an optimization formulation for scheduling jobs on machines, while allowing the provider
to overcommit. We first model the problem as Bin Packing with Chance Constraints (BPCC). Then,
we present an alternative Submodular Bin Packing (SMBP) formulation that explicitly captures
the risk pooling effect on each machine. We show that the SMBP is equivalent to the BPCC under
common assumptions (independent Gaussian usage distributions), and that it is distributionally
robust for usages with given means and diagonal covariance matrix. Perhaps most importantly
from a practical perspective, the SMBP provides an upper bound and a good approximation under
generic independent distributions over bounded intervals (see Proposition 1). This last setting is
most common in today’s cloud data centers, where virtual machines are sold as fixed-size units.
• Developing simple algorithms that guarantee a constant factor approximation from optimal.
We show that our (SMBP) problem can be solved by well-known online algorithms such as First-Fit
and Best-Fit, while guaranteeing a constant factor of 8/3 from optimal (Theorem 2). We further
refine this result in the case where a large number of jobs can be scheduled on each machine, and
obtain a 4/3 approximation asymptotically (Corollary 1). We also develop an improved constant
guarantee of 9/4 for the online problem using First-Fit (Theorem 4), and a 2-approximation for
the offline version (Theorem 6). We then use our analysis to infer how one should assign jobs to
machines, and show how to obtain a nearly optimal assignment (Theorem 5).
• Using our model to draw practical insights on the overcommitment policy.
Our approach translates to a transparent and meaningful recipe on how to assign jobs to machines
by clustering similar jobs in terms of statistical information. In addition, our approach explicitly
captures the risk pooling effect: as we assign more jobs to a given machine, the “safety buffer”
needed for each job decreases. Finally, our approach can be used to guide a practical overcommitment strategy, where one can significantly reduce the cost of purchasing machines by allowing a
low risk of violating capacity constraints.
• Calibrating and applying our model to a practical setting.
We use realistic workload data inspired by Google Compute Engine to calibrate our model and test
our results in a practical setting. We observe that our proposed algorithm outperforms other natural
scheduling schemes, and realizes a cost saving of 1.5% to 17% relative to the no-overcommitment
policy.
1.2. Literature review
This paper is related to different streams of literature.
In the optimization literature, the problem of scheduling jobs on virtual machines has been
studied extensively, and the bin packing problem is a common formulation. Hundreds of papers
study the bin packing problem including many of its variations, such as 2D packing (e.g., Pisinger
and Sigurd (2005)), linear packing, packing by weight, packing by cost, online bin packing, etc.
The basic bin packing problem is NP-hard, and Delorme et al. (2016) provide a recent survey of
exact approaches. However, several simple online algorithms are often used in practice for large-scale instances. A common variation is the problem where jobs arrive online with sizes sampled
independently from a known discrete distribution with integer support and must be immediately
packed onto machines upon arrival. The size of a job is known when it arrives, and the goal is
to minimize the number of non-empty machines (or equivalently, minimize the waste, defined as
the total unused space). For this variation, the sum-of-squares heuristic represents the state-of-the-art. It is almost distribution-agnostic, and nearly universally optimal for most distributions by achieving sublinear waste in the number of items seen (see Csirik et al. (2006)). In Gupta and
Radovanovic (2012), the authors propose two algorithms, based on gradient descent on suitably defined Lagrangian relaxations of the bin packing linear program, that achieve additive O(√N) waste relative to the optimal policy. This line of work bounds the expected waste for general classes
of job size distribution in an asymptotic sense.
Worst-case analysis of (finite, deterministic) bin packing solutions has received a lot of attention
as well. For deterministic capacity constraints, several efficient algorithms have been proposed.
They can be applied online, and admit approximation guarantees in both online and offline settings. The offline version of the problem can be solved using (1 + ε)·OPT + 1 bins in linear time
(de La Vega and Lueker (1981)). A number of heuristics can solve large-scale instances efficiently
while guaranteeing a constant factor cost relative to optimal. For a survey on approximation algorithms for bin packing, see for example Coffman Jr et al. (1996). Three such widely used heuristics
are First-Fit (FF), Next-Fit (NF) and Best-Fit (BF) (see, e.g., Bays (1977), Keller et al. (2012) and
Kenyon et al. (1996)). FF assigns the newly arrived job to the first machine that can accommodate
it, and purchases a new machine only if none of the existing ones can fit the new job. NF is similar to
FF but continues to assign jobs from the current machine without going back to previous machines.
BF uses a similar strategy but seeks to fit the newly arrived job to the machine with the smallest
remaining capacity. While one can easily show that these heuristics provide a 2-approximation
guarantee, improved factors were also developed under special assumptions. Dósa and Sgall (2013)
provide a tight upper bound for the FF strategy, showing that it never needs more than 1.7·OPT
machines for any input. The offline version of the problem also received a lot of attention, and
the Best-Fit-Decreasing (BFD) and First-Fit-Decreasing (FFD) strategies are among the simplest
(and most popular) heuristics for solving it. They operate like BF and FF but first rank all the
jobs in decreasing order of size. Dósa (2007) shows that the tight bound of FFD is (11/9)·OPT + 6/9.
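To make the FF and BF rules concrete, a minimal sketch of both heuristics for the deterministic problem is given below; the job sizes and the unit capacity at the bottom are illustrative placeholders.

```python
def first_fit(sizes, capacity):
    """Assign each job to the first open machine with enough remaining capacity."""
    loads = []                        # current load of each open machine
    for s in sizes:
        for i, load in enumerate(loads):
            if load + s <= capacity:
                loads[i] += s
                break
        else:
            loads.append(s)           # no machine fits: purchase a new one
    return loads

def best_fit(sizes, capacity):
    """Assign each job to the feasible machine with the smallest remaining capacity."""
    loads = []
    for s in sizes:
        feasible = [i for i, load in enumerate(loads) if load + s <= capacity]
        if feasible:
            i = max(feasible, key=lambda k: loads[k])   # tightest fit
            loads[i] += s
        else:
            loads.append(s)
    return loads

sizes = [0.4, 0.7, 0.2, 0.5, 0.9, 0.1]   # illustrative job sizes, capacity 1.0
print(len(first_fit(sizes, 1.0)), len(best_fit(sizes, 1.0)))
```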
Our problem differs as our goal is to schedule jobs before observing the realization of their size. In
this case, stochastic bin packing models, where the job durations are modeled as random variables,
are particularly relevant. Coffman et al. (1980) consider the problem and study the asymptotic and
convergence properties of the Next-Fit online algorithm. Lueker (1983) considers the case where
the job durations are drawn uniformly from intervals of the form [a, b], and derives a lower bound
on the asymptotic expected number of bins used in an optimum packing. However, unlike this and
other asymptotic results where the jobs’ sizes are known when scheduling occurs, we are interested
in computing a solution that is feasible with high probability before observing the actual sizes.
Our objective is to assign the jobs to as few machines as possible such that the set of jobs assigned
to each machine satisfies the capacity constraint with some given probability (say 99%). In other
words, we are solving a stochastic optimization problem, and studying/analyzing different simple
heuristic solutions to achieve this goal. To make the difference with the worst case analysis clear,
we note that the worst case analysis becomes a special case of our problem when the objective
probability threshold is set to 100% (instead of 99%, or any other number strictly less than 1). The
whole point of our paper is to exploit the stochastic structure of the problem in order to reduce
the scheduling costs via overcommitment.
In this paper, we consider an auxiliary deterministic bin packing problem with a linear cost but
non-linear modified capacity constraints. In Anily et al. (1994), the authors consider general cost
structures with linear capacity constraints. More precisely, the cost of a machine is assumed to
be a concave and monotone function of the number of jobs in the machine. They show that the
Next-Fit Increasing heuristic provides a worst-case bound of no more than 1.75, and an asymptotic
worst-case bound of 1.691.
The motivation behind this paper is similar to the overbooking policy for airline companies and
hotels. It is very common for airlines to overbook and accept additional reservations for seats on a
flight beyond the aircraft’s seating capacity1 . Airline companies (and hotels) employ an overbooking
strategy for several reasons, including: (i) no-shows (several passengers are not showing up to their
flight, and the airline can predict the no-show rate for each itinerary); (ii) increasing the profit
by reducing lost opportunities; and (iii) segmenting passengers (charging a higher price as we get
closer to the flight). Note that in the context of this paper, the same motivation of no-shows
applies. However, the inter-temporal price discrimination is beyond the scope of our model. Several
academic papers in operations research have studied the overbooking problem within the last forty
years (see, e.g., Rothstein (1971), Rothstein (1985), Weatherford and Bodily (1992), Subramanian
et al. (1999) and Karaesmen and Van Ryzin (2004)). The methodology is often based on solving
1. http://www.forbes.com/2009/04/16/airline-tickets-flights-lifestyle-travel-airlines-overbooked.html
a dynamic program incorporating some prediction of the no-show rate. In our problem, we face a
large-scale bin packing problem that needs to be solved online. Rather than deciding how many
passengers (jobs) to accept and at what price, cloud providers today usually aim to avoid declining
any reasonable workloads at a fixed list price2 .
This paper is also related to the robust optimization literature, and especially to distributionally
robust optimization. The goal is to solve an optimization problem where the input parameter distribution belongs to a family of distributions that share some properties (e.g., all the distributions
with the same mean and covariance matrix) and consider the worst-case within the given family
(concrete examples are presented in Section 2.4). Examples of such work include: Ghaoui et al.
(2003), Bertsimas and Popescu (2005), Calafiore and El Ghaoui (2006) and Delage and Ye (2010).
That work aims to convert linear or convex (continuous) optimization problems with a chance constraint into tractable formulations. Our paper shares a similar motivation but considers a problem
with integer variables. To the best of our knowledge, this paper is the first to develop efficient algorithms with constant approximation guarantees for the bin packing problem with capacity chance
constraints.
Large-scale cluster management in general is an important area of computer systems research.
Verma et al. (2015) provide a full, modern example of a production system. Among the work
on scheduling jobs, Sindelar et al. (2011) propose a model that also has a certain submodular
structure due to the potential for sharing memory pages between virtual machines (in contrast
to the risk-pooling effect modeled in this paper). Much experimental work seeks to evaluate the
real-world performance of bin packing heuristics that also account for factors such as adverse
interactions between jobs scheduled together, and the presence of multiple contended resources
(see for example Rina Panigrahy (2011) and Alan Roytman (2013)). While modeling these aspects
is likely to complement the resource savings achieved with the stochastic model we propose, these
papers capture fundamentally different efficiency gains arising from technological improvements
and idiosyncratic properties of certain types (or combinations) of resources. In this paper, we limit
our attention to the benefit and practicality of machine over-commitment in the case where a single
key resource is in short supply. This applies directly to multi-resource settings if, for example, the
relatively high cost of one resource makes over-provisioning the others worthwhile, or if there is
simply an imbalance between the relative supply and demand for the various resources making one
of the resources scarce.
Structure of the paper. In Section 2, we present our model and assumptions. Then, we
present the results and insights for special cases in Section 3. In Section 4, we consider the general
2. The “spot instances” provided by Amazon and other heavily discounted reduced-availability services are notable exceptions.
case and develop a class of efficient approximation algorithms that guarantee a constant factor
from optimal. In Section 5, we exploit the structure of the problem in order to obtain a nearly
optimal assignment and to draw practical insights. In Sections 6 and 7, we present extensions and
computational experiments using realistic data respectively. Finally, our conclusions are reported
in Section 8. Most of the proofs of the Theorems and Propositions are relegated to the Appendix.
2. Model
In this section, we present the model and assumptions we impose. We start by formulating the
problem we want to solve, and then propose an alternative formulation. As we previously discussed,
job requests for cloud services (or any other resource allocation problem) come with a requested
capacity. This can be the memory or CPU requirements for virtual machines in the context of
cloud computing, or job duration in more traditional scheduling problems where jobs are processed
sequentially3 . We refer to Aj as the size of job j and assume that Aj is a random variable. Historical
data can provide insight into the distribution of Aj . For simplicity, we first consider the offline
version of the problem where all the jobs arrive simultaneously at time 0, and our goal is to pack
the jobs onto the minimum possible number of machines. Jobs cannot be delayed or preempted.
The methods we develop in this paper can be applied in the more interesting online version of
the problem, as we discuss in Section 4. We denote the capacity of machine i by Vi . Motivated by
practical problems, and in accordance with prior work, we assume that all the machines have the
same capacity, i.e., Vi = V ; ∀i. In addition, each machine costs ci = c, and our goal is to maximize
the total profit (or equivalently, minimize the number of machines), while scheduling all the jobs
and satisfying the capacity constraints. Note that we consider a single dimensional problem, where
each job has one capacity requirement (e.g., the number of virtual CPU cores or the amount of
memory). Although cloud virtual machine packing may be modeled as a low-dimensional vector bin
packing problem (see for example, Rina Panigrahy (2011)), one resource is often effectively binding
and/or more critical so that focusing on it offers a much larger opportunity for overcommitment4 .
3. Although there is also a job duration in cloud computing, it is generally unbounded and hence, even less constrained
than the resource usage from the customer’s perspective. The duration is also less important than the resource usage,
since most virtual machines tend to be long-lived, cannot be delayed or pre-empted, and are paid for by the minute.
In contrast, over-allocating unused, already paid-for resources can have a large impact on efficiency.
4. Insofar as many vector bin packing heuristics are actually straightforward generalizations of the FF, NF and BF
rules, it will become obvious how our proposed algorithm could similarly be adapted to the multi-resource setting in
Section 4, although we do not pursue this idea in this paper.
2.1. Bin packing problem
For the case where Aj is deterministic, we obtain the classical deterministic bin packing problem:
B = min_{xij, yi}  Σ_{i=1}^N yi
s.t.  Σ_{j=1}^N Aj xij ≤ V yi,   ∀i          (DBP)
      Σ_{i=1}^N xij = 1,   ∀j
      xij ∈ {0, 1},   ∀i, j
      yi ∈ {0, 1},   ∀i
For the offline version, we have a total of N jobs and we need to decide which machines to
use/purchase (captured by the decision variable yi that is equal to 1, if machine i is purchased
and 0 otherwise). The solution is a B-partition of the set {1, 2, . . . , N } that satisfies the capacity
constraints. The decision variable xij equals one if job j is assigned to machine i and zero otherwise.
As we discussed in Section 1.2, there is an extensive literature on the DBP problem and its many
variations covering both exact algorithms as well as approximation heuristics with performance
bounds.
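As a sketch only, the (DBP) integer program can be written with an off-the-shelf modeler; the snippet below assumes the open-source PuLP package with its default CBC solver, and the job sizes and capacity are placeholders. For realistic instance sizes one would use the heuristics discussed above rather than an exact solve.

```python
import pulp

A = [0.4, 0.7, 0.2, 0.5, 0.9, 0.1]    # illustrative deterministic job sizes
V = 1.0                                # machine capacity
N = len(A)                             # at most N machines are ever needed

prob = pulp.LpProblem("DBP", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (range(N), range(N)), cat="Binary")   # x[i][j]
y = pulp.LpVariable.dicts("y", range(N), cat="Binary")

prob += pulp.lpSum(y[i] for i in range(N))                    # minimize machines used
for i in range(N):                                            # capacity of machine i
    prob += pulp.lpSum(A[j] * x[i][j] for j in range(N)) <= V * y[i]
for j in range(N):                                            # each job assigned once
    prob += pulp.lpSum(x[i][j] for i in range(N)) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("machines used:", int(sum(y[i].value() for i in range(N))))
```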
The problem faced by a cloud provider is typically online in nature since jobs arrive and depart
over time. Unfortunately, it is not possible to continually re-solve the DBP problem as the data is
updated for both practical and computational reasons. Keeping with the majority of prior work,
we start by basing our algorithms on static, single-period optimization formulations like the DBP
problem, rather than explicitly modeling arrivals and departures. The next section explains how,
unlike prior work, our single-period optimization model efficiently captures the uncertainty faced
by a cloud provider. We will consider both the online and offline versions of our model.
We remark that, while our online analysis considers sequentially arriving jobs, none of our results
explicitly considers departing jobs. This is also in line with the bin-packing literature, where results
usually apply to very general item arrival processes {Aj }, but it is typically assumed that packed
items remain in their assigned bins. In practice, a large cloud provider is likely to be interested
in a steady-state where the distribution of jobs in the systems is stable over time (or at least
predictable), even if individual jobs come and go. Whereas the online model with arrivals correctly
reflects that the scheduler cannot optimize to account for unseen future arrivals, it is unclear if
and how additionally modeling departures would affect a system where the overall distribution of
jobs remains the same over time. We therefore leave this question open. Note that several works
consider bin-packing with item departures (see, e.g., Stolyar and Zhong (2015) and the references
therein). In this work, the authors design a simple greedy algorithm for general packing constraints
and show that it can be asymptotically optimal.
2.2. Chance constraints
The DBP problem suffers from the unrealistic assumption that the jobs’ sizes Aj are deterministic. In reality, jobs’ requirements (or durations) can be highly unpredictable and quite volatile,
especially from the perspective of a cloud provider with no control over the software executed in
a virtual machine. Ensuring that the capacity constraints are satisfied for any realization of Aj
generally yields a conservative outcome. For example, if the jobs’ true requirements are Bernoulli
random variables taking on either 0.3 or 1.0 with equal probability, one needs to plan as if each
job consumes a capacity of 1.0. By overcommitting resources, the provider can reduce the cost
significantly. Caution is required however, since overcommitting can be very expensive if not done
properly. Planning according to the expected value (in the previous simple example, 0.65), for
instance, would result in capacity being too tight on many machines. Specifically, for large machines,
the realized requirements could exceed capacity up to half of the time. Depending on the specific
resource and the degree of violation, such performance could be catastrophic for a cloud service
provider. Concretely, sustained CPU contention among virtual machines would materially affect
customers’ performance metrics, whereas a shortage of available memory could require temporarily
“swapping” some data to a slower storage medium with usually devastating consequences on performance. Other mitigations are possible, including migrating a running virtual machine to another
host, but these also incur computational overhead for the provider and performance degradation for
the customer. In the extreme case where overly optimistic scheduling results in inadequate capacity
planning, there is even a stock-out risk where it is no longer possible to schedule all customers’ jobs
within a data center. With this motivation in mind, our goal is to propose a formulation that finds
the right overcommitment policy. We will show that by slightly overcommitting (defined formally
in Section 2.3), one can reduce the costs significantly while satisfying the capacity constraints with
high probability.
While not strictly required by our approach, in practice, there is often an upper bound on Aj ,
denoted by Āj . In the context of cloud computing, Āj is the requested capacity that a virtual
machine is not allowed to exceed (32 CPU cores, or 128 GB of memory, say). However, the job
may end up using much less, at least for some time. If the cloud provider schedules all the jobs
according to their respective upper bounds Āj , then there is no overcommitment. If the cloud
provider schedules all the jobs according to some sizes smaller than the Āj , then some of the
machines may be overcommitted.
We propose to solve a bin packing problem with capacity chance constraints. Chance constraints
are widely used in optimization problems, starting with Charnes and Cooper (1963) for linear
programs, and more recently in convex optimization (see, e.g., Nemirovski and Shapiro (2006)) and
in finance (see, e.g., Abdelaziz et al. (2007)). In this case, the capacity constraints are replaced by:
P( Σ_{j=1}^N Aj xij ≤ V yi ) ≥ α,          (1)
where α represents the confidence level of satisfying the constraint (α = 0.999, say) and is exogenously set by the cloud provider depending on considerations such as typical job’s running time
and contractual agreements. Note that when α = 1, this corresponds to the setting with no overcommitment, or in other words, to the worst-case solution that covers all possible realizations of
all the Aj ’s. One of our goals is to study the trade-off between the probability of violating physical
capacity and the cost reduction resulting from a given value of α.
The problem becomes the bin packing with chance constraints, parameterized by α:
B(α) = min_{xij, yi}  Σ_{i=1}^N yi
s.t.  P( Σ_{j=1}^N Aj xij ≤ V yi ) ≥ α,   ∀i          (BPCC)
      Σ_{i=1}^N xij = 1,   ∀j
      xij ∈ {0, 1},   ∀i, j
      yi ∈ {0, 1},   ∀i
2.3. Overcommitment
One can define the overcommitment level as follows. Consider two possible (equivalent) benchmarks. First, one can solve the problem for α = 1, and obtain a solution (by directly solving the
IP or any other heuristic method) with objective B(1). Then, we solve the problem for the desired
value α < 1. The overcommitment benefit can be defined as 0 < B(α)/B(1) ≤ 1. It is also interesting
to compare the two resulting job assignments.
The second definition goes as follows. We define the overcommitment factor as the amount of
sellable capacity divided by the physical capacity of machines in the data center, that is:
OCF(α) ≜ ( Σ_j Āj ) / ( Σ_i V yi ).
Since we assume that all the machines have the same capacity and cost, we can write:
OCF(α) = ( Σ_j Āj ) / ( V B(α) ) ≥ ( Σ_j Āj ) / ( V B(1) ) = OCF(1).
Note that OCF(1) is (generally strictly) less than one, as the bin packing overhead prevents the
sale of all resources5 . Then, we have:
OCF(1) / OCF(α) = B(α) / B(1).
For illustration purposes, consider the following simple example with N = 70 1-core jobs. The jobs
are independent and Bernoulli distributed with probability 0.5. In particular, the jobs are either
high usage (i.e., fully utilize the 1 core), or low usage (in this case, idle). Each machine has a
capacity V = 48 cores. Without overcommitting, we need 2 machines, i.e., B(1) = 2. What happens
if we schedule all the jobs in a single machine? In this case, one can reduce the cost (number of
machines) by half, while satisfying the capacity constraint with probability 0.9987. In other words,
B(0.99) = 1. The overcommitment benefit in this simple example is clear. Our goal is to formalize
a systematic way to overcommit in more complicated and realistic settings.
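Numbers of this kind are easy to check by evaluating the binomial tail directly; the sketch below (assuming SciPy) computes the probability that 70 independent Bernoulli(0.5) one-core jobs fit within 48 cores, and should land close to the 0.9987 figure quoted above (the exact value depends on whether a load of exactly 48 cores is counted as feasible).

```python
from scipy.stats import binom

N, p, V = 70, 0.5, 48
print("P(load <= 48 cores):", binom.cdf(V, N, p))       # constraint satisfied
print("P(load <  48 cores):", binom.cdf(V - 1, N, p))   # strict version
```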
Note that overcommitment may lead to Service Level Agreement (SLA) violations. This paper
does not discuss in detail the SLAs (with some possible associated metrics), and the corresponding
estimation/forecast procedures as they are usually application and resource specific. Instead, this
research treats a general Virtual Machine (VM) scheduling problem. More precisely, our context
is that of a cloud computing provider with limited visibility into the mix of customer workloads,
and hard SLAs. While the provider does track numerous service-level indicators, they are typically
monotonic in the resource usage on average (we expect more work to translate to worse performance). Therefore, we believe that it is reasonable to rely on resource utilization as the sole metric
in the optimization problem.
2.4. A variant of submodular bin packing
In this section, we propose an alternative formulation that is closely related to the (BPCC) problem.
Under some mild assumptions, we show that the latter is either exactly or approximately equivalent
to the following submodular bin packing problem:
B_S(α) = min_{xij, yi}  Σ_{i=1}^N yi
s.t.  Σ_{j=1}^N µj xij + D(α) √( Σ_{j=1}^N bj xij ) ≤ V yi,   ∀i          (SMBP)
      Σ_{i=1}^N xij = 1,   ∀j
      xij ∈ {0, 1},   ∀i, j
      yi ∈ {0, 1},   ∀i
5. Technical note: other production overheads, such as safety stocks for various types of outages and management overheads, are generally also included in the denominator. For the purpose of this paper, we omit them.
The difference between the (BPCC) and the (SMBP) problems is the way the capacity constraints
are written. Here, we have replaced each chance constraint with a linear term plus a square root
term. These constraints are submodular with respect to the vector x. The variable µj denotes the
expected value of Aj . In what follows, we will consider different definitions of bj and D(α) in three
different settings. The first two are concrete motivational examples, whereas the third one is a
generalization. In each case, we formally show the relation between the (BPCC) and the (SMBP)
problems.
1. Gaussian case: Assume that the random variables Aj are Gaussian and independent. In this case, the random variable Z = Σ_{j=1}^N Aj xij for any given binary vector x is Gaussian, and therefore, one can use the following simplification:
P( Σ_{j=1}^N Aj xij ≤ V yi ) = P( Z ≤ V yi ) ≥ α.
For each machine i, constraint (1) becomes:
Σ_{j=1}^N µj xij + Φ^{-1}(α) · √( Σ_{j=1}^N σj² xij ) ≤ V yi,          (2)
where Φ−1 (·) is the inverse CDF of a normal N (0, 1), µj = E[Aj ] and σj2 = Var(Aj ). Note that we
have used the fact that x is binary so that x2ij = xij . Consequently, the (BPCC) and the (SMBP)
problems are equivalent with the values bj = σj2 and D(α) = Φ−1 (α).
When the random variables Aj are independent but not normally distributed, if there are a
large number of jobs per machine, one can apply the Central Limit Theorem and obtain a similar
approximate argument. In fact, using a result from Calafiore and El Ghaoui (2006), one can extend
this equivalence to any radial distribution6 .
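A minimal sketch of this Gaussian reduction, assuming SciPy for the normal quantile and using made-up means and variances, turns the chance constraint into the deterministic check (2):

```python
import numpy as np
from scipy.stats import norm

def gaussian_feasible(mu, sigma2, V, alpha):
    """Constraint (2): sum of means plus Phi^{-1}(alpha) pooled standard deviations fits in V."""
    mu, sigma2 = np.asarray(mu), np.asarray(sigma2)
    return mu.sum() + norm.ppf(alpha) * np.sqrt(sigma2.sum()) <= V

# Hypothetical jobs already assigned to one machine.
mu = [3.0, 2.5, 4.0, 1.5]
sigma2 = [0.5, 0.8, 1.2, 0.3]
print(gaussian_feasible(mu, sigma2, V=13.0, alpha=0.999))
```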
2. Hoeffding’s inequality: Assume that the random variables Aj are independent with a finite support [Aj, Āj], 0 ≤ Aj < Āj, with mean µj. As we discussed, one can often know the value of Āj and use historical data to estimate µj and Aj (we discuss this in more detail in Section 7). Assume that the mean usages fit on each machine, i.e., Σ_{j=1}^N xij µj < yi Vi. Then, Hoeffding’s inequality states that:
P( Σ_{j=1}^N Aj xij ≤ V yi ) ≥ 1 − exp( −2 [ V yi − Σ_{j=1}^N µj xij ]² / Σ_{j=1}^N (Āj − Aj)² xij ).
Equating the right hand side to α, we obtain:
−2 [ V yi − Σ_{j=1}^N µj xij ]² / Σ_{j=1}^N bj xij = ln(1 − α),
6. Radial distributions include all probability densities whose level sets are ellipsoids. The formal mathematical definition can be found in Calafiore and El Ghaoui (2006).
where bj = (Āj − Aj)² represents the range of job j’s usage. Re-arranging the equation, we obtain for each machine i:
Σ_{j=1}^N µj xij + D(α) √( Σ_{j=1}^N bj xij ) ≤ V yi,          (3)
where in this case, D(α) = √( −0.5 ln(1 − α) ). Note that in this setting the (BPCC) and the (SMBP)
problems are not equivalent. We only have that any solution of the latter is a feasible solution for
the former. We will demonstrate in Section 7 that despite being very conservative, this formulation
based on Hoeffding’s inequality actually yields good practical solutions.
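To illustrate how conservative this can be, the sketch below reuses the two-point job model from Section 2.3 (usage 0.3 or 1.0 with equal probability), checks constraint (3) for an illustrative capacity and number of jobs of our own choosing, and estimates the true violation probability by Monte Carlo; the estimate should come out far below 1 − α.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, V, n = 0.99, 26.0, 30             # V and n are illustrative choices
A_lo, A_hi, mu = 0.3, 1.0, 0.65          # two-point job model used earlier in the paper
b = (A_hi - A_lo) ** 2                   # b_j = squared range
D = np.sqrt(-0.5 * np.log(1.0 - alpha))  # Hoeffding D(alpha)

lhs = n * mu + D * np.sqrt(n * b)        # left-hand side of constraint (3)
print("constraint (3) satisfied:", lhs <= V, f"({lhs:.2f} vs {V})")

# Monte Carlo estimate of the actual violation probability of this assignment.
usage = np.where(rng.random((200_000, n)) < 0.5, A_hi, A_lo).sum(axis=1)
print("estimated violation probability:", np.mean(usage > V))
```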
The next case is a generalization of the last two.
3. Distributionally robust formulations: Assume that the random variables Aj are independent with some unknown distribution. We only know that this distribution belongs to a family
of probability distributions D. We consider two commonly used examples of such families. First,
we consider the family D1 of distributions with a given mean and (diagonal) covariance matrix,
µ and Σ, respectively. Second, we look at D2 , the family of generic distributions of independent
random variables over bounded intervals [Aj , Aj ].
In this setting, the chance constraint is assumed to be enforced robustly with respect to the
entire family D of probability distributions on A = (A1 , A2 , . . . , AN ), meaning that:
inf_{A∼D} P( Σ_{j=1}^N Aj xij ≤ V yi ) ≥ α.          (4)
In this context, we have the following result.
Proposition 1. Consider the robust bin packing problem with the capacity chance constraints
(4) for each machine i. Then, for any α ∈ (0, 1), we have:
• For the family D1 of distributions with a given mean and diagonal covariance matrix, the robust problem is equivalent to the (SMBP) with bj = σj² and D1(α) = √( α/(1 − α) ).
• For the family D2 of generic distributions of independent random variables over bounded intervals, the robust problem can be approximated by the (SMBP) with bj = (Āj − Aj)² and D2(α) = √( −0.5 ln(1 − α) ).
The details of the proof are omitted for conciseness. In particular, the proof for D1 is analogous to
an existing result in continuous optimization that converts linear programs with a chance constraint
into a linear program with a convex second-order cone constraint (see Calafiore and El Ghaoui
(2006) and Ghaoui et al. (2003)). The proof for D2 follows directly from the fact that Hoeffding’s
inequality applies for all such distributions, and thus for the infimum of the probability.
We have shown that the (SMBP) problem is a good approximation for the bin packing problem
with chance constraints. For the case of independent random variables with a given mean and
covariance, the approximation is exact and for the case of distributions over independent bounded
intervals, the approximation yields a feasible solution. We investigate practical settings in Section
7, and show that these approximate formulations all yield good solutions to the original problem.
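The three settings differ only through the buffer multiplier D(α); the short sketch below tabulates Φ^{-1}(α), D1(α) = √(α/(1 − α)) and D2(α) = √(−0.5 ln(1 − α)) for a few confidence levels (SciPy is assumed for the normal quantile).

```python
import numpy as np
from scipy.stats import norm

for alpha in (0.9, 0.99, 0.999):
    d_gauss = norm.ppf(alpha)                      # Gaussian case
    d_robust = np.sqrt(alpha / (1.0 - alpha))      # D1(alpha): mean/covariance family
    d_hoeff = np.sqrt(-0.5 * np.log(1.0 - alpha))  # D2(alpha): bounded supports
    print(f"alpha={alpha}: Gaussian {d_gauss:.2f}, robust {d_robust:.2f}, Hoeffding {d_hoeff:.2f}")
```

Note that the multipliers apply to different bj (variances in the first two cases, squared ranges in the third), so the columns are not directly comparable to one another.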
From now on, we consider solving the (SMBP) problem, that is repeated here for convenience:
B_S(α) = min_{xij, yi}  Σ_{i=1}^N yi
s.t.  Σ_{j=1}^N µj xij + D(α) √( Σ_{j=1}^N bj xij ) ≤ V yi,   ∀i          (SMBP)
      Σ_{i=1}^N xij = 1,   ∀j
      xij ∈ {0, 1},   ∀i, j
      yi ∈ {0, 1},   ∀i
As discussed, the capacity constraint is now replaced by the following equation, called the modified
capacity constraint:
Σ_{j=1}^N µj xij + D(α) √( Σ_{j=1}^N bj xij ) ≤ V yi.          (5)
One can interpret equation (5) as follows. Each machine has a capacity V . Each job j consumes
capacity µj in expectation, as well as an additional buffer to account for the uncertainty. This
buffer depends on two factors: (i) the variability of the job, captured by the parameter bj ; and (ii)
the acceptable level of risk through D(α). The function D(α) is increasing in α, and therefore we
impose a stricter constraint as α approaches 1 by requiring this extra buffer to be larger.
Equation (5) can also be interpreted as a risk measure applied by the scheduler. For each machine i, the total (random) load is Σ_{j=1}^N Aj xij. If we consider that µj represents the expectation and bj corresponds to the variance, then Σ_{j=1}^N µj xij and √( Σ_{j=1}^N bj xij ) correspond to the expectation and
the standard deviation of the total load on machine i, respectively. As a result, the left-hand side of
equation (5) can be interpreted as an adjusted risk utility, where D(α) is the degree of risk aversion
of the scheduler. The additional amount allocated for job j can be interpreted as a safety buffer to
account for the uncertainty and for the risk that the provider is willing to bear. As we discussed,
this extra buffer decreases with the number of jobs assigned to the same machine. In Section 4, we
develop efficient methods to solve the (SMBP) with analytical performance guarantees.
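A small helper that captures the modified capacity constraint (5), with the mean-plus-buffer interpretation spelled out, might look as follows (the (µj, bj) pairs and the D(α) value are placeholders).

```python
import math

def effective_load(jobs, d_alpha):
    """Mean load plus the risk buffer D(alpha) * sqrt(sum of b_j), as in (5).

    jobs is a list of (mu_j, b_j) pairs for the jobs assigned to one machine.
    """
    mean_load = sum(mu for mu, _ in jobs)
    buffer = d_alpha * math.sqrt(sum(b for _, b in jobs))
    return mean_load + buffer

def fits(jobs, new_job, d_alpha, V):
    """Can new_job be added to this machine without violating constraint (5)?"""
    return effective_load(jobs + [new_job], d_alpha) <= V

jobs_on_machine = [(0.65, 0.49), (0.65, 0.49)]   # illustrative (mu_j, b_j) pairs
print(fits(jobs_on_machine, (0.65, 0.49), d_alpha=1.52, V=30.0))
```

Because the buffer grows with the square root of the summed bj, adding a job to an already loaded machine requires a smaller incremental buffer than opening a new machine for it, which is exactly the risk pooling effect described above.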
2.5. Two naive approaches
In this section, we explore the limitations of two approaches that come to mind. The first attempt
is to rewrite the problem as a linear integer program: the decision variables are all binary and the
non-linearity in (SMBP) can actually be captured by common modeling techniques, as detailed
in Appendix A. Unfortunately, solving this IP is not a viable option. As for the classical
deterministic bin packing problem, solving even moderately large instances with commercial solvers
takes several hours. Moreover, applying the approach to smaller, specific toy instances provides
little insight about the assignment policy, and how the value of α affects the solution. Since our
goal is to develop practical strategies for the online problem, we chose not to further pursue exact
solutions.
The second potential approach is to develop an algorithm for a more general problem: the
bin packing with general monotone submodular capacity constraints. Unfortunately, using some
machinery and results from Goemans et al. (2009) and Svitkina and Fleischer (2011), we next show
that it is in fact impossible to find a solution within any reasonable factor from optimal.
Theorem 1. Consider the bin packing problem with general monotone submodular capacity constraints for each machine. Then, it is impossible to guarantee a solution within a factor better than √N / ln(N) from optimal.
The proof can be found in Appendix A. We will show that the (SMBP) problem that we consider
is more tractable as it concerns only a specific class of monotone submodular capacity constraints
that capture the structure of the chance-constrained problem. In the next section, we start by
addressing simple special cases in order to draw some structural insights.
3. Results and insights for special cases
In this section, we consider the (SMBP) problem for some given µj , bj , N and D(α). Our goals are
to: (i) develop efficient approaches to solve the problem; (ii) draw some insights on how to schedule
the different jobs and; (iii) study the effect of the different parameters on the outcome. This will
ultimately allows us to understand the impact of overcommitment in resource allocation problems,
such as cloud computing.
3.1. Identically distributed jobs
We consider the symmetric setting where all the random variables Aj have the same distribution,
such that µj = µ and bj = b in the (SMBP) problem. By symmetry, we only need to find the number
of jobs n to assign to each machine. Since all the jobs are interchangeable, our goal is to assign as
many jobs as possible in each machine. In other words, we want to pick the largest value of n such
that the constraint (5) is satisfied, or equivalently:
n µ + D(α) √(n b) ≤ V,   or equivalently (when n µ ≤ V),   D(α)² ≤ [V − n µ]² / (n b).
For a given value of α, this is the largest integer smaller than:
n(α) = V/µ + (1/(2µ²)) ( b D(α)² − √( b² D(α)⁴ + 4 b D(α)² V µ ) ).          (6)
• For a given value of α, the number of jobs n(α) increases with V /µ. Indeed, since µ represents
the expected job size, increasing the ratio V/µ is equivalent to increasing the number of "average"
jobs a machine can host. If the jobs are smaller or the machines larger, one can fit more jobs per
machine, as expected.
• For a given value of V /µ, n(α) is a non-increasing function of α. When α increases, it means
that we enforce the capacity constraint in a stricter manner (recall that α = 1 corresponds to the
case without overcommitment). As a result, the number of jobs per machine cannot increase.
• For given values of α and V /µ, n(α) is a non-increasing function of b. Recall that the parameter
b corresponds to some measure of spread (the variance in the Gaussian setting, and the range for
distributions with bounded support). Therefore, when b increases, it implies that the jobs’ resource
usage is more volatile and hence, a larger buffer is needed. Consequently, the number of jobs cannot
increase when b grows.
• For given values of α and V, n(α) is non-increasing in √b/µ. The quantity √b/µ represents the coefficient of variation of the random job size in the Gaussian case, or a similarly normalized measure of dispersion in other cases. Consequently, one should be able to fit fewer jobs as the variability increases.
The simple case of identically distributed jobs allows us to understand how the different factors
affect the number of jobs that one can assign to each machine. In Figure 1, we plot equation (6)
for an instance with Ā = 1, A = 0.3, µ = 0.65, V = 30 and 0.5 ≤ α < 1. The large dot in the figure represents the case without overcommitment (i.e., α = 1). Interestingly, one can see
that when the value of α approaches 1, the benefit of allowing a small probability of violating the
capacity constraint is significant, so that one can increase the number of jobs per machine. In this
case, when α = 1, we can fit 30 jobs per machine, whereas when α = 0.992, we can fit 36 jobs,
hence, an improvement of 20%. Note that this analysis guarantees that the capacity constraint is
satisfied with at least probability α. As we will show in Section 7 for many instances, the capacity
constraint is satisfied with an even higher probability.
Figure 1: Parameters: Ā = 1, A = 0.3, µ = 0.65, V = 30.
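Formula (6) is straightforward to evaluate; the sketch below assumes the Hoeffding choice of D(α) (consistent with the bounded-support parameters of Figure 1) and should give roughly 36 jobs per machine at α = 0.992 versus 30 when scheduling by the upper bound, in line with the numbers quoted above.

```python
import math

def jobs_per_machine(V, mu, b, d_alpha):
    """Largest n with n*mu + d_alpha*sqrt(n*b) <= V, via formula (6)."""
    n_alpha = V / mu + (b * d_alpha**2
                        - math.sqrt(b**2 * d_alpha**4 + 4 * b * d_alpha**2 * V * mu)
                        ) / (2 * mu**2)
    return math.floor(n_alpha)

V, mu, A_lo, A_hi = 30.0, 0.65, 0.3, 1.0          # parameters of Figure 1
b = (A_hi - A_lo) ** 2
for alpha in (0.9, 0.992, 0.999):
    d = math.sqrt(-0.5 * math.log(1.0 - alpha))   # Hoeffding D(alpha)
    print(alpha, jobs_per_machine(V, mu, b, d))
print("no overcommitment:", math.floor(V / A_hi))  # schedule by the upper bound
```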
Alternatively, one can plot α as a function of n (see Figure 2a for an example with different
values for V /µ). As expected, the benefit of overcommitting increases with V /µ, i.e., one can fit a
larger number of jobs per machine. In our example, when V /µ = 25, by scheduling jobs according
to Ā (i.e., α = 1, no overcommitment), we can schedule 14 jobs, whereas if we allow a 0.1%
violation probability, we can schedule 17 jobs. Consequently, by allowing 0.1% chance of violating
the capacity constraint, one can save more than 20% in costs.
Figure 2: Example for identically distributed jobs. (a) Parameters: Ā = 1, A = 0.3, µ = 0.65. (b) Parameters: V = 30.
We next discuss how to solve the problem for the case with a small number of different classes
of job distributions.
3.2. Small number of job distributions
We now consider the case where the random variables Aj can be clustered into a few different categories. For example, suppose standard clustering algorithms are applied to historical data to treat similar jobs as a single class with some distribution of usage. One can have, for instance, a setting with four types of jobs: (i) large jobs with no variability (µj is large and bj is zero); (ii) small jobs with no variability (µj is small and bj is zero); (iii) large jobs with high variability (both µj and bj are large); and (iv) small jobs with high variability (µj is small and bj is high). In other words, we
have N jobs and they all are from one of the 4 types, with given values of µj and bj . The result for
this setting is summarized in the following Observation (the details can be found in Appendix C).
Observation 1. In the case where the number of different job classes is not too large, one can
solve the problem efficiently as a cutting stock problem.
The resulting cutting stock problem (see formulation (14) in Appendix C) is well studied in many
contexts (see Gilmore and Gomory (1961) for a classical approach based on linear programming,
or the recent survey of Delorme et al. (2016)). For example, one can solve the LP relaxation of
(14) and round the fractional solution. This approach can be very useful for cases where the cloud
provider have enough historical data, and when the jobs can all be regrouped into a small number
of different clusters. This situation is sometimes realistic but not always. Very often, grouping all
possible customer job profiles into a small number of classes, each described by a single distribution
is likely unrealistic in many contexts. For example, virtual machines are typically sold with 1, 2, 4,
8, 16, 32 or 64 CPU cores, each with various memory configurations, to a variety of customers with
disparate use-cases. Aggregating across these jobs is already dubious, before considering differences
in their usage means and variability. Unfortunately, if one decides to use a large number of job
classes, solving a cutting stock problem is not scalable. In addition, this approach requires advance
knowledge of the number of jobs of each class and hence, cannot be applied to the online version
of our problem.
4. Online constant competitive algorithms
In this section, we analyze the performance of a large class of algorithms for the online version of
problem (SMBP). We note that the same guarantees hold for the offline case, as it is just a simpler
version of the problem. We then present a refined result for the offline problem in Section 6.1.
4.1. Lazy algorithms are 8/3-competitive
An algorithm is called lazy, if it does not purchase/use a new machine unless necessary. The formal
definition is as follows.
Definition 1. We call an online algorithm lazy if, upon arrival of a new job, it assigns the job
to one of the existing (already purchased) machines whenever the capacity constraints are not violated.
In other words, the algorithm purchases a new machine if and only if none of the existing machines
can accommodate the newly arrived job.
Several commonly used algorithms fall into this category, e.g., First-Fit, Best-Fit, Next-Fit,
and other greedy-type algorithms. Let OPT be the optimal objective, i.e., the minimum number of machines needed
to serve all the jobs {1, 2, · · · , N}. Recall that in our problem, each job 1 ≤ j ≤ N has two characteristics, µj and bj, which represent the mean and the uncertain part of job j respectively. For a set of jobs S, we define the corresponding cost Cost(S) = Σ_{j∈S} µj + √(Σ_{j∈S} bj). Without loss of generality, we can assume (by normalization of all µj and bj) that the capacity of each machine is 1 and that D(α) is also normalized to 1. We call a set S feasible if its cost is at most the capacity limit 1. In the following theorem, we show that any lazy algorithm yields a constant approximation for the (SMBP) problem.
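In code, the normalized cost and feasibility test read as follows (a minimal Python sketch with made-up job values; not code from the paper):

```python
import math

def cost(mus, bs):
    """Cost(S) = sum of means + sqrt(sum of uncertainty terms), after normalization."""
    return sum(mus) + math.sqrt(sum(bs))

def is_feasible(mus, bs, capacity=1.0):
    """A set of jobs is feasible if its cost fits within the (normalized) capacity."""
    return cost(mus, bs) <= capacity

# Example with made-up normalized jobs (mu_j, b_j).
jobs = [(0.30, 0.04), (0.25, 0.09), (0.20, 0.01)]
mus, bs = zip(*jobs)
print(cost(mus, bs), is_feasible(mus, bs))
```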
Theorem 2. Any lazy algorithm ALG purchases at most (8/3)·OPT machines, where OPT is the optimum number of machines to serve all jobs.
Proof.
Let m be the number of machines that ALG purchases when serving jobs {1, 2, · · · , N }.
For any machine 1 ≤ i ≤ m, we define Si to be the set of jobs assigned to machine i. Without loss
of generality, we assume that the m machines are purchased in the order of their indices. In other
words, machines 1 and m are the first and last purchased ones respectively.
For any pair of machines 1 ≤ i < i′ ≤ m, we next prove that the set Si ∪ Si′ is infeasible for the modified capacity constraint, i.e., Σ_{j∈Si∪Si′} µj + √(Σ_{j∈Si∪Si′} bj) > 1. Let j be the first job assigned to machine i′. Since ALG is lazy, assigning j to machine i upon its arrival time was not feasible, i.e., the set {j} ∪ Si is infeasible. Since we only assign more jobs to machines throughout the course of the algorithm, and do not remove any job, the set Si′ ∪ Si is also infeasible.
In the next Lemma, we lower bound the sum of µj + bj for the jobs in an infeasible set.
Lemma 1. For any infeasible set T, we have Σ_{j∈T} (µj + bj) > 3/4.
Proof. For any infeasible set T, we have by definition Σ_{j∈T} µj + √(Σ_{j∈T} bj) > 1. We denote x = Σ_{j∈T} µj and y = √(Σ_{j∈T} bj). Then, y > 1 − x. If x is greater than 1, the claim of the lemma holds trivially. Otherwise, we obtain:

x + y² > x + (1 − x)² = x² − x + 1 = (x − 1/2)² + 3/4 ≥ 3/4.

We conclude that Σ_{j∈T} (µj + bj) > 3/4.
As discussed, for any pair of machines i < i′, the union of their sets of jobs Si ∪ Si′ is an infeasible set that does not fit in one machine. We now apply the lower bound from Lemma 1 for the infeasible set Si ∪ Si′ and obtain Σ_{j∈Si∪Si′} (µj + bj) > 3/4. We can sum up this inequality for all m(m − 1)/2 pairs of machines i and i′ to obtain:

Σ_{1≤i<i′≤m} Σ_{j∈Si∪Si′} (µj + bj) > (3/4)·(m(m − 1)/2).    (7)

We claim that the left hand side of inequality (7) is equal to (m − 1)·Σ_{j=1}^{N} (µj + bj). We note that for each job j ∈ Sk, the term µj + bj appears k − 1 times in the left hand side of inequality (7) when i′ is equal to k. In addition, this term also appears m − k times when i is equal to k. Therefore, every µj + bj appears k − 1 + m − k = m − 1 times, which is independent of the index k of the machine that contains job j. As a result, we obtain:

Σ_{j=1}^{N} (µj + bj) > (3/(4(m − 1)))·(m(m − 1)/2) = 3m/8.
On the other hand, we use the optimal assignment to upper bound the sum Σ_{j=1}^{N} (µj + bj), and relate m to OPT. Let T1, T2, · · · , T_OPT be the optimal assignment of all jobs to OPT machines. Since Ti is a feasible set, we have Σ_{j∈Ti} µj + √(Σ_{j∈Ti} bj) ≤ 1, and consequently, we also have Σ_{j∈Ti} (µj + bj) ≤ 1 (since √(Σ_{j∈Ti} bj) ≤ 1 implies Σ_{j∈Ti} bj ≤ √(Σ_{j∈Ti} bj)). Summing up for all the machines 1 ≤ i ≤ OPT, we obtain: Σ_{j=1}^{N} (µj + bj) = Σ_{i=1}^{OPT} Σ_{j∈Ti} (µj + bj) ≤ OPT. We conclude that:

OPT ≥ Σ_{j=1}^{N} (µj + bj) > 3m/8.

This completes the proof of m < (8/3)·OPT.
Theorem 2 derives an approximation guarantee of 8/3 for any lazy algorithm. In many practical
settings, one can further exploit the structure of the set of jobs, and design algorithms that achieve
better approximation factors. For example, if some jobs are usually larger relative to others, one
can incorporate this knowledge into the algorithm. We next describe the main intuitions behind the
8/3 upper bound. In the proof of Theorem 2, we have used the following two main proof techniques:
• First, we show a direct connection between the feasibility of a set S and the sum Σ_{j∈S} (µj + bj). In particular, we prove that Σ_{j∈S} (µj + bj) ≤ 1 for any feasible set, and greater than 3/4 for any infeasible set. Consequently, OPT cannot be less than the sum of µj + bj over all jobs. The gap of 4/3 between the two bounds contributes partially to the final upper bound of 8/3.
• Second, we show that the union of jobs assigned to any pair of machines by the lazy algorithm is an infeasible set, so that their sum of µj + bj should exceed 3/4. One can then find m/2 disjoint pairs of machines, and obtain a lower bound of 3/4 on the sum of µj + bj for each pair. The fact that we achieve this lower bound for every pair of machines (and not for each machine) contributes another factor of 2 to the approximation factor, resulting in (4/3) × 2 = 8/3.
Note that the second loss of a factor of 2 follows from the fact that the union of any two machines forms an infeasible set and nothing stronger. In particular, all machines could potentially have a cost of 1/2 + ε for a very small ε, and make the above analysis tight. Nevertheless, if we assume that each machine is nearly full (i.e., has Cost close to 1), one can refine the approximation factor.
Theorem 3. For any 0 ≤ ε ≤ 0.3, if the lazy algorithm ALG assigns all the jobs to m machines such that Cost(Si) ≥ 1 − ε for every 1 ≤ i ≤ m, we have m ≤ (4/3 + 3ε)·OPT, i.e., a (4/3 + 3ε)-approximation guarantee.
Proof. To simplify the analysis, we denote β = 1 − ε. For a set Si, we define x = Σ_{j∈Si} µj and y = √(Σ_{j∈Si} bj). Since Cost(Si) is at least β, we have x + y ≥ β. Assuming x ≤ β, we have:

Σ_{j∈Si} (µj + bj) = x + y² ≥ x + (β − x)² = (x − (2β − 1)/2)² + β − 1/4 ≥ β − 1/4 = 3/4 − ε,

where the first equality is by the definition of x and y, the first inequality holds by x + y ≥ β (so that y ≥ β − x ≥ 0), and the rest are algebraic manipulations. For x > β, we also have Σ_{j∈Si} (µj + bj) ≥ x > β > 3/4 − ε. We conclude that Σ_{j=1}^{N} (µj + bj) ≥ m·(3/4 − ε). We also know that OPT ≥ Σ_{j=1}^{N} (µj + bj), which implies that m ≤ OPT/(3/4 − ε) ≤ (4/3 + 3ε)·OPT for ε ≤ 0.3.
A particular setting where the condition of Theorem 3 holds is when the capacity of each machine is large compared to all jobs, i.e., max_{1≤j≤N} (µj + √bj) is at most ε. In this case, for each machine i ≠ m (except the last purchased machine), we know that there exists a job j ∈ Sm (assigned to the last purchased machine m) such that the algorithm could not assign j to machine i. This means that Cost(Si ∪ {j}) exceeds one. Since Cost is a subadditive function, we have Cost(Si ∪ {j}) ≤ Cost(Si) + Cost({j}). We also know that Cost({j}) ≤ ε, which implies that Cost(Si) > 1 − ε.
Remark 1. As elaborated above, there are two main sources for losses in the approximation
factors: non-linearity of the cost function that can contribute up to 4/3, and machines being only
partially full that can cause an extra factor of 2 which in total implies the 8/3 approximation
guarantee. In the classical bin packing case (i.e., bj = 0 for all j), the cost function is linear, and
the non-linearity losses in approximation factors fade. Consequently, we obtain that (i) Theorem
2 reduces to a 2 approximation factor; and (ii) Theorem 3 reduces to a (1 + ε) approximation
factor, which are both consistent with known results from the literature on the classical bin packing
problem.
Theorem 3 improves the bound for the case where each machine is almost full. However, in
practice machines are often not full. In the next section, we derive a bound as a function of the
minimum number of jobs assigned to the machines.
4.2. Algorithm First-Fit is 9/4-competitive
So far, we considered the general class of lazy algorithms. One popular algorithm in this class
(both in the literature and in practice) is First-Fit. By exploring the structural properties of
allocations made by First-Fit, we can provide a better competitive ratio of 9/4 < 8/3. Recall that
upon the arrival of a new job, First-Fit purchases a new machine if the job does not fit in any of
the existing machines. Otherwise, it assigns the job to the first machine (based on a fixed ordering
such as machine IDs) that it fits in. This algorithm is simple to implement, and very well studied
in the context of the classical bin packing problem. First, we present an extension of Theorem 2
for the case where each machine has at least K jobs.
Corollary 1. If the First-Fit algorithm assigns jobs such that each machine receives at least K jobs, the number of purchased machines does not exceed (4/3)·(1 + 1/K)·OPT, where OPT is the optimum number of machines to serve all jobs.
One can prove Corollary 1 in a similar fashion as the proof of Theorem 2 and using the fact that
jobs are assigned using First-Fit (the details are omitted for conciseness). For example, when
K = 2 (resp. K = 5), we obtain a 2 (resp. 1.6) approximation. We next refine the approximation
factor for the problem by using the First-Fit algorithm.
Theorem 4. The number of purchased machines by Algorithm First-Fit for any arrival order of jobs is not more than (9/4)·OPT + 1.
The proof can be found in Appendix D. We note that the approximation guarantees we developed
in this section do not depend on the factor D(α), and on the specific definition of the parameters
µj and bj . In addition, as we show computationally in Section 7, the performance of this class of
algorithms is not significantly affected by the factor D(α).
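For concreteness, the following sketch implements a lazy First-Fit pass under the modified capacity constraint (5); the job stream, capacity and D(α) value are illustrative, and this is not the authors' production implementation:

```python
import math

def fits(machine, job, V, D_alpha):
    """Check constraint (5) if `job` = (mu, b) were added to `machine`."""
    mu_sum = machine["mu"] + job[0]
    b_sum = machine["b"] + job[1]
    return mu_sum + D_alpha * math.sqrt(b_sum) <= V

def first_fit(jobs, V, D_alpha):
    """Lazy First-Fit: open a new machine only when the job fits nowhere."""
    machines = []
    for job in jobs:
        for m in machines:
            if fits(m, job, V, D_alpha):
                m["mu"] += job[0]
                m["b"] += job[1]
                break
        else:
            machines.append({"mu": job[0], "b": job[1]})
    return machines

# Illustrative stream of (mu_j, b_j) pairs and machine capacity.
stream = [(2.0, 1.0), (1.5, 0.5), (3.0, 2.0), (0.5, 0.25)] * 10
print(len(first_fit(stream, V=16.0, D_alpha=2.33)))
```

Replacing the inner loop by a search for the machine with the least remaining slack that still fits gives the Best-Fit flavor used in the experiments of Section 7.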
5. Insights on job scheduling
In this section, we show that guaranteeing the following two guidelines in any allocation algorithm
yields optimal solutions:
• Filling up each machine completely such that no other job fits in it, i.e., making each machine’s
Cost equal to 1.
• Each machine contains a set of similar jobs (defined formally next).
We formalize these properties in more detail, and show how one can achieve optimality by satisfying these two conditions. We call a machine full if Σ_{j∈S} µj + √(Σ_{j∈S} bj) is equal to 1 (recall that the machine capacity is normalized to 1 without loss of generality), where S is the set of jobs assigned to the machine. Note that it is not possible to assign any additional job (no matter how small the job is) to a full machine. Similarly, we call a machine ε-full if the cost is at least 1 − ε,
i.e., Σ_{j∈S} µj + √(Σ_{j∈S} bj) ≥ 1 − ε. We define two jobs to be similar if they have the same b/µ ratio. Note that the two jobs can have different values of µ and b. We say that a machine is homogeneous if it only contains similar jobs. In other words, the ratio bj/µj is the same for all the jobs j assigned to this machine. By convention, we define bj/µj to be +∞ when µj = 0. In addition, we introduce the relaxed version of this property: we say that two jobs are δ-similar if their b/µ ratios differ by at most a multiplicative factor of 1 + δ. A machine is called δ-homogeneous if it only contains δ-similar jobs (i.e., for any pair of jobs j and j′ in the same machine, (bj/µj)/(bj′/µj′) is at most 1 + δ).
Theorem 5. For any ε ≥ 0 and δ ≥ 0, consider an assignment of all jobs to some machines with two properties: a) each machine is ε-full, and b) each machine is δ-homogeneous. Then, the number of purchased machines in this allocation is at most OPT/((1 − ε)²(1 − δ)).
The proof can be found in Appendix E.
In this section, we proposed an easy-to-follow recipe for scheduling jobs on machines. Each arriving job is characterized by two parameters, µj and bj. Upon arrival of a new job, the cloud provider can compute the ratio rj = bj/µj. Then, one can decide on a few buckets for the different values of rj, depending on historical data and performance restrictions. Finally, the cloud provider assigns jobs with similar ratios to the same machines and tries to fill machines as much as possible (see the sketch below). In this paper, we show that such a simple strategy guarantees good performance (close to optimal) in terms of minimizing the number of purchased machines, while at the same time allowing to strategically overcommit.
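A minimal sketch of this recipe, assuming a simple geometric bucketing of the ratio rj = bj/µj (the bucket width δ and all job values are placeholders, not parameters prescribed by the paper):

```python
import math
from collections import defaultdict

def ratio_bucket(mu, b, delta=0.5):
    """Bucket index so that jobs in one bucket are delta-similar (ratios within a 1+delta factor)."""
    r = float("inf") if mu == 0 else b / mu
    return "inf" if r == float("inf") else int(math.log(max(r, 1e-9), 1 + delta))

def schedule_by_ratio(jobs, V, D_alpha, delta=0.5):
    """Assign jobs with similar b/mu ratios to the same machines, filling each machine greedily."""
    machines = defaultdict(list)   # bucket -> list of {"mu": .., "b": ..}
    for mu, b in jobs:
        key = ratio_bucket(mu, b, delta)
        for m in machines[key]:
            if m["mu"] + mu + D_alpha * math.sqrt(m["b"] + b) <= V:
                m["mu"] += mu
                m["b"] += b
                break
        else:
            machines[key].append({"mu": mu, "b": b})
    return [m for ms in machines.values() for m in ms]
```

Within each bucket the fill step is the same lazy check as in the First-Fit sketch above.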
6. Extensions
In this section, we present two extensions of the problem we considered in this paper.
6.1. Offline 2-approximation algorithm
Consider the offline version of the (SMBP) problem. In this case, all N jobs have already arrived, and one has to find a feasible schedule so as to minimize the number of machines. We propose the algorithm Local-Search that iteratively reduces the number of purchased machines, and also uses ideas inspired by First-Fit in order to achieve a 2-approximation for the offline
problem. Algorithm Local-Search starts by assigning all the jobs to machines arbitrarily, and
then iteratively refines this assignment. Suppose that each machine has a unique identifier number.
We next introduce some notation before presenting the update operations. Let a be the number of
machines with only one job, A1 be the set of these a machines, and S1 be the set of jobs assigned to
these machines. Note that this set changes throughout the algorithm with the update operations.
We say that a job j ∉ S1 is good if it fits in at least 6 of the machines in the set A1 (see footnote 7). In addition, we say that a machine is large if it contains at least 5 jobs, and we denote the set of large machines by A5. We say that a machine is medium size if it contains 2, 3, or 4 jobs, and we denote the set of medium machines by A2,3,4. We call a medium size machine critical if it contains one job that fits in none of the machines in A1, and the rest of the jobs in this machine are all good. Following are the update operations that Local-Search performs until no such operation is available (a short code sketch of the classification definitions above appears after this list).
• Find a job j in machine i (i is the machine identifier number) and assign it to some other machine i′ < i if feasible (the outcome will be similar to First-Fit).
• Find a medium size machine i that contains only good jobs. Let j1, · · · , jℓ (2 ≤ ℓ ≤ 4) be the jobs in machine i. Assign j1 to one of the machines in A1 that it fits in. Since j1 is a good job, there are at least 6 different options, and the algorithm picks one of them arbitrarily. Assign j2 to a different machine in A1 that it fits in. There should be at least 5 ways to do so. We continue this process until all the jobs in machine i (there are at most 4 of them) are assigned to distinct machines in A1, and they all fit in their new machines. This way, we release machine i and reduce the number of machines by one.
• Find a medium size machine i that contains one job j that fits in at least one machine in A1 ,
and the rest of the jobs in i are all good. First, assign j to one machine in A1 that it fits in. Similar
to the previous case, we assign the rest of the jobs (that are all good) to different machines in A1 .
This way, we release machine i and reduce the number of purchased machines by one.
• Find two critical machines i1 and i2 . Let j1 and j2 be the only jobs in these two machines
that fit in no machine in A1 . If both jobs fit and form a feasible assignment in a new machine,
we purchase a new machine and assign j1 and j2 to it. Otherwise, we do not change anything
and ignore this update step. There are at most 6 other jobs in these two machines since both are
medium machines. In addition, the rest of the jobs are all good. Therefore, similar to the previous
two cases, we can assign these jobs to distinct machines in A1 that they fit in. This way, we release
machines i1 and i2 and purchase a new machine. So in total, we reduce the number of purchased
machines by one.
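To make the classification used by these operations concrete, here is a small helper sketch; the set names A1, A2,3,4, A5, the "good job" test and the constant 6 follow the definitions above, while the data layout (machines as lists of (µ, b) pairs) is our own illustrative choice:

```python
import math

def cost(machine):
    """Cost of a machine's job set under constraint (5), with the capacity normalized to 1."""
    return sum(mu for mu, _ in machine) + math.sqrt(sum(b for _, b in machine))

def fits(machine, job):
    return cost(machine + [job]) <= 1.0

def classify(machines):
    """Split machines into A1 (one job), A234 (2-4 jobs) and A5 (at least 5 jobs)."""
    A1 = [m for m in machines if len(m) == 1]
    A234 = [m for m in machines if 2 <= len(m) <= 4]
    A5 = [m for m in machines if len(m) >= 5]
    return A1, A234, A5

def is_good(job, A1):
    """A job outside S1 is good if it fits in at least 6 machines of A1."""
    return sum(1 for m in A1 if fits(m, job)) >= 6

def is_critical(machine, A1):
    """Medium machine with exactly one job fitting nowhere in A1, the rest being good."""
    misfits = [j for j in machine if not any(fits(m, j) for m in A1)]
    others = [j for j in machine if j not in misfits]
    return 2 <= len(machine) <= 4 and len(misfits) == 1 and all(is_good(j, A1) for j in others)
```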
We are now ready to analyze this Local-Search algorithm that also borrows ideas from
First-Fit. We next show that the number of purchased machines is at most 2OP T + O(1), i.e., a
2-approximation.
Theorem 6. Algorithm Local-Search terminates after at most N³ operations (where N is the number of jobs), and purchases at most 2·OPT + 11 machines.
7. The constant 6 in the definition of a good job is technical; it is used in the proof of Theorem 6.
The proof can be found in Appendix F.
We conclude this section by comparing our results to the classical (deterministic) bin packing problem. In the classical bin packing problem, there are folklore polynomial time approximation schemes (see Section 10.3 in Albers and Souza (2011)) that achieve a (1 + ε)-approximation factor by proposing an offline algorithm based on clustering the jobs into 1/ε² groups, and treating them as equal size jobs. Using dynamic programming techniques, one can solve the simplified problem with 1/ε² different job sizes in time O(n^poly(1/ε)). In addition to the inefficient time complexity of these algorithms, which makes them less appealing for practical purposes, one cannot generalize the same ideas to our setting. The main obstacle is the lack of a total ordering among the different jobs. In the classical bin packing problem, the jobs can be sorted based on their sizes. However, this is not true in our case since the jobs have the two-dimensional requirements µj and bj.
6.2. Alternative constraints
Recall that in the (SMBP) problem, we imposed the modified capacity constraint (5). Instead, one can consider the following family of constraints, parametrized by 0.5 ≤ p ≤ 1:

Σ_{j=1}^{N} µj xij + D(α)·(Σ_{j=1}^{N} bj xij)^p ≤ V yi,    (8)
Note that this equation is still monotone and submodular in the assignment vector x, and captures
some notion of risk pooling. In particular, the “safety buffer” reduces with the number of jobs
already assigned to each machine. The motivation behind such a modified capacity constraint lies
in the shape that one wishes to impose on the term that captures the uncertain part of the job. In
one extreme (p = 1), we consider that the term that captures the uncertainty is linear and hence,
as important as the expectation term. In the other extreme case (p = 0.5), we consider that the
term that captures the uncertainty behaves as a square root term. For a large number of jobs per
machine, this is known to be an efficient way of handling uncertainty (similar argument as the
central limit theorem). Note also that when p = 0.5, we are back to equation (5), and when p = 1
we have a commonly used benchmark (see more details in Section 7). One can extend our analysis
and derive an approximation factor for the online problem as a function of p for any lazy algorithm.
Corollary 2. Consider the bin packing problem with the modified capacity constraint (8). Then, any lazy algorithm ALG purchases at most (2/f(p))·OPT machines, where OPT is the optimum number of machines to serve all jobs and f(p) is given by:

f(p) = 1 − (1 − p)·p^(1/p − 1).
The proof is in a very similar spirit to that of Theorem 2 and is not repeated due to space limitations. Intuitively, we find parametric lower and upper bounds on (Σ_{j=1}^{N} bj xij)^p in terms of Σ_{j=1}^{N} bj xij.
Note that when p = 0.5, we recover the result of Theorem 2 (i.e., an 8/3 approximation) and as p
increases, the approximation factor converges to 2. In Figure 3, we plot the approximation factor
as a function of 0.5 ≤ p ≤ 1.
Figure 3: Approximation factor 2/f(p) as a function of p.
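A quick numerical check of this expression (illustrative only) recovers the 8/3 factor at p = 0.5 and approaches 2 as p → 1:

```python
def f(p):
    # f(p) = 1 - (1 - p) * p**(1/p - 1); f(0.5) = 3/4 and f(1) = 1.
    return 1.0 - (1.0 - p) * p ** (1.0 / p - 1.0)

for p in (0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
    print(f"p = {p:.1f}  approximation factor = {2.0 / f(p):.3f}")
```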
Finally, one can also extend our results to the case where the modified capacity constraint is given by: Σ_{j=1}^{N} µj xij + D(α)·log(1 + Σ_{j=1}^{N} bj xij) ≤ V yi.
7. Computational experiments
In this section, we test and validate the analytical results developed in the paper by solving
the (SMBP) problem for different realistic cases, and investigating the impact on the number of
machines required (i.e., the cost). We use realistic workload data inspired by Google Compute
Engine, and show how our model and algorithms can be applied in an operational setting.
7.1. Setting and data
We use simulated workloads of 1000 jobs (virtual machines) with a realistic VM size distribution
(see Table 1). Typically, the GCE workload is composed of a mix of CPU usages from virtual
machines belonging to cloud customers. These jobs can have highly varying workloads, including
some large ones and many smaller ones.8 More precisely, we assume that each VM arrives to the
cloud provider with a requested number of CPU cores, sampled from the distribution presented in
Table 1.
8. The average distribution of workloads we present in Table 1 assumes small percentages of workloads with 32 and
16 cores, and larger percentages of smaller VMs. A “large” workload may consist of many VMs belonging to a single
customer whose usages may be correlated at the time-scales we are considering, but heuristics ensure these are spread
across different hosts to avoid strong correlation of co-scheduled VMs. The workload distributions we are using are
representative for some segments of GCE. Unfortunately, we cannot provide the real data due to confidentiality.
Table 1: Example distribution of VM sizes in one Google data center

Number of cores:   1      2      4      8      16     32
% VMs:             36.3   13.8   21.3   23.1   3.5    1.9
In this context, the average utilization is typically low, but in many cases, the utilization can be highly variable over time. Although we decided to keep a similar VM size distribution as observed in a production data center, we also fitted parametric distributions to roughly match the mean and the variance of the measured usage. This allows us to obtain a parametric model that we could vary for simulation. We consider two different cases for the actual CPU utilization as a fraction of the requested job size: we either assume that the number of cores used has a Bernoulli distribution, or a truncated Gaussian distribution. As discussed in Section 2.4, we assume that each job j has lower and upper utilization bounds, Aj and Āj. We sample Aj uniformly in the range [0.3, 0.6], and Āj in the range [0.7, 1.0]. In addition, we uniformly sample µ′j and σ′j ∈ [0.1, 0.5] for each VM to serve as the parameters for the truncated Gaussian (not to be confused with its true mean and standard deviation, µj and σj). For the Bernoulli case, µ′j = µj determines the respective probabilities of the realization corresponding to the lower or upper bound (and the unneeded σ′j is ignored).
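A sketch of this workload generator follows; Table 1 supplies the size distribution, while the exact Bernoulli parameterization and the truncation method are our own reasonable guesses rather than the authors' code:

```python
import random
from statistics import NormalDist

CORES = [1, 2, 4, 8, 16, 32]
WEIGHTS = [36.3, 13.8, 21.3, 23.1, 3.5, 1.9]   # Table 1, in percent

def sample_job(usage_model="truncated_gaussian"):
    size = random.choices(CORES, weights=WEIGHTS)[0]
    a_low, a_high = random.uniform(0.3, 0.6), random.uniform(0.7, 1.0)
    mu0, sigma0 = random.uniform(0.1, 0.5), random.uniform(0.1, 0.5)
    if usage_model == "bernoulli":
        # Two-point usage at the bounds; mu0 fixes the probabilities (clipped to [0, 1]).
        p_high = min(max((mu0 - a_low) / (a_high - a_low), 0.0), 1.0)
        usage = a_high if random.random() < p_high else a_low
    else:
        # Truncated Gaussian on [a_low, a_high] via the inverse CDF.
        nd = NormalDist(mu0, sigma0)
        u = random.uniform(nd.cdf(a_low), nd.cdf(a_high))
        usage = nd.inv_cdf(min(max(u, 1e-12), 1 - 1e-12))
    return {"cores": size, "usage": usage, "a_low": a_low, "a_high": a_high,
            "mu0": mu0, "sigma0": sigma0}

workload = [sample_job() for _ in range(1000)]
print(len(workload), workload[0])
```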
For each workload of 1000 VMs generated in this manner, we solve the online version of the
(SMBP) problem by implementing the Best-Fit heuristic, using one of the three different variants
for the values of D(α) and bj . We solve the problem for various values of α ranging from 0.5 to
0.99999. More precisely, when a new job arrives, we compute the modified capacity constraint
in equation (5) for each already-purchased machine, and assign the job to the machine with the
smallest available capacity that can accommodate it9 . If the job does not fit in any of the already
purchased machines, the algorithm opens a new machine. We consider the three variations of the
(SMBP) discussed earlier (a short code sketch of the corresponding bj and D(α) choices follows footnote 9 below):
• The Gaussian case introduced in (2), with bj = σj² and D(α) = Φ⁻¹(α). This is now also an approximation to the chance constrained (BPCC) formulation since the true distributions are truncated Gaussian or Bernoulli.
• The Hoeffding's inequality approximation introduced in (3), with bj = (Āj − Aj)² and D(α) = √(−0.5·ln(1 − α)). Note that the distributionally robust approach with the family of distributions D2 is equivalent to this formulation.
• The distributionally robust approximation with the family of distributions D1, with bj = σj² and D1(α) = √(α/(1 − α)).
9. Note that we clip the value of the constraint at the effective upper bound Σ_j xij·Āj, to ensure that no trivially feasible assignments are excluded. Otherwise, the Hoeffding's inequality-based constraint may perform slightly worse relative to the policy without over-commitment, if it leaves too much free space on the machines.
7.2. Linear benchmarks
We also implement the following four benchmarks which consist of solving the classical (DBP)
problem with specific problem data. First we have:
• No overcommitment – This is equivalent to setting α = 1 in the (SMBP) problem, or solving the (DBP) problem with sizes Āj.
Three other heuristics are obtained by replacing the square-root term in constraint (5) by a linear term, specifically we replace the constraint with:

Σ_{j=1}^{N} µj xij + D(α)·Σ_{j=1}^{N} √(bj)·xij = Σ_{j=1}^{N} (µj + D(α)·√(bj))·xij ≤ V yi    (9)
to obtain:
• The linear Gaussian heuristic that mimics the Gaussian approximation in (2).
• The linear Hoeffding’s heuristic that mimics the Hoeffding’s approximation in (3).
• The linear robust heuristic that mimics the distributionally robust approach with the family
of distributions D1 .
Notice that the linearized constraint (9) is clearly more restrictive for a fixed value of α by concavity
of the square root, but we do of course vary the value of α in our experiments. We do not expect
these benchmarks to outperform our proposed method since they do not capture the risk-pooling
effect from scheduling jobs concurrently on the same machine. They do however still reflect different
relative amounts of “padding” or “buffer” above the expected utilization allocated to each job due to the usage uncertainty.
The motivation behind the linear benchmarks lies in the fact that the problem is reduced to the
standard (DBP) formulation which admits efficient implementations for the classical heuristics.
For example, the Best-Fit algorithm can run in time O(N log N ) by maintaining a list of open
machines sorted by the slack left free on each machine (see Johnson (1974) for details and linear time approximations). In contrast, our implementation of the Best-Fit heuristic with the non-linear
constraint (5) takes time O(N 2 ) since we evaluate the constraint for each machine when each new
job arrives. Practically, in cloud VM scheduling systems, this quadratic-time approach may be
preferred anyway since it generalizes straightforwardly to more complex “scoring” functions that
also take into account additional factors besides the remaining capacity on a machine, such as
multiple resource dimensions, performance concerns or correlation between jobs (see, for example,
Verma et al. (2015)). In addition, the computational cost could be mitigated by dividing the data
center into smaller “shards”, each consisting of a fraction of the machines, and then trying to assign
each incoming job only to the machines in one of the shards. For example, in our experiments we
found that there was little performance advantage in considering sets of more than 1000 jobs at
a time. Nevertheless, our results show that even these linear benchmarks may provide substantial
savings (relative to the no-overcommitment policy) while only requiring very minor changes to
classical algorithms: instead of Āj, we simply use job sizes defined by µj, bj and α.
7.3. Results and comparisons
We compare the seven different methods in terms of the number of purchased machines and show
that, in most cases, our approach significantly reduces the number of machines needed.
We consider two physical machine sizes: 32 cores and 72 cores. As expected, the larger machines
achieve a greater benefit from modeling risk-pooling. We draw 50 independent workloads, each composed of 1000 VMs as described above. For each workload, we schedule the jobs using the Best-Fit algorithm and report the average number of machines needed across the 50 workloads. Finally,
we compute the probability of capacity violation as follows: for each machine used to schedule each
of the workloads, we draw 5000 utilization realizations (either from a sum of truncated Gaussian
or a sum of Bernoulli distributions), and we count the number of realizations where the total CPU
usage of the jobs scheduled on a machine exceeds capacity.
The sample size was chosen so that our results reflect an effect that is measurable in a typical
data center. Since our workloads require on the order of 100 machines each, this corresponds to
roughly 50 × 100 × 5000 = 25, 000, 000 individual machine-level samples. Seen another way, we
schedule 50 × 1000 = 50, 000 jobs and collect 5000 data points from each. Assuming a sample is
recorded every 10 minutes, say, this corresponds to a few days of traffic even in a small real data
center with less than 1000 machines10 . The sample turns out to yield very stable measurements,
and defining appropriate service level indicators is application-dependent and beyond the scope of
this paper, so we do not report confidence intervals or otherwise delve into statistical measurement
issues. Similarly, capacity planning for smaller data centers may need to adjust measures of demand
uncertainty to account for the different scheduling algorithms, but any conclusions are likely specific
to the workload and data center, so we do not report on the variability across workloads.
10. The exact time needed to collect a comparable data set from a production system depends on the data center size and on the sampling rate, which should be a function of how quickly jobs enter and leave the system, and of how volatile their usages are. By sampling independently in our simulations, we are assuming that the measurements from each machine are collected relatively infrequently (to limit correlation between successive measurements), and that the workloads are diverse (to limit correlation between measurements from different machines). This assumption is increasingly realistic as the size of the data center and the length of time covered increase: in the limit, for a fixed sample size, we would record at most one measurement from each job with a finite lifetime, and it would only be correlated with a small fraction of its peers.
In Figure 4, we plot the average number of machines needed as a function of the probability that a given constraint is violated, in the case where the data center is composed of 72 CPU core machines. Each point in the curves corresponds to a different value of the parameter α. Without overcommitment, we need an average of over 54 machines in order to serve all the jobs. By allowing
a small chance of violation, say a 0.1% risk (or equivalently, a 99.9% satisfaction probability), we
only need 52 machines for the Bernoulli usage, and 48 machines for the truncated Gaussian usage.
If we allow a 1% chance of violation, we then only need 50 and 46 machines, respectively. The
table of Figure 6 summarizes the relative savings, which are roughly 4.5% and 11.5% with a 0.1%
risk, and roughly 8% and 14% with a 1% risk, for the Bernoulli and truncated Gaussian usages,
respectively. In terms of the overcommitment factor defined in Section 2.3, the reported savings
translate directly to the fraction of the final capacity that is due to overcommitment,
(B(1) − B(α))/B(1) = (OCF(α) − OCF(1))/OCF(α).
Figure 4 shows that all three variations of our approach (the Gaussian, Hoeffding’s, and the
distributionally robust approximations) yield very similar results. This suggests that the results
are robust to the method and the parameters. The same is true for the corresponding linear
benchmarks, though they perform worse, as expected. We remark that although the final performance tradeoff is nearly identical, for a particular value of the α parameter, the achieved violation
probabilities vary greatly. For example, with α = 0.9 and the truncated normal distribution, each
constraint was satisfied with probability 0.913 when using the Gaussian approximation, but with
much higher probabilities 0.9972 and 0.9998 for the Hoeffding and robust approximations, respectively. This is expected, since the latter two are relatively loose upper bounds for a truncated
normal distribution, whereas the distributions N(µj, σj) are close approximations to the truncated Gaussian with parameters µ′j and σ′j. (This is especially true for their respective sums.) Practically,
the normal approximation is likely to be the easiest to calibrate and understand in cases where the
theoretical guarantees of the other two approaches are not needed, since it would be nearly exact
for normally-distributed usages.
In Figure 5, we repeat the same tests for smaller machines having only 32 physical CPU cores. The
smaller machines are more difficult to overcommit since there is a smaller risk-pooling opportunity,
as can be seen by comparing the columns of Table 6. The three variations of our approach still
yield similar and significant savings, but now they substantially outperform the linear benchmarks:
the cost reduction is at least double with all but the largest values of α. We highlight that with
the “better behaved” truncated Gaussian usage, we still obtain a 5% cost savings at 0.01% risk,
whereas the linear benchmarks barely improve over the no-overcommit case.
As mentioned in Section 2.2, the value of α should be calibrated so as to yield an acceptable
risk level given the data center, the workload and the resource in question. Any data center has
a baseline risk due to machine (or power) failure, say, and a temporary CPU shortage is usually
much less severe relative to such a failure. On the other hand, causing a VM to crash because of a
memory shortage can be as bad as a machine failure from the customer’s point of view. Ultimately,
Figure 4: Average number of 72-core machines needed to schedule a workload, versus the probability that any given machine's realized load exceeds capacity. (a) Bernoulli usage; (b) Truncated Gaussian usage.
Figure 5: Results for 32-core machines. (a) Bernoulli usage; (b) Truncated Gaussian usage.
the risk tolerance will be driven by technological factors, such as the ability to migrate VMs or
swap memory while maintaining an acceptable performance.
7.4. Impact
We conclude that our approach allows a substantial cost reduction for realistic workloads. More
precisely, we draw the following four conclusions.
• Easy to implement: Our approach is nearly as simple to implement as classical bin packing
heuristics. In addition, it works naturally online and in real-time, and can be easily incorporated
into existing scheduling algorithms.
Figure 6: Percentage savings due to overcommitment for two CPU usage distributions, using the three proposed variants of the chance constraint. The linear Gaussian benchmark is shown for comparison.
• Robustness: The three variations we proposed yield very similar results. This suggests that
our approach is robust to the type of approximation. In particular, the uncertain term bj and the
risk coefficient D(α) do not have a strong impact on the results. It also suggests that the method
is robust to estimation errors in the measures of variability that define bj .
• Significant cost reduction: With modern 72-core machines, our approach allows an 8-14%
cost savings relative to the no overcommitment policy. This is achieved by considering a manageable
risk level of 1%, which is comparable to other sources of risk that are not controllable (e.g., physical
failures and regular maintenance operations).
• Outperforming the benchmarks: Our proposals show a consistent marked improvement over three different “linear” benchmarks that reduce to directly applying the classical Best-Fit heuristic. The difference is most substantial in cases where the machines are small relative to the jobs they must contain, which is intuitively more challenging. Although our approach does not run in O(n log n) time, “sharding” and (potentially) parallelization mitigate any such concerns in practice.
8. Conclusion
In this paper, we formulated and practically solved the bin-packing problem with overcommitment. In particular, we focused on a cloud computing provider that is willing to overcommit when
allocating capacity to virtual machines in a data center. We modeled the problem as bin packing
with chance constraints, where the objective is to minimize the number of purchased machines,
while satisfying the physical capacity constraints of each machine with a very high probability.
We first showed that this problem is closely related to an alternative formulation that we call the
SMBP (Submodular Bin Packing) problem. Specifically, the two problems are equivalent under
the assumption of independent Gaussian job sizes, or when the job size distribution belongs to the
distributionally robust family with a given mean and (diagonal) covariance matrix. In addition, the
bin packing problem with chance constraints can be approximated by the SMBP for distributions
with bounded supports.
We first showed that for the bin packing problem with general monotone submodular capacity
constraints, it is impossible to find a solution within any reasonable factor from optimal. We then
developed simple algorithms that achieve solutions within constant factors from optimal for the
SMBP problem. We showed that any lazy algorithm is 8/3 competitive, and that the First-Fit
heuristic is 9/4 competitive. Since the First-Fit and Best-Fit algorithms are easy to implement and
well understood in practice, this provides an attractive option from an implementation perspective.
Second, we proposed an algorithm for the offline version of the problem, and showed that it
guarantees a 2-approximation. Then, we used our model and algorithms in order to draw several
useful insights on how to schedule jobs to machines, and on the right way to overcommit. We
convey that our method captures the risk pooling effect, as the “safety buffer” needed for each job
decreases with the number of jobs already assigned to the same machine. Moreover, our approach
translates to a transparent and meaningful recipe on how to assign jobs to machines by naturally
clustering similar jobs in terms of statistical information. Namely, jobs with a similar ratio b/µ
(the uncertain term divided by the expectation) should be assigned to the same machine.
Finally, we demonstrated the benefit of overcommitting and applied our approach to realistic
workload data inspired by Google Compute Engine. We showed that our methods are (i) easy to
implement; (ii) robust to the parameters; and (iii) significantly reduce the cost (1.5-17% depending
on the setting and the size of the physical machines in the data center).
Acknowledgments
We would like to thank the Google Cloud Analytics team for helpful discussions and feedback. The first
author would like to thank Google Research as this work would not have been possible without a one year
postdoc at Google NYC during the year 2015-2016. The authors would also like to thank Lennart Baardman,
Arthur Flajolet and Balasubramanian Sivan for their valuable feedback that has helped us improve the
paper.
References
Abdelaziz FB, Aouni B, El Fayedh R (2007) Multi-objective stochastic programming for portfolio selection.
European Journal of Operational Research 177(3):1811–1823.
Roytman A, Kansal A, Govindan S, Liu J, Nath S (2013) Algorithm design for performance aware VM consolidation. Technical report, URL https://www.microsoft.com/en-us/research/publication/algorithm-design-for-performance-aware-vm-consolidation/.
Albers S, Souza A (2011) Combinatorial algorithms lecture notes: Bin packing. URL https://www2.
informatik.hu-berlin.de/alcox/lehre/lvws1011/coalg/bin_packing.pdf.
Anily S, Bramel J, Simchi-Levi D (1994) Worst-case analysis of heuristics for the bin packing problem with
general cost structures. Operations research 42(2):287–298.
Bays C (1977) A comparison of next-fit, first-fit, and best-fit. Communications of the ACM 20(3):191–192.
Bertsimas D, Popescu I (2005) Optimal inequalities in probability theory: A convex optimization approach.
SIAM Journal on Optimization 15(3):780–804.
Calafiore GC, El Ghaoui L (2006) On distributionally robust chance-constrained linear programs. Journal
of Optimization Theory and Applications 130(1):1–22.
Charnes A, Cooper WW (1963) Deterministic equivalents for optimizing and satisfying under chance constraints. Operations research 11(1):18–39.
Coffman EG, So K, Hofri M, Yao A (1980) A stochastic model of bin-packing. Information and Control
44(2):105–115.
Coffman Jr EG, Garey MR, Johnson DS (1996) Approximation algorithms for bin packing: a survey. Approximation algorithms for NP-hard problems, 46–93 (PWS Publishing Co.).
Csirik J, Johnson DS, Kenyon C, Orlin JB, Shor PW, Weber RR (2006) On the sum-of-squares algorithm
for bin packing. Journal of the ACM (JACM) 53(1):1–65.
de La Vega WF, Lueker GS (1981) Bin packing can be solved within 1+ ε in linear time. Combinatorica
1(4):349–355.
Delage E, Ye Y (2010) Distributionally robust optimization under moment uncertainty with application to
data-driven problems. Operations research 58(3):595–612.
Delorme M, Iori M, Martello S (2016) Bin packing and cutting stock problems: Mathematical models and
exact algorithms. European Journal of Operational Research 255(1):1 – 20, ISSN 0377-2217, URL
http://dx.doi.org/http://dx.doi.org/10.1016/j.ejor.2016.04.030.
Dinh HT, Lee C, Niyato D, Wang P (2013) A survey of mobile cloud computing: architecture, applications,
and approaches. Wireless communications and mobile computing 13(18):1587–1611.
Dósa G (2007) The tight bound of first fit decreasing bin-packing algorithm is FFD ≤ (11/9)OPT + 6/9. Combinatorics, Algorithms, Probabilistic and Experimental Methodologies, 1–11 (Springer).
Dósa G, Sgall J (2013) First fit bin packing: A tight analysis. LIPIcs-Leibniz International Proceedings in
Informatics, volume 20 (Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik).
Fox A, Griffith R, Joseph A, Katz R, Konwinski A, Lee G, Patterson D, Rabkin A, Stoica I (2009) Above the
clouds: A berkeley view of cloud computing. Dept. Electrical Eng. and Comput. Sciences, University
of California, Berkeley, Rep. UCB/EECS 28(13):2009.
Ghaoui LE, Oks M, Oustry F (2003) Worst-case value-at-risk and robust portfolio optimization: A conic
programming approach. Operations Research 51(4):543–556.
Gilmore PC, Gomory RE (1961) A linear programming approach to the cutting-stock problem. Operations
research 9(6):849–859.
Goemans MX, Harvey NJ, Iwata S, Mirrokni V (2009) Approximating submodular functions everywhere.
Proceedings of the twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, 535–544 (Society
for Industrial and Applied Mathematics).
Gupta V, Radovanovic A (2012) Online stochastic bin packing. arXiv preprint arXiv:1211.2687 .
Johnson DS (1974) Fast algorithms for bin packing. Journal of Computer and System Sciences 8(3):272 – 314,
ISSN 0022-0000, URL http://dx.doi.org/http://dx.doi.org/10.1016/S0022-0000(74)80026-7.
Karaesmen I, Van Ryzin G (2004) Overbooking with substitutable inventory classes. Operations Research
52(1):83–104.
Keller G, Tighe M, Lutfiyya H, Bauer M (2012) An analysis of first fit heuristics for the virtual machine
relocation problem. Network and service management (cnsm), 2012 8th international conference and
2012 workshop on systems virtualiztion management (svm), 406–413 (IEEE).
Kenyon C, et al. (1996) Best-fit bin-packing with random order. SODA, volume 96, 359–364.
Lueker GS (1983) Bin packing with items uniformly distributed over intervals [a, b]. Foundations of Computer
Science, 1983., 24th Annual Symposium on, 289–297 (IEEE).
Nemirovski A, Shapiro A (2006) Convex approximations of chance constrained programs. SIAM Journal on
Optimization 17(4):969–996.
Pisinger D, Sigurd M (2005) The two-dimensional bin packing problem with variable bin sizes and costs.
Discrete Optimization 2(2):154–167.
Panigrahy R, Prabhakaran V, et al. (2011) Validating heuristics for virtual machines consolidation. Technical report, URL https://www.microsoft.com/en-us/research/publication/validating-heuristics-for-virtual-machines-consolidation/.
Rothstein M (1971) An airline overbooking model. Transportation Science 5(2):180–192.
Rothstein M (1985) Or forum – or and the airline overbooking problem. Operations Research 33(2):237–248.
Sindelar M, Sitaraman R, Shenoy P (2011) Sharing-aware algorithms for virtual machine colocation. Proceedings of the 23rd ACM symposium on Parallelism in algorithms and architectures, 367–378 (New
York, NY, USA).
Stolyar AL, Zhong Y (2015) Asymptotic optimality of a greedy randomized algorithm in a large-scale service
system with general packing constraints. Queueing Systems 79(2):117–143.
Subramanian J, Stidham Jr S, Lautenbacher CJ (1999) Airline yield management with overbooking, cancellations, and no-shows. Transportation Science 33(2):147–167.
Svitkina Z, Fleischer L (2011) Submodular approximation: Sampling-based algorithms and lower bounds.
SIAM Journal on Computing 40(6):1715–1737.
Verma A, Pedrosa L, Korupolu MR, Oppenheimer D, Tune E, Wilkes J (2015) Large-scale cluster management at Google with Borg. Proceedings of the European Conference on Computer Systems (EuroSys)
(Bordeaux, France).
Weatherford LR, Bodily SE (1992) A taxonomy and research overview of perishable-asset revenue management: yield management, overbooking, and pricing. Operations Research 40(5):831–844.
Appendix A: Details for Section 2.5
IP formulation
By taking the square on both sides of the submodular capacity constraint (5), we obtain:

V²·yi + (Σ_{j=1}^{N} µj xij)² − 2V yi·Σ_{j=1}^{N} µj xij ≥ D(α)²·Σ_{j=1}^{N} bj xij.
Note that since yi is binary, we have yi² = yi. We next look at the term yi·Σ_{j=1}^{N} µj xij. One can linearize this term by using one of the following two methods.
1. Since yi = 1 if and only if at least one xij = 1, we have the constraint Σ_{j=1}^{N} xij ≤ M yi for a large positive number M (actually, one can take M = N). Consequently, one can remove the yi in the above term.
2. One can define a new variable tij := yi·xij and add the four following constraints:
tij ≤ yi;  tij ≤ xij;  tij ≥ 0;  tij ≥ xij + yi − 1.
Next, we look at the term (Σ_{j=1}^{N} µj xij)². Since xij² = xij, we are left only with the cross terms xij·xik for k > j. One can now define a new variable for each such term, i.e., zijk := xij·xik, with the four constraints as before:
zijk ≤ xij;  zijk ≤ xik;  zijk ≥ 0;  zijk ≥ xij + xik − 1.
The resulting formulation is a linear integer program. Note that the decision variables tij and zijk are
continuous, and only xij and yi are binary.
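To see why these four inequalities pin tij to yi·xij (and likewise zijk to xij·xik) at binary points, here is a tiny brute-force check (illustrative only):

```python
from itertools import product

def t_interval(x, y):
    """Feasible range for a continuous t with t <= x, t <= y, t >= 0, t >= x + y - 1."""
    lo, hi = max(0, x + y - 1), min(x, y)
    return lo, hi

for x, y in product([0, 1], repeat=2):
    lo, hi = t_interval(x, y)
    assert lo == hi == x * y, (x, y, lo, hi)
print("for binary x, y the constraints pin t to x*y")
```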
Appendix B: Proof of Theorem 1
Proof.
In this proof, we make use of the submodular functions defined by Svitkina and Fleischer (2011)
for load balancing problems. Denote the jobs by 1, 2, · · · , N, and for every subset of jobs S ⊆ [N], let f(S) be the cost of the set S (i.e., the capacity cost induced by the function f). We use two submodular functions f and f′ (defined formally next) which are proved to be indistinguishable with a polynomial number of value oracle queries (see Lemma 5.1 of Svitkina and Fleischer (2011)). Let x denote ln(N). Note that Svitkina and Fleischer (2011) require x to be any parameter such that x² dominates ln(N) asymptotically and hence, this includes the special case we are considering here. Define m0 = 5√N/x, α0 = N/m0, and β0 = x²/5. We choose N such that m0 takes an integer value. Define f(S) to be min{|S|, α0}, and f′(S) to be min{Σi min{β0, |S ∩ Vi|}, α0}, where {Vi}_{i=1}^{m0} is a random partitioning of [N] into m0 equal-sized parts. Note that by definition, both set functions f and f′ are monotone and submodular.
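The two functions are easy to write down explicitly; the toy check below uses small, made-up parameters (rather than the asymptotic choices above) and verifies monotonicity and submodularity by brute force:

```python
from itertools import combinations
import random

N, m0, alpha0, beta0 = 8, 2, 4, 2
items = list(range(N))
random.shuffle(items)
parts = [set(items[i::m0]) for i in range(m0)]      # random partition into m0 equal parts

def f(S):
    return min(len(S), alpha0)

def f_prime(S):
    return min(sum(min(beta0, len(S & p)) for p in parts), alpha0)

subsets = [frozenset(c) for r in range(N + 1) for c in combinations(range(N), r)]

for g in (f, f_prime):
    # Submodularity: g(A) + g(B) >= g(A u B) + g(A n B) for all pairs of subsets.
    assert all(g(A) + g(B) >= g(A | B) + g(A & B) for A in subsets for B in subsets)
    # Monotonicity: g(A) <= g(B) whenever A is a subset of B.
    assert all(g(A) <= g(B) for A in subsets for B in subsets if A <= B)
print("both functions are monotone and submodular on this toy instance")
```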
As we mentioned, it is proved in Svitkina and Fleischer (2011) that the submodular functions f and f′ cannot be distinguished from each other with a polynomial number of value oracle queries with high probability. We construct two instances of the bin packing problem with monotone submodular capacity constraints by using f and f′ as follows. In both instances, the capacity of each machine is set to β0.
In the first instance, a set S is feasible (i.e., we can schedule all its jobs in a machine) if and only if f(S) ≤ β0. By definition, f(S) is greater than β0 if |S| is greater than β0. Therefore, any feasible set S in this first instance consists of at most β0 jobs. Consequently, in the first instance, any feasible assignment of jobs to machines requires at least N/β0 machines.
We define the second instance of the bin packing problem based on the submodular function f′. A set S is feasible in the second instance if and only if f′(S) ≤ β0. Since f′(Vj) is at most β0 for each 1 ≤ j ≤ m0, each set Vj is a feasible set in this second instance. Therefore, we can assign each Vj to a separate machine to process all jobs, and consequently, m0 machines suffice to do all the tasks in the second instance. We note that with our parameter setting, m0 is much smaller than N/β0. We then conclude that the optimum solutions of these two instances differ significantly.
We next prove the claim of Theorem 1 by using a contradiction argument. Assume that there exists a polynomial time algorithm ALG for the bin packing problem with monotone submodular capacity constraints with an approximation factor better than √N/ln(N). We next prove that by using ALG, we can distinguish between the two set functions f and f′ with a polynomial number of value oracles, which contradicts the result of Svitkina and Fleischer (2011). The task of distinguishing between the two functions f and f′ can be formalized as follows. We have value oracle access to a set function g, and we know that g is either the same as f or the same as f′. The goal is to find out whether g = f or g = f′ using a polynomial number of value oracle queries. We construct a bin packing instance with N jobs, capacity constraints g, and ask the algorithm ALG to solve this instance. If ALG uses less than N/β0 machines to process all jobs, we can say that g is the same as f′ (since with capacity constraints f, there does not exist a feasible assignment of all jobs to less than N/β0 machines). On the other hand, if ALG uses at least N/β0 machines, we can say that g is equal to f. This follows from the fact that if g was equal to f′, the optimum number of machines would have been at most m0. Since ALG has an approximation factor better than √N/ln(N), the number of machines used by ALG should have been less than m0 × √N/ln(N) = (5√N/x) × √N/ln(N) = N/β0. Therefore, using at least N/β0 machines by ALG is a sufficient indicator of g being the same as f. This argument implies that an algorithm with an approximation factor better than √N/ln(N) for the bin packing problem with monotone submodular constraints yields a way of distinguishing between f and f′ with a polynomial number of value oracle queries (since ALG is a polynomial time algorithm), which contradicts the result of Svitkina and Fleischer (2011).
Appendix C: Details related to Observation 1
For ease of exposition, we first address the case with two job classes. Classes 1 and 2 have parameters (µ1 , b1 )
and (µ2 , b2 ) respectively. For example, an interesting special case is when one class of jobs is more predictable
relative to the other (i.e., µ1 = µ2 = µ, b2 = b and b1 = 0). In practice, very often, one class of jobs has low
variability (i.e., close to deterministic), whereas the other class is more volatile. For example, class 1 can
represent loyal recurring customers, whereas class 2 corresponds to new customers.
We assume that we need to decide the number of machines to purchase, as well as how many jobs of types
1 and 2 to assign to each machine. Our goal is to find the right mix of jobs of classes 1 and 2 to assign to
each machine (note that this proportion can be different for each machine). Consider a given machine i and
denote by n1 and n2 the number of jobs of classes 1 and 2 that we assign to this machine. We would like to
ensure that the chance constraint is satisfied in each machine with the given parameter α. Assuming that
V > n1 µ1 + n2 µ2 , we obtain:
[V − n1µ1 − n2µ2]² / (n1b1 + n2b2) = D(α)².    (10)
For a given α, one can find the value of n1 as a function of n2 that satisfies equation (10):

n1(n2) = (V − n2µ2)/µ1 + (1/(2µ1²))·[ b1·D(α)² − √( b1²·D(α)⁴ + 4·b1·D(α)²·(V − n2µ2)·µ1 + 4·µ1²·n2·b2·D(α)² ) ].    (11)
As we discussed, an interesting special case is when both classes of jobs have the same expectation, i.e.,
µ1 = µ2 = µ but one type of jobs is much more predictable (i.e., smaller range or variance). In the extreme
case, one can assume that class 1 jobs are deterministic (i.e., b1 = 0). In this case, equation (11) becomes:

n1(n2) = (V − n2µ)/µ − D(α)·√(n2b2)/µ.    (12)
Alternatively, by directly looking at the modified capacity constraint (5) for this special case, we obtain:
V = (n1 + n2)·µ + D(α)·√(n2b2).    (13)
Equation (13) can be interpreted as follows. Any additional job of type 1 takes µ from the capacity budget
V , whereas any additional job of type 2 is more costly. The extra cost depends on both the uncertainty of
the job (through b2 ) and the overcommitment policy (through D(α)). The higher is one of these two factors,
the larger is the capacity we should plan for jobs of type 2 (i.e., “safety buffer”). Note that the submodular
nature of constraint (5) implies that this marginal extra cost decreases with the number of jobs n2 . In other
words, when n2 becomes large, each additional job of type 2 will converge to take a capacity of µ, as the
central limit theorem applies. More generally, by taking the derivative of the above expression, any additional job of type 2 will take µ + 0.5·D(α)·√(b2)/√(n2) (where n2 here represents how many jobs of type 2 are already assigned to this machine).
In Figure 7, we plot equation (12) for a specific instance with V = 30 and different values of α. As expected,
if n2 = 0, we can schedule n1 = 50 jobs of class 1 to reach exactly the capacity V , no matter what is the
value of α. On the other hand, for α = 0.99, if n1 = 0, we can schedule n2 = 38 jobs of class 2. As the value
of n2 increases, the optimal value of n1 decreases. For a given value of α, any point (n1 , n2 ) on the curve
(or below) guarantees the feasibility of the chance constraint. The interesting insight is to characterize the
proportion of jobs of classes 1 and 2 per machine. For example, if we want to impose n1 = n2 in each machine,
what is the optimal value for a given α? In our example, when α = 0.99, we can schedule n1 = n2 = 21 jobs.
If we compare to the case without overcommitment (i.e., α = 1), we can schedule 18 jobs from each class.
Therefore, we obtain an improvement of 16.67%. More generally, if the cost (or priority) of certain jobs is
higher, we can design an optimal ratio per machine so that it still guarantees to satisfy the chance constraint.
To summarize, for the case when V = 30, µ = 0.65, A = 1, A = 0.3 and α = 0.99, one can schedule either 50
jobs of class 1 or 38 jobs of class 2 or any combination of both classes according to equation (12). In other
words, we have many different ways of bin packing jobs of classes 1 and 2 to each machine.
We next consider solving the offline problem when our goal is to schedule N1 jobs of class 1 and N2 jobs of
class 2. The numbers N1 and N2 are given as an input, and our goal is to minimize the number of machines
denoted by M ∗ , such that each machine is assigned a pair (n1 , n2 ) that satisfies equation (11). Since n1 and
n2 should be integer numbers, one can compute for a given value of α, all the feasible pairs that lie on the
curve or just below (we have at most K = min_{i=1,2} max ni such pairs). In other words, for each k = 1, 2, . . . , K, we compute a pair of coefficients denoted by (βk, γk).

Figure 7: Plot of equation (12) for different values of α. Parameters: A = 1, A = 0.3, µ = 0.65, V = 30.
The optimization problem becomes a cutting stock problem:

M* = min Σ_{k=1}^{K} zk
s.t.  Σ_{k=1}^{K} βk zk ≥ N1
      Σ_{k=1}^{K} γk zk ≥ N2
      zk ≥ 0 and integer, ∀k.    (14)
The decision variable zk represents the number of times we use the pair (βk , γk ), and M ∗ denotes the optimal
number of machines. As a result, assuming that we have only two classes of jobs (with different µs and bs),
one can solve the deterministic linear integer program in (14) and obtain a solution for problem (SMBP).
Note that the above treatment can easily be extended to more than two classes of jobs.
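One possible way to solve the integer program in (14) is with an off-the-shelf MILP solver. The sketch below assumes the open-source PuLP library is installed; the configuration pairs echo the (50, 0), (21, 21) and (0, 38) combinations discussed above, while the demands N1 and N2 are made up:

    import pulp

    def solve_cutting_stock(pairs, N1, N2):
        # pairs: feasible per-machine configurations (beta_k, gamma_k), i.e. how
        # many class-1 and class-2 jobs a single machine can hold.
        prob = pulp.LpProblem("cutting_stock", pulp.LpMinimize)
        z = [pulp.LpVariable(f"z_{k}", lowBound=0, cat="Integer")
             for k in range(len(pairs))]
        prob += pulp.lpSum(z)  # objective: total number of machines
        prob += pulp.lpSum(b * zk for (b, _), zk in zip(pairs, z)) >= N1
        prob += pulp.lpSum(g * zk for (_, g), zk in zip(pairs, z)) >= N2
        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        return int(pulp.value(prob.objective)), [int(v.value()) for v in z]

    print(solve_cutting_stock([(50, 0), (21, 21), (0, 38)], N1=200, N2=150))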
Appendix D: Proof of Theorem 4
Proof.
Let n1 be the number of machines purchased by First-Fit with only a single job, and S1 be the
set of n1 jobs assigned to these machines. Similarly, we define n2 to be the number of machines with at least
two jobs, and S2 be the set of their jobs. The goal is to prove that n1 + n2 ≤ (9/4)OPT + 1. We know that
any pair of jobs among the n1 jobs in S1 does not fit in a single machine (by the definition of First-Fit).
Therefore, any feasible allocation (including the optimal allocation) needs at least n1 machines. In other
words, we have OP T ≥ n1 . This observation also implies that the sum of µj + bj for any pair of jobs in S1 is
greater than 3/4 (using Lemma 1). If we sum up all these inequalities for the different pairs of jobs in S1, we have: (n1 − 1) Σ_{j∈S1}(µj + bj) > (n1 choose 2) × 3/4. We note that the n1 − 1 term on the left side appears because every job j ∈ S1 is paired with n1 − 1 other jobs in S1, and the (n1 choose 2) term on the right side represents the total number of pairs of jobs in S1. By dividing both sides of this inequality by n1 − 1, we obtain Σ_{j∈S1}(µj + bj) > 3n1/8.
We also lower bound Σ_{j∈S2}(µj + bj) as a function of n2 as follows. Let m1 < m2 < · · · < m_{n2} be the machines that have at least two jobs, and the ordering shows in which order they were purchased (e.g.,
m1 was purchased first). Define Mi to be the set of jobs in machine mi . By definition of First-Fit, any
job j in machine mi+1 could not be assigned to machine mi because of the feasibility constraints for any
1 ≤ i < n2 . In other words, the set of jobs Mi with any job j ∈ Mi+1 form an infeasible set. Therefore, we
have µj + bj + Σ_{j′∈Mi}(µj′ + bj′) > 3/4. For each 1 ≤ i < n2, one can pick two distinct jobs j1 and j2 from Mi+1,
and write the following two inequalities:
µj1 + bj1 + Σ_{j′∈Mi}(µj′ + bj′) > 3/4   and   µj2 + bj2 + Σ_{j′∈Mi}(µj′ + bj′) > 3/4.

Summing up these two inequalities implies that: µj1 + bj1 + µj2 + bj2 + 2 Σ_{j′∈Mi}(µj′ + bj′) > 3/2. Since j1 and j2 are two distinct jobs in Mi+1, we have:

Σ_{j∈Mi+1}(µj + bj) + 2 Σ_{j′∈Mi}(µj′ + bj′) > 3/2.
Now we sum up this inequality for different values of i ∈ {1, 2, · · · , n2 − 1} to achieve that:

2 Σ_{j∈M1}(µj + bj) + 3 Σ_{i=2}^{n2−1} Σ_{j∈Mi}(µj + bj) + Σ_{j∈M_{n2}}(µj + bj) > (3/2) × (n2 − 1),

and consequently, we have 3 Σ_{i=1}^{n2} Σ_{j∈Mi}(µj + bj) > (3/2) × (n2 − 1). This is equivalent to Σ_{j∈S2}(µj + bj) > (1/2) × (n2 − 1). Combining both inequalities, we obtain: Σ_{j∈S1∪S2}(µj + bj) > (3/8)n1 + (1/2)(n2 − 1). On the other hand, OPT is at least Σ_{j∈S1∪S2}(µj + bj). We then have the following two inequalities:

OPT ≥ n1,    (15)
OPT > (3/8)n1 + (1/2)(n2 − 1).    (16)

We can now multiply (15) by 1/4 and (16) by 2, and sum them up. We conclude that (9/4)OPT + 1 is greater than n1 + n2, which is the number of machines purchased by Algorithm First-Fit.
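For concreteness, here is a minimal sketch of the First-Fit policy analysed above. It assumes a per-machine feasibility check of the form Cost(S) = Σ µj + D(α)·√(Σ bj) ≤ V; the exact normalisation and the parameter values are placeholders and may differ from the paper's setting:

    import math

    def cost(jobs, D_alpha):
        # Chance-constrained load of a set of jobs, each given as a (mu, b) pair.
        return sum(m for m, _ in jobs) + D_alpha * math.sqrt(sum(b for _, b in jobs))

    def first_fit(jobs, V, D_alpha):
        # Assign each job to the first machine where it still fits;
        # otherwise purchase a new machine.
        machines = []
        for job in jobs:
            for m in machines:
                if cost(m + [job], D_alpha) <= V:
                    m.append(job)
                    break
            else:
                machines.append([job])
        return machines

    jobs = [(0.65, 0.0), (0.65, 0.3), (0.65, 0.3), (0.65, 0.0), (0.65, 0.3)]
    print(len(first_fit(jobs, V=3.0, D_alpha=2.0)))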
Appendix E: Proof of Theorem 5
We first state the following Lemma that provides a lower bound on OPT. For any a, b ≥ 0, we define the function f(a, b) = (2a + b + √(b(4a + b)))/2.

Lemma 2. For any feasible set of jobs S, the sum Σ_{j∈S} f(µj, bj) is at most 1.

Proof. Define µ̄ = Σ_{j∈S} µj / |S| and b̄ = Σ_{j∈S} bj / |S|. Since the function f is concave with respect to both a and b, using Jensen’s inequality we have Σ_{j∈S} f(µj, bj) ≤ |S| f(µ̄, b̄). Since S is a feasible set, Cost(S) = |S|µ̄ + √(|S|b̄) ≤ 1. The latter is a quadratic inequality with variable X = √|S|, so that we can derive an upper bound on |S| in terms of µ̄ and b̄. Solving the quadratic form µ̄X² + √b̄ X = 1 yields:

X = (−√b̄ ± √(b̄ + 4µ̄)) / (2µ̄).

We then have √|S| ≤ (√(4µ̄ + b̄) − √b̄) / (2µ̄) = 2 / (√(4µ̄ + b̄) + √b̄). Therefore, |S| ≤ 4 / (4µ̄ + b̄ + b̄ + 2√(b̄(4µ̄ + b̄))) = 1 / f(µ̄, b̄), where the equality follows by the definition of the function f. Equivalently, |S| f(µ̄, b̄) ≤ 1, and hence Σ_{j∈S} f(µj, bj) ≤ 1.
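As a small numeric illustration of how Lemma 2 is used, the function f and the resulting lower bound on the number of machines can be computed as follows (the job parameters are placeholders):

    import math

    def f(a, b):
        # f(a, b) = (2a + b + sqrt(b * (4a + b))) / 2, as defined above.
        return (2 * a + b + math.sqrt(b * (4 * a + b))) / 2.0

    def opt_lower_bound(jobs):
        # By Lemma 2, any feasible allocation needs at least
        # ceil(sum_j f(mu_j, b_j)) machines.
        return math.ceil(sum(f(m, b) for m, b in jobs))

    jobs = [(0.2, 0.1), (0.3, 0.05), (0.25, 0.2), (0.1, 0.4)]
    print(opt_lower_bound(jobs))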
Proof of Theorem 5. For any arbitrary allocation of jobs, applying Lemma 2 to all the machines implies that the number of machines is at least Σ_{j=1}^{N} f(µj, bj). So it suffices to upper bound the number of machines as a function of Σ_{j=1}^{N} f(µj, bj). For any given machine, we prove that the sum of f(µj, bj) for all jobs j assigned to this machine is at least 1 − O(ε + δ). Consider the set of jobs S assigned to a given machine. Similar to the proof of Lemma 2, we define µ̄ = Σ_{j∈S} µj / |S| and b̄ = Σ_{j∈S} bj / |S|. Define r = Σ_{j∈S} bj / Σ_{j∈S} µj = b̄/µ̄. We start by lower bounding f(µj, bj) as a function of f(µj, rµj). Recall that each machine is δ-homogeneous, i.e., for all pairs of jobs in the same machine, the ratios bj/µj are at most a multiplicative factor of 1 + δ away from each other. Hence, any ratio bj/µj is at least r/(1 + δ). Consequently, we have bj ≥ rµj/(1 + δ), which implies that:

f(µj, bj) ≥ f(µj, rµj/(1 + δ)) ≥ f(µj/(1 + δ), rµj/(1 + δ)) = (1/(1 + δ)) f(µj, rµj) ≥ (1 − δ) f(µj, rµj).
The first and second inequalities follow from the monotonicity of the function f, the equality follows from the definition of f, and the last inequality holds since δ ≥ 0. It now suffices to lower bound Σ_{j∈S} f(µj, rµj) so as to obtain a lower bound for Σ_{j∈S} f(µj, bj). We note that for any η ≥ 0, we have f(ηµ, ηb) = ηf(µ, b) by the definition of f. Applying η = µj, µ = 1 and b = r, we obtain f(µj, rµj) = µj f(1, r), which implies:

Σ_{j∈S} f(µj, rµj) = Σ_{j∈S} µj f(1, r) = f(A, B),

where A = Σ_{j∈S} µj, and B = r Σ_{j∈S} µj = Σ_{j∈S} bj. The last equality above follows from f(ηµ, ηb) = ηf(µ, b) with η = A. Recall that each machine is ε-full, i.e., Cost(S) ≥ 1 − ε or, equivalently, A + √B ≥ 1 − ε. Let A′ = A/(1 − ε)² and B′ = B/(1 − ε)². Then, we have:

A′ + √B′ = A/(1 − ε)² + √B/(1 − ε) ≥ A/(1 − ε) + √B/(1 − ε) ≥ 1.
Since B = rA, we also have B′ = rA′, and the lower bound can be rewritten as follows: A′ + √(rA′) ≥ 1, which is the same quadratic form as in the proof of Lemma 2. Similarly, we prove that A′ ≥ 4/(4 + 2r + 2√(r(4 + r))), which is equal to 1/f(1, r). We also know that f(A′, B′) = f(A′, rA′) = A′ f(1, r). We already proved that A′ ≥ 1/f(1, r), so we have f(A′, B′) ≥ 1. By definition of A′ and B′, we know that f(A, B) = (1 − ε)² f(A′, B′) ≥ (1 − ε)². We conclude that the sum Σ_{j∈S} f(µj, bj) ≥ (1 − δ) f(A, B) ≥ (1 − δ)(1 − ε)². As a result, for each machine the sum of f(µj, bj) is at least (1 − δ)(1 − ε)². Let m be the number of purchased machines. Therefore, the sum Σ_{j=1}^{N} f(µj, bj) is lower bounded by (1 − δ)(1 − ε)² m, and at the same time upper bounded by OPT. Consequently, m does not exceed OPT/((1 − δ)(1 − ε)²), and this concludes the proof.

Appendix F: Proof of Theorem 6
Proof.
We first prove that the algorithm terminates in a finite number of iterations. Note that all the
update operations (except the first one) reduce the number of purchased machines, and hence, there are
no more than N of those. As a result, it suffices to upper bound the number of times we perform the first
update operation. Since we assign jobs to lower id machines, there cannot be more than N² consecutive first update operations. Consequently, after at most N × N² = N³ operations, the algorithm has to terminate.
Next, we derive an upper bound on the number of purchased machines at the end of the algorithm. Note
that all the machines belong to one of the following four categories:
• Single job machines, i.e., the set A1 .
• Medium machines with only one non-good job – denoted by the set B.
• Medium machines with at least two non-good jobs – denoted by the set C.
• Large machines, i.e., the set A5 .
Let a, b, c, and d be the number of machines in A1 , B, C, and A5 respectively. Since no update operation is
possible (as the algorithm already terminated), the b non-good jobs assigned to the machines in the set B
do not fit in any of the single job machines, and no pair of them fit together in a new machine. Consider
these b non-good jobs in addition to the a jobs in the machines of the set A1 . No pair of these a + b jobs fit
in one machine together and therefore, OP T ≥ a + b.
We also know that OPT ≥ Σ_{j=1}^{N}(µj + bj). Next, we derive a more elaborate lower bound on OPT by writing the sum of µj + bj as a linear combination of the sizes of the sets A1, B, C, and A5. For each machine i, let Mi be the set of jobs assigned to this machine. Let i1 < i2 < · · · < id be the indices of machines in the set A5, where d = |A5|. Since we cannot perform the first update operation anymore, we can say that no job in machine i_{ℓ+1} fits in machine i_ℓ for any 1 ≤ ℓ < d. Therefore, µj + bj + Σ_{j′∈M_{iℓ}}(µj′ + bj′) > 3/4 for any j ∈ M_{iℓ+1} (using Lemma 1). We write this inequality for 5 different jobs (arbitrarily chosen) in M_{iℓ+1} (recall that there are at least 5 jobs in this machine), and for all the values of 1 ≤ ℓ < d. If we sum up all these 5(d − 1) inequalities, then the right hand side would be 5(d − 1) × 3/4. On the other hand, the term µj + bj for every job in these d machines appears on the left hand side at most 5 + 1 = 6 times. Therefore, by summing up these inequalities, we obtain: Σ_{i∈A5} Σ_{j∈Mi}(µj + bj) > (3/4) × 5(d − 1)/6 = 5(d − 1)/8.
Each machine in A5 has at least 5 jobs. Therefore, the term 5/6 appears in the lower bound. With a similar argument, each machine in either B or C has at least 2 jobs and hence, this term is now replaced by 2/3. The inequalities for every pair of machines in B and C are then: Σ_{i∈B} Σ_{j∈Mi}(µj + bj) > (3/4) × 2(b − 1)/3 = (b − 1)/2, and Σ_{i∈C} Σ_{j∈Mi}(µj + bj) > (3/4) × 2(c − 1)/3 = (c − 1)/2.
Next, we lower bound Σ_{i∈A1∪C} Σ_{j∈Mi}(µj + bj) as a function of a and c in order to complete the proof. Recall that each machine in C has at least two non-good jobs. If we pick one of these non-good jobs j, and a random machine i from A1, with probability at least (a − 5)/a, job j does not fit in machine i. This follows from the fact that a non-good job fits in at most 5 machines in A1 and hence, a random machine in A1 would not be able to fit job j with probability at least (a − 5)/a. Therefore, µj + bj + Σ_{j′∈Mi}(µj′ + bj′) ≥ 3/4 with probability at least (a − 5)/a. We next consider two different cases depending on the value of c.
If c is at least a/2, we pick a random machine in C, and two of its non-good jobs j1 and j2 arbitrarily. We also pick a random machine in A1. For each of these two jobs, the sum µj + bj of the non-good job and the single job in the selected machine in A1 is greater than 3/4 with probability at least (a − 5)/a. Summing up these
two inequalities, we obtain:
(2/a) [ Σ_{i∈A1} Σ_{j∈Mi}(µj + bj) ] + (1/c) [ Σ_{i∈C} Σ_{j∈Mi}(µj + bj) ] > ((a − 5)/a) × (3/2).    (17)
The left hand side of the above equation is composed of two terms. The first term is obtained through picking a random machine in A1 (i.e., with probability 1/a), and once the machine is picked, we sum up both equations so we obtain 2/a. For the second term, every machine in the set C is chosen with probability 1/c. When the machine is picked, we sum up on all the jobs and hence get an upper bound. As we have shown, Σ_{i∈C} Σ_{j∈Mi}(µj + bj) ≥ (c − 1)/2. Combining these two inequalities leads to (using c ≥ a/2):
Σ_{i∈A1∪C} Σ_{j∈Mi}(µj + bj) > (a/2) × 3(a − 5)/(2a) + (1 − a/(2c)) × (c − 1)/2 ≥ 3a/4 − 15/4 + c/2 − a/4 − 1/2 = (a + c)/2 − 4.25.

By combining the three different bounds (on A1∪C, B and A5), we obtain Σ_{j=1}^{N}(µj + bj) ≥ (a + b + c)/2 + 5d/8 − 5.875. Since OPT ≥ Σ_{j=1}^{N}(µj + bj), we conclude that the number of purchased machines a + b + c + d is no more than 2OPT + 11.
In the other case, we have c < a/2. Note that inequality (17) still holds. However, since the coefficient 1 − a/(2c) becomes negative, we cannot combine the two inequalities as before. Instead, we lower bound the sum µj + bj of jobs in A1. We know that there is no pair of a jobs in the A1 machines that fit together in one machine. Therefore, Σ_{i∈A1} Σ_{j∈Mi}(µj + bj) ≥ 3a/8. Next, we multiply inequality (17) by c, and combine it with this new lower bound on Σ_{i∈A1} Σ_{j∈Mi}(µj + bj), to obtain (using c < a/2):
Σ_{i∈A1∪C} Σ_{j∈Mi}(µj + bj) > c × 3(a − 5)/(2a) + (1 − 2c/a) × 3a/8 > 3c/2 − 15/4 + 3a/8 − 3c/4 = 3a/8 + 3c/4 − 15/4.
Combining this inequality with similar ones on the sets B and A5, we obtain OPT ≥ Σ_{j=1}^{N}(µj + bj) > 3a/8 + b/2 + 3c/4 + 5d/8 − 19/8. Finally, combining this with OPT ≥ a + b leads to a + b + c + d ≤ (8/5)OPT + (2/5)OPT + 19/5 = 2OPT + 3.75, which concludes the proof.
| 8 |
The Complex Event Recognition Group
Elias Alevizos
Alexander Artikis
Nikos Katzouris
Georgios Paliouras
Evangelos Michelioudakis
arXiv:1802.04086v1 [] 12 Feb 2018
Institute of Informatics & Telecommunications,
National Centre for Scientific Research (NCSR) Demokritos, Athens, Greece
{alevizos.elias, a.artikis, nkatz, vagmcs, paliourg}@iit.demokritos.gr
ABSTRACT
The Complex Event Recognition (CER) group is a research team, affiliated with the National Centre of Scientific Research “Demokritos” in Greece. The CER
group works towards advanced and efficient methods
for the recognition of complex events in a multitude of
large, heterogeneous and interdependent data streams.
Its research covers multiple aspects of complex event
recognition, from efficient detection of patterns on event
streams to handling uncertainty and noise in streams,
and machine learning techniques for inferring interesting patterns. Lately, it has expanded to methods for forecasting the occurrence of events. It was founded in 2009
and currently hosts 3 senior researchers and 5 PhD students, and works regularly with undergraduate students.
1. INTRODUCTION
The proliferation of devices that work in real time, constantly producing data streams, has led to
a paradigm shift with respect to what is expected
from a system working with massive amounts of
data. The dominant model for processing large-scale data was one that assumed a relatively fixed
database/knowledge base, i.e., it assumed that the
operations of updating existing records/facts and
inserting new ones were infrequent. The user of such
a system would then pose queries to the database,
without very strict requirements in terms of latency.
While this model is far from being rendered obsolete (on the contrary), a system aiming to extract actionable knowledge from continuously evolving streams of data has to address a new set of challenges and satisfy a new set of requirements. The
basic idea behind such a system is that it is not
always possible, or even desirable, to store every
bit of the incoming data, so that it can be later
processed. Rather, the goal is to make sense out of
these streams of data, without having to store them.
This is done by defining a set of queries/patterns,
continuously applied to the data streams. Each
such pattern includes a set of temporal constraints
and, possibly, a set of spatial constraints, expressing
a composite or complex event of special significance
for a given application. The system must then be
efficient enough so that instances of pattern satisfaction can be reported to a user with minimal latency.
Such systems are called Complex Event Recognition
(CER) systems [6, 7, 2].
CER systems are widely adopted in contemporary applications. Such applications are the recognition of attacks in computer network nodes, human activities on video content, emerging stories
and trends on the Social Web, traffic and transport
incidents in smart cities, fraud in electronic marketplaces, cardiac arrhythmias and epidemic spread.
Moreover, Big Data frameworks, such as Apache
Storm, Spark Streaming and Flink, have been extending their stream processing functionality by including implementations for CER.
There are multiple issues that arise for a CER system. As already mentioned, one issue is the requirement for minimal latency. Therefore, a CER system has to employ highly efficient reasoning mechanisms, scalable to high-velocity streams. Moreover,
pre-processing steps, like data cleaning, have to be
equally efficient, otherwise they constitute a “luxury” that a CER system cannot afford. In this case,
the system must be able to handle noise. This may
be a requirement, even if perfectly clean input data
is assumed, since domain knowledge is often insufficient or incomplete. Hence, the patterns defined
by the users may themselves carry a certain degree
of uncertainty. Moreover, it is quite often the case
that such patterns cannot be provided at all, even
by domain experts. This poses a further challenge of
how to apply machine learning techniques in order
to extract patterns from streams before a CER system can actually run with them. Standard machine
learning techniques are not always directly applicable, due to the size and variability of the training
set. As a result, machine learning techniques must
work in an online fashion. Finally, one often needs to move beyond detecting instances of pattern satisfaction into forecasting when a pattern is likely to be satisfied in the future.
Our CER group1 at the National Centre for Scientific Research (NCSR) Demokritos, in Athens, Greece, has been conducting research on CER for the past decade, and has developed a number of novel algorithms and publicly available software tools. In what follows, we sketch the approaches that we have proposed and present some indicative results.
1 http://cer.iit.demokritos.gr/

2. COMPLEX EVENT RECOGNITION
Numerous CER systems have been proposed in the literature [6, 7]. Recognition systems with a logic-based representation of complex event (CE) patterns, in particular, have been attracting attention since they exhibit a formal, declarative semantics [2]. We have been developing an efficient dialect of the Event Calculus, called ‘Event Calculus for Run-Time reasoning’ (RTEC) [4]. The Event Calculus is a logic programming formalism for representing and reasoning about events and their effects [14]. CE patterns in RTEC identify the conditions in which a CE is initiated and terminated. Then, according to the law of inertia, a CE holds at a time-point T if it has been initiated at some time-point earlier than T, and has not been terminated in the meantime.
RTEC has been optimised for CER, in order to be scalable to high-velocity data streams. A form of caching stores the results of sub-computations in the computer memory to avoid unnecessary recomputations. A set of interval manipulation constructs simplify CE patterns and improve reasoning efficiency. A simple indexing mechanism makes RTEC robust to events that are irrelevant to the patterns we want to match and so RTEC can operate without data filtering modules. Finally, a ‘windowing’ mechanism supports real-time CER. One main motivation for RTEC is that it should remain efficient and scalable in applications where events arrive with a (variable) delay from, or are revised by, the underlying sensors: RTEC can update the intervals of the already recognised CEs, and recognise new CEs, when data arrive with a delay or following revision.
RTEC has been analysed theoretically, through a complexity analysis, and assessed experimentally in several application domains, including city transport and traffic management [5], activity recognition on video feeds [4], and maritime monitoring [18]. In all of these applications, RTEC has proven capable of performing real-time CER, scaling to large data streams and highly complex event patterns.

Figure 1: CE probability estimation in the Event Calculus. The solid line concerns a probabilistic Event Calculus, such as MLN-EC, while the dashed line corresponds to a crisp (non-probabilistic) version of the Event Calculus. Due to the law of inertia, the CE probability remains constant in the absence of input data. Each time the initiation conditions are satisfied (e.g., in time-points 3 and 10), the CE probability increases. Conversely, when the termination conditions are satisfied (e.g., in time-point 20), the CE probability decreases.
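To make the law of inertia described in Section 2 concrete, the following minimal Python sketch (an illustration only, not the RTEC implementation) derives the time-points at which a CE holds from given initiation and termination time-points, mirroring the crisp behaviour shown in Figure 1:

    def holds_at(initiations, terminations, t):
        # Crisp law of inertia: a CE holds at time-point t if it was initiated
        # at some time-point earlier than t and has not been terminated since.
        init = max((i for i in initiations if i < t), default=None)
        if init is None:
            return False
        return not any(init <= e < t for e in terminations)

    # Example mirroring Figure 1: initiations at 3 and 10, termination at 20.
    initiations, terminations = [3, 10], [20]
    print([t for t in range(25) if holds_at(initiations, terminations, t)])
    # The CE holds at time-points 4 up to and including 20.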
3. UNCERTAINTY HANDLING
CER applications exhibit various types of uncertainty, ranging from incomplete and erroneous data
streams to imperfect CE patterns [2]. We have been
developing techniques for handling uncertainty in
CER by extending the Event Calculus with probabilistic reasoning. Prob-EC [21] is a logic programming implementation of the Event Calculus using
the ProbLog engine [13], that incorporates probabilistic semantics into logic programming. Prob-EC
is the first Event Calculus dialect able to deal with
uncertainty in the input data streams. For example, Prob-EC is more resilient to spurious data than
the standard (crisp) Event Calculus.
MLN-EC [22] is an Event Calculus implementation based on Markov Logic Networks (MLNs) [20],
a framework that combines first-order logic with
graphical models, in order to enable probabilistic
inference and learning. CE patterns may be associated with weight values, indicating our confidence
in them. Inference can then be performed regarding the time intervals during which CEs of interest hold. Like Prob-EC, MLN-EC increases the
probability of a CE every time its initiating conditions are satisfied, and decreases this probability
whenever its terminating conditions are satisfied,
as shown in Figure 1. Moreover, in MLN-EC the
domain-independent Event Calculus rules, expressing the law of inertia, may be associated with weight
values, introducing probabilistic inertia. This way,
the model is highly customisable, by tuning appropriately the weight values with the use of machine
learning techniques, and thus achieves high predictive accuracy in a wide range of applications.

Figure 2: CER under uncertainty. F1-score of MLN-EC and linear-chain CRFs for different CE acceptance thresholds.
The use of background knowledge about the task
and the domain, in terms of logic (the Event Calculus), can make MLN-EC more robust to variations in the data. Such variations are very common in practice, particularly in dynamic environments, such as the ones encountered in CER. The
common assumption made in machine learning that
the training and test data share the same statistical properties is often violated in these situations.
Figure 2, for example, compares the performance of
MLN-EC against linear-chain Conditional Random
Fields on a benchmark activity recognition dataset,
where evidence is incomplete in the test set as compared to the training set.
4. EVENT PATTERN LEARNING
The manual authoring of CE patterns is a tedious and error-prone process. Consequently, the
automated construction of such patterns from data
is highly desirable. We have been developing supervised, online learning tools for constructing logical representations of CE patterns, from a
single-pass over a relational data stream. OSLα [16]
is such a learner for Markov Logic Networks (MLNs),
formulating CE patterns in the form of MLN-EC
theories. OSLα extends OSL [9] by exploiting
background knowledge in order to significantly constrain the search for patterns.
In each step t of the online procedure, a set of
training examples Dt arrives containing input data
along with CE annotation. Dt is used together
with the already learnt hypothesis, if any, to predict the truth values of the CEs of interest. This
is achieved by MAP (maximum a posteriori) inference. Given Dt , OSLα constructs a hypergraph that
represents the space of possible structures as graph
paths. Then, for all incorrectly predicted CEs, the
hypergraph is searched using relational pathfinding, for clauses supporting the recognition of these
CEs. The paths discovered during the search are
generalised into first-order clauses. Subsequently,
the weights of the clauses that pass the evaluation
stage are optimised using off-the-shelf online weight
learners. Then, the weighted clauses are appended
to the hypothesis and the procedure is repeated for
the next set of training examples Dt+1 .
OLED [11] is an Inductive Logic Programming
(ILP) system that learns CE patterns, in the form of
Event Calculus theories, in a supervised fashion and
in a single pass over a data stream. OLED constructs
patterns by first encoding a positive example from
the input stream into a so-called bottom rule, i.e.,
a most-specific rule of the form α ← δ1 ∧ . . . ∧ δn ,
where α is an initiation or termination atom and
δ1, . . . , δn are relational features, constructed on-the-fly, using prescriptions from a predefined language bias. These features express anything “interesting”, as defined by the language bias, that is true
within the positive example at hand. A bottom rule
is typically too restrictive to be useful. To learn a
useful rule, OLED searches within the space of rules
that θ-subsume the bottom rule, i.e., rules that involve some of the δi ’s only. To that end, OLED starts
from the most-general rule—a rule with an empty
body—and gradually specialises that rule by adding
δi ’s to its body, using a rule evaluation function to
assess the quality of each generated specialisation.
OLED’s single-pass strategy is based on the Hoeffding bound [8], a statistical tool that allows us to approximate the quality of a rule on the entire input using only a subset of the data. In particular, given a rule r and some of its specialisations r1, . . . , rk, the Hoeffding bound allows us to identify the best among them, with probability 1 − δ and within an error margin ε, using only N = O((1/ε²) ln(1/δ)) training examples from the input stream.
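As a rough illustration of this idea (not OLED's actual code), the Hoeffding margin can be used to decide whether the observed lead of the best-scoring specialisation over the runner-up is already statistically reliable; rule scores are assumed to lie in [0, 1]:

    import math

    def hoeffding_epsilon(n, delta, value_range=1.0):
        # Error margin after n observations, with confidence 1 - delta.
        return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

    def best_specialisation(scores, n, delta=0.05):
        # scores: mean evaluation score per candidate specialisation, each
        # estimated from the same n examples. Return the best candidate only
        # if its lead over the runner-up exceeds the Hoeffding margin.
        ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
        (best, s1), (_, s2) = ranked[0], ranked[1]
        return best if (s1 - s2) > hoeffding_epsilon(n, delta) else None

    print(best_specialisation({"r1": 0.61, "r2": 0.74, "r3": 0.58}, n=200))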
We have evaluated OLED and OSLα on real
datasets concerning activity recognition, maritime
monitoring, credit card fraud detection, and traffic management in smart cities [11, 16, 3, 15, 12].
We have also compared OLED and OSLα to OSL [9],
XHAIL, a ‘batch’ structure learner requiring many
passes over the data [19], and to hand-curated Event
Calculus patterns (with optimised weight values).
The results suggest that both OLED and OSLα can
match the predictive accuracy of batch learners as
well as that of hand-crafted patterns. Moreover,
OLED and OSLα have proven significantly faster than
both batch and online learners, making them more
appropriate for large data streams.
5. EVENT FORECASTING
Forecasting over time-evolving data streams is a
task that can be defined in multiple ways. There
is a conceptual difference between forecasting and
prediction, as the latter term is understood in machine learning, where the main goal is to “predict”
the output of a function on previously unseen input data, even if there is no temporal dimension.
In forecasting, time is a crucial component and the
goal is to predict the temporally future output of
some function or the occurrence of an event. Time-series forecasting is an example of the former case
and is a field with a significant history of contributions. However, its methods cannot be directly
transferred to CER, since it handles streams of
(mostly) real-valued variables and focuses on forecasting relatively simple patterns. On the contrary,
in CER we are also interested in categorical values, related through complex patterns and involving multiple variables. Our group has developed
a method, where automata and Markov chains are
employed in order to provide (future) time intervals
during which a match is expected with a probability
above a confidence threshold [1].
We start with a given pattern of interest, defining relations between events, in the form of a regular
expression—i.e., using operators for sequence, disjunction and iteration. Our goal, besides detecting
occurrences of this pattern, is also to estimate, at
each new event arrival, the number of future events
that we will need to wait for until the expression is
satisfied, and thus a match be detected. A pattern
in the form of a regular expression is first converted
to a deterministic finite automaton (DFA) through
standard conversion algorithms. We then construct
a Markov chain that will be able to provide a probabilistic description of the DFA’s run-time behavior,
by employing Pattern Markov Chains (PMC) [17].
The resulting PMC depends both on the initial pattern and on the assumptions made about the statistical properties of the input stream—the order m
of the assumed Markov process.
After constructing a PMC, we can use it to calculate the so-called waiting-time distributions, which
can give us the probability of reaching a final state
of the DFA in k transitions from now. To estimate
the final forecasts, another step is required, since
our aim is not to provide a single future point with
the highest probability, but an interval in the form
of I=(start, end ). The meaning of such an interval is that the DFA is expected to reach a final
state sometime in the future between start and end
with probability at least some constant threshold
θfc (provided by the user). An example is shown in Figure 3, where the DFA in Figure 3a is in state 1, the waiting-time distributions for all of its non-final states are shown in Figure 3b, and the distribution, along with the forecast interval, for state 1 are shown in green.

Figure 3: Event Forecasting. (a) Deterministic Finite Automaton, state 1. (b) Waiting-time distribution, state 1 (forecast interval: 3,8). The event pattern requires that one event of type a is followed by three events of type b. θfc = 0.5. For illustration, the x axis stops at 12 future events.
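To make the computation of waiting-time distributions and forecast intervals concrete, the sketch below uses a toy transition matrix over PMC states (the matrix values are invented for illustration and are not taken from the paper) and returns the smallest interval whose probability mass reaches θfc:

    import numpy as np

    def waiting_time_distribution(P, start, final_states, k_max):
        # w[k-1] = probability of first reaching a final state in exactly k
        # transitions, starting from `start`. Final states are made absorbing.
        Q = np.array(P, dtype=float)
        for s in final_states:
            Q[s, :] = 0.0
            Q[s, s] = 1.0
        dist = np.zeros(len(P))
        dist[start] = 1.0
        w, absorbed = [], 0.0
        for _ in range(k_max):
            dist = dist @ Q
            mass = dist[list(final_states)].sum()
            w.append(mass - absorbed)
            absorbed = mass
        return w

    def forecast_interval(w, theta_fc):
        # Smallest interval (start, end), in numbers of future events, whose
        # total probability is at least theta_fc.
        best = None
        for s in range(len(w)):
            total = 0.0
            for e in range(s, len(w)):
                total += w[e]
                if total >= theta_fc:
                    if best is None or (e - s) < (best[1] - best[0]):
                        best = (s + 1, e + 1)
                    break
        return best

    P = [[0.6, 0.4, 0.0],   # toy 3-state PMC; state 2 is final
         [0.3, 0.2, 0.5],
         [0.0, 0.0, 1.0]]
    w = waiting_time_distribution(P, start=0, final_states={2}, k_max=12)
    print(forecast_interval(w, theta_fc=0.5))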
Figure 4 shows results of our implementation on
two real-world datasets from the financial and the
maritime domains. In the former case, the goal
was to forecast a specific case of credit card fraud,
whereas in the latter it was to forecast a specific
vessel manoeuver. Figures 4a and 4d show precision
results (the percentage of forecasts that were accurate), where the y axes correspond to different values of the threshold θfc , and the x axes correspond
to states of the PMC (more “advanced” states are
to the right of the axis), i.e., we measure precision
for the forecasts produced by each individual state.
Similarly, Figures 4b and 4e are per-state plots for
spread (the length of the forecast interval), and Figures 4c and 4f are per-state plots for distance (the
temporal distance between the time a forecast is
produced and the start of the forecast interval).
As expected, more “advanced” states produce forecasts with higher precision, smaller spread and distance. However, there are cases where we can get
earlier both high precision and low spread scores
(see Figures 4d and 4e). This may happen when
there exist strong probabilistic dependencies in the
stream, e.g., when one event type is very likely (or
very unlikely) to appear, given that the last event(s) is of a different event type. Our system can take advantage of such cases in order to produce high-quality forecasts early.

Figure 4: Event forecasting for credit card fraud management (top) and maritime monitoring (bottom). Panels (a)/(d) show precision, (b)/(e) spread, and (c)/(f) distance. The y axes correspond to different values of the threshold θfc. The x axes correspond to states of the PMC.
6. PARTICIPATION IN RESEARCH & INNOVATION PROJECTS
The CER group has been participating in several research and innovation projects, contributing
to the development of intelligent systems in challenging domains. SPEEDD2 (Scalable Proactive
Event-Driven Decision Making) was an FP7 EU-funded project, coordinated by the CER group, that
developed tools for proactive analytics in Big Data
applications. In SPEEDD, the CER group worked
on credit card fraud detection and traffic management [3, 15], developing formal tools for highly scalable CER [4], and pattern learning [10, 16].
REVEAL3 (REVEALing hidden concepts in social media) was an FP7 EU project that developed
techniques for real-time extraction of knowledge from
social media, including influence and reputation assessment. In REVEAL, the CER group developed a
technique for online (single-pass) learning of event
patterns under uncertainty [11].
2 http://speedd-project.eu/
3 http://revealproject.eu/
4 http://www.datacron-project.eu/
datACRON4 (Big Data Analytics for Time Critical Mobility Forecasting) is an H2020 EU project
that introduces novel methods for detecting threats
and abnormal activity in very large fleets of moving entities, such as vessels and aircraft. Similarly, AMINESS5 (Analysis of Marine Information
for Environmentally Safe Shipping) was a national
project that developed a computational framework
for environmental safety and cost reduction in the
maritime domain. The CER group has been working on maritime and aviation surveillance, developing algorithms for, among others, highly efficient
spatio-temporal pattern matching [18], complex
event forecasting [1], and parallel online learning
of complex event patterns [12].
Track & Know (Big Data for Mobility & Tracking Knowledge Extraction in Urban Areas) is an
H2020 EU-funded project that will research, develop and exploit a new software framework increasing the efficiency of Big Data applications in the
transport, mobility, motor insurance and health sectors. The CER team is responsible for the complex
event recognition and forecasting technology that
will be developed in Track & Know.
7. CONTRIBUTIONS TO THE COMMUNITY
5 http://aminess.eu/
The CER group supports the research community at different levels; notably, by making available the proposed research methods as open-source
solutions. The RTEC CER engine (see Section 2)
is available as a monolithic Prolog implementation6
and as a parallel Scala implementation7 . The OLED
system for online learning of event patterns (see Section 4) is also available as an open-source solution8 ,
both for single-core and parallel learning. OLED is
implemented in Scala; both OLED and RTEC use
the Akka actors library for parallel processing.
The OSLα online learner (see Section 4), along
with MAP inference based on integer linear programming, and various weight optimisation algorithms (Max-Margin, CDA and AdaGrad), are contributed to LoMRF9 , an open-source implementation of Markov Logic Networks. LoMRF provides
predicate completion, clausal form transformation,
and function elimination. Moreover, it provides a
parallel grounding algorithm which efficiently constructs the minimal Markov Random Field.
8. REFERENCES
[1] E. Alevizos, A. Artikis, and G. Paliouras.
Event forecasting with pattern markov chains.
In DEBS, pages 146–157. ACM, 2017.
[2] E. Alevizos, A. Skarlatidis, A. Artikis, and
G. Paliouras. Probabilistic complex event
recognition: A survey. ACM Comput. Surv.,
50(5):71:1–71:31, 2017.
[3] A. Artikis, N. Katzouris, I. Correia, C. Baber,
N. Morar, I. Skarbovsky, F. Fournier, and
G. Paliouras. A prototype for credit card
fraud management: Industry paper. In DEBS,
pages 249–260, 2017.
[4] A. Artikis, M. J. Sergot, and G. Paliouras. An
event calculus for event recognition. IEEE
TKDE, 27(4):895–908, 2015.
[5] A. Artikis, M. Weidlich, F. Schnitzler,
I. Boutsis, T. Liebig, N. Piatkowski,
C. Bockermann, K. Morik, V. Kalogeraki,
J. Marecek, A. Gal, S. Mannor, D. Gunopulos,
and D. Kinane. Heterogeneous stream
processing and crowdsourcing for urban traffic
management. In EDBT, pages 712–723, 2014.
[6] G. Cugola and A. Margara. Processing flows
of information: From data stream to complex
event processing. ACM Comput. Surv.,
44(3):15:1–15:62, 2012.
[7] N. Giatrakos, A. Artikis, A. Deligiannakis, and M. N. Garofalakis. Complex event recognition in the big data era. PVLDB, 10(12):1996–1999, 2017.
6 https://github.com/aartikis/RTEC
7 https://github.com/kontopoulos/ScaRTEC
8 https://github.com/nkatzz/OLED
9 https://github.com/anskarl/LoMRF
[8] W. Hoeffding. Probability inequalities for
sums of bounded random variables. JASA,
58(301):13–30, 1963.
[9] T. N. Huynh and R. J. Mooney. Online
Structure Learning for Markov Logic
Networks. In ECML, pages 81–96, 2011.
[10] N. Katzouris, A. Artikis, and G. Paliouras.
Incremental learning of event definitions with
inductive logic programming. Machine
Learning, 100(2-3):555–585, 2015.
[11] N. Katzouris, A. Artikis, and G. Paliouras.
Online learning of event definitions. TPLP,
16(5-6):817–833, 2016.
[12] N. Katzouris, A. Artikis, and G. Paliouras.
Parallel online learning of complex event
definitions. In ILP. Springer, 2017.
[13] A. Kimmig, B. Demoen, L. D. Raedt,
V. Costa, and R. Rocha. On the
implementation of the probabilistic logic
programming language ProbLog. TPLP,
11(2-3):235–262, 2011.
[14] R. A. Kowalski and M. J. Sergot. A
logic-based calculus of events. New
Generation Comput., 4(1):67–95, 1986.
[15] E. Michelioudakis, A. Artikis, and
G. Paliouras. Online structure learning for
traffic management. In ILP. Springer, 2016.
[16] E. Michelioudakis, A. Skarlatidis,
G. Paliouras, and A. Artikis. Online structure
learning using background knowledge
axiomatization. In ECML, 2016.
[17] G. Nuel. Pattern Markov Chains: Optimal
Markov Chain Embedding through
Deterministic Finite Automata. Journal of
Applied Probability, 2008.
[18] K. Patroumpas, E. Alevizos, A. Artikis,
M. Vodas, N. Pelekis, and Y. Theodoridis.
Online event recognition from moving vessel
trajectories. GeoInformatica, 21(2), 2017.
[19] O. Ray. Nonmonotonic abductive inductive
learning. JAL, 7(3):329–340, 2009.
[20] M. Richardson and P. M. Domingos. Markov
logic networks. Machine Learning,
62(1-2):107–136, 2006.
[21] A. Skarlatidis, A. Artikis, J. Filipou, and
G. Paliouras. A probabilistic logic
programming event calculus. TPLP,
15(2):213–245, 2015.
[22] A. Skarlatidis, G. Paliouras, A. Artikis, and
G. A. Vouros. Probabilistic Event Calculus for
Event Recognition. ACM Transactions on
Computational Logic, 16(2):11:1–11:37, 2015.
| 2 |
Proceedings of the 2007 Winter Simulation Conference
S. G. Henderson, B. Biller, M.-H. Hsieh, J. Shortle, J. D. Tew, and R. R. Barton, eds.
USING INTELLIGENT AGENTS TO UNDERSTAND
MANAGEMENT PRACTICES AND RETAIL PRODUCTIVITY
Peer-Olaf Siebers
Uwe Aickelin
Helen Celia
Chris W. Clegg
School of Computer Science, Automated Scheduling,
Optimisation and Planning Research Group (ASAP)
University of Nottingham
Nottingham, NG8 1BB, UK
Centre for Organisational Strategy, Learning & Change,
Leeds University Business School
University of Leeds
Leeds, LS2 9JT, UK
ABSTRACT
Intelligent agents offer a new and exciting way of understanding the world of work. In this paper we apply agent-based modeling and simulation to investigate a set of
problems in a retail context. Specifically, we are working
to understand the relationship between human resource
management practices and retail productivity. Despite the
fact we are working within a relatively novel and complex
domain, it is clear that intelligent agents could offer potential for fostering sustainable organizational capabilities
in the future. The project is still at an early stage. So far
we have conducted a case study in a UK department store
to collect data and capture impressions about operations
and actors within departments. Furthermore, based on our
case study we have built and tested our first version of a
retail branch simulator which we will present in this paper.
1 INTRODUCTION
The retail sector has been identified as one of the biggest
contributors to the productivity gap that persists between
the UK, Europe and the USA (Reynolds et al. 2005). It is
well documented that measures of UK retail productivity
rank lower than those of countries with comparably developed economies. Intuitively, it is inevitable that management practices are inextricably linked to a company’s
productivity and performance. However, many researchers have struggled to provide clear empirical evidence using more traditional research methods (for a review, see
Wall and Wood 2005).
Significant research has been done to investigate the
productivity gap and the common focus has been to quantify its size and determine the contributing factors. Best
practice guidelines have been developed and published,
but there remains considerable inconsistency and uncer-
tainty regarding how these are implemented and manifested in the retail work place. Siebers et al. (submitted)
have conducted a comprehensive literature review of this
pertinent research area linking management practices to
firm-level productivity. Practices are dichotomized according to their focus, whether operationally-focused or
people-focused. The authors conclude that management
practices are multidimensional constructs that generally
do not demonstrate a straightforward relationship with
productivity variables. Empirical evidence affirms that
management practices must be context specific to be effective, and in turn productivity indices must also reflect a
particular organization’s activities.
Currently there is no reliable and valid way to delineate the effects of management practices from other socially embedded factors. Most Operational Research (OR)
methods can be applied as analytical tools once management practices have been implemented, however they are
not very useful at revealing system-level effects of the introduction of specific management practices. This holds
particularly when the focal interest is the development of
the system over time, like in the real world. This contrasts
with more traditional techniques, which allow us to identify the state of the system at a certain point in time.
The overall aim of our project is to understand and
predict the impact of different management practices on
retail store productivity. To achieve this aim we have
adopted a case study approach and integrated applied research methods to collect both qualitative and quantitative
data. In summary, we have conducted four weeks of informal participant observations, forty staff interviews
supplemented by a short questionnaire on the effectiveness of various management practices, and drawn upon a
variety of established informational sources internal to the
case study organization. Using this data, we are applying
Agent-Based Modeling and Simulation (ABMS) to try to
Siebers, Aickelin, Celia, & Clegg
devise a functional representation of the case study departments.
In this paper we will focus on the simulation side of
the project. In Section 2 we summarize the literature review we have conducted to find a suitable research tool
for our study. Section 3 describes the conceptualization,
design and implementation of our retail branch simulator.
In Section 4 we describe two experiments that we have
conducted as a first step to validate our retail branch
simulator. Section 5 concludes the paper and unveils our
future ideas.
2 WHY AGENT-BASED SIMULATION?
OR is applied to problems concerning the conduct and coordination of the operations within an organization (Hillier and Lieberman 2005). An OR study usually involves
the development of a scientific model that attempts to abstract the essence of the real problem. When investigating
the behavior of complex systems the choice of an appropriate modeling technique is very important. In order to
be able to make a choice for our project, we reviewed the
relevant literature spanning the fields of Economics, Social Science, Psychology, Retail, Marketing, OR, Artificial Intelligence, and Computer Science. Within these
fields a wide variety of approaches are used which can be
classified into three main categories: analytical approaches, heuristic approaches, and simulation. In many
cases we found that combinations of these were used
within a single model. Common combinations were
‘simulation / analytical’ for comparing efficiency of different non-existing scenarios, (e.g. Greasley 2005), and
‘simulation / analytical’ or ‘simulation / heuristic’ where
analytical or heuristic models were used to represent the
behavior of the entities within the simulation model (e.g.
Schwaiger and Stahmer 2003).
In our review we put a particular emphasis on those
publications that try to model the link between management practices and productivity in the retail sector. We
found a very limited number of papers that investigate
management practices in retail at firm level. The majority
of these papers focus on marketing practices (e.g. Keh et
al. 2006). By far the most frequently used modeling technique we found being used was agent-based modeling
employing simulation as the method of execution. It
seems to be the natural way of system representation for
these purposes.
Simulation introduces the possibility of a new way of
thinking about social and economic processes, based on
ideas about the emergence of complex behavior from relatively simple activities (Simon 1996). While analytical
models typically aim to explain correlations between
variables measured at one single point in time, simulation
models are concerned with the development of a system
over time. Furthermore, analytical models usually work
on a much higher level of abstraction than simulation
models. For simulation models it is critical to define the
right level of abstraction. Csik (2003) states that on the
one hand the number of free parameters should be kept on
a level as low as possible. On the other hand, too much
abstraction and simplification might threaten the homomorphism between reality and the scope of the simulation
model. There are several different approaches to simulation, amongst them discrete event simulation, system dynamics, micro simulation and agent-based simulation. The
choice of the most suitable approach always depends on
the issues investigated, the input data available, the level
of analysis and the type of answers to be sought.
Although computer simulation has been used widely
since the 1960s, ABMS only became popular in the early
1990s (Epstein and Axtell 1996). ABMS can be used to
study how micro-level processes affect macro-level outcomes. A complex system is represented by a collection
of individual agents that are programmed to follow simple
behavioral rules. Agents can interact with each other and
with their environment to produce complex collective behavioral patterns. Macro behavior is not explicitly simulated; it emerges from the micro-decisions made by the
individual agents (Pourdehnad et al. 2002). The main
characteristics of agents are their autonomy, their ability
to take flexible action in reaction to their environment and
their pro-activeness depending on motivations generated
from their internal states. They are designed to mimic the
attributes and behaviors of their real-world counterparts.
The simulation output may be potentially used for explanatory, exploratory and predictive purposes (Twomey
and Cadman 2002). This approach offers a new opportunity to realistically and validly model organizational characters and their interactions, to allow a meaningful investigation of human resource management practices. ABMS
is still a relatively new simulation technology and its
principle application has been in academic research. With
the appearance of more sophisticated modeling tools in
the broader market, things are starting to change (Luck et
al. 2005). Also, an ever increasing number of computer
games use the ABMS approach.
A detailed description of ABMS and thoughts on the
appropriate contexts for ABMS versus conventional modeling techniques can be found in WSC introductory tutorial on ABMS (Macal and North 2006). Therefore we provide only a brief summary of some of our thoughts.
Due to the characteristics of the agents, this modeling
approach appears to be more suitable than Discrete Event
Simulation (DES) for modeling human-oriented systems
(Siebers 2006). ABMS seems to promote a natural form
of modeling, as active entities in the live environment are
interpreted as actors in the model. There is a structural
correspondence between the real system and the model
representation, which makes them more intuitive and easier to understand than for example a system of differential
Siebers, Aickelin, Celia, & Clegg
equations as used in System Dynamics. Hood (1998) emphasized that one of the key strengths of ABMS is that the
system as a whole is not constrained to exhibit any particular behavior as the system properties emerge from its
constituent agent interactions. Consequently assumptions
of linearity, equilibrium and so on, are not needed. With
regard to disadvantages there is a general consensus in the
literature that it is difficult to evaluate agent-based models, because the behavior of the system emerges from the
interactions between the individual entities. Furthermore,
problems often occur through the lack of adequate empirical data. Finally, there is always the danger that people
new to ABMS may expect too much from the models,
particularly in regard to predictive ability.
3 MODEL DESIGN AND IMPLEMENTATION
The strategy for our project is iterative, creating a relatively simple model and then building in more and more
complexity. To begin with we have been trying to understand the particular problem domain, to generate the underlying rules currently in place. We are now in the process of building an agent based simulation model of the
real system using the information gathered during our
case study and will then validate our model by simulating
the operation of the real system. This approach will allow
us to assess the accuracy of the system representation. If
the simulation provides a sufficiently good representation
we are able to move to the next stage, and generate new
scenarios for how the system could work using new rules.
3.1 Modelling Concepts
Our case study approach and analysis has played a crucial
role allowing us to acquire a conceptual idea of how the
real system is structured. This is an important stage of the
project, revealing insights into the operation of the system
as well as the behavior of and interactions between the
different characters in the system. We have designed the
system by applying a DES approach to conceptualize and
model the system, and then an agent approach to conceptualize and model the actors within the system. This
method made it easier to design the model, and is possible
because only the actors’ action requires an agent based
approach.
In terms of performance indicators, these are identical to those of a DES model. Beyond this, ABMS can offer further insights. A simulation model can detect unintended consequences, which have been referred to as
‘emergent behavior’ (Gilbert and Troitzsch 2005). Such
unintended consequences can be difficult to understand
because they are not defined in the same way as the system inputs; however it is critical to fully understand all
system output to be able to accurately draw comparisons
between the relative efficiencies of competing systems.
Our conceptual ideas for the simulator are shown in
Figure 1. Within our simulation model we have three different types of agents (customers, sales staff, and managers) each of them having a different set of relevant parameters. We will use probabilities and frequency
distributions to assign slightly different values to each individual agent. In this way a population is created that reflects the variations in attitudes and behaviors of their real
human counterparts. In terms of other inputs, we need
global parameters which can influence any aspect of the
system, and may for example define the number of agents
in the system. With regards to the outputs we always hope
to find some unforeseeable, emergent behavior on a macro level. Having a visual representation of the simulated system and its actors will allow us to monitor and better understand the interactions of entities within the system. Coupled with the standard DES performance measures, we hope to identify bottlenecks and help to optimize the modeled system.

Figure 1: Conceptual model for our simulator. Inputs: customer agents (shopping need, attitudes, demographics etc.), sales staff agents (attitudes, length of service, competencies, training etc.), manager agents (leadership quality, length of service, competencies, training etc.) and global parameters (number of customers, sales staff, managers etc.). Outputs of the visual dynamic stochastic simulation model: emergent behaviour on a macro level, understanding about interactions of entities within the system, identification of bottlenecks, and performance measures (staff utilisation, average response time, customer satisfaction etc.), together with an interface for user interaction during runtime.
For the conceptual design of our agents we have decided to use state charts. State charts show the different
states an entity can be in and also define the events that
cause a transition from one state to another. This is exactly the information we need in order to represent our
agents later within the simulation environment. Furthermore, this form of graphical representation is also helpful
for validating the agent design as it is easier for nonspecialists to understand.
The art of modelling pivots on simplification and abstraction (Shannon 1975). A model is always a restricted
copy of the real world, and we have to identify the most
important components of a system to build effective models. In our case, instead of looking for components we
have to identify the most important behaviours of an actor
and the triggers that initiate a move from one state to another. We have developed state charts for all the relevant
actors in our retail branch model. Figure 2 shows as an
example the state charts for a customer agent. The transition rules have been replaced by numbers to keep the
chart comprehensible. They are explained in detail in the
Section 3.2.
A question that can be asked is whether our agents
are intelligent or not? Wooldridge (2002) states that in order to be intelligent agents need to have the following attributes: being reactive, being proactive and being social.
This is a widely accepted view. Being reactive means responding to changes in the environment (in a timely manner), while being proactive means persistently pursuing
goals and being social means interacting with other agents
(Padgham and Winikoff, 2004). Our agents pursue a goal in that they want to either buy something or return something. For buying they have a sub-goal: they are trying to buy the right thing. If they are not sure, they will ask for help. Our agents are not only reactive but also flexible, i.e. they are capable of recovering from a failed action. They have built-in alternatives for when they are unable to achieve their goal, e.g. if they want to pay and
things are not moving forward in the queue they always
have the chance to leave a queue and continue with another action. They are responding in a flexible way to certain changes in their environment, in this case the length
of the queue. Finally, as there is communication between
agents and staff, they can also be regarded as being social.
3.2 Empirical Data
Often agents are based on analytical models or heuristics
and in the absence of adequate empirical data theoretical
models are employed. However, for our agents we use
frequency distributions for state change delays and probability distributions for decision making processes as statistical distributions are the best format to represent the
data we have gathered during our case study due to their
numerical nature. The case study was conducted in the
Audio and Television (A&TV) and the WomensWear
(WW) departments of a leading UK department store. As
mentioned earlier we have conducted informal participant
observations, staff interviews, and drawn upon a variety
of established informational sources internal to the case
study organization.
Our frequency distributions are modeled as triangular
distributions supplying the time that an event lasts, using
the minimum, mode, and maximum duration. Our triangular distributions are based on our own observation and
expert estimates in the absence of numerical data. We
have collected this information from the two branches and
calculated an average value for each department type,
creating one set of data for A&TV and one set for WW.
Table 1 lists some sample frequency distributions that we
have used for modeling the A&TV department (the values
presented here are slightly amended to comply with confidentiality restrictions).
Figure 2: Conceptual model for customer agent (customer state chart with the states Enter, Contemplating (passive/active dummy state), Browsing, Seeking help, Queuing for help, Being helped, Queuing at till, Being served (at till), Using Aftersales, Complaining, and Leave).
The distributions are used as exit
rules for most of the states. All remaining exit rules are
based on queue development, i.e. the availability of staff.
situation                                   min   mode   max
leave browse state after …                    1      7    15
leave help state after …                      3     15    30
leave pay queue (no patience) after …         5     12    20
Table 1: Sample frequency distribution values
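For illustration, such a triangular distribution can be sampled directly from its minimum, mode and maximum. The sketch below is our own example rather than the AnyLogic™ implementation; it simply reuses the A&TV values from Table 1 in whatever time unit the simulator uses.

    import random

    # (min, mode, max) durations from Table 1 for the A&TV department.
    DURATIONS = {
        "browse": (1, 7, 15),
        "help": (3, 15, 30),
        "pay_queue_patience": (5, 12, 20),
    }

    def sample_duration(state):
        lo, mode, hi = DURATIONS[state]
        # random.triangular(low, high, mode) draws from a triangular distribution.
        return random.triangular(lo, hi, mode)

    print(sample_duration("browse"))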
The probability distributions are partly based on
company data (e.g. conversion rates, i.e. the percentage of
customers who buy something) and partly on informed
guesses (e.g. patience of customers before they would
leave a queue). As before, we have calculated average
values for each department type. Some examples for
probability distributions we used to model the A&TV department can be found in Table 2. The distributions make
up most of the transition rules at the branches where decisions are made about which action to pursue (e.g. the decision
to seek help). The remaining decisions are based on the
state of the environment (e.g. leaving the queue, if the
queue does not get shorter quickly enough).
event                                          probability it occurs
someone makes a purchase after browsing        0.37
someone requires help                          0.38
someone makes a purchase after getting help    0.56
Table 2: Sample probabilities
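These probabilities translate into transition decisions via simple Bernoulli draws, as in the following sketch (again our own illustration using the Table 2 values, not the actual simulator code).

    import random

    # Event probabilities from Table 2 (A&TV department).
    P_BUY_AFTER_BROWSE = 0.37
    P_REQUIRES_HELP = 0.38
    P_BUY_AFTER_HELP = 0.56

    def decide(probability):
        # One Bernoulli trial: True with the given probability.
        return random.random() < probability

    # Example transition logic for a browsing customer.
    if decide(P_REQUIRES_HELP):
        buys = decide(P_BUY_AFTER_HELP)    # customer first seeks help
    else:
        buys = decide(P_BUY_AFTER_BROWSE)  # customer decides directly after browsing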
Company data is available about work team numbers
and work team composition, varying opening hours and
peak times (to be implemented in future). Also financial
data (e.g. transaction numbers and values) are available
but have not been used at this stage.
3.3 Implementation
Our simulation has been implemented in AnyLogic™
which is a Java™ based multi-paradigm simulation software (XJ Technologies 2007). During the implementation
we have used the knowledge, experience and data gained
through our case study work. The simulator can represent
the following actors: customers, service staff (including
cashiers, selling staff of two different training levels) and
managers. Figure 3 shows a screenshot of the current customer and staff agent logic as it has been implemented in
AnyLogic™. Boxes show customer states, arrows possible transitions and numbers satisfaction weights.
Figure 3: Customer (left) and staff (right) agent logic implementation in AnyLogic™
Currently there are two different types of customer
goals implemented: making a purchase or obtaining a refund. If a refund is granted, the customer’s goal may then
change to making a new purchase, or alternatively they
will leave the shop straight away. The customer agent
template consists of three main blocks which all use a
very similar logic. These blocks are ‘Help’, ‘Pay’ and
‘Refund’. In each block, in the first instance, customers
will try to obtain service directly and if they cannot obtain
it (no suitable staff member available) they will have to
queue. They will then either be served as soon as the right
staff member becomes available or they will leave the
queue if they do not want to wait any longer (an autonomous decision). A complex queuing system has been implemented
to support different queuing rules. In comparison to the
customer agent template, the staff agent template is relatively simple. Whenever a customer requests a service
and the staff member is available and has the right level
of expertise for the task requested, the staff member
commences this activity until the customer releases the
staff member. While the customer is the active component
of the simulation model the staff member is currently passive, simply reacting to requests from the customer. In future we plan to add a more pro-active role for the staff
members, e.g. offering services to browsing customers.
A service level index is introduced as a new performance measure. The index allows customer service satisfaction to be recorded throughout the simulated lifetime. The
idea is that certain situations might have a bigger impact
on customer satisfaction than others, and therefore differential weightings are assigned to events to account for
this. For example, in our model if a customer starts to
wait for a refund and leaves without one, then their satisfaction index decreases by 4 (see Figure 3). We measure
customer satisfaction in two different ways derived from
these weightings; both in terms of how many customers
leave the store with a positive service level index value,
and the sum of all customers’ service level index values.
Applied in conjunction with an ABMS approach, we expect to observe interactions with individual customer differences, variations which have been empirically linked to
differences in customer satisfaction. This helps the analyst
to find out to what extent customers underwent a positive
or negative shopping experience. It also allows the analyst
to put emphasis on different operational aspects and try
out the impact of different strategies.
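As a rough sketch of how the two satisfaction measures can be derived from per-customer indices (our own illustration; apart from the refund weight of -4 mentioned above, the event names and weights are hypothetical):

    # Hypothetical event weights; only the refund example (-4) is taken from the text.
    WEIGHTS = {"left_without_refund": -4, "served_quickly": 1, "left_long_queue": -2}

    def service_level_index(events):
        # Sum the weights of all events a single customer experienced.
        return sum(WEIGHTS[e] for e in events)

    def satisfaction_measures(customers):
        indices = [service_level_index(events) for events in customers]
        n_positive = sum(1 for i in indices if i > 0)  # customers leaving with a positive index
        overall = sum(indices)                         # overall satisfaction index
        return n_positive, overall

    print(satisfaction_measures([["served_quickly"], ["left_without_refund"]]))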
The simulator can be initialized from an Excel™
spreadsheet and supports the simulation of the two types
of departments we looked at during our case study. These
differ with respect to their staffing, service provision and
customer requirements, which we hope will be reflected
in the simulation results. WW customers will ask for help
when they know what they want whereas A&TV custom-
ers will ask for help when they do not know what they
want. WW makes a lot more unassisted sales than A&TV
and service times are very different; in WW the average
service time is a lot shorter than in A&TV. This service
requirement has a differential impact on the profile of
employee skills at the department level.
4 A FIRST VALIDATION OF OUR SIMULATOR
To test the operation of our simulator and ascertain face
validity we have designed and run 2 sets of experiments
for both departments. Our case study work has helped us
to identify the distinguishing characteristics of the departments, for example different customer arrival rates
and different service times. In these experiments we will
examine the impact of these individual characteristics on
the volume of sales transactions and customer satisfaction
indices. All experiments hold the overall number of staffing resources constant at 10 staff and we run the simulation for a period of 10 weeks. We have conducted 20
repetitions for every experimental condition enabling the
application of rigorous statistical techniques.
Each set of results is analyzed for each dependent
variable using a two-way between-groups analysis of
variance (ANOVA). Despite our prior knowledge of how
the real system operates, we were unable to hypothesize
precise differences in variable relationships, instead predicting general patterns of relationships. Indeed, ABMS is
a decision-support tool and is only able to inform us about
directional changes between variables (actual figures are
notional). Where significant ANOVA results were found,
post-hoc tests were applied where possible to investigate
further the precise impact on outcome variables under different experimental conditions.
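The paper does not state which statistics package was used for these tests; as one possible realization, a two-way between-groups ANOVA with Tukey post-hoc comparisons on the simulation output could be run as follows (the file and column names are hypothetical):

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    # Hypothetical results file with one row per simulation run.
    df = pd.read_csv("experiment1_results.csv")

    # Two-way between-groups ANOVA: department x number of cashiers.
    model = ols("transactions ~ C(department) * C(cashiers)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))

    # Tukey's post hoc comparisons for the staffing factor.
    print(pairwise_tukeyhsd(df["transactions"], df["cashiers"]))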
During our time in the case study organization, we
observed that over time the number of cashiers available
to serve customers would fluctuate. In the first experiments we vary the staffing arrangement (i.e. the number
of cashiers) and examine the impact on the volume of
sales transactions and two levels of customer satisfaction;
both customer satisfaction (how many customers leave
the store with a positive service level index value) and
overall satisfaction (the sum of all customers’ service
level index values). In reality, we saw that allocating extra
cashiers would reduce the shop floor sales team numbers,
and therefore the total number of customer-facing staff in
each department is kept constant at 10. We therefore predict that for each of our dependent measures: number of
sales transactions (1), customer satisfaction index (2) and
overall satisfaction index (3):
• Ha: An increase in the number of cashiers will be
linked to increases in 1, 2 and 3 to a peak level, beyond which 1, 2 and 3 will decrease.
• Hb: The peak level of 1, 2 and 3 will occur with a
smaller number of cashiers in A&TV than in WW.
Department  Cashiers   Number of Transactions     Customer Satisfaction      Overall Satisfaction
                       mean        std. dev.      mean        std. dev.      mean        std. dev.
A&TV        1          4853.50     26.38          12324.05    77.64          9366.40     563.88
            2          9822.20     57.89          14762.45    81.04          19985.20    538.30
            3          14279.90    96.34          17429.70    103.77         28994.80    552.60
            4          14630.60    86.19          17185.00    99.09          32573.60    702.64
            5          13771.85    97.06          16023.20    82.66          27916.05    574.56
WW          1          8133.75     22.16          18508.20    88.68          17327.95    556.03
            2          15810.10    56.16          22640.40    92.00          42339.10    736.61
            3          25439.60    113.66         28833.10    115.65         58601.10    629.68
            4          30300.70    249.30         32124.60    230.13         74233.30    570.79
            5          28894.25    195.75         30475.20    176.41         76838.65    744.31
Total       1          6493.63     1661.19        15416.13    3132.55        13347.18    4069.20
            2          12816.15    3032.61        18701.43    3990.07        31162.15    11337.24
            3          19859.75    5651.89        23131.40    5775.35        43797.95    15003.13
            4          22465.65    7937.00        24654.80    7566.98        53403.45    21104.67
            5          21333.05    7659.04        23249.20    7319.32        52377.35    24781.61
Table 3: Descriptives for first experiments (all to 2 d.p.)
An ANOVA was run for each dependent variable,
and all revealed statistically significant differences (see
Table 3 for descriptive statistics). For 1 and 2, Levene’s
test for equality of variances was violated (p<.05) so a
more stringent significance level was set (p<.01).
For 1 there were significant main effects for both department [F(1, 190) = 356441.1, p<.001] and staffing
[F(4, 190) = 124919.5, p<.001], plus a significant interaction effect [F(4, 190) = 20496.37, p<.001]. Tukey’s post
hoc tests for the impact of staffing revealed significant
differences for every single comparison (p<.001).
There is clear support for H1a. We expected this to
happen because the number of normal staff available to
provide customer advice will eventually reduce to the extent where there will be a detrimental impact on the number of customers making a purchase. Some customers will
become impatient waiting increasingly long for service,
and will leave the department without making a purchase.
H1b is not supported; the data present an interesting contrast, in that 1 plateaus in A&TV around 3 and 4 cashiers,
whereas WW benefits greatly from the introduction of a
fourth cashier. Nonetheless this finding supports the
thinking underlying this hypothesis, in that we expected
the longer average service times in A&TV to put a greater
‘squeeze’ on customer advice with even a relatively small
increase in the number of cashiers.
For 2, there were significant main effects for both
department [F(1, 190) = 391333.7, p<.001], and staffing
[F(4, 190) = 38633.83, p<.001], plus a significant interaction effect [F(4, 190) = 9840.07, p<.001]. Post hoc tests
for staffing revealed significant differences for every single comparison (p<.001).
The results support both H2a and H2b. We interpret
these findings in terms of A&TV’s greater service requirement, combined with the reduced availability of advisory sales staff. These factors result in a peak in purchasing customers’ satisfaction with a smaller number of
cashiers (4) than in WW (5).
For 3, there were significant main effects for both
department [F(1, 190) = 117214.4, p<.001], and staffing
[F(4, 190) = 29205.09, p<.001], plus a significant interaction effect [F(4, 190) = 6715.93, p<.001]. Tukey’s post
hoc comparisons indicated significant differences between all staffing levels (p<.001).
Our results support H3a for A&TV, showing a clear
peak in overall satisfaction. H3a is only partially supported for WW, in that no decline in 3 is evident with up
to 5 cashiers, although increasing this figure may well expose a peak because the overall satisfaction appears to be
starting to plateau out. The results offer firm support in
favor of H3b.
The second experiment investigates employee empowerment. During our case study we observed the implementation of a new refund policy. This new policy allows cashiers to independently decide whether or not to
make a refund up to the value of £50, rather than referring
the authorization decision to a section manager. To simulate this practice, we vary the probability that cashiers are
empowered to make refund decisions autonomously. We
assess its impact in terms of two performance measures:
overall customer refund satisfaction and cashier utilization (a proportion of maximum capacity). The staffing arrangement is held constant, consisting of 3 cashiers, 5
normal staff members, 1 expert staff member, and 1 section manager. We hypothesize that:
• H4. Higher levels of empowerment will be linked
to higher refund satisfaction.
• H5. Higher levels of empowerment will be linked
to greater cashier utilization.
An ANOVA was run for each outcome measure (see Table 4 for descriptives). For refund satisfaction, there were
significant main effects for both department [F(1, 190) =
508.73, p<.001], and empowerment [F(4, 190) = 120.46,
p<.001], plus a significant interaction effect [F(4, 190) =
29.81, p<.001]. Tukey’s post hoc tests for the impact of
empowerment revealed significant differences between all
comparisons (p<.001), except for .00 with .75, and .25
with .50, where there were no significant differences.
Department  Empowerment   Overall refund            Cashier utilization
                          mean       std. dev.      mean       std. dev.
A&TV        0.00          3130.70    242.58         0.6286     0.00
            0.25          3880.70    225.70         0.6392     0.00
            0.50          3876.50    181.47         0.6488     0.00
            0.75          3716.80    225.31         0.6571     0.00
            1.00          2991.60    245.15         0.6623     0.00
WW          0.00          3090.30    222.00         0.6756     0.00
            0.25          3116.00    266.70         0.6737     0.00
            0.50          3041.20    211.75         0.6736     0.00
            0.75          2716.80    217.79         0.6722     0.00
            1.00          2085.20    168.19         0.6720     0.00
Total       0.00          3110.50    230.43         0.6521     0.02
            0.25          3498.35    457.61         0.6565     0.02
            0.50          3458.85    465.61         0.6612     0.01
            0.75          3216.80    551.59         0.6646     0.01
            1.00          2538.40    503.70         0.6672     0.01
Table 4: Descriptives for the second experiment (all to 2 d.p., except cashier utilization to 4 d.p.)
The data provides support for H4 between .00 and .25
levels of empowerment. However, as empowerment increases further, the results do not support our hypothesis,
and demonstrate a counterintuitive progressive decline of
refund satisfaction beyond the .25 level. Both departments
display the curvilinear relationship between these two
variables; refund satisfaction peaks at a middling level of
empowerment (.50 for A&TV, .25 for WW). These results suggest that some constraining factors are occurring
at the higher levels of empowerment. This may be linked
to the empowered employees adhering to a stricter refund
policy (resulting in less customer satisfaction), or the empowered employees taking longer to process the transaction.
For cashier utilization, there were significant main effects for both department [F(1, 190) = 2913.45, p<.001],
and empowerment [F(4, 190) = 126.37, p<.001], plus a
significant interaction effect [F(4, 190) = 190.64, p<.001].
Tukey’s post hoc tests for the impact of empowerment on cashier utilization
confirmed significant differences between all comparisons (p=.01 between .75 and 1.00, p<.001 for all others).
The results support H5 for A&TV, but not for WW.
In WW, empowerment is significantly inversely related to
till utilization. Case study observations indicated that A&TV cashiers, like A&TV sales staff, are motivated to work more efficiently when they have a higher level of empowerment. Empirical evidence indicated that WW cashiers may not be under the same time pressures to work more quickly; however, this data goes one step further and suggests an inverse relationship.
5 CONCLUSIONS AND FUTURE DIRECTIONS
In this paper we present the conceptual design, implementation and operation of a retail branch simulator used to
understand the impact of management practices on retail
productivity. As far as we are aware this is the first time
researchers have tried to use agent-based approaches to
simulate management practices such as training and empowerment. Although our simulator uses specific case
studies as source of information, we believe that the general model could be adapted to other retail companies and
areas of management practices that have a lot of human
interaction.
From what we can conclude from our current analyses, some findings are as hypothesized whereas others are
more mixed. Further experimentation is required to enable
rigorous statistical evaluation of the outcome data and
identification of statistically significant differences.
Currently we are developing our agents with the intention of enhancing their intelligence and heterogeneity.
For this purpose we are introducing evolution and stereotypes. The most interesting system outcomes evolve over
time and many of the goals of the retail company (e.g.
service standards) are also planned long term. We are introducing an evolution of entities over time, including
product knowledge for staff. Moreover, the customer
population pool will be fixed to monitor customer agents
over time. This allows us to consider shopping experience
based on long-term satisfaction scores, with the overall
effect being a certain ‘reputation’ for the shop. Another
interesting aspect we are currently implementing is the
introduction of stereotypes. Our case study organization
has identified its particular customer stereotypes through
market research. It will be interesting to find out how
populations of certain customer types influence sales.
Overall, we believe that researchers should become
more involved in this multi-disciplinary kind of work to
gain new insights into the behavior of organizations. In
our view, the main benefit from adopting this approach is
the improved understanding of and debate about a problem domain. The very nature of the methods involved
forces researchers to be explicit about the rules underlying
behavior and to think in new ways about them. As a result, we have brought work psychology and agent-based
modeling closer together to form a new and exciting research area.
REFERENCES
Csik, B. 2003. Simulation of competitive market situations using intelligent agents. Periodica Polytechnica
Ser. Soc. Man. Sci. 11:83-93.
Epstein, J. M., and R. Axtell. 1996. Growing artificial societies: Social Science from the bottom up. Cambridge, MA: MIT Press.
Gilbert, N., and K. G. Troitzsch. 2005. Simulation for the
social scientist. 2nd ed. Milton Keynes, UK: Open
University Press.
Greasley, A. 2005. Using DEA and simulation in guiding
operating units to improved performance. Journal of
the Operational Research Society. 56:727-731.
Hillier, F. S., and G. J. Lieberman. 2005. Introduction to
Operations Research. 8th ed. Boston, MA: McGraw-Hill Higher Education.
Hood, L. 1998. Agent-based modeling. In Conference
Proceedings: Greenhouse Beyond Kyoto, Issues, Opportunities and Challenges. Canberra, Australia.
Keh, H. T., S. Chu, and J. Xu. 2006. Efficiency, effectiveness and productivity of marketing in services.
European Journal of Operational Research. 170:265-276.
Luck, M., P. McBurney, O. Shehory, and S. Willmott.
2005. Agent technology: computing as interaction (a
roadmap for agent based computing). Liverpool, UK:
AgentLink.
Macal, C. M., and M. J. North. 2006. Tutorial on agent-based modeling and simulation part 2: how to model
with agents. In Proceedings of the 37th Winter Simulation Conference, eds. L. F. Perrone, B. G. Lawson,
J. Liu, and F. P. Wieland. Monterey, CA.
Padgham, L., and M. Winikoff. 2004. Developing intelligent agent systems - a practical guide. New York,
NY: Wiley.
Pourdehnad, J., K. Maani, H. Sedehi. 2002. System dynamics and intelligent agent-based simulation: where
is the synergy? In Proceedings of the 20th International Conference of the System Dynamics Society.
Palermo, Italy.
Reynolds, J., E. Howard, D. Dragun, B. Rosewell, and P.
Ormerod. 2005. Assessing the productivity of the UK
retail sector. International Review of Retail, Distribution and Consumer Research. 15:237-280.
Schwaiger, A. and B. Stahmer. 2003. SimMarket: multiagent based customer simulation and decision support
for category management. In Lecture Notes in Artificial Intelligence (LNAI) 2831, eds. M. Schillo, M.
Klusch, J. Muller, and H. Tianfield. 74-84. Berlin:
Springer.
Shannon, R. E. 1975. Systems simulation: the art and science. Englewood Cliffs, NJ: Prentice-Hall.
Siebers, P.-O. 2006. Worker performance modeling in
manufacturing systems simulation: proposal for an
agent-based approach. In Handbook of Research on
Nature Inspired Computing for Economics and Management, ed. J. P. Rennard. Hershey, PA: Idea Group
Publishing
Siebers, P.-O., U. Aickelin, G. Battisti, H. Celia, C. W.
Clegg, X. Fu, R. De Hoyos, A. Iona, A. Petrescu, and
A. Peixoto. Forthcoming. The role of management
practices in closing the productivity gap. Submitted to
International Journal of Management Reviews.
Simon, H. A. 1996. The sciences of the artificial. 3rd ed.
Cambridge, MA: MIT Press.
Twomey, P., and R. Cadman. 2002. Agent-based modeling of customer behavior in the telecoms and media
markets. Info - The Journal of Policy, Regulation and
Strategy for Telecommunications. 4:56-63.
Wall, T. D., and S. J. Wood. 2005. Romance of human
resource management and business performance, and
the case for big science. Human Relations. 58:429-462.
Wooldridge, M. J. 2002. An introduction to multiagent
systems. New York, NY: Wiley.
XJ Technologies. XJ Technologies - simulation software
and services. Available via <www.xjtek.com>
[accessed April 1, 2007].
AUTHOR BIOGRAPHIES
PEER-OLAF SIEBERS is a Research Fellow in the
School of Computer Science and IT at the University of
Nottingham. His main research interest is the application
of computer simulation to study human oriented complex
adaptive systems. Complementary fields of interest include distributed artificial intelligence, biologically inspired computing, game character behavior modeling, and
agent-based robotics. His webpage can be found via
<www.cs.nott.ac.uk/~pos>.
UWE AICKELIN is a Reader and Advanced EPSRC
Research Fellow in the School of Computer Science and
IT at the University of Nottingham. His research interests
are mathematical modeling, agent-based simulation, heuristic optimization and artificial immune systems. See his
webpage for more details <www.aickelin.com>.
HELEN CELIA is a Researcher at the Centre for Organisational Strategy, Learning and Change at Leeds University Business School. She is interested in developing ways
of applying work psychology to better inform the modeling of complex systems using agents. For more information visit <www.leeds.ac.uk/lubs/coslac>
CHRIS W. CLEGG is a Professor of Organizational
Psychology at Leeds University Business School and the Deputy Director of the Centre for Organisational Strategy,
Learning and Change. His research interests include: new
technology, systems design, information and control systems, socio-technical thinking and practice; organizational
change, change management, technological change; the
use and effectiveness of modern management practices,
innovation, productivity; new ways of working, job design, work organization. An emerging research interest is
modeling and simulation. His webpage can be found via:
<www.leeds.ac.uk/lubs/coslac>
Graph partitioning and a componentwise PageRank
algorithm
arXiv:1609.09068v1 [] 28 Sep 2016
Christopher Engström,
Division of Applied Mathematics
Education, Culture and Communication (UKK), Mälardalen University,
christopher.engstrom@mdh.se
Sergei Silvestrov
Division of Applied Mathematics
Education, Culture and Communication (UKK), Mälardalen University
sergei.silvestrov@mdh.se
January 13, 2018
Abstract
In this article we will present a graph partitioning algorithm which partitions a graph
into two different types of components: the well-known ‘strongly connected components’ as
well as another type of components we call ‘connected acyclic component’. We will give an
algorithm based on Tarjan’s algorithm for finding strongly connected components used to
find such a partitioning. We will also show that the partitioning given by the algorithm is
unique and that the underlying graph can be represented as a directed acyclic graph (similar
to a pure strongly connected component partitioning).
In the second part we will show how such a partitioning of a graph can be used to
calculate PageRank of a graph effectively by calculating PageRank for different components
on the same ‘level’ in parallel as well as allowing for the use of different types of PageRank
algorithms for different types of components.
To evaluate the method we have calculated PageRank on four large example graphs, comparing a basic approach with our algorithm in both serial and parallel implementations.
Contents
1 Introduction
2 Notation and Abbreviations
3 Graph concepts
   3.1 PageRank
4 Method
   4.1 Component finding
   4.2 Intermediate step
   4.3 PageRank step
      4.3.1 CAC-PageRank algorithm
      4.3.2 SCC-PageRank algorithm
   4.4 Error estimation
5 Experiments
   5.1 Graph description and generation
      5.1.1 Barabási-Albert graph generation
   5.2 Results
6 Conclusions
7 Future work
1 Introduction
The PageRank algorithm was initially used by S. Brin and L. Page to rank homepages on
the Internet [7]. While the original algorithm itself is already very efficient, given the sheer
size and rate of growth of many real life networks there is a need for even faster methods.
Much is known of the original method using a power iteration of a modified adjacency matrix
such as how the damping factor c affects the condition number and convergence speed [11, 13].
Many ways have been proposed in order to improve the method such as aggregating
webpages that are in some way ‘close’ [12] or by excluding webpages whose rank is found to have already converged from the iteration procedure [14]. Another method to speed up
the algorithm is to remove so called dangling pages (pages with no links to any other page),
and then calculate their rank at the end separately [2, 16]. A similar method can also be
used for root vertices (pages with no links from any other pages) [19]. A good overview over
different methods for calculating PageRank can be found in [6].
The method we propose here has some similarities with the method proposed by [19], the main difference being that we work on the level of graph components rather than single vertices. Another method with a similar idea is the one proposed by [3], but we use two types of components, look at using different PageRank algorithms for different types of components, and consider how different components can be calculated in parallel.
We presented some parts of this work at ASMDA 2015 [8], while in this paper we improve the method further as well as implement it and present some results obtained using the method.
The rest of this paper is organized as follows: In Sec. 3 we define some graph concepts
as well as define and prove a couple of properties of the graph partitioning which will be
used later in our PageRank algorithm. In Sec. 3.1 we define the variation of PageRank
that will be used as well as show how PageRank can be computed for different types of
vertices or graph components. In Sec. 4 we describe our proposed algorithm to calculate
PageRank, first by describing how to efficiently find the graph partitioning described earlier
and next how to use this to calculate PageRank. In the same section we also verify the linear
computational complexity of the algorithm as well as give some error estimates in Sec. 4.4.
At the end of our paper in Sec. 5 we describe our implementation of the method as well as
show some results using the method on a couple of different graphs.
2 Notation and Abbreviations
The following abbreviations will be used throughout the article
• SCC - strongly connected component. (see Def. 3.1)
• CAC - connected acyclic component. (see Def. 3.2)
• DAG - directed acyclic graph.
• BFS - breadth first search.
• DFS - depth first search.
Explanation of notation.
• We denote by G a graph with vertex set V and edge set E, by |V | the number of
vertices and |E| the number of edges.
• Vectors are denoted by an arrow, for example \vec{V}, \vec{u}, and matrices by bold capital letters such as A, M.
• Each vertex in a graph has an assigned level L (see Def. 3.3), L+ denotes all vertices
of level L or greater while L− denotes all vertices of level L or less. For example if A
is the adjacency matrix of some graph, then AL+ ,(L−1)− denotes the submatrix of A
corresponding to all rows representing vertices of level L or greater and all columns
representing vertices of level L − 1 or less.
• Different versions of PageRank will be defined where the type is denoted inside parenthesis, for example \vec{R}^{(1)} (see Def. 3.4 and Def. 3.5).
• Lowercase letters as subscripts denote single elements while capital letters denote a set of elements, for example w_i denoting the element of \vec{W} corresponding to vertex v_i and \vec{W}_L denoting the vector of elements corresponding to all vertices of level L.
• P (vi → vj ) denotes the probability to hit vertex vj in a random walk starting at vi
(see Def. 3.6).
3 Graph concepts
In this work we assume all graphs to be simple directed graphs. Since only the adjacency
matrix of any graph will be used, all graphs are also assumed to be unweighted. It should
be noted however that it would be fairly simple to generalize for certain types of weighted
graphs (such as those representing a Markov chain). We will start by defining two types of
graph components, one being the well-known ‘strongly connected component’ and another
we call ‘connected acyclic component’. We also define the notion of ‘level’ of a component
representing the depth of the component in the underlying graph where components are
represented by single vertices.
Definition 3.1. A strongly connected component (SCC) of a directed graph G is a subgraph
S of G such that for every pair of vertices u, v in S there is a directed path from u to v and
from v to u. In addition S is maximal in the sense that adding any other set of vertices
and/or edges from G to S would break this property.
Definition 3.2. A connected acyclic component (CAC) of a directed graph G is a subgraph
S of G such that no vertex in S is part of any non-loop cycle in G and the underlying graph
is connected. Additionally any edge in G that exists between any two vertices in S is also a
part of S. A vertex in the CAC with no edge to any other vertex in the CAC we call a leaf of
the CAC.
CACs can be seen as a connected collection of 1-vertex SCCs forming a tree. While CACs
keep the property that all internal edges between vertices in the component are preserved
from those in the original graph, it is not maximal in the sense that no more vertices could
be added to the component as is the case for SCCs. The reason for this is that we want to
be able to create a graph partitioning into components in which the underlying graph is a
directed acyclic graph (DAG) in the same way as for the ordinary partition into SCCs.
Definition 3.3. Consider a graph G with partition P into SCCs and CACs such that each
vertex is part of exactly one component and the underlying graph created by replacing every
component with a single vertex. If there is an edge between any two vertices between a pair of
components then there is an edge in the same direction between the two vertices representing
those two components as well. Consider the case where the underlying graph is a DAG (such
as for the commonly known partitioning of a graph into SCCs).
• The level LC of component C is equal to the length of the longest path in the underlying
DAG starting in C.
• The level Lvi of some vertex vi is defined as the level of the component for which vi
belongs (Lvi ≡ LC , if vi ∈ C).
We note that a SCC made up of only a single vertex is also a CAC; in our work it will
be easier to consider these components CACs rather than SCCs. We also note that while a
single vertex can only be part of a single SCC, it could be part of multiple CACs of different
size. There is however a unique graph partitioning into SCCs and CACs as seen below.
Theorem 3.1. Consider a directed graph G with a partition into SCCs. Let the underlying
graph be the DAG constructed by replacing every component with a single vertex. If there is
an edge between any two vertices between a pair of components then there is an edge in the
same direction between corresponding vertices in the underlying DAG. To each vertex in the
underlying DAG we attach a level equal to the longest existing path from this vertex to any
other vertex in the underlying graph. Next we start to merge SCCs consisting of a single
vertex into CACs under the following conditions:
• We start merging from the lowest level (vertices in the DAG with no edge to any other
vertex in the DAG) and only start merging on the next level when we cannot merge
any more components on the current level.
• All merges are done by merging a single ‘head’ 1-vertex CAC of level L containing
vertex v with all CACs of level L − 1 to which there is an edge from v, unless v has an edge to at least one SCC (of more than 1 vertex) of level L − 1, in which case no
merge is made. If a merge takes place, then the level of the new merged CAC is L − 1.
Then the following holds:
1. This gives a unique partitioning of the graph into SCCs and CACs and does not depend
on the order in which we apply merges of ‘head’ components on the same level.
2. This partition of SCCs and CACs can also be seen as a DAG where we attach a level
to each vertex equal to the longest existing path from this vertex to any other vertex in
the DAG.
Remark Note that after a merge some vertices with a level higher than the one where the
merge was made might get a lower level compared to before.
Proof. That a directed graph can be partitioned into SCCs is a well-known and easy to show
result from graph theory. Applying a level to the vertices in the DAG is nothing else than a
topological ordering of the vertices in the graph, something also well-known, hence we start
at the merging. Obviously all 1-vertex SCCs are also 1-vertex CACs since any vertex that
is part of any (non-loop) cycle must be part of a SCC of more than one vertex.
Since the head CAC of a merge is always connected with each other CAC that is part
of a merge, the subgraph representing the component is connected as well. It is also still
obviously acyclic since no vertex of any of the CACs is part of any cycle in G from the
definition of a CAC. Adding all edges between the head and all other merged CACs also
ensures that there is no missing edge between any two vertices of the new CAC. This holds
since there can be no edges between any two components on the same level. Hence we can
conclude that merging CACs creates a new CAC.
Next we prove statement 1) that the given partitioning is unique and does not depend
on the order of merges. CACs are created using a bottom-up approach and it is clear that
the level of a CAC never changes after its first merge. This means that the level of a CAC is uniquely defined by the level of any leaf in the CAC. All the leaves of a CAC are those 1-vertex CACs which could not be the head of any merge, because they have either no outgoing edges or at least one outgoing edge to a SCC (of more than 1 vertex)
of the next lower level.
Assume we have done all merges with head CACs of level L. Consider a 1-vertex CAC
with vertex v and level L + 1 and edges to one or more CACs but no SCC of more than one
vertex of level L (or higher). From the previous argument we get that the level of any CAC
linked to by v will not change from any future merges. This means that eventually v will
be part of the same CAC as all vertices part of any CAC with level L to which there is an
edge from v regardless of which order merges are made on level L + 1.
Repeating this argument for all 1-vertex CACs of level L + 1 we get that for each such
vertex v the neighboring vertices part of a CAC with a lower level for which v should be
part of the same CAC is uniquely determined after all the merges on level L. Repeating
this for all levels gives us a unique partitioning. This proves statement 1).
Last we show statement 2) by showing that merging of components does not create any
cycle in the underlying DAG created by the components.
Consider a merge of head CAC with vertex v, level L and an edge to each of n CACs
C1 , C2 , . . . , Cn with level L − 1. Since all CACs C1 , C2 , . . . , Cn have the same level, the
resulting CAC after merge can only have edges to components of level at most L − 2 since
we merge it with all CACs of level L − 1 to which there is an edge from v, but do not merge
if there is an edge from v to any SCC of level L − 1 and there can be no edges between any
components of the same level from the initial SCC partitioning. Since initially there can be
no edges from any component to another of the same or larger level and we do not create
any such edges when doing a merge, we do not create any cycle in the underlying DAG.
Since we only merge CACs, and merges do not create any cycles in the DAG, we
have proved that the new partitioning can also be represented by a DAG where each vertex
corresponds to one SCC or CAC. This proves statement 2).
Corollary 3.1. If a directed graph G has a SCC partitioning with maximum level LSCC
and a SCC/CAC partitioning with maximum level LSCC/CAC , then
LSCC/CAC ≤ LSCC
Proof. Every merge lowers the level of the ‘head’ vertex and possibly one or more components
of a higher level, this makes it easy to find a graph such that LSCC/CAC < LSCC such as
the 2-vertex graph with a single edge between them in one direction. Similarly we can easily
find a graph for which equality holds (such as the single vertex graph). However, since doing
a merge can never result in any vertex getting a higher level, LSCC/CAC ≤ LSCC .
For algorithms (such as PageRank) which can be run on the components of one level at a time in parallel, the SCC/CAC partitioning has the advantage over the usual SCC partitioning in that it generally creates a lower number of levels and thus, on average, a larger number of vertices (but not necessarily components) on the same level in the new partitioning. In effect this reduces the chance of bottlenecks where we have a level with only a single or a few small components. The only components that get larger are the CACs, which are acyclic apart from any loops and thus often admit specialized faster methods (for example by exploiting the triangular adjacency matrix). Thus increasing the size of these components is often not as much of an issue and might in fact often be beneficial by reducing overhead.
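To make the remark about the triangular adjacency matrix concrete: assuming the standard linear-system formulation of the non-normalized PageRank introduced in Sec. 3.1, \vec{R}^{(3)} = (I - cA^\top)^{-1}\vec{W}, the system restricted to an acyclic component becomes triangular once its vertices are numbered in topological order (loops would only add diagonal entries) and can be solved by a single substitution pass. The following is a minimal sketch under these assumptions, not the authors' implementation.

    import numpy as np
    from scipy.linalg import solve_triangular

    # Toy acyclic component with vertices in topological order: A is strictly
    # upper triangular and its non-zero rows sum to one.
    c = 0.85
    A = np.array([[0.0, 0.5, 0.5],
                  [0.0, 0.0, 1.0],
                  [0.0, 0.0, 0.0]])
    w = np.ones(3)                     # non-normalized weight vector W

    M = np.eye(3) - c * A.T            # lower triangular because A is upper triangular
    r = solve_triangular(M, w, lower=True)   # one forward-substitution pass
    print(r)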
An example of a directed graph and its SCC/CAC partitioning can be seen in Fig. 1.
Figure 1: Example of a graph and the corresponding components from a SCC/CAC partitioning of the graph (2 SCCs, 1 CAC and 1 1-vertex component). Vertex labels denote the level of each vertex if we had only partitioned the graph into SCCs (for the SCC/CAC partitioning the vertex levels are the same as the level of the corresponding component in the figure).
From this figure it is clear why we cannot merge when the ‘head’ has an edge to any SCC of the next lower level. If the top (level 2) component merged with the left
(level 0) component then this would have created a cycle in the underlying graph. It is also
possible to see how merging some components can result in the partitioning getting a lower
max-level: the SCC/CAC partitioning has only 3 levels while the SCC partitioning would
need 4 levels.
3.1 PageRank
PageRank was originally defined by S. Brin and L. Page as the eigenvector to the dominant
eigenvalue of a modified version of the adjacency matrix of a graph [7].
Definition 3.4. PageRank \vec{R}^{(1)} for vertices in a graph G consisting of |V| vertices is defined as the (right) eigenvector with eigenvalue one to the matrix:

    M = c(A + \vec{g}\vec{w}^\top)^\top + (1 - c)\vec{w}\vec{e}^\top     (1)

where A is the adjacency matrix weighted such that the sum over every non-zero row is equal to one (size |V| × |V|), \vec{g} is a |V| × 1 vector with zeros for vertices with outgoing edges and ones for all vertices with no outgoing edges, \vec{w} is a |V| × 1 non-negative vector with ||\vec{w}||_1 = 1, \vec{e} is a one-vector of size |V| × 1 and 0 < c < 1 is a scalar.
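For concreteness, \vec{R}^{(1)} can be obtained by a straightforward power iteration on M built exactly as in (1). The sketch below is our own illustration on a toy graph and is not the algorithm proposed in this paper.

    import numpy as np

    def pagerank_r1(A, w, c=0.85, tol=1e-12):
        # A: row-normalized adjacency matrix, w: weight vector with ||w||_1 = 1.
        n = A.shape[0]
        g = (A.sum(axis=1) == 0).astype(float)        # 1 for dangling vertices
        e = np.ones(n)
        M = c * (A + np.outer(g, w)).T + (1 - c) * np.outer(w, e)   # matrix (1)
        r = np.full(n, 1.0 / n)
        while True:
            r_new = M @ r
            r_new /= r_new.sum()                      # keep the eigenvector normalized
            if np.abs(r_new - r).sum() < tol:
                return r_new
            r = r_new

    A = np.array([[0, 1, 0], [0.5, 0, 0.5], [0, 0, 0]], dtype=float)  # vertex 2 is dangling
    w = np.ones(3) / 3
    print(pagerank_r1(A, w))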
The original normalized version of PageRank has the disadvantage that it is harder to compare PageRank between graphs or components; because of this we use a non-normalized version of PageRank as described in, for example, [9].
Definition 3.5. ([9]) PageRank \vec{R}^{(3)} for graph G is defined as

    \vec{R}^{(3)} = \frac{\vec{R}^{(1)} \|\vec{W}\|_1}{d}, \qquad d = 1 - \sum c A^\top \vec{R}^{(1)}     (2)

where \vec{W} is a non-negative weight vector such that \vec{W} \propto \vec{w}.
In the same work [9] we also showed that it is possible to give another equivalent definition
of this non-normalized version of PageRank which will be useful later in some proofs.
Definition 3.6. ([9]) Consider a random walk on a graph G = {V, E} described by A. In each step of the random walk we move to a new vertex from the current vertex by traversing a random edge from the current vertex with probability 0 < c < 1 and stop the random walk with probability 1 - c. Then PageRank \vec{R}^{(3)} for a single vertex v_j can be written as

    R_j^{(3)} = \left( W_j + \sum_{v_i \in V, v_i \neq v_j} W_i P(v_i \to v_j) \right) \left( \sum_{k=0}^{\infty} \big( P(v_j \to v_j) \big)^k \right)     (3)

where P(v_i \to v_j) is the probability to hit vertex v_j in a random walk starting at vertex v_i. This can be seen as the expected number of visits to v_j if we do multiple random walks, starting at every vertex a number of times described by \vec{W}.
In [8] we showed how to calculate PageRank for the five different types of vertices defined
below
Definition 3.7. For the vertices of a simple directed graph we can define 5 distinct groups
G1 , G2 , . . . , G5
1. G1 : Vertices with no outgoing or incoming edges.
2. G2 : Vertices with no outgoing edges and at least one incoming edge (also called dangling
vertices).
3. G3 : Vertices with at least one outgoing edge, but no incoming edges (also called root
vertices).
4. G4 : Vertices with at least one outgoing and incoming edge, but which is not part of
any (non-loop) directed cycle (no path from the vertex back to itself apart from the
possibility of a loop).
5. G5 : Vertices that are part of at least one non-loop directed cycle.
Qing Yu et al. gave a similar but slightly different definition of 5 (non-distinct) groups for vertices, namely dangling and root vertices (G2 and G3), vertices that can be made into dangling or root vertices by recursively removing dangling or root vertices (part of G4), and the remaining vertices (part of G4 and G5) [19]. Given the PageRank of a vertex not part of a cycle (groups 1-4), the PageRank of the other vertices can be calculated by removing the vertex and modifying the initial weights of the other vertices.
Theorem 3.2. Given PageRank R_g^{(3)} of a vertex v_g where v_g is not part of any non-loop cycle, the PageRank of another vertex v_i from which there exists no path to v_g can be expressed as

    R_i^{(3)} = \left( W_i + R_g^{(3)} c a_{gi} + \sum_{v_j \in V, v_j \neq v_i, v_g} \big( W_j + R_g^{(3)} c a_{gj} \big) P(v_j \to v_i) \right) \left( \sum_{k=0}^{\infty} \big( P(v_i \to v_i) \big)^k \right)     (4)

where c a_{gi} is the one-step probability to go from v_g to v_i.
The proof with minor modifications is similar to the one found in [8] where it is formulated
for vertices in G3 on graphs with no loops.
Proof. Consider R_g^{(3)} from Definition 3.6. Since we know that there is no path from v_i back to v_g (or v_g would be part of a non-loop cycle) we know that the right hand side will be identical for all other vertices. We rewrite the influence of v_g using

    R_g^{(3)} P(v_g \to v_i) = R_g^{(3)} c a_{gi} + \sum_{v_j \in V, v_j \neq v_i, v_g} R_g^{(3)} c a_{gj} P(v_j \to v_i).     (5)

We can now rewrite the left sum in Definition 3.6:

    \sum_{v_i \in V, v_i \neq v_j} W_i P(v_i \to v_j) = R_g^{(3)} c a_{gi} + \sum_{v_j \in V, v_j \neq v_i, v_g} \big( W_j + R_g^{(3)} c a_{gj} \big) P(v_j \to v_i)     (6)

which, when substituted into (3), proves the theorem.
It is also easy to show that any SCC can also be assigned to one of the first four groups
if we consider each SCC as a vertex in the underlying DAG (a SCC can never be part of
a cycle). The important part of this is that it is also possible to calculate PageRank one
component at a time rather than for the whole graph at once.
Corollary 3.2. Let \vec{R}^{(3)}_{L^+} be the PageRank of all vertices belonging to components of level L or greater. Then the PageRank of a vertex v_i belonging to a component of level L - 1 can be computed by

    R_i^{(3)} = \left( \sum_{k=0}^{\infty} \big( P(v_i \to v_i) \big)^k \right) \left( W_i + c \big( \vec{R}^{(3)}_{L^+} \big)^\top \vec{a}_{L^+,i} + \sum_{v_j \in V, v_j \neq v_i} \Big( W_j + c \big( \vec{R}^{(3)}_{L^+} \big)^\top \vec{a}_{L^+,j} \Big) P(v_j \to v_i) \right)

where \vec{a}_{L^+,i} is a vector containing all 1-step probabilities from vertices of level L or greater to vertex v_i.
Proof. Follows immediately from Theorem 3.2 by replacing the rank of a single vertex with
the sum of rank of all vertices belonging to components of a higher level. Those in lower
level components or other components on the same level do not affect the rank since they automatically do not have any path to v_i.
Using Corollary 3.2 it is clear that after calculating PageRank of all vertices belonging to components of level L and above we can calculate those of level L - 1 by first changing their initial weight and then considering the component by itself. In matrix notation we can update the weight vector for all components of lower level by calculating

    \vec{W}^{new}_{L-1} = \vec{W}^{old}_{L-1} + c A_{L^+,L-1} \vec{R}^{(3)}_{L^+}     (7)

where A_{L^+,L-1} corresponds to the submatrix of A with all rows corresponding to vertices of level L or greater and all columns of level L - 1. This is essentially the same method which is used in [16], but here we have formulated it for any component instead of only for dangling vertices (vertices with no outgoing edges).
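A compact sketch of this level-by-level scheme, combining Corollary 3.2 with the weight update (7), might look as follows. This is our own illustration rather than the authors' implementation: it solves each level's restricted linear system directly instead of choosing specialized per-component algorithms as done in Sec. 4.3.

    import numpy as np

    def pagerank_by_levels(A, W, level, c=0.85):
        # A: weighted adjacency matrix (non-zero rows sum to 1), W: non-normalized
        # weight vector, level: NumPy integer array with the level of the component
        # containing each vertex (e.g. as produced by the component finding step).
        n = A.shape[0]
        R = np.zeros(n)
        W = W.astype(float).copy()
        for L in range(int(level.max()), -1, -1):
            idx = np.where(level == L)[0]
            # Solve (I - c*A^T) R = W restricted to the vertices of level L
            # (components on the same level have no edges between them).
            sub = np.eye(len(idx)) - c * A[np.ix_(idx, idx)].T
            R[idx] = np.linalg.solve(sub, W[idx])
            # Propagate the rank of this level to all lower levels, cf. equation (7).
            lower = np.where(level < L)[0]
            if len(lower) > 0:
                W[lower] += c * A[np.ix_(idx, lower)].T @ R[idx]
        return R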
4 Method
The complete PageRank algorithm can be described in three main steps.
1. Component finding: Finding the SCC/CAC partitioning of the graph.
2. Intermediate step: Create relevant component matrices and weight vectors.
3. PageRank step: Calculate PageRank one level at a time and components on the same
level one at a time or in parallel.
In order to be able to calculate PageRank for each component we obviously first need to find
the components themselves, this is done in the component finding part of the algorithm
where we find a SCC/CAC partitioning of the graph as well as the level of each component.
By using the CAC/SCC partitioning rather than the usual SCC partitioning we reduce the
risk of having very few vertices on the same level; the aim of this is to be able to avoid some
of the disadvantages with some other similar methods such as the one in [15] where a large
number of small levels (small diagonal blocks) increases the overhead cost [19]. This step is
similar to the initial matrix reordering made by [3]. However instead of only finding a partial
ordering we have modified the depth first search slightly in order to identify components
that can be calculated in parallel as well as group 1-vertex components on the same level
together. Another advantage is that different methods can be used for different types of
components as we will see later. The component finding step is described in Sec. 4.1.
In the intermediate step the data (edge list, vertex weights) need to be managed such that
the individual matrices for every component can quickly and easily be extracted. This section
of the code can vary a lot between implementations and is one of the main contributors of
overhead in the algorithm. The SCC/CAC partitioning can easily be transformed into a
permutation matrix and used to permute the graph matrix and then solve the resulting linear
system; this can be seen as an alternative to the recursive reordering algorithm described
in [15]. This step is described in Sec. 4.2.
After the intermediate step we are ready to start calculating PageRank of the vertices.
This is done one level at a time starting with the highest and modifying vertex weights between levels using (7). Components on the same level can use different methods to calculate
PageRank and can either be calculated sequentially or in parallel. The PageRank step is
described in Sec. 4.3.
4.1 Component finding
The component finding part of the algorithm consists of finding a SCC/CAC partitioning
of the graph as well as the corresponding levels of the components. Since any loops in the
9
graph have no effect on which SCC or CAC a vertex is part of, these will be ignored in
the component finding step. Finding the components and their level can be done through a
modified version of Tarjan’s well-known SCC finding algorithm using a depth first search [18].
For every vertex v we assign six values:
• v.index containing the order in which it was discovered in the depth first search.
• v.lowlink for a SCC representing the lowest index of any vertex we can reach from v,
or for a CAC representing the ‘head’ vertex of corresponding component.
• v.comp representing the component the vertex is part of, assigned at the end of the
component finding step.
• v.depth used to implement efficient merges of components. It can be removed if the
extra memory is needed, but it could result in slowdown because of merges for some
graphs.
• v.type indicates if v is part of a SCC or a CAC (1-vertex SCCs are considered CACs).
• v.level indicating the level of the component to which v belongs.
Of these the first three can be seen in Tarjan’s algorithm as well, and play virtually the same
role here (although in Tarjan’s the comp value can be assigned as components are created).
During the depth first search each vertex v goes through three steps in order.
1. Discover: Initialize values for the vertex.
2. Explore: Visit all neighbors of v, finishing the DFS of any unvisited neighbors before
going to the next. After a vertex is visited we update v.lowlink and v.level.
3. Finish: After all neighbors are visited we create a new component if appropriate. If a
CAC is created we also check for and do any merge with v as head.
During the discover step values are initialized after which the vertex is put on the stack
(.type and .comp do not need to be initialized)
Discover(Vertex v)
    v.index := index
    v.lowlink := index
    v.level := 1
    v.depth := 1
    index++
    stack.push(v)   // add v to the stack
end
Here index is a counter starting at 1 and increasing by one for every new vertex we discover.
The explore step likewise works much like Tarjan’s depth first search, except that it also
updates the level of the vertex we are exploring.
Explore(Vertex v)
    for each (v, w) ∈ E
        if w is not initialized
            DFS(w)   // Discover(w), Explore(w) and Finish(w)
        end
        /* At this point w is either in a component already
           or belongs to the same SCC as v */
        if w is in a component (.type is defined)
            v.level = max(v.level, w.level + 1)
        else   // w belongs to the same SCC
            v.level = max(v.level, w.level)
            v.lowlink = min(v.lowlink, w.lowlink)
        end
    end
end
The last step in the DFS is where we evaluate if a new component should be created and
handle any merges needed with this vertex as ‘head’ component. The initial component is
created in the same way as in Tarjan’s algorithm: if v.lowlink = v.index we create a SCC
by popping vertices from the stack until we pop v from the stack.
Finish(Vertex v)
    if v.lowlink = v.index
        size := 0
        do
            w := stack.pop()
            w.lowlink = v
            w.level = v.level
            w.type = scc
            merge(w, v)   // add w to the component
            size++
        while (w != v)
        if size = 1   // (one vertex component)
            v.type = cac
            // check for merges; initialize before scanning the neighbours of v
            m := false
            list := []
            for each (v, w) ∈ E
                if w.type = scc and w.level = v.level - 1
                    m = false
                    break
                end
                if w.type = cac and w.level = v.level - 1
                    list.add(w)
                    m = true
                end
            end
            if m = true   // (adjust .level if a merge occurred)
                v.level = v.level - 1
                for each w in list
                    merge(w, v)
                end
            end
        end
    end
end
The first part is similar to Tarjan’s with some extra bookkeeping, so we will focus our explanation on the second part. However, in order to explain how the merges are done in constant time, we start by explaining how we store the component data. The components are stored in a merge-find data structure through the .lowlink and .depth attributes. A merge-find data structure allows us to do the two operations we need: merging two components and
finding a ‘head’ vertex representing a component (used in merge, and by itself later). This
has the advantage that both operations can be done in constant amortized time (O(α(|V |)))
as well as requiring very little memory.
In Tarjan’s algorithm you don’t need the .depth values since the .comp value can be
assigned while creating a component. The reason we do not do it here is because it cannot
be updated when doing a merge of two CACs (unless we loop through all vertices), and even
if we could, it would be possible to end up with some empty components that would have
to be ’cleaned up’ in one way or another later anyway.
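For reference, a minimal merge-find (union-find) structure with find by path compression and merging by depth can be sketched as below. This is a generic illustration, not the authors' code; in the algorithm above the parent pointers are kept in .lowlink and the depth bound in .depth, and the root returned by find acts as the head of the component.

    parent = {}   # maps a vertex to its parent; a root represents its component
    depth = {}    # upper bound on the tree depth, used to keep merges balanced

    def make_set(v):
        parent[v] = v
        depth[v] = 1

    def find(v):
        # Follow parent pointers to the root, compressing the path on the way back.
        root = v
        while parent[root] != root:
            root = parent[root]
        while parent[v] != root:
            parent[v], v = root, parent[v]
        return root

    def merge(u, v):
        ru, rv = find(u), find(v)
        if ru == rv:
            return
        if depth[ru] < depth[rv]:
            ru, rv = rv, ru
        parent[rv] = ru              # attach the shallower tree under the deeper one
        if depth[ru] == depth[rv]:
            depth[ru] += 1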
Looking at the computational complexity of the component finding step we see that
discover, explore and finish are all done once for every vertex, of these discover is obviously
done in constant time. During explore we will eventually have to go through all edges
exactly once, but all operations take only constant time. Lastly, finish is called once for every
vertex doing O(α(|V |)) work in the first half (amortized constant because we do merges
rather than assigning component values directly). In the second half we will at most visit
every edge once (over all vertices) doing O(α(|V |)) work. Thus in total we end up with
O(|V | + |E| + |V |α(|V |) + |E|α(|V |)) ≈ O(|E|α(|V |)), if |E| > |V | , in other words linear
amortized time in the number of edges.
Before returning the results |V | find operations also need to be done in order to assign
the .comp value for each vertex, since the find operation takes O(α(|V |)) time this takes
O(|V |α(|V |)) time in total.
ind := 1
for each v ∈ V
    h := find(v) // find head vertex
    if h.comp is not defined
        h.comp := ind
        ind++
    end
    v.comp := h.comp
end
Hence the complete component finding algorithm takes O(|E|α(|V|)) time, which is comparable
to Tarjan's, which can be implemented in O(|E|) time. We note that if the .depth
value is ignored everything still works, but the merges are no longer guaranteed to be made in
constant amortized time. If memory is a concern, or if the sizes of the CACs are assumed to
be small, it might be worthwhile to work without the .depth value even though merges could
be slow in the worst case.
4.2 Intermediate step
After we have found the SCC/CAC partitioning, some additional work needs to be done in
order to continue with the PageRank calculations effectively.
The goal of the preprocessing step is to make sure that we can easily and quickly construct
the corresponding matrix for each component as well as sort the components. We sort
components first by level and second by the size of the component (both in descending
order) so that we can work on one component at a time, starting with the largest on every
level.
We note that the preprocessing step can differ greatly depending on implementation,
hence we will only give a short comment on how we chose to do it. The goal is to have
separate edge lists for edges within each component as well as for edges between levels. To
get this we start by sorting the components first according to their level and second according to their
size. This is then used to permute the edge list such that edges are ordered in the same way
depending on their starting vertex. It is important to note that this sorting of components
or edges can be done in linear time rather than O(n log n) as with an ordinary comparison sort. This
is true since the possible values are known and bounded, hence no comparisons need to be
made.
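A counting sort is one way to realize this linear-time ordering. The sketch below orders components by (level, size), both descending, with two stable counting-sort passes; the dictionary-based Component records and field names are hypothetical, chosen only for the illustration.

def sort_components(components, max_level, max_size):
    """Order components by level (descending), then size (descending), in linear time."""
    def counting_sort(items, key, max_key):
        buckets = [[] for _ in range(max_key + 1)]
        for item in items:
            buckets[key(item)].append(item)
        # iterate buckets from the largest key downwards for descending order
        return [item for k in range(max_key, -1, -1) for item in buckets[k]]

    # sort by the secondary key first, then stably by the primary key
    by_size = counting_sort(components, key=lambda c: c["size"], max_key=max_size)
    return counting_sort(by_size, key=lambda c: c["level"], max_key=max_level)

Because every key is a bounded non-negative integer, no comparisons are made and both passes are linear in the number of components plus the key range.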
After the edges are sorted we go through the edge list and assign each edge to the
correct component or between-level list such that the corresponding component matrices can
be created. In this step we also merge all 1-vertex components of each level to avoid having
to calculate the rank of multiple 1-vertex components of the same level.
After the preprocessing step we should have the matrices (or edge lists) Mc and weight
vectors Vc for all components stored such that we can access them when needed.
It is worth noting that in our implementation this section of the code contains much of
the extra overhead needed for our method (more than the previous component finding step
itself).
4.3 PageRank step
Now that all preliminary work is done we can start the actual PageRank calculation, where
we calculate PageRank for all vertices of one level at a time (starting with the maximum level).
The PageRank step can be described by the following steps.
1. Initiate L to the maximum level among all components.
2. For each component of level L: pick a suitable method and calculate PageRank of the
component.
3. Update the weight vector V for all remaining components (of lower level).
4. Decrease L by one and go to step 2 unless we have already calculated PageRank of
all vertices.
Depending on the type and size of the component we calculate PageRank in one
of four different ways:
• Component is made up of a collection of single-vertex components (no internal edges
apart from loops): PageRank of any such collection of 1-vertex components is simply
the initial weight w_i for vertices with no loop, and w_i / (1 − c·a_ii) for any vertex with a loop,
where a_ii is the weight on the loop edge.
• Component is a CAC of more than one vertex: use the CAC-PageRank algorithm of
Section 4.3.1 described later.
• Component is a SCC but small (for example less than 100 vertices): calculate PageRank
by directly solving the linear equation system (I − cA^T) R = W (using for example
LU factorization).
• Component is a SCC and large: use an iterative method, in our case a power series
formulation.
Out of these, the first is done in O(|V|) time (copy the weight vector), or O(1) if no
separate vector is used for the resulting PageRank and we assume there are no loops. The
second and fourth are done in O(|E|); however, the constant factor for the second is much
lower since it is guaranteed to only visit every edge once, while the iterative method needs
to visit every edge in every iteration (the number of which depends on the error tolerance and method
chosen). The third is done in O(|V|^3) using LU factorization; however, since |V| is small this
is still faster than the iterative method unless the error tolerance chosen is large.
We also note that only the fourth method actually depends on the error tolerance at all;
every other method can be done in the same time regardless of error tolerance (down to
machine precision).
After we have calculated PageRank for all components on the current level we need
to adjust the weight of all vertices in lower level components as shown in 7. This can be
done using a single matrix-vector multiplication using the edges between the two sets of
components.
W_(L−1)^new = W_(L−1)^old + M^T_{L,(L−1)} · R_L
This is the same kind of correction as is done in for example [2] and for the non-normalized
PageRank used here in [8].
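As a small illustration of this weight update, the sketch below applies the correction with a plain NumPy matrix-vector product. The dense block M_between standing for the (weighted) edges from level L into level L−1 is an assumption made for readability; a sparse representation would be used in practice, and the numbers in the usage example are made up.

import numpy as np

def adjust_weights(W_lower, M_between, R_upper):
    """Add the rank flowing from the finished level into the lower level.

    W_lower:   weight vector of the lower-level vertices
    M_between: weighted block, rows = upper-level vertices,
               columns = lower-level vertices (entries already include c)
    R_upper:   PageRank just computed for the upper-level vertices
    """
    return W_lower + M_between.T @ R_upper

# tiny usage example with made-up numbers
W_old = np.array([0.2, 0.3])
M = np.array([[0.1, 0.2],
              [0.0, 0.4]])          # two upper vertices linking to two lower vertices
R_L = np.array([1.0, 2.0])
W_new = adjust_weights(W_old, M, R_L)   # -> [0.3, 1.3]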
In the weight adjustment step every edge is needed once if it is an edge between components and never if it is an edge within a component. Hence if we look over the whole
algorithm: every edge is visited at most twice in the DFS, then every edge that is not part
of a SCC is visited exactly once more (either as part of a CAC or as an edge between components) while those that are part of a SCC are typically visited a significantly larger number
of times depending on algorithm, error tolerance and convergence criterion used. Of course
there is also some extra overhead that would need to be taken into consideration for a more
in-depth analysis.
We note that PageRank for all components on a single level can be computed
in parallel (hence why we sort them by their size, starting with the largest). The weight
adjustment can either be done in parallel for each component or, as we have done here, once
for all components of the same level. In case there is a single very large component on a
level it might be more appropriate to do it one component at a time instead, to reduce the
time spent waiting for the large component to finish.
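The level-by-level driver described in steps 1–4 above can be summarised by the following sketch. The helper callables pagerank_of_component and adjust_lower_level_weights, and the dictionary-based component records, are placeholders for the pieces described in this section, not code from the actual implementation.

def pagerank_by_levels(components, weights, max_level,
                       pagerank_of_component, adjust_lower_level_weights):
    """Process components one level at a time, from the maximum level downwards."""
    ranks = {}
    L = max_level
    while L >= 0:
        # step 2: PageRank of every component on level L (these could run in parallel)
        for comp in (c for c in components if c["level"] == L):
            ranks.update(pagerank_of_component(comp, weights))
        # step 3: push rank mass along between-level edges into lower levels
        if L > 0:
            adjust_lower_level_weights(L, ranks, weights)
        # step 4: move on to the next level
        L -= 1
    return ranks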
4.3.1 CAC-PageRank algorithm
The CAC-PageRank algorithm exploits the fact that there are no non-loop cycles to calculate
the PageRank of the graph using a variation of breadth-first search. The algorithm starts
by calculating the in-degree (ignoring any loops) of every vertex and stores it in v.degree
for each vertex v; this can be done by looping through all edges once. We also keep a value
v.rank initialized to the corresponding value in the weight vector for each vertex. The BFS itself
can be described by the following pseudocode.
for each v ∈ V
    if v.degree = 0
        Queue.enqueue(v)
        while Queue.size > 0
            w = Queue.dequeue()
            w.rank = w.rank * (1/(1 - W(w, w))) // adjust for loops
            for each (w, u) ∈ E \ (w, w)
                u.rank = u.rank + w.rank * W(w, u)
                u.degree = u.degree - 1
                if u.degree = 0
                    Queue.enqueue(u)
                end
            end
            w.degree = w.degree - 1 // ensure w is never enqueued again
        end
    end
end
Here W(w, u) is the weight on the (w, u)-edge (c·a_wu). Note that W(w, w) = 0 if w has
no loop, which gives simply multiplication by 1 when adjusting for loops. The difference
between this and ordinary BFS is that we only add a vertex v to the queue once we have
visited all incoming edges to v. We also loop through all vertices to ensure that we visit
each vertex once, since there could be multiple vertices with no incoming edges.
Usually PageRank is defined only for graphs with no loops (or by ignoring any loops
that are present); this simplifies the algorithm slightly in that the loop adjustment step can
be ignored.
Looking at the computational complexity it is easy to see that we visit every vertex and
every edge once doing constant-time work, hence we have the same time complexity as for
ordinary BFS, O(|V| + |E|) ≈ O(|E|) if |E| > |V|. While it has the same computational
complexity as most numerical methods used to calculate PageRank, in practice it will often
be much faster in that the constant factor will be smaller, especially as the error tolerance
decreases.
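For concreteness, here is a small runnable Python version of the same idea. The graph representation (a dict of weighted out-edges whose weights already include the damping factor c, with every vertex present as a key) and the function name are our own choices for this sketch, not the paper's C implementation.

from collections import deque

def cac_pagerank(out_edges, weight):
    """Non-normalized PageRank on a graph whose only cycles are self-loops.

    out_edges: {v: {u: W(v, u)}}, every vertex must appear as a key
    weight:    {v: initial weight w_v}
    Returns {v: rank}."""
    rank = dict(weight)
    indegree = {v: 0 for v in out_edges}
    for v, nbrs in out_edges.items():
        for u in nbrs:
            if u != v:                      # loops are ignored for in-degrees
                indegree[u] += 1

    queue = deque(v for v in out_edges if indegree[v] == 0)
    while queue:
        w = queue.popleft()
        loop = out_edges[w].get(w, 0.0)
        rank[w] = rank[w] / (1.0 - loop)    # adjust for a possible self-loop
        for u, wt in out_edges[w].items():
            if u == w:
                continue
            rank[u] += rank[w] * wt
            indegree[u] -= 1
            if indegree[u] == 0:
                queue.append(u)
    return rank

# tiny example: 1 -> 2 -> 3, with a self-loop on 2, c already folded into the weights
edges = {1: {2: 0.85}, 2: {2: 0.425, 3: 0.425}, 3: {}}
print(cac_pagerank(edges, {1: 1.0, 2: 1.0, 3: 1.0}))

Seeding the queue with all zero in-degree vertices up front replaces the pseudocode's outer loop and its final degree decrement; the order in which ranks are propagated is the same.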
4.3.2 SCC-PageRank algorithm
If the component is a SCC, then we cannot use the previous algorithm. Instead one of many
iterative methods needs to be used (unless the size of the component is small). We have
chosen to calculate PageRank for SCCs using a power series as described in [2] since it is
a more natural fit with the non-normalized variation of PageRank used here. The method
works by iterating the following until some convergence criterion is met.
P_0 = W
P_{n+1} = c A P_n                                          (3)
R_n = ∑_{k=0}^{n} P_k
Although any method can be used, by using a power series we have the advantage
that we get the PageRank in the correct non-normalized form we need, without having to
re-scale the result. In practice any other method, such as a power iteration, could be used
as well by scaling the result as described in Def. 3.5. The calculation of these components
could also benefit from the use of other methods designed to improve the calculation time
of PageRank, such as the adaptive algorithm described in [14]. Any other algorithm which
depends on the presence of dangling vertices or SCCs would not be useful here, however, since
we are working on a single SCC.
The method obtained by using a power series formulation has been shown experimentally to have the
same or similar convergence speed as the more conventional power method [2]. It is easy to show that this is equivalent to using the Jacobi method with initial
vector equal to W, and obviously Gauss-Seidel or a similar method using a power series
formulation could be used as well and would likely provide some additional speedup. Theoretically
the convergence of the power method can be shown to be geometric, depending on the two
largest eigenvalues, with ratio |λ2|/|λ1| where |λ2| ≤ c [11]; we obviously have a similar convergence by calculating the corresponding geometric sum (geometric, bounded from above by
c). The method can be described through the following pseudocode.
rank = W   // initialize rank to the weight vector
mr = W     // initialize rank added in previous iteration
do
    mr = M * mr
    rank = rank + mr
while (max(mr) ≥ tol)
This is also the baseline method we use for comparison with our own method later.
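A minimal NumPy rendering of this baseline is sketched below, assuming a dense matrix M whose entries already include the factor c (M plays the role of cA in (3), here taken with M[i, j] equal to c times the weight of edge j -> i); function and variable names are ours.

import numpy as np

def power_series_pagerank(M, W, tol=1e-9, max_iter=100000):
    """Non-normalized PageRank R = sum_k M^k W, iterated until the newly added
    rank is below tol for every vertex."""
    rank = W.astype(float).copy()
    mr = W.astype(float).copy()          # rank added in the previous iteration
    for _ in range(max_iter):
        mr = M @ mr
        rank += mr
        if mr.max() < tol:
            break
    return rank

# tiny example: two vertices, a single edge 0 -> 1, c = 0.85
M = np.array([[0.0, 0.0],
              [0.85, 0.0]])
W = np.array([1.0, 1.0])
print(power_series_pagerank(M, W))       # approximately [1.0, 1.85]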
4.4 Error estimation
When looking at the error we have two different kinds of errors to consider: first, errors
from the iterative method used to calculate PageRank of large SCCs (depending on the error
tolerance), and second, any errors caused by errors in the data (M or W). We will mainly
concern ourselves with the first type, which is likely to dominate unless the error tolerance
is very small.
We start by looking at a single isolated component. If this component is a CAC or a
small SCC we calculate PageRank exactly, and errors can be assumed to be small as long
as an appropriate method is used to solve the linear system for the small SCCs, for
example LU decomposition. For large SCCs we stop iterating once the maximum change
of rank for any vertex between two iterations is less than the error tolerance (tol). Since
the rank is monotonically increasing we can be sure that the true rank is always a little
higher in reality than what is actually calculated.
The true rank can be described by R = ∑_{k=0}^{∞} M^k W (cf. (3)), where we let p_k = M^k W. Since
every row sum of M is less than or equal to c, one has c|p_{k−1}| ≥ |p_k|. This means that the
maximum change in rank over all vertices in the graph after K iterations is bounded by

∑_{k=1}^{∞} c^k |p_K| = (c / (1 − c)) |p_K| ≈ 5.66 |p_K|   for c = 0.85.
This does not change if there are any edges to or from other components in the graph,
although the difference will be spread over a larger number of vertices if there are edges
from the component. There might also be additional additive error from components with
edges to the single component we are considering. Over all vertices and all components we
can estimate bounds for the total error ε_tot over all vertices, as well as the average error ε_avg,
given some error tolerance tol:

ε_tot < |SCC_l| · tol · c / (1 − c)

ε_avg < (|SCC_l| / |V|) · tol · c / (1 − c) ≤ tol · c / (1 − c)
where |SCC_l| is the number of vertices that are part of a 'large' SCC (for which we need to use
an iterative method) and |V| is the total number of vertices in the graph. It should be
noted that this estimate is likely to be many times larger than the real error unless all the
vertices have approximately the same rank. Given that PageRank for many real systems
approximately follows a power-law distribution [5], most vertices will have a change in rank that is orders of magnitude
smaller by the time those with a very high rank have a change smaller than
the error tolerance. Additionally, if the graph contains some dangling vertices (vertices with
no outgoing edges), then these will further reduce the error (this can be seen as an increased
chance to stop the random walk).
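As a quick numeric illustration of these bounds, here is a small Python snippet; the graph sizes are made up and only the value c = 0.85 is taken from the experiments.

def error_bounds(c, tol, n_large_scc, n_vertices):
    """Upper bounds on the total and average error after stopping at tolerance tol."""
    factor = c / (1.0 - c)                  # about 5.67 for c = 0.85
    total = n_large_scc * tol * factor
    average = total / n_vertices
    return total, average

tot, avg = error_bounds(c=0.85, tol=1e-9, n_large_scc=500_000, n_vertices=1_000_000)
print(tot, avg)   # roughly 2.8e-3 total and 2.8e-9 average, both loose upper bounds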
5 Experiments
Our implementation of the algorithm is done in a mixture of C and C++ for the graph
search algorithms (component finding and the CAC-PageRank algorithm) and Matlab for the
ordinary PageRank algorithm for SCCs and weight adjustment, as well as the main code
gluing the different parts together. The reason to use C/C++ for some parts is that while
Matlab is rather fast at doing elementary matrix operations (PageRank of SCCs and weight
adjustment), it is very slow when you attempt to do for example DFS or BFS on a graph.
• Component finding: Implemented in C++ as a variation of the depth-first search in the
Boost library. The method is implemented iteratively rather than recursively (hence
it can handle large graphs which could otherwise give a very large recursion depth).
• CAC-PageRank algorithm: Implemented in C.
• Power series PageRank algorithm: Implemented in Matlab. Used for SCCs as well as
on the whole graph for comparison.
• Main program: Implemented in Matlab, with C/C++ parts used through MEX files.
5.1 Graph description and generation
Many real world networks including the graph used for calculating PageRank for pages on
the Internet share a number of important properties.
First of all, they should be scale-free: these networks are characterized by their degree
distributions roughly following a power law; if k is the degree, then the cumulative distribution of the degree can be written as P(k) = k^{−γ}. In practice this means that there
is a small number of very high degree vertices as well as a large number of very low degree
vertices.
Secondly, they should be small-world: these networks are characterized by two different
properties, 1) the average shortest path distance between any two vertices in the graph is
small, roughly the logarithm of the number of vertices in the graph, and 2) the network
has a high clustering coefficient. So while the first property implies a high connectivity
in the network because of the short distances, the second property says that the network
should contain multiple small communities with high connectivity among themselves but
low connectivity to vertices outside their own group.
In order to evaluate the method we have chosen to look at five different graphs of varying
properties and size.
• B-A: A graph generated using the Barabási-Albert graph generation algorithm [4] with
mean degree 12; after generation only some of the outgoing edges are kept in order to obtain
a directed graph. This graph contains 1000000 vertices, of which 959760 are part
of a CAC, and 999128 edges; there are 188010 1-vertex CACs out of 239258 CACs in
total, as well as 19813 SCCs. The maximum component size is 483874 vertices (a CAC). A
second graph where even fewer outgoing edges were kept was also used.
The Barabási-Albert graph generation algorithm creates a graph which is scale-free;
however, it does not have the small-world property [10]. The reason we still used it
for one of our tests is that it makes it easy to create a scale-free graph with primarily
acyclic components.
• Web: A graph released by Google as part of a contest in 2002 [1], part of the collection
of datasets maintained by the SNAP group at Stanford University [17]. This graph
contains 916428 vertices, of which 399605 are part of a CAC, and 5105039 edges; there
are 302768 1-vertex CACs out of 321098 CACs in total, as well as 12874 SCCs. The maximum component size is 434818 vertices (a SCC). This graph has both the scale-free and
small-world properties, making it a good example of the type of graph we would be
interested in calculating PageRank on in real applications.
We also created two additional, even larger graphs by using multiple copies of this
graph: 1) a graph composed of ten disjoint copies of this graph, and 2) a graph
composed of ten copies of the Web graph with a small number (20) of extra random edges,
in order to get a single very large component as is common for real-world networks.
5.1.1 Barabási-Albert graph generation
The model works by selecting a starting seed graph of m_0 vertices, in our case a 20×20 graph
with uniformly random edges and mean degree 5. Then new vertices are added iteratively
to the graph one at a time by connecting each new vertex to m ≤ m_0 existing vertices
with probability proportional to the number of edges already connected to each old vertex.
This can be written as

p_i = d_i / ∑_j d_j

where d_i is the degree of vertex i.
The Barabási-Albert model gives an undirected graph. In order to transform it into a
directed graph we then went through each vertex and removed some edges. The number of
edges kept originating from each vertex, E_{i,keep}, can be described by

E_{i,keep} = ⌈log2(E_i)⌉

where E_i is the number of edges originally originating from vertex i. We chose m = 12,
which after removal of edges gives an average (in+out)-degree of log2(24) ≈ 4.6. After
removal of edges all vertices will have a similar out-degree, but the in-degree will be similar
to how it was after the original Barabási-Albert graph generation (the in-degree follows a power
law).
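The generation procedure can be sketched as follows. This is a simplified illustration written for clarity rather than speed: the seed graph and the preferential-attachment sampling are approximations of the description above, while the directed conversion applies the E_keep = ceil(log2(E_i)) rule exactly. Function names and the exact seed construction are our own assumptions.

import math
import random

def barabasi_albert(n, m, m0=20):
    """Grow an undirected preferential-attachment graph on n vertices (simplified)."""
    edges = set()
    # seed graph: m0 vertices with uniformly random edges, roughly mean degree 5
    while len(edges) < 5 * m0 // 2:
        a, b = random.sample(range(m0), 2)
        edges.add((min(a, b), max(a, b)))
    targets = []                            # vertex i appears once per incident edge
    for a, b in edges:
        targets += [a, b]
    for v in range(m0, n):
        # connect v to m distinct existing vertices, chosen proportionally to degree
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))
        for u in chosen:
            edges.add((min(u, v), max(u, v)))
            targets += [u, v]
    return edges

def make_directed(undirected_edges, n):
    """Keep ceil(log2(E_i)) outgoing edges per vertex, as described in the text."""
    out = {v: [] for v in range(n)}
    for a, b in undirected_edges:
        out[a].append(b)                    # treat each undirected edge as two arcs first
        out[b].append(a)
    directed = {}
    for v, nbrs in out.items():
        keep = math.ceil(math.log2(len(nbrs))) if nbrs else 0
        directed[v] = random.sample(nbrs, min(keep, len(nbrs)))
    return directed

graph = make_directed(barabasi_albert(n=1000, m=12), n=1000)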
5.2 Results
All experiments are performed on a computer with a quad-core 2.7 GHz (base) - 3.5 GHz (turbo)
processor (Intel(R) Core(TM) i7-4800MQ) using Matlab R2014a with four threads on any
of the parts computed in parallel.
Three different methods were used:
1. Calculate PageRank as a single large component as described in Sec. 4.3.2.
2. Use the method described in Sec. 4 with components on the same level calculated sequentially.
3. Use the method described in Sec. 4 with components on the same level calculated in
parallel.
In addition, for all three methods any loops in the graph were ignored (as is common for
PageRank).
We note that the intermediate step between method 2 and 3 differs. Because of limitations in how the parallelization can be implemented we had to separate edges of the same
component into their own cell-array for the parallel version, this accounts for the main difference in overhead between method 2 and 3. It should be noted that the intermediate step
is not parallelized (in either version) and is something which could probably be significantly
improved by implementation in another programming language.
After doing the SCC/CAC partitioning of the graph and sorting all components according
to their level and component size (both in descending order) we can visualize the non-zero
values of this new reordered adjacency matrix. The density of non-zeros before and after
reordering for the Web graph can be seen in Fig. 2. Note that the diagonal lines are not
single-vertex components but rather a large number of small components on the same level
(hence they can be computed in parallel); for a view of some of the small components see
Fig. 3. Any 1-vertex components are not colored since they have no internal edges; two
large sections of 1-vertex components are right before the middle large component and in the
bottom right corner of the matrix. A cutoff at a density of 1.7 · 10^{-15} non-zero elements is
used in order to avoid problems with a few very high density sections as well as to maintain
the same scale in both figures.
After finding the SCC/CAC partitioning, large sections of zeros can clearly be seen,
something which is not present in the original matrix. The single very large component in
the graph is seen in the middle of the matrix, with a section of small components both above
and below it.
Langville and Meyer do a similar reordering by recursively reordering the vertices, putting
any dangling vertices last and, for any further reordering of the remaining vertices, not
considering edges to those already put last [15]. This effectively creates one or more
CACs along with one large component. The advantage of our approach compared to this is
that we can also find components above the single large component, rather than combining
them into a single even larger component, as well as finding sets of components which can
be computed in parallel. In Fig. 3 some common types of smaller components can be seen.
Figure 2: Non-zero values of adjacency matrix for the Web graph before and
after sorting vertices according to level and component.
Figure 3: Non-zero elements in parts of the bottom right diagonal ’line’ of
the reordered adjacency matrix.
First there are 2 CACs where the majority of the vertices link to the same or a very limited
number of vertices, creating a very shallow tree. The third component, which is also a CAC (as
seen by the zero rows), has a more advanced structure, while the last one is probably a
SCC. There is a large number of CACs similar to the first 2, with the majority of the vertices
linking to one or two vertices, but there are also some characterized by a horizontal line
representing one or a few vertices linking to a large number of dangling vertices.
The total number of levels in the Web graph was 28, with the majority being located
right at the very top or right after the large central component. More research would be
needed to verify if this is usually the case, but if so it might be a good idea to merge some
of these very small levels and calculate them as if they were a SCC in order to reduce overhead.
For example, the first 10 levels contain just 85 vertices in total and could be merged; after
the large component there are only 2-3 of these very small components, with the rest having at
least a hundred or so vertices, hence it might not be as useful there. If no merging of 1-vertex
CACs was done (using the ordinary SCC partitioning) the number of levels increased
to 34 instead.
Since PageRank of different SCCs converges in varying numbers of iterations, it is also of
interest to see how the number of iterations for different components varies, as well as how
it compares to the number of iterations needed by the basic algorithm where we calculate
PageRank as if the graph was a single component. The number of iterations for all SCCs of more
than 2 vertices of the Web graph with c = 0.85 and tol = 10^{-9} can be seen in Fig. 4.
Figure 4: Number of iterations needed per SCC ordered according to their
level first and number of vertices second (both descending order). The dotted
vertical lines denote where one level ends and the next one starts while the
horizontal line denotes the result where the whole graph is considered a single
component. c = 0.85, tol = 10^{-9}.
In Fig. 4 we can see a couple of things. First, the number of iterations over any component is less than
the number of iterations that would be needed if we calculated PageRank of the graph as if
it was a single huge component. The average number of iterations per edge was 148, which
can be compared to the 168 iterations for the graph as a single component;
this gives an improvement of approximately 12%. This might look small when looking
at the figure, but remember that the largest components on each level are put first on their
level and the size of components approximately follows a power law, hence large parts of the
figure represent relatively few vertices. It should also be noted that a significant number of
edges (approximately 26%) lie either between components or within CACs, neither of which
is counted here since they do not use the iterative method and are instead used
only once, either to modify weights between levels or as part of the BFS when calculating
PageRank of CACs.
The second point of interest is that there is a clear difference between components at
the last level compared to those of a higher level. Any SCC on the last level is by definition
a stochastic matrix (before multiplication with c) since it has no edges to any vertex
in any other component; this gives a lower bound on the number of iterations equal to
log tol / log c ≈ 128, easily seen from the relation c^{itr} ≤ tol, where itr is the number of
iterations. However, those components of higher level are by definition sub-stochastic matrices
(before multiplication with c) since there is at least one edge to some other component. This
is equivalent to some vertices having a lower c value, and the algorithm can therefore converge
faster.
The third observation is that a large component generally needs more iterations than a
smaller component. This makes sense if we consider that, as long as most vertices in the
component do not contain edges to other components, then as the component grows in size at
least some part of the component will behave similarly to those in the last level and we thus
need a larger number of iterations. For large components the estimated number of iterations
is usually quite good, while for small components it usually gives a too high estimate (unless
the component is part of the last level).
The running time in seconds for the Web graph for the three different methods for
different values of the error tolerance from 10^{-1} to 10^{-20} can be seen in Fig. 5a.
Figure 5: Running time needed to calculate PageRank for 3 different methods
(a) depending on error tolerance using c = 0.85 and (b) depending on c
between 0.5 and 0.99 using tol = 10^{-10}, on the Web graph.
From this it
is clear that our method adds a significant amount of overhead, especially the parallel one.
While all three methods need a longer time if the error tolerance is smaller, our method shows
a significantly smaller increase compared to the basic method. The break-even point here
seems to lie at around 10^{-5} for the sequential algorithm and at around 10^{-9} for the parallel
algorithm because of its additional overhead. If the overhead, in particular for the parallel
algorithm, could be further reduced, this breakpoint could potentially come significantly earlier.
Note that because of limits in machine precision we might not have the correct rank down to
the last 20 decimals at the lowest tolerance; however, since we sum over successively smaller
parts we still get a good approximation of the actual computation time.
The running time when we let c vary between 0.5 and 0.99 with a constant error tolerance
(10^{-10}) can be seen in Fig. 5b.
From Fig. 5b there is a clear indication that our proposed method can be significantly
faster for values of c close to one. This is quite natural given that the computation time of some
components does not depend on c at all (CACs and small components). Even for those
components that do depend on the error tolerance (large SCCs) there should be a significant
number of edges out of the component, meaning some rows will have a sum lower than c,
thus roughly simulating a lower c value.
In order to get some estimate of how the result changes with the size of the graph we
also did the same experiments with ten copies of this graph. The results can be seen
in Fig. 6a. With a ten times larger graph, the basic and sequential approaches take about ten
times longer (the sequential a little less, the basic a little more); hence the relationship between the
two is more or less the same, but with a slightly earlier point at which they have the same
performance. The parallel version, however, only took approximately eight times longer on
the much larger graph, presumably since we get many more components on the same level.
This moves the breakpoint from 10^{-10} to 10^{-6}.
Figure 6: Running time needed to calculate PageRank for 3 different methods
depending on error tolerance used on (a) ten copies of the Web graph and (b)
ten copies of the Web graph with some extra edges added.
While a ten-times copied graph might not represent a true network of that size, it does
have one important likeness, namely that the diameter of many real-world networks (such
as the world wide web) increases significantly slower than the size of the network itself. If
the diameter is small it is clear that the number of levels needs to be small as well, which
means that as the size of the network increases we should have more and more vertices on
the same level, increasing the opportunity for parallelization.
By adding a couple of random edges we retain a single large component (as is the case
for the original graph); the results after doing this can be seen in Fig. 6b. We do not see
any significant differences between this and the previous graph; the parallel method takes
slightly longer while the other two stay more or less the same. From this we see that even
if we have a very large component (composed of roughly half the vertices) we can still get
significant gains using our method as long as the graph is sufficiently big.
Last, we also did the same experiment on a graph generated by the Barabási-Albert model;
the results for this graph can be seen in Fig. 7.
Figure 7: Running time needed to calculate PageRank for 3 different methods
depending on error tolerance used on the graph generated by the Barabási-Albert
graph generation method with (a) roughly 1 edge/vertex after removal
of edges and (b) roughly 5 edges/vertex after removal of edges.
Since this graph has a very small number of SCCs, both the sequential and parallel methods
barely depend on the error tolerance at all, while the basic approach seems to have similar
behavior as with the previous graphs,
with a roughly linear increase in time needed as the error tolerance decreases. However,
since the PageRank calculations themselves are faster for this graph (for all methods), the
overhead needed represents a larger part of the total time, since it does not decrease in the
same way. Overall the results of our approach are promising: we have obtained better results
the bigger the graph is, as well as the lower the error tolerance is, compared to the basic
approach. Our implementation of the parallel method had significantly more overhead than
the sequential method; this is likely to change if the algorithm was implemented in full (with
parallelization in mind) in for example C++, where we have more control over how it can be
implemented.
6 Conclusions
We have seen that by dividing the graph into components we could get improved performance
of the PageRank algorithm; however, it does come with a cost of increased overhead (very
much depending on implementation) as well as algorithm complexity. We could see that we
needed to do a significantly smaller number of iterations overall using our method compared to
the ordinary method using a power series or power iterations. This came from the fact that
some edges lie between components or inside a CAC (hence needing only one iteration), as
well as the fact that a larger component is more likely to need a larger number of iterations
than a small one.
From our experiments we can see that our proposed algorithm is more effective compared
to the basic approach as the size of the graph increases, at least as long as we can keep
everything in memory. More experiments would be needed for any conclusions after that,
although given that components can be calculated by themselves in our method (hence
a lower memory requirement in the PageRank step) we expect our method to compare
even better when this is the case. We could also see better performance using our method
compared to the basic method if the error tolerance is small, while it is generally slower
because of overhead if the tolerance for errors is large.
The results for the parallel method compared to the serial method were not clear; however,
this is likely something that can be improved significantly using a better implementation
and memory handling, as well as having more of an advantage when memory overall is more
of a concern for even larger graphs.
By calculating PageRank for different kinds of components differently we could see a
large improvement for certain types of graphs, such as the one generated by the B-A model,
where the time needed to calculate PageRank was more or less constant regardless of error
tolerance using our method.
7 Future work
One obvious and likely next step would be to implement and try the same component finding
algorithm for something else, such as sparse equation solving or calculating the inverse of
sparse matrices, both of which have similar behaviour in that components only affect other
components downwards in levels.
It would also be interesting to try an improved implementation of the method by implementing it all in C++ or another 'fast' programming language in order to reduce some of
the extra overhead incurred. Similarly it would also be interesting to see how the algorithm
compares to other algorithms for extremely large graphs where memory becomes more of an
issue.
A third interesting direction would be to look at how this method interacts with other
methods to calculate PageRank, by using a method such as the one proposed by [14] on the
largest components.
Another important problem is how to update PageRank after doing some changes to the
graph; it would be very interesting to see how this partitioning of the graph, in combination
with non-normalized PageRank, could potentially be used to recalculate PageRank faster
than, for example, using the old PageRank as the initial rank and doing power iterations from
there.
References
[1] Google web graph. Google programming contest, http://snap.stanford.edu/data/
web-Google.html, 2002.
[2] Fredrik Andersson and Sergei Silvestrov. The mathematics of internet search engines.
Acta Appl. Math., 104:211–242, 2008.
[3] Arvind Arasu, Jasmine Novak, Andrew Tomkins, and John Tomlin. Pagerank computation and the structure of the web: Experiments and algorithms. In Proceedings of
the Eleventh International Conference on World Wide Web, Alternate Poster Tracks,
2002.
[4] Albert-László Barabási and Réka Albert. Emergence of scaling in random networks.
Science, 286(5439):509–512, 1999.
[5] Luca Becchetti and Carlos Castillo. The distribution of pagerank follows a power law only for particular values of the damping factor. In Proceedings of the 15th
International Conference on World Wide Web, pages 941–942. ACM Press, 2006.
[6] Pavel Berkhin. A survey on pagerank computing. Internet Mathematics, 2:73–120,
2005.
[7] Sergey Brin and Lawrence Page. The anatomy of a large-scale hypertextual web search
engine. In Proceedings of the Seventh International Conference on World Wide Web
(WWW7), pages 107–117, Amsterdam, The Netherlands, 1998.
Elsevier Science Publishers B. V.
[8] Christopher Engström and Sergei Silvestrov. A componentwise pagerank algorithm. In
Applied Stochastic Models and Data Analysis (ASMDA 2015). The 16th Conference of
the ASMDA International Society, pages 186–199, 2015.
[9] Christopher Engström and Sergei Silvestrov. Non-normalized pagerank and random
walks on n-partite graphs. In 3rd Stochastic Modeling Techniques and Data Analysis
International Conference (SMTDA2014), pages 193–202, 2015.
[10] Ernesto Estrada. The Structure of Complex Networks: Theory and Applications. Oxford
University Press, Inc., New York, NY, USA, 2011.
[11] Taher Haveliwala and Sepandar Kamvar. The second eigenvalue of the google matrix.
Technical Report 2003-20, Stanford InfoLab, 2003.
[12] H. Ishii, R. Tempo, E.-W. Bai, and F. Dabbene. Distributed randomized pagerank
computation based on web aggregation. In Decision and Control, 2009 held jointly
with the 2009 28th Chinese Control Conference. CDC/CCC 2009. Proceedings of the
48th IEEE Conference on, pages 3026–3031, 2009.
[13] Sepandar Kamvar and Taher Haveliwala. The condition number of the pagerank problem. Technical Report 2003-36, Stanford InfoLab, June 2003.
[14] Sepandar Kamvar, Taher Haveliwala, and Gene Golub. Adaptive methods for the
computation of pagerank. Linear Algebra and its Applications, 386(0):51 – 65, 2004.
Special Issue on the Conference on the Numerical Solution of Markov Chains 2003.
[15] Amy N. Langville and Carl D. Meyer. A reordering for the pagerank problem. SIAM
J. Sci. Comput., 27(6):2112–2120, December 2005.
[16] Chris P. Lee, Gene H. Golub, and Stefanos A. Zenios. A two-stage algorithm for
computing pagerank and multistage generalizations. Internet Mathematics, 4(4):299–
327, 2007.
[17] Jure Leskovec and Andrej Krevl. SNAP Datasets: Stanford large network dataset
collection. http://snap.stanford.edu/data, June 2014.
[18] Robert Tarjan. Depth first search and linear graph algorithms. SIAM Journal on
Computing, 1(2):146–160, 1972.
[19] Qing Yu, Zhengke Miao, Gang Wu, and Yimin Wei. Lumping algorithms for computing
google’s pagerank and its derivative, with attention to unreferenced nodes. Information
Retrieval, 15(6):503–526, 2012.
| 8 |
Slot Games for Detecting Timing Leaks of Programs
Aleksandar S. Dimovski
Faculty of Information-Communication Tech., FON University, Skopje, 1000, MKD
aleksandar.dimovski@fon.edu.mk
In this paper we describe a method for verifying secure information flow of programs, where apart
from direct and indirect flows, secret information can be leaked through covert timing channels. That
is, no two computations of a program that differ only on high-security inputs can be distinguished by
low-security outputs and timing differences. We attack this problem by using slot-game semantics
for a quantitative analysis of programs. We show how the slot-games model can be used for performing
a precise security analysis of programs that takes into account both extensional and intensional
properties of programs. The practicality of this approach for automated verification is also shown.
1 Introduction
Secure information flow analysis is a technique which performs a static analysis of a program with the
goal of proving that it will not leak any sensitive (secret) information improperly. If the program passes
the test, then we say that it is secure and can be run safely. There are several ways in which secret
information can be leaked to an external observer. The most common are direct and indirect leakages,
which are described by the so-called non-interference property [13, 18]. We say that a program satisfies
the non-interference property if its high-security (secret) inputs do not affect its low-security (public)
outputs, which can be seen by external observers.
However, a program can also leak information through its timing behaviour, where an external observer can measure its total running time. Such timing leaks are difficult to detect and prevent, because
they can exploit low-level implementation details. To detect timing leaks, we need to ensure that the total
running time of a program does not depend on its high-security inputs.
In this paper we describe a game-semantics-based approach for performing a precise security analysis. We have already shown in [8] how game semantics can be applied for verifying the non-interference
property. Now we use slot-game semantics to check for timing leaks of closed and open programs. We
focus here only on detecting covert timing channels, since the non-interference property can be verified
similarly to [8]. Slot-game semantics was developed in [11] for a quantitative analysis of Algol-like programs. It is suitable for verifying the above security properties, since it takes into account both
extensional (what the program computes) and intensional (how the program computes) properties of programs. It represents a kind of denotational semantics induced by the theory of operational improvement
of Sands [19]. Improvement is a refinement of the standard theory of operational approximation, where
we say that one program is an improvement of another if its execution is more efficient in any program
context. We will measure the efficiency of a program as the sum of the costs associated with the basic operations
it can perform. It has been shown that slot-game semantics is fully abstract (sound and complete) with
respect to operational improvement, so we can use it as a denotational theory of improvement to analyse
programming languages.
The advantages of a game-semantics (denotational) based approach to verifying security are several.
We can reason about open programs, i.e. programs with non-locally defined identifiers. Moreover, game
semantics is compositional, which enables analyses of program fragments to be combined into an
analysis of a larger program. Also, the model hides the details of the local-state manipulation of a program,
which results in small models with a maximum level of abstraction, in which only visible
input-output behaviours, enriched with costs that measure their efficiency, are represented. All other behaviour is abstracted away, which makes this model very suitable for security analysis. Finally, the game model for
some language fragments admits a finitary representation using regular languages or CSP processes
[10, 6], and has already been applied to automatic program verification. Here we present another application of algorithmic game semantics for automatically verifying security properties of programs.
Related work. The most common approach to ensuring security properties of programs is by using
security type systems [14]. Here security types, which contain information about the types and security
levels of program components, are defined for every program component. Programs that are well-typed under these type systems
satisfy certain security properties. Type systems for enforcing non-interference of programs have been
proposed by Volpano and Smith in [20], and they have subsequently been extended to also detect covert
timing channels in [21, 2]. A drawback of this approach is its imprecision, since many secure programs
are not typable and so are rejected. A more precise analysis of programs can be achieved by using
semantics-based approaches [15].
2 Syntax and Operational Semantics
We will define a secure information flow analysis for Idealized Algol (IA), a small Algol-like language
introduced by Reynolds [16] which has been used as a metalanguage in the denotational semantics community. It is a call-by-name λ-calculus extended with imperative features and locally-scoped variables.
In order to be able to perform an automata-theoretic analysis of the language, we consider here its second-order recursion-free fragment (IA2 for short). It contains finitary data types D: int_n = {0, . . . , n − 1} and
bool = {tt, ff}, and first-order function types: T ::= B | B → T, where B ranges over base types: expressions (expD), commands (com), and variables (varD).
Syntax of the language is given by the following grammar:
M ::= x | v | skip | diverge | M op M | M; M | if M then M else M | while M do M
      | M := M | !M | newD x := v in M | mkvarD M M | λx.M | M M
where v ranges over constants of type D.
Typing judgements are of the form Γ ⊢ M : T, where Γ is a type context consisting of a finite number
of typed free identifiers. Typing rules of the language are standard [1], but the general application rule is
broken up into the linear application rule and the contraction rule¹:

(linear application)  if Γ ⊢ M : B → T and ∆ ⊢ N : B, then Γ, ∆ ⊢ M N : T
(contraction)         if Γ, x1 : T, x2 : T ⊢ M : T′, then Γ, x : T ⊢ M[x/x1, x/x2] : T′
We use these two rules to have control over multiple occurrences of free identifiers in terms during
typing.
Any input/output operation in a term is done through global variables, i.e. free identifiers of type
varD. So an input is read by de-referencing a global variable, while an output is written by an assignment
to a global variable.
1
M[N/x] denotes the capture-free substitution of N for x in M.
Γ ⊢ n1 op n2, s  −→_{k_op}  n, s,   where n = n1 op n2
Γ ⊢ skip; skip, s  −→_{k_seq}  skip, s
Γ ⊢ if tt then M1 else M2, s  −→_{k_if}  M1, s
Γ ⊢ if ff then M1 else M2, s  −→_{k_if}  M2, s
Γ ⊢ x := v′, s ⊗ (x ↦ v)  −→_{k_asg}  skip, s ⊗ (x ↦ v′)
Γ ⊢ !x, s ⊗ (x ↦ v)  −→_{k_der}  v, s ⊗ (x ↦ v)
Γ ⊢ (λx.M)M′, s  −→_{k_app}  M[M′/x], s
Γ ⊢ newD x := v in skip, s  −→_{k_new}  skip, s
Table 1: Basic Reduction Rules
The operational semantics is defined in terms of a small-step evaluation relation using a notion of an
evaluation context [9]. A small-step evaluation (reduction) relation is of the form:
Γ ⊢ M, s −→ M ′ , s′
where Γ is a so-called var-context which contains only identifiers of type varD; s, s′ are Γ-states which
assign data values to the variables in Γ; and M, M ′ are terms. The set of all Γ-states will be denoted by
St(Γ).
Evaluation contexts are contexts 2 containing a single hole which is used to identify the next sub-term
to be evaluated (reduced). They are defined inductively by the following grammar:
E ::= [−] | EM | E; M | skip; E | E op M | v op E | if E then M else M | M := E | E := v |!E
The operational semantics is defined in two stages. First, a set of basic reduction rules are defined
in Table 1. We assign different (non-negative) costs to each reduction rule, in order to denote how much
computational time is needed for a reduction to complete. They are only descriptions of time and we can
give them different interpretations describing how much real time they denote. Such an interpretation
can be arbitrarily complex. So the semantics is parameterized on the interpretation of costs. Notice that
we write s ⊗ (x ↦ v) to denote a {Γ, x}-state which properly extends s by mapping x to the value v.
We also have reduction rules for iteration, local variables, and mkvarD construct, which do not incur
additional costs.
Γ ⊢ while b do M, s  −→  if b then (M; while b do M) else skip, s
if Γ, y ⊢ M[y/x], s ⊗ (y ↦ v)  −→  M′, s′ ⊗ (y ↦ v′), then Γ ⊢ newD x := v in M, s  −→  newD x := v′ in M′[x/y], s′
Γ ⊢ (mkvarD M1 M2) := v, s  −→  M1 v, s
Γ ⊢ !(mkvarD M1 M2), s  −→  M2, s
Next, the in-context reduction rule for arbitrary terms is defined as: if Γ ⊢ M, s −→_n M′, s′ then
Γ ⊢ E[M], s −→_n E[M′], s′.
The small-step evaluation relation is deterministic, since an arbitrary term can be uniquely partitioned into
an evaluation context and a sub-term which is next to be reduced.
We define the reflexive and transitive closure of the small-step reduction relation as follows:

if Γ ⊢ M, s −→_n M′, s′, then Γ ⊢ M, s ↠^n M′, s′
if Γ ⊢ M, s ↠^n M′, s′ and Γ ⊢ M′, s′ ↠^{n′} M″, s″, then Γ ⊢ M, s ↠^{n+n′} M″, s″
Now a theory of operational improvement is defined [19]. Let Γ ⊢ M : com be a term, where Γ is a
var-context. We say that M terminates in n steps at state s, written M, s ⇓n , if Γ ⊢ M, s n skip, s′ for
some state s′ . If M is a closed term and M, 0/ ⇓n , then we write M ⇓n . If M ⇓n and n ≤ n′ , we write
′
M ⇓≤n . We say that a term Γ ⊢ M : T may be improved by Γ ⊢ N : T, denoted by Γ ⊢ M & N, if and only
if for all contexts C[−], if C[M] ⇓n then C[N] ⇓≤n . If two terms improve each other they are considered
improvment-equivalent, denoted by Γ ⊢ M ≈ N.
Let Γ, ∆ ⊢ M : T be a term where Γ is a var-context and ∆ is an arbitrary context. Such terms are
called split terms, and we denote them as Γ | ∆ ⊢ M : T. If ∆ is empty, then these terms are called semiclosed. The semi-closed terms have only some global variables, and the operational semantics is defined
only for them. We say that a semi-closed term h : varD | − ⊢ M : com does not have timing leaks if the
initial value of the high-security variable h does not influence the number of reduction steps of M. More
formally, we have:
Definition 1. A semi-closed term h : varD | − ⊢ M : com has no timing leaks if
∀ s1 , s2 ∈ St({h}). s1 (h) 6= s2 (h) ∧
h : varD ⊢ M, s1
⇒ n1 = n2
n1
skip, s1 ′ ∧ h : varD ⊢ M, s2
n2
skip, s2 ′
(1)
Definition 2. We say that a split term h : varD | ∆ ⊢ M : com does not have timing leaks, where ∆ =
x1 : T1 , . . . , xk : Tk , if for all closed terms ⊢ N1 : T1 , . . . , ⊢ Nk : Tk , we have that the term h : varD | − ⊢
M[N1 /x1 , . . . , Nk /xk ] : com does not have timing leaks.
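To make Definition 1 concrete outside the game-semantics machinery, the following is a small brute-force illustration written directly against the operational idea: run the program once per initial value of h, count the costs of the reductions, and require all counts to be equal. The tiny expression language, the cost table and the example program are our own illustrations (in Python), not part of the paper's formal development.

# costs for the basic reductions, mirroring the constants k_op, k_if, k_asg, k_der, k_seq
COST = {"op": 1, "if": 1, "asg": 1, "der": 1, "seq": 1}

def run(stmts, store):
    """Return the cost of executing a list of statements in the given store.
    Expressions are int constants, variable names, or ('add', e1, e2)."""
    cost = 0

    def ev(e):
        nonlocal cost
        if isinstance(e, int):
            return e
        if isinstance(e, str):
            cost += COST["der"]
            return store[e]
        _, a, b = e
        v = ev(a) + ev(b)
        cost += COST["op"]
        return v

    for s in stmts:
        if s[0] == "assign":                      # ('assign', x, e)
            store[s[1]] = ev(s[2])
            cost += COST["asg"]
        elif s[0] == "if":                        # ('if', e, then_stmts, else_stmts)
            branch = s[2] if ev(s[1]) != 0 else s[3]
            cost += COST["if"]
            cost += run(branch, store)
        cost += COST["seq"]
    return cost

def has_timing_leak(program, h_values):
    """Brute-force check in the spirit of Definition 1: the cost of running the
    program must be the same for every initial value of the secret variable h."""
    costs = {h: run(program, {"h": h, "l": 0}) for h in h_values}
    return len(set(costs.values())) > 1, costs

# a program with a timing leak: extra work is done only when h is non-zero,
# even though the low variable l is never influenced by h
leaky = [("if", "h", [("assign", "tmp", ("add", 1, 1))], [])]
print(has_timing_leak(leaky, range(2)))   # -> (True, {0: 3, 1: 6})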
Formula (1) can be replaced by an equivalent formula where, instead of two evaluations of the
same term, we consider only one evaluation of the sequential composition of the given term with
another copy of itself [3]. Sequential composition thus enables us to place these two evaluations one after the
other. Let h : varD ⊢ M : com be a term; we define M′ to be α-equivalent to M[h′/h] where all bound variables are suitably renamed. The following can be shown: h ⊢ M, s1 ↠^{n} skip, s1′ ∧ h′ ⊢ M′, s2 ↠^{n′} skip, s2′
iff h, h′ ⊢ M; M′, s1 ⊗ s2 ↠^{n+n′} skip; skip, s1′ ⊗ s2′. In this way, we provide an alternative definition to
formula (1) as follows. We say that a semi-closed term h | − ⊢ M : T has no timing leaks if

∀ s1 ∈ St({h}), s2 ∈ St({h′}). s1(h) ≠ s2(h′) ∧
h, h′ ⊢ M; M′, s1 ⊗ s2 ↠^{n1} skip; M′, s1′ ⊗ s2 ↠^{n2} skip; skip, s1′ ⊗ s2′
⇒ n1 = n2                                                                      (2)
3 Algorithmic Slot-Game Semantics
We now show how slot-game semantics for IA2 can be represented algorithmically by regular languages.
In this approach, types are interpreted as games, which have two participants: the Player, representing
the term, and the Opponent, representing its context. A game (arena) is defined by means of a set of
moves, each being either a question move or an answer move. Each move represents an observable
action that a term of a given type can perform. Apart from moves, another kind of action, called a token
(slot), is used to take account of quantitative aspects of terms. It represents a payment that a participant
needs to pay in order to use a resource such as time. A computation is interpreted as a play-with-costs,
which is given as a sequence of moves and token-actions played by the two participants in turns.
We will work here with complete plays-with-costs, which represent the observable effects along with
the incurred costs of a completed computation. Then a term is modelled by a strategy-with-costs, which
is a set of complete plays-with-costs. In the regular-language representation of game semantics [10],
types (arenas) are expressed as alphabets of moves, computations (plays-with-costs) as words, and terms
(strategies-with-costs) as regular languages over alphabets.
Each type T is interpreted by an alphabet of moves A_[[T]], which can be partitioned into two subsets
of questions Q_[[T]] and answers A_[[T]]. For expressions we have Q_[[expD]] = {q} and A_[[expD]] = D, i.e. there
is a question move q to ask for the value of the expression, and values from D are the possible answers.
For commands we have Q_[[com]] = {run} and A_[[com]] = {done}, i.e. there is a question move run
to initiate a command and an answer move done to signal successful termination of a command. For
variables we have Q_[[varD]] = {read, write(a) | a ∈ D} and A_[[varD]] = D ∪ {ok}, i.e. there are moves
for writing to the variable, write(a), acknowledged by the move ok; and for reading from the variable
we have a question move read, whose answer can be any value from D. For function types we
have A_[[B1 → ... → Bk → B]] = ∑_{1≤i≤k} A^i_[[Bi]] + A_[[B]], where + means a disjoint union of alphabets. We will use
superscript tags to keep a record of which component of the disjoint union each move comes from. We denote
the token-action by $. A sequence of n token-actions $ will be written as $^n.
For any (β-normal) term we define a regular language specified by an extended regular expression R.
Apart from the standard operations for generating regular expressions, we will use some more specific
operations. We define composition of regular expressions R, defined over the alphabet A¹ + B² + {$}, and
S, defined over B² + C³ + {$}, as follows:

R ⨾_{B²} S = { w[s / a² · b²] | w ∈ S, a² · s · b² ∈ R }

where R is a set of words of the form a² · s · b², such that a², b² ∈ B² and s contains only letters from A¹
and {$}. Notice that the composition is defined over A¹ + C³ + {$}, and all letters of B² are hidden.
The shuffle operation R ⊲⊳ S generates the set of all possible interleavings of words of R and S, and
the restriction operation R |_{A′} (R defined over A and A′ ⊆ A) removes from words of R all letters from
A′.
If w, w′ are words, m is a move, and R is a regular expression, define m · w ⌢ w′ = m · w′ · w, and
R ⌢ w′ = {w ⌢ w′ | w ∈ R}. Given a word-with-costs w defined over A + {$}, we define the underlying
word of w as w† = w |_{{$}}, and the cost of w as w |_A = $^n, which we denote as | w | = n.
The regular expression for Γ ⊢ M : T is denoted [[Γ ⊢ M : T]] and is defined over the alphabet
A_[[Γ⊢T]] = ∑_{x:T′ ∈ Γ} A^x_[[T′]] + A_[[T]] + {$}. Every word in [[Γ ⊢ M : T]] corresponds to a complete play-with-costs in
the strategy-with-costs for Γ ⊢ M : T.
Free identifiers x ∈ Γ are interpreted by copy-cat regular expressions, which contain all possible
computations that terms of that type can have. Thus they provide the most general closure of an open
term.

[[Γ, x : B^{x,1}_1 → . . . → B^{x,k}_k → B^x ⊢ x : B_1 → . . . → B_k → B]] =
    ∑_{q ∈ Q_[[B]]} q · q^x · ( ∑_{1≤i≤k} ∑_{q1 ∈ Q_[[Bi]]} q^{x,i}_1 · q^i_1 · ∑_{a1 ∈ A_[[Bi]]} a^i_1 · a^{x,i}_1 )* · ∑_{a ∈ A_[[B]]} a^x · a

When a first-order non-local function is called, it may evaluate any of its arguments, zero or more times,
and then it can return any value from its result type as an answer. For example, the term [[Γ, x : expD^x ⊢
x : expD]] is modelled by the regular expression q · q^x · ∑_{n∈D} n^x · n.
The linear application is defined as:

[[Γ, ∆ ⊢ M N : T]] = [[∆ ⊢ N : B^1]] ⨾_{A^1_[[B]]} [[Γ ⊢ M : B^1 → T]]
Since we work with terms in β-normal form, function application can occur only when the function term
is a free identifier. In this case, the interpretation is the same as above except that we add the cost k_app
corresponding to function application. Notice that k_app denotes a certain number of $ tokens that are needed
for a function application to take place. The contraction [[Γ, x : T^x ⊢ M[x/x1, x/x2] : T′]] is obtained from
[[Γ, x1 : T^{x1}, x2 : T^{x2} ⊢ M : T′]] by de-tagging the moves associated with x1 and x2 so that they
represent actions associated with x.
To represent local variables, we first need to define a (storage) 'cell' regular expression cell_v which
imposes the good-variable behaviour on the local variable. So cell_v responds to each write(n) with ok,
and plays the most recently written value in response to read; if no value has been written yet, it
answers read with the initial value v. Then we have:

cell_v = (read · v)* · ( ∑_{n∈D} write(n) · ok · (read · n)* )*

[[Γ, x : varD ⊢ M]] ∘ cell^x_v = ( [[Γ, x : varD ⊢ M]] ∩ (cell^x_v ⊲⊳ (A_[[Γ⊢B]] + $)*) ) |_{A^x_[[varD]]}

[[Γ ⊢ newD x := v in M]] = ( [[Γ, x : varD ⊢ M]] ∘ cell^x_v ) ⌢ k_var

Note that all actions associated with x are hidden away in the model of new, since x is a local variable
and so is not visible outside of the term.
Language constants and constructs are interpreted as follows:
[[v : expD]] = {q · v} [[skip : com]] = {run · done} [[diverge : com]] = 0/
[[op : expD1 × expD2 → expD′ ]] = q · kop · q1 · ∑m∈D m1 · q2 · ∑n∈D n2 ·(m op n)
[[; : com1 → com2 → com]] = run · run1 · done1 · kseq · run2 · done2 · done
[[if : expbool1 → com2 → com3 → com]] = run · kif · q1 · tt1 · run2 · done2 · done +
run · kif · q1 · ff 1 · run3 · done3 · done
[[while : expbool1 → com2 → com]] = run · (kif · q1 · tt1 · run2 · done2 )∗ · kif · q1 · ff 1 · done
[[:=: varD1 → expD2 → com]] = ∑n∈D run · kasg · q2 · n2 · write(n)1 · ok1 · done
[[! : varD1 → expD]] = ∑n∈D q · kder · read1 · n1 · n
Although it is not important at what position in a word costs are placed, for simplicity we decide to attach
them just after the initial move. The only exception is the rule for sequential composition (; ), where the
cost is placed between two arguments. The reason will be explained later on.
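For instance, the complete plays-with-costs of [[while]] with at most a given number of iterations can be enumerated directly; the sketch below is only an illustration of the rule above, writing tags with an underscore.

def while_words(bound):
    # run . (kif . q1 . tt1 . run2 . done2)^r . kif . q1 . ff1 . done, for r = 0..bound
    body = ["kif", "q_1", "tt_1", "run_2", "done_2"]
    exit_ = ["kif", "q_1", "ff_1", "done"]
    return [["run"] + body * r + exit_ for r in range(bound + 1)]

# while_words(1)[1] == ['run', 'kif', 'q_1', 'tt_1', 'run_2', 'done_2', 'kif', 'q_1', 'ff_1', 'done']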
We now show how the slot-games model relates to the operational semantics. First, we need to show how to represent the state explicitly in the model. A Γ-state s is interpreted as follows:
[[s : varD1^{x1} × . . . × varDk^{xk}]] = cell_{s(x1)}^{x1} ⊲⊳ . . . ⊲⊳ cell_{s(xk)}^{xk}
The regular expression [[s]] is defined over the alphabet A^{x1}[[varD1]] + . . . + A^{xk}[[varDk]], and words in [[s]] are such that projections onto the xi-component are the same as those of the suitably initialized cell_{s(xi)} strategies.
Note that [[s]] is a regular expression without costs. The interpretation of Γ ⊢ M : com at state s is:
[[Γ ⊢ M]] ◦ [[s]] = [[Γ ⊢ M]] ∩ ([[s]] ⊲⊳ (A[[com]] + $ )∗ ) |A[[Γ]]
which is defined over the alphabet A[[com]] + { $ }. The interpretation [[Γ ⊢ M]] ◦ [[s]] can be studied
more closely by considering words in which moves from A[[Γ]] are not hidden. Such words are called
interaction sequences. For any interaction sequence run · t · done ⊲⊳ n from [[Γ ⊢ M]] ◦ [[s]], where t
is an even-length word over A[[Γ]], we say that it leaves the state s′ if the last write moves in each xi-component are such that xi is set to the value s′(xi). For example, let s = (x ↦ 1, y ↦ 2), then the
following interaction: run · write(5)y · oky · readx · 1x · done leaves the state s′ = (x ↦ 1, y ↦ 5). Any two-move word of the form read^{xi} · n^{xi} or write(n)^{xi} · ok^{xi} will be referred to as an atomic state operation of A[[Γ]].
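The "leaves the state s′" condition can be computed by scanning an interaction sequence for the last write move of each variable; the sketch below is a hypothetical helper (moves written as strings such as "write(5)_y" or "read_x").

import re

def leaves_state(interaction, state):
    # state is a dict mapping variable names to values
    s = dict(state)
    for move in interaction:
        m = re.match(r"write\((\d+)\)_(\w+)", move)
        if m:                       # the last write to x_i determines s'(x_i)
            s[m.group(2)] = int(m.group(1))
    return s

# the example from the text: run . write(5)_y . ok_y . read_x . 1_x . done
trace = ["run", "write(5)_y", "ok_y", "read_x", "1_x", "done"]
assert leaves_state(trace, {"x": 1, "y": 2}) == {"x": 1, "y": 5}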
The following results are proved in [11] for the full ICA (IA plus parallel composition and semaphores),
but they also hold for the restricted fragment of it.
Proposition 1. If Γ ⊢ M : {com, expD} and Γ ⊢ M, s −→n M ′ , s′ , then for each interaction sequence i · t
from [[Γ ⊢ M ′ ]] ◦ [[s′ ]] (i is an initial move) there exists an interaction i · ta · t a n ∈ [[Γ ⊢ M]] ◦ [[s]] such
that ta is an empty word or an atomic state operation of A[[Γ]] which leaves the state s′ .
Proposition 2. If Γ ⊢ M, s −→n M′, s′ then [[Γ ⊢ M′]] ◦ [[s′]] ⊲⊳ n ⊆ [[Γ ⊢ M]] ◦ [[s]].
Theorem 1 (Consistency). If M, s ⇓n then ∃ w ∈ [[Γ ⊢ M]] ◦ [[s]] such that | w |= n and w† = run · done .
Theorem 2 (Computational Adequacy). If ∃ w ∈ [[Γ ⊢ M]] ◦ [[s]] such that | w |= n and w† = run · done,
then M, s ⇓n .
We say that a regular expression R is improved by S, denoted as R & S, if ∀ w ∈ R, ∃ t ∈ S, such that w† = t† and | w | ≥ | t |.
Theorem 3 (Full Abstraction). Γ ⊢ M & N iff [[Γ ⊢ M]] & [[Γ ⊢ N]].
This shows that the two theories of improvement based on operational and game semantics are identical.
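On finite sets of words-with-costs, the improvement relation R & S can be checked directly; the sketch below reuses underlying and cost from the earlier sketch and is an illustration only.

def improved_by(R, S):
    # R & S: every word of R has a word of S with the same underlying word and no larger cost
    return all(
        any(underlying(w) == underlying(t) and cost(w) >= cost(t) for t in S)
        for w in R
    )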
4 Detecting Timing Leaks
In this section slot-game semantics is used to detect whether a term with a secret global variable h can
leak information about the initial value of h through its timing behaviour.
For this purpose, we define a special command skip# which, like skip, does nothing, but its slot-game semantics is: [[skip# ]] = {run · # · done}, where # is a new special action, called the delimiter.
Since we verify security of a term by running two copies of the same term one after the other, we will
use the command skip# to specify the boundary between these two copies. In this way, we will be able
to calculate running times of the two terms separately.
Theorem 4. Let h : varD | − ⊢ M : com be a semi-closed term, and let
R = [[k : expD ⊢ newD h := k in M; skip# ; newD h′ := k in M′ : com]]
(3)
where the free identifier k in (3) is used to initialize the variables h and h′ to arbitrary values from D. Any word of R is of the form w = w1 · # · w2 such that | w1 | = | w2 | iff M has no timing leaks, i.e. the fact (2) holds.
Proof. Suppose that any word w ∈ R is of the form w = w1 · # · w2 such that | w1 | = | w2 |. Let us analyse the regular expression R defined in (3). We have:
R = {run · kvar · qk · vk · w1 · kseq · # · kseq · kvar · qk · v′k · w2 · done | run · w1 · done ∈ [[h ⊢ M]] ◦ cell_v^h, run · w2 · done ∈ [[h′ ⊢ M′]] ◦ cell_{v′}^{h′}}
for arbitrary values v, v′ ∈ D. In order to ensure that one kseq unit of cost occurs before and after the delimiter action, kseq is played between the two arguments of the sequential composition, as was described in Section 3. Given that run · w1 · done ∈ [[h ⊢ M]] ◦ cell_v^h and run · w2 · done ∈ [[h′ ⊢ M′]] ◦ cell_{v′}^{h′} for any
v, v′ ∈ D, by Computational Adequacy we have that M, (h ↦ v) ⇓|w1| and M′, (h′ ↦ v′) ⇓|w2|. Since | w1 | = | w2 |, it follows that the fact (2) holds.
Let us consider the opposite direction. Suppose that the fact (2) holds. The term in (3) is α-equivalent to k ⊢ newD h := k in newD h′ := k in M; skip# ; M′. Consider [[h, h′ ⊢ M; skip# ; M′]] ◦ [[(h ↦ v) ⊗ (h′ ↦ v′)]], where v, v′ ∈ D. By Consistency, we have that ∃ w1 ∈ [[h, h′ ⊢ M]] ◦ [[(h ↦ v) ⊗ (h′ ↦ v′)]] such that | w1 | = n and w1 leaves the state (h ↦ v1) ⊗ (h′ ↦ v′), and ∃ w2 ∈ [[h, h′ ⊢ M′]] ◦ [[(h ↦ v1) ⊗ (h′ ↦ v′)]] such that | w2 | = n and w2 leaves the state (h ↦ v1) ⊗ (h′ ↦ v′1). Any word w ∈ R is obtained from w1 and w2 as above (| w1 | = | w2 |), and so satisfies the requirements of the theorem.
We can detect timing leaks from a semi-closed term by verifying that all words in the model in (3)
are in the required form. To do this, we restrict our attention only to the costs of words in R.
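As an illustration of this check (not part of the formal development), the following sketch tests, for a finite set of words given as lists of symbols with explicit "$" cost tokens and the delimiter "#", that every word splits into two halves of equal cost.

def no_timing_leak(words):
    # every word must be w1 . '#' . w2 with equally many cost tokens on both sides
    for w in words:
        if "#" not in w:
            return False
        i = w.index("#")
        if w[:i].count("$") != w[i + 1:].count("$"):
            return False
    return True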
Example 1. Consider the term:
h : var int2 ⊢ if (!h > 0) then h := !h + 1; else skip : com
The slot-game semantics of this term extended as in (3) is:
run · kvar · qk · ( 0k · kseq · # · kseq · kvar · qk · (0k · done + 1k · kder · k+ · done)
+ 1k · kseq · kder · k+ · # · kseq · kvar · qk · (0k · done + 1k · kder · k+ · done) )
This model includes all possible observable interactions of the term with its environment, which contains
only the identifier k, along with the costs measuring its running time. Note that the first value for k read
from the environment is used to initialize h, while the second value for k is used to initialize h′ .
By inspecting we can see that the model contains the word:
run · kvar · qk · 0k · kseq · # · kseq · kvar · qk · 1k · kder · k+ · done
which is not of the required form. This word (play) corresponds to two computations of the given term where the initial values of h are 0 and 1 respectively, such that the second computation costs kder + k+ units more than the first one.
We now show how to detect timing leaks of a split (open) term h : varD | ∆ ⊢ M : com, where ∆ =
x1 : T1 , . . . , xk : Tk . To do this, we need to check timing efficiency of the following model:
[[h, h′ : varD ⊢ M[N1 /x1 , . . . , Nk /xk ]; skip# ; M ′ [N1 /x1 , . . . , Nk /xk ]]]
(4)
at state (h 7→ v, h′ 7→ v′ ), for any closed terms ⊢ N1 : T1 , . . . , ⊢ Nk : Tk , and for any values v, v′ ∈ D. As
we have shown slot-game semantics respects theory of operational improvement, so we will need to
examine whether all its complete plays-with-costs s are of the form s1 · # · s2 where | s1 | = | s2 |. However, the model in (4) cannot be represented as a regular language, so it cannot be used directly for detecting timing leaks.
Let us consider more closely the slot-game model in (4). Terms M and M ′ are run in the same context
∆, which means that each occurrence of a free identifier xi from ∆ behaves uniformly in both M and M ′ .
So any complete play-with-costs of the model in (4) will be a concatenation of complete plays-with-costs
from models for M and M ′ with additional constraints that behaviours of free identifiers from ∆ are the
same in M and M ′ . If these additional constraints are removed from the above model, then we generate
a model which is an over-approximation of it and where free identifiers from ∆ can behave freely in M
and M ′ . Thus we obtain:
[[h, h′ : varD ⊢ M[N1 /x1 , . . . , Nk /xk ]; skip# ; M ′ [N1 /x1 , . . . , Nk /xk ]]] ⊆
[[h, h′ : varD ⊢ M; skip# ; M ′ [N1 /x1 , . . . , Nk /xk ]]]
If ⊢ N1 : T1 , . . . , ⊢ Nk : Tk are arbitrary closed terms, then they are interpreted by identity (copy-cat)
strategies corresponding to their types, and so we have:
[[h, h′ : varD ⊢ M; skip# ; M ′ [N1 /x1 , . . . , Nk /xk ]]] = [[h, h′ : varD, ∆ ⊢ M; skip# ; M ′ ]]
This model is a regular language and we can use it to detect timing leaks.
Theorem 5. Let h : varD | ∆ ⊢ M : com be a split (open) term, where ∆ = x1 : T1 , . . . , xk : Tk , and
S = [[k : expD, ∆ ⊢ newD h := k in M; skip# ; newD h′ := k in M ′ : com]]
(5)
If every word of S is of the form w = w1 · # · w2 such that | w1 | = | w2 |, then h : varD | ∆ ⊢ M has no timing leaks.
Note that the opposite direction of the above result does not hold. That is, if there exists a word from S which is not of the required form, it does not follow that M has timing leaks, since the found word (play) may be a spurious one introduced by the over-approximation in the model in (5), and so it may not be present in the model in (4).
Example 2. Consider the term:
h : varint2 , f : expint2 f ,1 → comf ⊢ f (!h) : com
where f is a non-local call-by-name function.
The slot-game model for this term is as follows:
run · kapp · runf · (qf ,1 · kder · readh · (0h · 0f ,1 + 1h · 1f ,1 ))∗ · donef · done
Once f is called, it may evaluate its argument, zero or more times, and then it terminates successfully.
Notice that moves tagged with f represent the actions of calling and returning from the function f , while
moves tagged with f , 1 indicate actions of the first argument of f .
If we generate the slot-game model of this term extended as in (5), we obtain a word which is not in
the required form:
run · kvar · qk · 0k · kapp · runf · qf ,1 · kder · 0f ,1 · donef · kseq · # · kseq · kvar · qk · 1k · kapp · runf · donef · done
This word corresponds to two computations of the term, where the first one calls f which evaluates its
argument once, and the second calls f which does not evaluate its argument at all. The first computation
will have the cost of kder units more than the second one. However, this is a spurious counter-example,
since f does not behave uniformly in the two computations, i.e. it calls its argument in the first but not in
the second computation.
To handle this problem, we can generate an under-approximation of the model given in (4) which can
be represented as a regular language. Let h : varD | ∆ ⊢ M be a term derived without using the contraction
rule for any identifier from ∆. Consider the following model:
[[h, h′ : varD | ∆ ⊢ M; skip# ; M ′ ]]m = [[h, h′ : varD | ∆ ⊢ M; skip# ; M ′ ]] ∩
(deltaxT11 ,m ⊲⊳ . . . ⊲⊳ deltaxTkk ,m ⊲⊳ (A[[h,h′ :varD⊢com]] + $ )∗ )
(6)
where m ≥ 0 bounds the number of times that free identifiers of function types may evaluate their arguments. The regular expressions deltaT,m are used to repeat an arbitrary behaviour of terms of type T zero times or once, and are defined as follows.
deltaexpD,0 = q · ∑n∈D n · (ε + q · n)
deltacom,0 = run · done · (ε + run · done)
deltavarD,0 = (read · ∑n∈D n · (ε + read · n)) + (∑n∈D write(n) · ok · (ε + write(n) · ok))
If T is a first-order function type, then deltaT,m will be a regular language only when the number of times its arguments can be evaluated is limited. For example, we have that:
deltacom1→com,m = run · ∑0≤r≤m (run1 · done1)^r · done · (ε + run · (run1 · done1)^r · done)
If T is a function type with k arguments, then we have to remember not only how many times arguments
are evaluated in the first call, but also the exact order in which arguments are evaluated.
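The following sketch enumerates delta_{com1→com,m} as a finite set of words (an illustration only; tags are written with underscores): each word plays one call of the function that evaluates its argument r ≤ m times, optionally repeated a second time with the same r.

def delta_com1_to_com(m):
    words = []
    for r in range(m + 1):
        once = ["run"] + ["run_1", "done_1"] * r + ["done"]
        words.append(once)              # the behaviour played once
        words.append(once + once)       # the same behaviour repeated a second time
    return words

# delta_com1_to_com(1) contains, e.g., ['run', 'run_1', 'done_1', 'done'] and its doubled variant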
Notice that we allow an arbitrary behaviour of type T to be repeated zero times or once in deltaT,m, since it is possible that, depending on the current value of h, an occurrence of a free identifier from ∆ is run in M but not in M′, or vice versa. For example, consider the term:
h : var int2 | x, y : exp int2 ⊢ newint2 z := 0 in if (!h > 0) then z := x else z := y + 1
This term has timing leaks, and the corresponding counter-example contains only one interaction with x occurring in one computation, and one interaction with y occurring in the other computation. This counter-example will be included in the model in (6) only if deltaT,m is defined as above.
Let h : varD | ∆ ⊢ M be an arbitrary term where identifiers from ∆ may occur more than once in
M. Let h : varD | ∆1 ⊢ M1 be derived without using the contraction for ∆1 , such that h : varD | ∆ ⊢ M
is obtained from it by applying one or more times the contraction rule for identifiers from ∆. Then
[[h, h′ : varD | ∆ ⊢ M; skip# ; M ′ ]]m is obtained by first computing [[h, h′ : varD | ∆1 ⊢ M1 ; skip# ; M1′ ]]m as
defined in (6), and then by suitably tagging all moves associated with several occurrences of the same identifier from ∆, as described in the interpretation of contraction. We have that:
[[h, h′ : varD, ∆ ⊢ M; skip# ; M ′ ]]m ⊆ [[h, h′ : varD ⊢ M[N1 /x1 , . . . , Nk /xk ]; skip# ; M ′ [N1 /x1 , . . . , Nk /xk ]]]
for any m ≥ 0 and arbitrary closed terms ⊢ N1 : T1 , . . . , ⊢ Nk : Tk .
In the case that ∆ contains only identifiers of base types B which do not occur in any while-subterm of M, the subset relation above becomes an equality for m = 0. If a free identifier occurs in a while-subterm of M, then it can be called arbitrarily many times in M, and so we cannot reproduce its behaviour in M′.
Theorem 6. Let h : varD | ∆ ⊢ M be a split (open) term, where ∆ = x1 : T1 , . . . , xk : Tk , and
T = [[k : expD, ∆ ⊢ newD h := k in M; skip# ; newD h′ := k in M ′ : com]]m
(7)
(i) Let ∆ contain only identifiers of base types B which do not occur in any while-subterm of M. Any word of T (where m = 0) is of the form w1 · # · w2 such that | w1 | = | w2 | iff M has no timing leaks.
(ii) Let ∆ be an arbitrary context. If there exists a word w = w1 · # · w2 ∈ T such that | w1 | ≠ | w2 |, then M does have timing leaks.
Note that if a counter-example witnessing a timing leak is found, then it provides a specific context ∆, i.e. a concrete definition of the identifiers from ∆, for which the given open term has timing leaks.
5 Detecting Timing-Aware Non-interference
The slot-game semantics model contains enough information to check the non-interference property of
terms along with timing leaks. The method for verifying the non-interference property is analogous to
the one described in [8], where we use the standard game semantics model. As slot-game semantics
can be considered as the standard game semantics augmented with the information about quantitative
assessment of time usage, we can use it as underlying model for detection of both non-interference
property and timing leaks, which we call timing-aware non-interference.
In what follows, we show how to verify timing-aware non-interference property for closed terms. In
the case of open terms, the method can be extended straightforwardly by following the same ideas for
handling open terms described in Section 4.
Let l : varD, h : varD′ ⊢ M : com be a term where l and h represent low- and high-security global
variables respectively. We define Γ1 = l : varD, h : varD′, Γ′1 = l′ : varD, h′ : varD′, and M′ is α-equivalent to M[l′/l, h′/h] where all bound variables are suitably renamed. We say that Γ1 | − ⊢ M : com satisfies
timing-aware non-interference if
∀ s1 ∈ St(Γ1), s2 ∈ St(Γ′1). s1(l) = s2(l′) ∧ s1(h) ≠ s2(h′) ∧
Γ1 ⊢ M; M′, s1 ⊗ s2 −→n1 skip; M′, s1′ ⊗ s2 −→n2 skip; skip, s1′ ⊗ s2′
⇒ s1′(l) = s2′(l′) ∧ n1 = n2
Suppose that abort is a special free identifier of type com^abort in Γ. We say that a term Γ ⊢ M is safe iff Γ ⊢ M[skip/abort] ⊏∼ M[diverge/abort], where ⊏∼ denotes observational approximation of terms (see [1]); otherwise we say that the term is unsafe. It has been shown in [5] that a term Γ ⊢ M is safe iff [[Γ ⊢ M]] does not contain any play with moves from A^abort[[com]], which we call unsafe plays. For example, [[abort : com^abort ⊢ skip ; abort : com]] = run · run^abort · done^abort · done,
so this term is unsafe.
By using Theorem 4 from Section 4 and the corresponding result for closed terms from [8], it is easy
to show the following result.
L = [[k : expD, k′ : expD′ , abort : com ⊢ newD l := k in newD′ h := k′ in
newD l′ := !l in newD′ h′ := k′ in
skip# ; M; skip# ; M′ ; skip# ; if (!l ≠ !l′) then abort : com]]
(8)
The regular expression L contains no unsafe words (plays) and all its words are of the form w = w1 · # · w2 · # · w3 · # · w4 such that | w2 | = | w3 | iff M satisfies the timing-aware non-interference property.
Notice that the free identifier k in (8) is used to initialize the variables l and l′ to any value from D
which is the same for both l and l′ , while k′ is used to initialize h and h′ to any values from D′ . The last
if command is used to check values of l and l′ in the final state after evaluating the term in (8). If their
values are different, then abort is run.
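As with Theorem 4, this combined property can be checked on a finite set of explicit words; the sketch below is illustrative only and assumes that abort-moves carry the suffix "_abort" and that costs appear as explicit "$" tokens.

def timing_aware_ni(words):
    for w in words:
        if any(m.endswith("_abort") for m in w):    # an unsafe play: abort was run
            return False
        parts = [[]]
        for m in w:
            if m == "#":
                parts.append([])        # start a new segment at each delimiter
            else:
                parts[-1].append(m)
        if len(parts) != 4:             # expect w1 # w2 # w3 # w4
            return False
        if parts[1].count("$") != parts[2].count("$"):
            return False                # the two runs of M must have equal cost
    return True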
6 Application
We can also represent the slot-game semantics model of IA2 by using the CSP process algebra. This can be done by extending the CSP representation of standard game semantics given in [6], attaching the costs corresponding to each translation rule. In the same way, we have adapted the verification tool in [6] to automatically convert an IA2 term into a CSP process [17] that represents its slot-game semantics. The CSP process output by our tool is defined by a script in machine-readable CSP which can be analyzed by the FDR tool. FDR is a model checker for the CSP process algebra, and in this way a range of properties of terms can be verified by calls to it.
Figure 1: Slot-game semantics for the linear search with k = 2
In the input syntax of terms, we use simple type annotations to indicate what finite sets of integers will
be used to model free identifiers and local variables of type integer. An operation between values of types
intn1 and intn2 produces a value of type intmax{n1 ,n2 } . The operation is performed modulo max{n1 , n2 }.
In order to use this tool to check for timing leaks in terms, we need to encode the required property
as a CSP process (i.e. regular-language). This can be done only if we know the cost of the worst plays
(paths) in the model of a given term. We can calculate the worst-case cost of a term by generating its
model, and then by counting the number of tokens in its plays. The property we want to check will be:
∑0≤i≤n $^i · # · $^i, where n denotes the worst-case cost of a term.
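For a given worst-case cost n, the property language and the containment check can be sketched as follows (an illustration assuming words are given as lists with explicit "$" and "#" tokens; the actual tool encodes this property as a CSP process checked by FDR).

def property_language(n):
    # all balanced cost words $^i . # . $^i with i <= n
    return {("$",) * i + ("#",) + ("$",) * i for i in range(n + 1)}

def cost_projection(w):
    # keep only the cost and delimiter symbols of a word
    return tuple(m for m in w if m in ("$", "#"))

def satisfies(words, n):
    prop = property_language(n)
    return all(cost_projection(w) in prop for w in words)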
To demonstrate practicality of this approach for automated verification, we consider the following
implementation of the linear-search algorithm.
h : varint2 , x[k] : varint2 ⊢
newint2 a[k] := 0 in
newintk+1 i := 0 in
while (i < k) do {a[i] :=!x[i]; i :=!i + 1; } i := 0;
newint2 y := !h in
newbool present := ff in
while (i < k && ¬present) do {
if (compare(!a[i], !y)) then present := tt;
i :=!i + 1;
} : com
The meta variable k > 0 represents the array size. The term copies the input array x into a local array a,
and the input value of h into a local variable y. The linear-search algorithm is then used to find whether
the value stored in y is in the local array. At the moment when the value is found in the array, the term
terminates successfully. Note that arrays are introduced in the model as syntactic sugar by using existing
term formers. So an array x[k] is represented as a set of k distinct variables x[0], . . . , x[k − 1] (see [6, 10]
for details).
Suppose that we are only interested in measuring the efficiency of the term relative to the number of
compare operations. The operation is declared as compare : expint2 → expint2 → expbool, and its semantics compares the values of its two arguments for equality, with cost $ :
[[compare : expint2^1 → expint2^2 → expbool]] = q · $ · q1 · ( ∑m≠n m1 · q2 · n2 · ff + ∑m=n m1 · q2 · n2 · tt )
where m, n ∈ {0, 1}. We assume that the costs of all other operations are relatively negligible (e.g.
kvar = kder = . . . = 0).
We show the model for this term with k = 2 in Fig. 1. The worst-case cost of this term is equal to
the array’s size k, which occurs when the search fails or the value of h is compared with all elements of
the array. We can perform a security analysis for this term by considering the model extended as in (7),
where m = 0. We obtain that this term has timing leaks, with a counter-example corresponding to two
computations, such that initial values of h are different, and the search succeeds in the one after only one
iteration of while and fails in the other. For example, this will happen when all values in the array x are
0’s, and the value of h is 0 in the first computation and 1 in the second one.
We can also automatically analyse, in an analogous way, terms where the array size k is much larger. Also, the set of data that can be stored in the global variable h and the array x can be larger than {0, 1}. In these cases we will obtain models with a much bigger number of states, but they can still be automatically analysed by calls to the FDR tool.
7 Conclusion
In this paper we have described how game semantics can be used for verifying security properties of
open sequential programs, such as timing leaks and non-interference. This approach can be extended to
terms with infinite data types, such as integers, by using some of the existing methods and tools based
on game semantics for verifying such terms. Counter-example guided abstraction refinement procedure
(ARP) [5] and symbolic representation of game semantics model [7] are two methods which can be used
for this aim. The technical apparatus introduced here applies not only to time as a resource but to any
other observable resource, such as power or heating of the processor. They can all be modeled in the
framework of slot games and checked for information leaks.
We have focussed here on analysing the IA language, but we can easily extend this approach to any
other language for which game semantics exists. Since fully abstract game semantics has also been defined for probabilistic [4] and concurrent [12] programs, and for programs with exceptions [1], it will be interesting to extend this approach to such programs.
References
[1] Abramsky, S., and McCusker, G: Game Semantics. In Proceedings of the 1997 Marktoberdorf Summer
School: Computational Logic , (1998), 1–56. Springer.
[2] Agat, J: Transforming out Timing Leaks. In: Wegman, M.N., Reps, T.W. (eds.) POPL 2000. ACM, pp.
40–53. ACM, New York (2000), doi:10.1145/325694.325702.
[3] Barthe, G., D’Argenio, P.R., Rezk, T: Secure information flow by self-composition. In: IEEE CSFW
2004. pp. 100–114. IEEE Computer Society Press, (2004), doi:10.1109/CSFW.2004.17.
[4] V. Danos and R. Harmer. Probabilistic Game Semantics. In Proceedings of LICS 2000. 204–213. IEEE
Computer Society Press, Los Alamitos (2000), doi:10.1109/LICS.2000.855770.
[5] Dimovski, A., Ghica, D. R., Lazić, R. Data-Abstraction Refinement: A Game Semantic Approach. In:
Hankin, C., Siveroni, I. (eds.) SAS 2005. LNCS vol. 3672, pp. 102–117. Springer, Heidelberg (2005),
doi:10.1007/11547662 9.
[6] Dimovski, A., Lazić, R: Compositional Software Verification Based on Game Semantics and Process
Algebras. In Int. Journal on STTT 9(1), pp. 37–51, (2007), doi:10.1007/s10009-006-0005-y.
[7] Dimovski, A: Symbolic Representation of Algorithmic Game Semantics. In: Faella, M., Murano, A. (eds.) GandALF 2012. EPTCS vol. 96, pp. 99–112. Open Publishing Association, (2012),
doi:10.4204/EPTCS.96.8.
A. S. Dimovski
179
[8] Dimovski, A: Ensuring Secure Non-interference of Programs by Game Semantics. Submitted for publication.
[9] Cartwright, R., Curien, P. L., and Felleisen, M: Fully abstract semantics for observably sequential
languages. In Information and Computation 111(2), pp. 297–401, (1994), doi:10.1006/inco.1994.1047.
[10] Ghica, D. R., McCusker, G: The Regular-Language Semantics of Second-order Idealized Algol. Theoretical Computer Science 309 (1–3), pp. 469–502, (2003), doi:10.1016/S0304-3975(03)00315-3.
[11] Ghica, D. R. Slot Games: a quantitative model of computation. In Palsberg, J., Abadi, M. (eds.) POPL
2005. ACM, pp. 85–97. ACM Press, New York (2005), doi:10.1145/1040305.1040313.
[12] Ghica, D. R., Murawski, A: Compositional Model Extraction for Higher-Order Concurrent Programs.
In: Hermanns, H., Palsberg, J. (eds.) TACAS 2006. LNCS vol. 3920, pp. 303–317. Springer, Heidelberg
(2006), doi:10.1007/11691372 20.
[13] Goguen, J., Meseguer, J: Security polices and security models. In: IEEE Symp. on Security and Privacy
1982. pp. 11–20. IEEE Computer Society Press, (1982).
[14] Heintze, N., Riecke, J.G: The SLam calculus: programming with secrecy and integrity. In:
MacQueen, D.B., Cardelli, L. (eds.) POPL 1998. ACM, pp. 365–377. ACM, New York (1998),
doi:10.1145/268946.268976.
[15] Joshi, R., and Leino, K.R.M: A semantic approach to secure information flow. In Science of Computer
Programming 37, pp. 113–138, (2000), doi:10.1016/S0167-6423(99)00024-6.
[16] Reynolds, J. C: The essence of Algol. In: O’Hearn, P.W, and Tennent, R.D. (eds), Algol-like languages.
(Birkhaüser, 1997).
[17] Roscoe, W. A: Theory and Practice of Concurrency. Prentice-Hall, 1998.
[18] Sabelfeld, A., and Myers, A.C: Language-based information-flow security. In IEEE Journal on Selected
Areas in Communications 21(1), (2003), 5–19, doi:10.1109/JSAC.2002.806121.
[19] Sands, D: Improvement Theory and its Applications. Cambridge University Press, 1998.
[20] Volpano, D., Smith, G., and Irvine, C: A sound type system for secure flow analysis. In Journal of
Computer Security 4(2/3), (1996), 167–188, doi:10.3233/JCS-1996-42-304.
[21] Volpano, D., Smith, G: Eliminating covert flows with minimum typings.
In: IEEE Computer
Security Foundations Workshop (CSFW), 1997, 156–169. IEEE Computer Society Press, (1997),
doi:10.1109/CSFW.1997.596807.