arXiv:1608.03703v2, 26 Apr 2017
Template estimation in computational anatomy:
Fréchet means in top and quotient spaces are
not consistent
Loïc Devilliers∗, Stéphanie Allassonnière†, Alain Trouvé‡,
and Xavier Pennec§
April 27, 2017
Abstract
In this article, we study the consistency of the template estimation
with the Fréchet mean in quotient spaces. The Fréchet mean in quotient
spaces is often used when the observations are deformed or transformed
by a group action. We show that in most cases this estimator is actually
inconsistent. We exhibit a sufficient condition for this inconsistency, which
amounts to the folding of the distribution of the noisy template when it
is projected to the quotient space. This condition appears to be fulfilled
as soon as the support of the noise is large enough. To quantify this
inconsistency we provide lower and upper bounds of the bias as a function
of the variability (the noise level). This shows that the consistency bias
cannot be neglected when the variability increases.
Keywords: Template, Fréchet mean, group action, quotient space, inconsistency, consistency bias, empirical Fréchet mean, Hilbert space, manifold
∗ Université Côte d’Azur, Inria, France, loic.devilliers@inria.fr
† CMAP, Ecole polytechnique, CNRS, Université Paris-Saclay, 91128, Palaiseau, France
‡ CMLA, ENS Cachan, CNRS, Université Paris-Saclay, 94235 Cachan, France
§ Université Côte d’Azur, Inria, France
Contents

1 Introduction
2 Definitions, notations and generative model
3 Inconsistency for a finite group when the template is a regular point
   3.1 Presence of inconsistency
   3.2 Upper bound of the consistency bias
   3.3 Study of the consistency bias in a simple example
4 Inconsistency for any group when the template is not a fixed point
   4.1 Presence of an inconsistency
   4.2 Analysis of the condition in theorem 4.1
   4.3 Lower bound of the consistency bias
   4.4 Upper bound of the consistency bias
   4.5 Empirical Fréchet mean
   4.6 Examples
      4.6.1 Action of translation on L2(R/Z)
      4.6.2 Action of discrete translation on R^(Z/NZ)
      4.6.3 Action of rotations on Rn
5 Fréchet means in top and quotient spaces are not consistent when the template is a fixed point
   5.1 Result
   5.2 Proofs of these theorems
      5.2.1 Proof of theorem 5.1
      5.2.2 Proof of theorem 5.2
6 Conclusion and discussion
A Proof of theorems for the finite group setting
   A.1 Proof of theorem 3.2: differentiation of the variance in the quotient space
   A.2 Proof of theorem 3.1: the gradient is not zero at the template
   A.3 Proof of theorem 3.3: upper bound of the consistency bias
   A.4 Proof of proposition 3.2: inconsistency in R2 for the action of translation
B Proof of lemma 5.1: differentiation of the variance in the top space
1 Introduction
In Kendall’s shape space theory [Ken89], in computational anatomy [GM98],
in statistics on signals, or in image analysis, one often aims at estimating a
template. A template stands for a prototype of the data. The data can be the
shape of an organ studied in a population [DPC+ 14] or an aircraft [LAJ+ 12],
an electrical signal of the human body, an MR image, etc. To analyse the observations, one assumes that these data follow a statistical model. One often
models observations as random deformations of the template with additional
noise. This deformable template model proposed in [GM98] is commonly used
in computational anatomy. The concept of deformation introduces the notion of
group action: the deformations we consider are elements of a group which acts
on the space of observations, called here the top space. Since the deformations
are unknown, one usually considers equivalent classes of observations under the
group action. In other words, one considers the quotient space of the top space
(or ambient space) by the group. In this particular setting, the template estimation is most of the time based on the minimisation of the empirical variance
in the quotient space (for instance [KSW11, JDJG04, SBG08] among many others). The points that minimise the empirical variance are called the empirical
Fréchet mean. The Fréchet mean, introduced in [Fré48], is the set of elements minimising the variance. It generalises the notion of expected value to non-linear spaces. Note that the existence and uniqueness of the Fréchet mean are not ensured, but sufficient conditions can be given to guarantee both (for instance [Kar77] and [Ken90]).
Several group actions are used in practice: some signals can be shifted in
time compared to other signals (action of translations [HCG+ 13]), landmarks
can be transformed rigidly [Ken89], shapes can be deformed by diffeomorphisms [DPC+ 14], etc. In this paper we restrict ourselves to transformations which leave the norm unchanged. Rotations, for instance, leave the norm unchanged. This may seem restrictive, but the square root trick detailed in section 5 allows one to build norms which are unchanged, for instance under the reparametrization of curves by diffeomorphisms, so that our work can be applied in that setting.
We raise several issues concerning the estimation of the template.
1. Is the Fréchet mean in the quotient space equal to the original template
projected in the quotient space? In other words, is the template estimation
with the Fréchet mean in quotient space consistent?
2. If there is an inconsistency, how large is the consistency bias? Indeed,
we may expect the consistency bias to be negligible in many practical
cases.
3. If one gets only a finite sample, one can only estimate the empirical Fréchet
mean. How far is the empirical Fréchet mean from the original template?
These issues originated from an example exhibited by Allassonnière, Amit and
Trouvé [AAT07]: they took a step function as a template, added some noise, and shifted this function in time. By repeating this process they created a data sample from this template. With this data sample, they tried to estimate
the template with the empirical Fréchet mean in the quotient space. In this
example, minimising the empirical variance failed to estimate the template well when the noise added to the template was large, even with a large sample size.
One solution to ensure convergence to the template is to replace this estimation method with a Bayesian paradigm ([AKT10, BG14] or [ZSF13]). But there is a need for a better understanding of the failure of the template estimation with the Fréchet mean, which calls for a study of its inconsistency. Bigot and Charlier [BC11] first studied the question of the template
estimation with a finite sample in the case of translated signals or images by
providing a lower bound of the consistency bias. This lower bound is unfortunately not very informative, since it converges to zero when the dimension of the space tends to infinity. Miolane et al. [MP15, MHP16] later
provided a more general explanation of why the template is badly estimated
for a general group action thanks to a geometric interpretation. They showed
that the external curvature of the orbits is responsible for the inconsistency.
This result was further quantified with Gaussian noise. In this article, we provide sufficient conditions on the noise for which inconsistency appears and we
quantify the consistency bias in the general (not necessarily Gaussian) case.
Moreover, we mostly consider a vector space (possibly infinite dimensional) as
the top space while the article of Miolane et al. is restricted to finite dimensional manifolds. In a preliminary unpublished version of this work [ADP15],
we proved the inconsistency when the transformations come from a finite group
acting by translation. The current article extends these results by generalizing
to any isometric action of finite and non-finite groups.
This article is organised as follows. Section 2 details the mathematical terms
that we use and the generative model. In sections 3 and 4, we exhibit a sufficient condition that leads to an inconsistency when the template is not a fixed point under the group action. This sufficient condition can be roughly understood as follows: with a non-zero probability, the projection of the random variable on the orbit of the template is different from the template itself. This condition is actually quite general. In particular, it is always fulfilled for Gaussian noise or for any noise whose support is the whole space. Moreover
we quantify the consistency bias with lower and upper bounds. We restrict
our study to Hilbert spaces and isometric actions. This means that the space
is linear, the group acts linearly and leaves the norm (or the dot product)
unchanged. Section 3 is dedicated to finite groups. Then we generalise our
result in section 4 to non-finite groups. To complete this study, we extend in
section 5 the result when the template is a fixed point under the group action
and when the top space is a manifold. As a result we show that the inconsistency
exists for almost all noises. Although the bias can be neglected when the noise level is sufficiently small, its linear asymptotic behaviour with respect to the noise level shows that it becomes unavoidable for large noise.
2 Definitions, notations and generative model
We denote by M the top space, which is the image/shape space, and G the
group acting on M. The action is a map:

G × M → M,   (g, m) ↦ g · m,

satisfying the following properties: for all g, g' ∈ G and m ∈ M, (gg') · m = g · (g' · m)
and eG · m = m, where eG is the neutral element of G. For m ∈ M we denote by
[m] the orbit of m (or the class of m). This is the set of points reachable from
m under the group action: [m] = {g · m, g ∈ G}. Note that if we take two orbits
[m] and [n] there are two possibilities:
1. The orbits are equal: [m] = [n] i.e. ∃g ∈ G s.t. n = g · m.
2. The orbits have an empty intersection: [m] ∩ [n] = ∅.
We call the quotient of M by the group G the set of all orbits. This quotient is denoted by:

Q = M/G = {[m], m ∈ M }.
The orbit of an element m ∈ M can be seen as the subset of M of all elements
g · m for g ∈ G or as a point in the quotient space. In this article we use these
two ways. We project an element m of the top space M into the quotient by
taking [m].
Now we are interested in adding a structure on the quotient from an existing
structure in the top space: take M a metric space, with dM its distance. Suppose
that dM is invariant under the group action, which means that ∀g ∈ G, ∀a, b ∈ M, dM (a, b) = dM (g · a, g · b). Then we obtain a pseudo-distance on Q defined by:

dQ ([a], [b]) = inf_{g∈G} dM (g · a, b).   (1)
We recall that a distance on M is a map dM : M × M → R+ such that for all
m, n, p ∈ M :
1. dM (m, n) = dM (n, m) (symmetry).
2. dM (m, n) ≤ dM (m, p) + dM (p, n) (triangle inequality).
3. dM (m, m) = 0.
4. dM (m, n) = 0 ⇐⇒ m = n.
A pseudo-distance satisfies only the first three conditions. If we suppose that
all the orbits are closed sets of M , then one can show that dQ is a distance. In
this article, we assume that dQ is always a distance, even if a pseudo-distance
would be sufficient. dQ ([a], [b]) can be interpreted as the distance between the
shapes a and b, once one has removed the parametrisation by the group G. In
other words, a and b have been registered. In this article, except in section 5, we
suppose that the group acts isometrically on a Hilbert space: this means that the map x ↦ g · x is linear, and that the norm associated to the dot product is preserved: kg · xk = kxk. Then dM (a, b) = ka − bk is a particular case of
invariant distance.
We now introduce the generative model used in this article for M a vector space. Let us take a template t0 ∈ M to which we add an unbiased noise ε: X = t0 + ε. Finally we transform X with a random shift S in G. We assume that this variable S is independent of X and that the only observed variable is:

Y = S · X = S · (t0 + ε), with E(ε) = 0,   (2)

while S, X and ε are hidden variables.
Note that this is not the generative model defined by Grenander and often used in computational anatomy, where the observed variable is rather Y′ = S · t0 + ε′. But when the noise is isotropic and the action is isometric, one can show that the two models have the same law, since S · ε and ε have the same probability distribution. As a consequence, the inconsistency of the template estimation with the Fréchet mean in quotient space for one model implies the inconsistency for the other model. Because the former model (2) leads to simpler computations, we consider only this model.
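For concreteness, here is a minimal numerical sketch of model (2) (our own illustration, not taken from the paper): the group action is the cyclic shift of a discretised signal, a one-dimensional instance of the translation action of example 3.1 below, and the noise is isotropic Gaussian; all names are ours.

```python
import numpy as np

def sample_observation(t0, sigma, rng):
    """Draw one observation Y = S . (t0 + eps) from model (2).

    The group action is the cyclic shift of a discretised signal (a 1-D
    instance of the translation action of example 3.1); eps is isotropic
    Gaussian noise, so E(eps) = 0.
    """
    eps = sigma * rng.standard_normal(t0.shape)   # unbiased noise eps
    x = t0 + eps                                  # noisy template X = t0 + eps
    s = rng.integers(len(t0))                     # hidden random shift S
    return np.roll(x, s)                          # observed variable Y = S . X

rng = np.random.default_rng(0)
t0 = np.sin(2 * np.pi * np.arange(64) / 64)       # an arbitrary template
ys = np.stack([sample_observation(t0, 0.5, rng) for _ in range(100)])
```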
We can now set the inverse problem: given the observation Y, how can we estimate the template t0 in M? This is an ill-posed problem. Indeed, for some group element g ∈ G, the template t0 can be replaced by the transformed g · t0, the shift S by Sg−1 and the noise ε by g · ε, which leads to the same observation Y. So
instead of estimating the template t0 , we estimate its orbit [t0 ]. By projecting
the observation Y in the quotient space we obtain [Y ]. Although the observation
Y = S · X and the noisy template X are different random variables in the top
space, their projections on the quotient space lead to the same random orbit
[Y ] = [X]. That is why we consider the generative model (2): the projection in the quotient space removes the transformation of the group G. From now on,
we use the random orbit [X] in lieu of the random orbit of the observation [Y ].
The variance of the random orbit [X] (sometimes called the Fréchet functional or the energy function) at the quotient point [m] ∈ Q is the expected
value of the square distance between [m] and the random orbit [X], namely:
Q ∋ [m] ↦ E(dQ ([m], [X])2 ).   (3)
An orbit [m] ∈ Q which minimises this map is called a Fréchet mean of [X].
If we have an i.i.d. sample of observations Y1 , . . . , Yn , we can write the empirical quotient variance:

Q ∋ [m] ↦ (1/n) Σ_{i=1}^{n} dQ ([m], [Yi ])2 = (1/n) Σ_{i=1}^{n} inf_{gi ∈G} km − gi · Yi k2 .   (4)
Thanks to the equality of the quotient variables [X] and [Y ], an element which
minimises this map is an empirical Fréchet mean of [X].
In order to minimise the empirical quotient variance (4), the max-max algorithm¹ alternately minimises the function J(m, (gi )i ) = (1/n) Σ_{i=1}^{n} km − gi · Yi k2 over a point m of the orbit [m] and over the hidden transformations (gi )1≤i≤n ∈ Gn .
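As an illustration of this alternating scheme, here is a minimal sketch for the cyclic-shift action (our own sketch, not the authors' implementation; the function names are ours): each iteration registers every observation to the current estimate m and then averages the registered observations.

```python
import numpy as np

def register(m, y):
    """Return the cyclic shift of y closest to m (the hidden transformation g_i)."""
    shifts = np.stack([np.roll(y, s) for s in range(len(y))])
    return shifts[np.argmin(((shifts - m) ** 2).sum(axis=1))]

def max_max(ys, n_iter=20):
    """Alternately minimise J(m, (g_i)_i): register each Y_i to m, then average."""
    m = ys[0].copy()                                           # initialise with one observation
    for _ in range(n_iter):
        registered = np.stack([register(m, y) for y in ys])    # minimise over (g_i)_i
        m = registered.mean(axis=0)                            # minimise over m
    return m

# Toy data drawn from model (2): shifted noisy copies of a sine template.
rng = np.random.default_rng(1)
t0 = np.sin(2 * np.pi * np.arange(64) / 64)
ys = np.stack([np.roll(t0 + 0.5 * rng.standard_normal(64), rng.integers(64))
               for _ in range(200)])
m_hat = max_max(ys)   # estimate of the template (up to a shift, and possibly biased)
```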
With these notations we can reformulate our questions as:
1. Is the orbit of the template [t0 ] a minimiser of the quotient variance defined
in (3)? If not, the Fréchet mean in quotient space is an inconsistent
estimator of [t0 ].
2. In this last case, can we quantify the quotient distance between [t0 ] and a
Fréchet mean of [X]?
3. Can we quantify the distance between [t0 ] and an empirical Fréchet mean
of a n-sample?
This article shows that the answer to the first question is usually "no" in the
framework of a Hilbert space M on which a group G acts linearly and isometrically. The only exception is theorem 5.1, where the top space M is a manifold.
In order to prove inconsistency, an important notion in this framework is the
isotropy group of a point m in the top space. This is the subgroup which leaves
this point unchanged:
Iso(m) = {g ∈ G, g · m = m}.
We start in section 3 with the simple example where the group is finite and the
isotropy group of the template is reduced to the identity element (Iso(t0 ) =
{eG }, in this case t0 is called a regular point). We turn in section 4 to the case
of a general group and an isotropy group of the template which does not cover
the whole group (Iso(t0 ) ≠ G), i.e. t0 is not a fixed point under the group action.
To complete the analysis, we assume in section 5 that the template t0 is a fixed
point which means that Iso(t0 ) = G.
In sections 3 and 4 we show lower and upper bounds of the consistency bias
which we define as the quotient distance between the template orbit and the
Fréchet mean in quotient space. These results give an answer to the second
question. In section 4, we show a lower bound for the case of the empirical
Fréchet mean, which answers the third question.
As we deal with different notions whose name or definition may seem similar,
we use the following vocabulary:
1. The variance of the noisy template X in the top space is the function
E : m ∈ M 7→ E(km − Xk2 ). The unique element which minimises this
function is the Fréchet mean of X in the top space. With our assumptions
it is the template t0 itself.
2. We call variability (or noise level) of the template the value of the variance
at this minimum: σ 2 = E(kt0 − Xk2 ) = E(t0 ).
1 The term max-max algorithm is used for instance in [AAT07], and we prefer to keep the
same name, even if it is a minimisation.
3. The variance of the random orbit [X] in the quotient space is the function
F : m 7→ E(dQ ([m], [X])2 ). Notice that we define this function from the
top space and not from the quotient space. With this definition, an orbit
[m? ] is a Fréchet mean of [X] if the point m? is a global minimiser of F .
In sections 3 and 4, we exhibit a sufficient condition for the inconsistency,
which is: the noisy template X takes values with a non-zero probability in
the set of points which are strictly closer to g · t0 for some g ∈ G than the
template t0 itself. This is linked to the folding of the distribution of the noisy
template when it is projected to the quotient space. The points for which the
distance to the template orbit in the quotient space is equal to the distance to
the template in the top space are projected without being folded. If the support
of the distribution of the noisy template contains folded points (we only assume
that the probability measure of X, noted P, is a regular measure), then there
is inconsistency. The support of the noisy template X is defined by the set of
points x such that P(X ∈ B(x, r)) > 0 for all r > 0. For different geometries of
the orbit of the template, we show that this condition is fulfilled as soon as the
support of the noise is large enough.
The recent article of Cleveland et al. [CWS16] may seem contradictory with
our current work. Indeed the consistency of the template estimation with the
Fréchet mean in quotient space is proved under hypotheses which seem to satisfy
our framework: the norm is unchanged under their group action (isometric
action) and a noise is present in their generative model. However we believe
that the noise they consider might actually not be measurable. Indeed, their
top space is:
L2 ([0, 1]) = { f : [0, 1] → R such that f is measurable and ∫_0^1 f 2 (t) dt < +∞ }.
The noise e is supposed to be in L2 ([0, 1]) such that for all t, s ∈ [0, 1], E(e(t)) = 0
and E(e(t)e(s)) = σ 2 1s=t , for σ > 0. This means that e(t) and e(s) are chosen
without correlation as soon as s ≠ t. In this case, it is not clear to us that the
resulting function e is measurable, and thus that its Lebesgue integration makes
sense. Thus, the existence of such a random process should be established before
we can fairly compare the results of both works.
3 Inconsistency for a finite group when the template is a regular point
In this Section, we consider a finite group G acting isometrically and effectively
on M = Rn, a finite dimensional vector space equipped with the Euclidean norm k k, associated to the dot product h , i.
We say that the action is effective if the only element g ∈ G for which x ↦ g · x is the identity map is g = eG . Note that if the action is not effective, we can define a new effective action by simply quotienting G by the subgroup of elements g ∈ G such that
x 7→ g · x is the identity map.
The template is assumed to be a regular point which means that the isotropy
group of the template is reduced to the neutral element of G. Note that the set of singular points (the points which are not regular) is a null set for the Lebesgue measure (see item 1 in appendix A.1).
Example 3.1. The action of translation on coordinates: this action is a simplified setting for image registration, where images can be obtained by the translation of one scan to another due to different poses. More precisely, we take
the vector space M = R^T, where G = T = (Z/NZ)^D is the finite torus in dimension D. An element of R^T is seen as a function m : T → R, where m(τ) is the grey value at pixel τ. When D = 1, m can be seen as a discretised signal with N points; when D = 2, m can be seen as an image with N × N pixels, etc. We then define the group action of T on R^T by:

for τ ∈ T and m ∈ R^T,   τ · m : σ ↦ m(σ + τ).

This group acts isometrically and effectively on M = R^T.
In this setting, if E(kXk2 ) < +∞ then the variance of [X] is well defined:
F : m ∈ M ↦ E(dQ ([X], [m])2 ).   (5)

In this framework, F is non-negative and continuous. Thanks to the Cauchy-Schwarz inequality we have:

lim_{kmk→∞} F (m) ≥ lim_{kmk→∞} (kmk2 − 2kmkE(kXk) + E(kXk2 )) = +∞.
Thus for some R > 0 we have: for all m ∈ M if kmk > R then F (m) ≥ F (0) + 1.
The closed ball B(0, R) is a compact set (because M is a finite dimensional vector space), so F restricted to this ball reaches its minimum at some point m? . Then for all m ∈ M : if m ∈ B(0, R), F (m? ) ≤ F (m); if kmk > R then F (m) ≥ F (0) + 1 > F (0) ≥ F (m? ). Therefore [m? ] is a Fréchet mean of [X] in the quotient Q = M/G. Note that this ensures the existence but not the uniqueness.
In this Section, we show that as soon as the support of the distribution of
X is big enough, the orbit of the template is not a Fréchet mean of [X]. We
provide an upper bound of the consistency bias depending on the variability of
X and an example of computation of this consistency bias.
3.1 Presence of inconsistency
The following theorem gives a sufficient condition on the random variable X for
an inconsistency:
Theorem 3.1. Let G be a finite group acting on M = Rn isometrically and
effectively. Assume that the random variable X is absolutely continuous with
respect to the Lebesgue measure, with E(kXk2 ) < +∞. We assume that t0 =
E(X) is a regular point.
Figure 1: Planar representation of a part of the orbit of the template t0 . The lines are the hyperplanes whose points are equally distant from two distinct elements of the orbit of t0 . Cone(t0 ), represented with dots, is the set of points closer to t0 than to any other point in the orbit of t0 . Theorem 3.1 states that if the support (the dotted disk) of the random variable X is not included in this cone, then there is an inconsistency.
We define Cone(t0 ) as the set of points closer to t0 than to any other point of the orbit [t0 ]; see fig. 1 or item 6 in appendix A.1 for a formal definition. In other words, Cone(t0 ) is defined as the set of points already registered with t0 . Suppose that:

P(X ∉ Cone(t0 )) > 0,   (6)

then [t0 ] is not a Fréchet mean of [X].
The proof of theorem 3.1 is based on two steps: first, differentiating the
variance F of [X]. Second, showing that the gradient at the template is not
zero, so that the template cannot be a minimum of F . Theorem 3.2 achieves the first step.
Theorem 3.2. The variance F of [X] is differentiable at any regular point. For
m0 a regular point, we define g(x, m0 ) as the almost unique g ∈ G minimising
km0 − g · xk (in other words, g(x, m0 ) · x ∈ Cone(m0 )). This allows us to
compute the gradient of F at m0 :
∇F (m0 ) = 2(m0 − E(g(X, m0 ) · X)).
(7)
This Theorem is proved in appendix A.1. Then we show that the gradient
of F at t0 is not zero. To ensure that F is differentiable at t0 we suppose in
the assumptions of theorem 3.1 that t0 = E(X) is a regular point. Thanks
to theorem 3.2 we have:
∇F (t0 ) = 2(t0 − E(g(X, t0 ) · X)).
Therefore ∇F (t0 )/2 is the difference between two terms, which are represented on fig. 2: on fig. 2a there is a mass under the two hyperplanes outside
Figure 2: (a) Graphic representation of the template t0 = E(X), the mean of the points of the support of X. (b) Graphic representation of Z = E(g(X, t0 ) · X): the points X which were outside Cone(t0 ) are now in Cone(t0 ) (thanks to g(X, t0 )); this part, in grid-lines, represents the points which have been folded. Z is the mean of points in Cone(t0 ), where Cone(t0 ) is the set of points closer to t0 than to g · t0 for g ∈ G \ {eG }. Therefore Z appears to be further from the origin than t0 , hence ∇F (t0 ) = 2(t0 − Z) ≠ 0.
Cone(t0 ), so this mass is nearer to g · t0 for some g ∈ G than to t0 . In the expression Z = E(g(X, t0 ) · X), for X ∉ Cone(t0 ) we have g(X, t0 ) · X ∈ Cone(t0 ); such points are represented in grid-lines on fig. 2. This suggests that the point Z = E(g(X, t0 ) · X), which is the mean of points in Cone(t0 ), is further away from 0 than t0 . Then ∇F (t0 )/2 = t0 − Z should not be zero, and t0 = E(X) is
not a critical point of the variance of [X]. As a conclusion [t0 ] is not a Fréchet
mean of [X]. This is turned into a rigorous proof in appendix A.2.
In the proof of theorem 3.1, we took M to be a Euclidean space and we worked with the Lebesgue measure in order to have P(X ∈ H) = 0 for every hyperplane H. Therefore the proof of theorem 3.1 can be extended immediately to any Hilbert space M, if we now make the assumption that P(X ∈ H) = 0 for
every hyperplane H, as long as we keep a finite group acting isometrically and
effectively on M .
Figure 2 illustrates the condition of theorem 3.1: if there is no mass beyond
the hyperplanes, then the two terms in ∇F (t0 ) are equal (because almost surely
g(X, t0 ) · X = X). Therefore in this case we have ∇F (t0 ) = 0. This does not necessarily prove that there is no inconsistency, just that the template t0 is a
critical point of F . Moreover this figure can give us an intuition about what the consistency bias (the distance between [t0 ] and the set of all Fréchet means in the quotient space) depends on: for t0 a fixed regular point, when the variability of X (defined by E(kX − t0 k2 )) increases, the mass beyond the hyperplanes on fig. 2 also increases, and the distance between E(g(X, t0 ) · X) and t0 (i.e. the norm of ∇F (t0 )) increases as well. Therefore the Fréchet mean q should be further from t0 (because at this point one should have ∇F (q) = 0, or q is a singular point). Therefore the consistency bias appears to increase with the variability
of X. By establishing a lower and upper bound of the consistency bias and
by computing the consistency bias in a very simple case, sections 3.2, 3.3, 4.3
and 4.4 investigate how far this hypothesis is true.
We can also wonder if the converse of theorem 3.1 is true: if the support is
included in Cone(t0 ), is there consistency? We do not have a general answer
to that question. In the simple example of section 3.3 it happens that condition (6) is
necessary and sufficient. More generally the following proposition provides a
partial converse:
Figure 3: y ↦ Cone(y) is continuous. The support of X is bounded and included in the interior of Cone(t0 ) (the hatched cone). For y sufficiently close to the template t0 , the support of X (the ball in red) is still included in Cone(y) (in grey), so that F (y) = E(kX − yk2 ). Therefore in this case, [t0 ] is at least a Karcher mean of [X].
Proposition 3.1. If the support of X is a compact set included in the interior
of Cone(t0 ), then the orbit of the template [t0 ] is at least a Karcher mean of [X]
(a Karcher mean is a local minimum of the variance).
Proof. If the support of X is a compact set included in the interior of Cone(t0 )
then we know that X-almost surely: dQ ([X], [t0 ]) = kX −t0 k. Thus the variance
at t0 in the quotient space is equal to the variance at t0 in the top space. Now
by continuity of the distance map (see fig. 3) for y in a small neighbourhood
of t0 , the support of X is still included in the interior of Cone(y). We still
have dQ ([X], [y]) = kX − yk X-almost surely. In other words, locally around
t0 , the variance in the quotient space is equal to the variance in the top space.
Moreover we know that t0 = E(X) is the only global minimiser of the variance
of X: m 7→ E(km − Xk2 ) = E(m). Therefore t0 is a local minimum of F
the variance in the quotient space (since the two variances are locally equal).
Therefore [t0 ] is at least a Karcher mean of [X] in this case.
3.2 Upper bound of the consistency bias
In this Subsection we show an explicit upper bound of the consistency bias.
Theorem 3.3. When G is a finite group acting isometrically on M = Rn, we denote by |G| the cardinal of the group G. If X is a Gaussian vector, X ∼ N (t0 , s2 IdRn ), and m? ∈ argmin F , then we have the following upper bound on the consistency bias:

dQ ([t0 ], [m? ]) ≤ s √(8 log |G|).   (8)
The proof is postponed to appendix A.3. When X ∼ N (t0 , s2 Idn ) the variability of X is σ 2 = E(kX − t0 k2 ) = ns2 , and we can write the upper bound of the bias as dQ ([t0 ], [m? ]) ≤ (σ/√n) √(8 log |G|). This Theorem shows that
the consistency bias is low when the variability of X is small, which tends to
confirm our hypothesis in section 3.1. It is important to notice that this upper
bound explodes when the cardinal of the group tends to infinity.
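For instance, for the translation action of example 3.1 on (Z/NZ)^D the cardinal is |G| = N^D, and the bound (8) is easy to evaluate numerically. A small sketch with illustrative values (our own, not from the paper):

```python
import math

def bias_upper_bound(s, group_cardinal):
    """Upper bound (8) on the consistency bias for Gaussian noise N(t0, s^2 Id)."""
    return s * math.sqrt(8 * math.log(group_cardinal))

# Translation action of example 3.1 on (Z/NZ)^D: |G| = N**D.
print(bias_upper_bound(s=1.0, group_cardinal=100))       # N = 100, D = 1: about 6.07
print(bias_upper_bound(s=1.0, group_cardinal=100 ** 2))  # N = 100, D = 2: about 8.58
```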
3.3 Study of the consistency bias in a simple example
In this Subsection, we take a particular case of example 3.1: the action of translation with T = Z/2Z. We identify R^T with R2 and we denote by (u, v)^T an element of R^T. In this setting, one can completely describe the action of T on R^T: 0 · (u, v)^T = (u, v)^T and 1 · (u, v)^T = (v, u)^T. The set of singularities is the line L = {(u, u)^T, u ∈ R}. We denote by HPA = {(u, v)^T, v > u} the half-plane above L and by HPB the half-plane below L. This simple example will allow us to provide a necessary and sufficient condition for an inconsistency at regular and singular points. Moreover we can compute the consistency bias exactly, and exhibit which parameters govern the bias. We can then find an asymptotic equivalent of the consistency bias when the noise tends to zero or to infinity. More precisely, we have the following proposition, proved in appendix A.4:
Proposition 3.2. Let X be a random variable such that E(kXk2 ) < +∞ and
t0 = E(X).
1. If t0 ∈ L, there is no inconsistency if and only if the support of X is
included in the line L = {(u, u), u ∈ R}. If t0 ∈ HPA (respectively in
HPB ), there is no inconsistency if and only if the support of X is included
in HPA ∪ L (respectively in HPB ∪ L).
2. If X is Gaussian: X ∼ N (t0 , s2 Id2 ), then the Fréchet mean of [X] exists
and is unique. This Fréchet mean [m? ] is on the line passing through E(X)
and perpendicular to L and the consistency bias ρ̃ = dQ ([t0 ], [m? ]) is the
function of s and d = dist(t0 , L) given by:
ρ̃(d, s) = s (2/π) ∫_{d/s}^{+∞} r2 exp(−r2 /2) g(d/(rs)) dr,   (9)
where g is a non-negative function on [0, 1] defined by g(x) = sin(arccos(x))−
x arccos(x).
(a) If d > 0 then s ↦ ρ̃(d, s) has an asymptotic linear expansion:

ρ̃(d, s) ∼_{s→∞} s (2/π) ∫_{0}^{+∞} r2 exp(−r2 /2) dr.   (10)
(b) If d > 0, then ρ̃(d, s) = o(s^k) when s → 0, for all k ∈ N.
(c) s ↦ ρ̃(0, s) is linear with respect to s (for d = 0 the template is a fixed point).
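The bias (9) is straightforward to evaluate numerically; here is a minimal sketch (our own illustration, with standard quadrature):

```python
import numpy as np
from scipy.integrate import quad

def g(x):
    """g(x) = sin(arccos(x)) - x * arccos(x) on [0, 1] (proposition 3.2)."""
    x = np.clip(x, 0.0, 1.0)
    return np.sin(np.arccos(x)) - x * np.arccos(x)

def bias(d, s):
    """Numerical evaluation of the consistency bias (9) for the Z/2Z example."""
    integrand = lambda r: r ** 2 * np.exp(-r ** 2 / 2) * g(d / (r * s))
    value, _ = quad(integrand, d / s, np.inf)
    return s * (2 / np.pi) * value

print(bias(0.0, 1.0))   # d = 0: linear in s, equals sqrt(2/pi), about 0.798
print(bias(1.0, 0.2))   # small noise, d > 0: the bias is essentially zero
print(bias(1.0, 10.0))  # large noise: the bias grows linearly in s, as in (10)
```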
Remark 3.1. Here, contrary to the case of the action of rotations in [MHP16], it is not the ratio of kE(X)k to the noise which matters for estimating the consistency bias, but rather the ratio of dist(E(X), L) to the noise. However, in both cases we measure the distance between the signal and the singularities, which was {0} in [MHP16] for the action of rotations, and is L in our case.
4 Inconsistency for any group when the template is not a fixed point
In section 3 we exhibited a sufficient condition for an inconsistency, restricted to the case of a finite group acting on a Euclidean space. We now generalise this analysis to Hilbert spaces of any dimension, including infinite dimension. Let M be such a Hilbert space, with its dot product denoted by h , i and its associated norm k k. In this section, we no longer suppose that the group G is finite.
In the following, we prove that there is an inconsistency in a large number of
situations, and we quantify the consistency bias with lower and upper bounds.
Example 4.1. The action of continuous translation: We take G = (R/Z)D
acting on M = L2 ((R/Z)D , R) with:
∀τ ∈ G, ∀f ∈ M,   (τ · f ) : t ↦ f (t + τ ).

This isometric action is the continuous version of example 3.1: the elements
of M are now continuous images in dimension D.
4.1 Presence of an inconsistency
We state here a generalization of theorem 3.1:
Theorem 4.1. Let G be a group acting isometrically on a Hilbert space M, and let X be a random variable in M with E(kXk2 ) < +∞ and E(X) = t0 ≠ 0. If:

P (dQ ([t0 ], [X]) < kt0 − Xk) > 0,   (11)

or equivalently:

P ( sup_{g∈G} hg · X, t0 i > hX, t0 i ) > 0,   (12)

then [t0 ] is not a Fréchet mean of [X] in Q = M/G.
The condition of this Theorem is the same condition as in theorem 3.1: the support of the law of X contains points closer to g · t0 for some g than to t0 . Thus condition (12) is equivalent to E(dQ ([X], [t0 ])2 ) < E(kX − t0 k2 ). In
other words, the variance in the quotient space at t0 is strictly smaller than the
variance in the top space at t0 .
Proof. First the two conditions are equivalent by definition of the quotient distance and by expansion of the square norm of kt0 − Xk and of kt0 − gXk for
g ∈ G.
As above, we define the variance of [X] by:

F (m) = E( inf_{g∈G} kg · X − mk2 ).

In order to prove this Theorem, we find a point m such that F (m) < F (t0 ), which directly implies that [t0 ] is not a Fréchet mean of [X].
In the proof of theorem 3.1, we showed that under condition (6) we had h∇F (t0 ), t0 i < 0. This leads us to study F restricted to R+ t0 : we define, for a ∈ R+ , f (a) = F (at0 ) = E( inf_{g∈G} kg · X − at0 k2 ). Thanks to the isometric action we can expand f (a) as:

f (a) = a2 kt0 k2 − 2a E( sup_{g∈G} hg · X, t0 i ) + E(kXk2 ),   (13)

and make explicit the unique element of R+ which minimises f :

a? = E( sup_{g∈G} hg · X, t0 i ) / kt0 k2 .   (14)

For all x ∈ M, we have sup_{g∈G} hg · x, t0 i ≥ hx, t0 i and thanks to condition (12) we get:

E( sup_{g∈G} hg · X, t0 i ) > E(hX, t0 i) = hE(X), t0 i = kt0 k2 ,   (15)

which implies a? > 1. Then F (a? t0 ) < F (t0 ).
Note that kt0 k2 (a? − 1) = E( sup_{g∈G} hg · X, t0 i ) − E(hX, t0 i) (which is positive) is exactly − h∇F (t0 ), t0 i /2 in the case of a finite group, see Equation (44). Here we find the same expression without having to differentiate the variance F, which may not be possible in the current setting.
4.2 Analysis of the condition in theorem 4.1
We now look for general cases in which we are sure that condition (12) holds, which implies the presence of an inconsistency. We saw in section 3 that when the group is finite, it is possible to have no inconsistency only if the support of the law is included in a cone delimited by some hyperplanes. The hyperplanes were defined as the sets of points equidistant from the template t0 and g · t0 for g ∈ G. Therefore, as the cardinal of the group grows, one could think that in order to have no inconsistency the set where X can take its values becomes smaller and smaller; in the limit, at most a hyperplane remains. In the following, we formalise this idea to make it rigorous. We
show that the cases where theorem 4.1 cannot be applied are not generic cases.
First we can notice that condition (12) cannot hold if t0 is a fixed point under the action of G. Indeed in this case hg · X, t0 i = hX, g−1 · t0 i = hX, t0 i. So from now on, we suppose that t0 is not a fixed point. Now let us see some settings in which we have condition (11) and thus condition (12).
Proposition 4.1. Let G be a group acting isometrically on a Hilbert space M, and X a random variable in M, with E(kXk2 ) < +∞ and E(X) = t0 ≠ 0. If:
1. [t0 ] \ {t0 } is a dense set in [t0 ].
2. There exists η > 0 such that the support of X contains a ball B(t0 , η).
Then condition (12) holds, and the estimator is inconsistent according to theorem 4.1.
Figure 4: The smallest disk is included in the support of X and the points in that disk are closer to g · t0 than to t0 . According to theorem 4.1 there is an inconsistency.
Proof. By density, one can take g · t0 ∈ B(t0 , η) \ {t0 } for some g ∈ G. Now if we take r < min(kg · t0 − t0 k/2, η − kg · t0 − t0 k) then B(g · t0 , r) ⊂ B(t0 , η). Therefore, by the assumption we made on the support, one has P(X ∈ B(g · t0 , r)) > 0. For y ∈ B(g · t0 , r) we have kg · t0 − yk < kt0 − yk (see fig. 4). Then we have: P (dQ ([X], [t0 ]) < kX − t0 k) ≥ P(X ∈ B(g · t0 , r)) > 0. Then we verify
condition (12), and we can apply theorem 4.1.
Proposition 4.1 proves that there is a large number of cases where we can
ensure the presence of an inconsistency. For instance when M is a finite dimensional vector space and the random variable X has a continuous positive
density (for the Lebesgue measure) at t0 , condition 2 of Proposition 4.1 is fulfilled. Unfortunately this proposition does not cover the case where there is no
mass at the expected value t0 = E(X). This situation could appear if X has
two modes for instance. The following proposition deals with this situation:
Proposition 4.2. Let G be a group acting isometrically on M . Let X be a
random variable in M, such that E(kXk2 ) < +∞ and E(X) = t0 ≠ 0. If:
1. ∃ϕ s.t. ϕ : (−a, a) → [t0 ] is C1 with ϕ(0) = t0 , ϕ′(0) = v ≠ 0.
2. The support of X is not included in the hyperplane v⊥ : P(X ∉ v⊥ ) > 0.
Then condition (12) is fulfilled, which leads to an inconsistency thanks to Theorem 4.1.
Proof. Thanks to the isometric action, ht0 , vi = 0. We choose y ∉ v⊥ in the support of X and make a Taylor expansion of the following square distance (see also Figure 5) at 0:

kϕ(x) − yk2 = kt0 + xv + o(x) − yk2 = kt0 − yk2 − 2x hy, vi + o(x).

Then ∃x? ∈ (−a, a) s.t. x? hy, vi > 0 and kϕ(x? ) − yk < kt0 − yk. For
some g ∈ G, ϕ(x? ) = g · t0 . By continuity of the norm we have:
∃r > 0 s.t. ∀z ∈ B(y, r) kg · t0 − zk < kt0 − zk.
Then P(kg·t0 −Xk < kt0 −Xk) ≥ P(X ∈ B(y, r)) > 0. Theorem 4.1 applies.
Proposition 4.2 gives a sufficient condition for inconsistency in the case of an orbit which contains a curve. This leads us to extend this result to orbits
which are manifolds:
Proposition 4.3. Let G be a group acting isometrically on a Hilbert space M, and X a random variable in M, with E(kXk2 ) < +∞. Assume X = t0 + σε, where t0 ≠ 0, E(ε) = 0, and E(kεk) = 1. We suppose that [t0 ] is a sub-manifold of M and write Tt0 [t0 ] for the linear tangent space of [t0 ] at t0 . If:

P(X ∉ Tt0 [t0 ]⊥ ) > 0,   (16)

which is equivalent to:

P(ε ∉ Tt0 [t0 ]⊥ ) > 0,   (17)

then there is an inconsistency.
Proof. First, t0 ⊥ Tt0 [t0 ] (because the action is isometric), so Tt0 [t0 ]⊥ = t0 + Tt0 [t0 ]⊥ ; then the event {X ∈ Tt0 [t0 ]⊥ } is equal to {ε ∈ Tt0 [t0 ]⊥ }. This proves that conditions (16) and (17) are equivalent. Thanks to assumption (16), we can choose y in the support of X such that y ∉ Tt0 [t0 ]⊥ . Let us take v ∈ Tt0 [t0 ] such that hy, vi ≠ 0 and choose ϕ a C1 curve in [t0 ] such that ϕ(0) = t0 and ϕ′(0) = v. Applying proposition 4.2 we get the inconsistency.
Note that Condition (16) is very weak, because Tt0 [t0 ] is a strict linear
subspace of M .
Figure 5: y ∉ Tt0 [t0 ]⊥ , therefore y is closer to g · t0 for some g ∈ G than to t0 itself. In conclusion, if y is in the support of X, there is an inconsistency.
4.3 Lower bound of the consistency bias
Under the assumption of Theorem 4.1, we have an element a? t0 such that
F (a? t0 ) < F (t0 ) where F is the variance of [X]. From this element, we deduce lower bounds of the consistency bias:
Theorem 4.2. Let δ be the unique positive solution of the following equation:

δ2 + 2δ (kt0 k + EkXk) − kt0 k2 (a? − 1)2 = 0.   (18)

Let δ? be the unique positive solution of the following equation:

δ2 + 2δkt0 k (1 + √(1 + σ2 /kt0 k2 )) − kt0 k2 (a? − 1)2 = 0,   (19)
where σ 2 = E(kX − t0 k2 ) is the variability of X. Then δ and δ? are two lower
bounds of the consistency bias.
Proof. In order to prove this Theorem, we exhibit a ball around t0 such that the
points on this ball have a variance bigger than the variance at the point a? t0 ,
where a? was defined in Equation (14): thanks to the expansion of the function
f we did in (13), we get:

F (t0 ) − F (a? t0 ) = kt0 k2 (a? − 1)2 > 0.   (20)

Moreover we can show (exactly like equation (43)) that for all x ∈ M:

|F (t0 ) − F (x)| ≤ E | inf_{g∈G} kg · X − t0 k2 − inf_{g∈G} kg · X − xk2 | ≤ kx − t0 k (2kt0 k + kx − t0 k + 2E(kXk)).   (21)
With Equations (20) and (21), for all x ∈ B(t0 , δ) we have F (x) > F (a? t0 ). No point in that ball mapped into the quotient space is a Fréchet mean of [X]. So δ is a lower bound of the consistency bias. Now, by using the fact that E(kXk) ≤ √(kt0 k2 + σ2 ), we get: |F (t0 ) − F (x)| ≤ 2kx − t0 k kt0 k (1 + √(1 + σ2 /kt0 k2 )) + kx − t0 k2 . This proves that δ? is also a lower bound of the consistency bias.
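Since δ and δ? are the positive roots of explicit quadratic equations, they are easy to evaluate once kt0 k, E(kXk) (or σ) and a? are known. Here is a minimal sketch (our own; the numerical values fed in are placeholders, only meant to illustrate that δ? is no larger than δ, as discussed next):

```python
import math

def lower_bound_delta(norm_t0, expected_norm_X, a_star):
    """Positive root of (18): delta^2 + 2(||t0|| + E||X||) delta - ||t0||^2 (a*-1)^2 = 0."""
    b = 2.0 * (norm_t0 + expected_norm_X)
    c = -(norm_t0 ** 2) * (a_star - 1.0) ** 2
    return (-b + math.sqrt(b * b - 4.0 * c)) / 2.0

def lower_bound_delta_star(norm_t0, sigma, a_star):
    """Positive root of (19), which involves only ||t0||, sigma and a*."""
    b = 2.0 * norm_t0 * (1.0 + math.sqrt(1.0 + sigma ** 2 / norm_t0 ** 2))
    c = -(norm_t0 ** 2) * (a_star - 1.0) ** 2
    return (-b + math.sqrt(b * b - 4.0 * c)) / 2.0

# Placeholder values, for illustration only.
print(lower_bound_delta(norm_t0=1.0, expected_norm_X=2.0, a_star=1.5))
print(lower_bound_delta_star(norm_t0=1.0, sigma=2.0, a_star=1.5))
```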
δ? is smaller than δ, but the variability of X intervenes in δ? . Therefore we
propose to study the asymptotic behaviour of δ? when the variability tends to
infinity. We have the following proposition:
Proposition 4.4. Under the hypotheses of Theorem 4.2, we write X = t0 + σε, with E(ε) = 0 and E(kεk2 ) = 1, and we set ν = E( sup_{g∈G} hg · ε, t0 /kt0 ki ) ∈ (0, 1]. Then we have:

δ? ∼_{σ→+∞} σ(√(1 + ν2 ) − 1).
In particular, the consistency bias explodes when the variability of X tends
to infinity.
Proof. First, let us prove that ν ∈ (0, 1] under condition (12). We have ν ≥ E(hε, t0 /kt0 ki) = 0. By reductio ad absurdum: if ν = 0, then sup_{g∈G} hg · ε, t0 i = hε, t0 i almost surely. We then have almost surely: hX, t0 i ≤ sup_{g∈G} hg · X, t0 i ≤ kt0 k2 + sup_{g∈G} σ hg · ε, t0 i = kt0 k2 + σ hε, t0 i ≤ hX, t0 i, which is in contradiction with (12). Besides, ν ≤ E(kεk) ≤ √(E(kεk2 )) = 1.
Second, we exhibit equivalents of the terms in equation (19) when σ → +∞:

2kt0 k (1 + √(1 + σ2 /kt0 k2 )) ∼ 2σ.   (22)

Now, by the definition of a? in Equation (14) and the decomposition X = t0 + σε, we get:

kt0 k(a? − 1) = (1/kt0 k) E( sup_{g∈G} (hg · t0 , t0 i + hg · σε, t0 i) ) − kt0 k,

hence:

kt0 k(a? − 1) ≤ (1/kt0 k) E( sup_{g∈G} hg · σε, t0 i ) = σν,   (23)

kt0 k(a? − 1) ≥ (1/kt0 k) E( sup_{g∈G} hg · σε, t0 i ) − 2kt0 k = σν − 2kt0 k.   (24)

The lower bound and the upper bound of kt0 k(a? − 1) found in (23) and (24) are both equivalent to σν when σ → +∞. Then the constant term of the quadratic Equation (19) has an equivalent:

− kt0 k2 (a? − 1)2 ∼ −σ2 ν2 .   (25)

Finally, solving the quadratic Equation (19), we write δ? as a function of the coefficients of this equation. Using the equivalent of each of these terms thanks to equations (22) and (25) proves proposition 4.4.
Remark 4.1. Thanks to inequality (24), if kt0 k/σ < ν/2, then kt0 k2 (1 − a? )2 ≥ (σν − 2kt0 k)2 . Writing δ? as a function of the coefficients of Equation (19), we obtain a lower bound of the consistency bias as a function of kt0 k, σ and ν for σ > 2kt0 k/ν:

δ? /kt0 k ≥ −(1 + √(1 + σ2 /kt0 k2 )) + √( (1 + √(1 + σ2 /kt0 k2 ))2 + (σν/kt0 k − 2)2 ).
Although the constant ν intervenes in this lower bound, it is not an explicit term. We now make its behaviour explicit as a function of t0 . We recall that:

ν = (1/kt0 k) E( sup_{g∈G} hg · ε, t0 i ).

To this end, we first note that the set of fixed points under the action of G is a closed linear space (because we can write it as the intersection of the kernels of the continuous linear functions x ↦ g · x − x for g ∈ G). We denote by p the orthogonal projection on the set of fixed points Fix(M ). Then for x ∈ M, we have dist(x, Fix(M )) = kx − p(x)k, which yields:

hg · ε, t0 i = hg · ε, t0 − p(t0 )i + hε, p(t0 )i.   (26)

The last term of the right-hand side of Equation (26) does not depend on g, as p(t0 ) ∈ Fix(M ). Then:

kt0 kν = E( sup_{g∈G} hg · ε, t0 − p(t0 )i ) + hE(ε), p(t0 )i.

Applying the Cauchy-Schwarz inequality and using E(ε) = 0, we can conclude that:

ν ≤ (1/kt0 k) dist(t0 , Fix(M )) E(kεk) = dist(t0 /kt0 k, Fix(M )) E(kεk).   (27)
This leads to the following comment: our lower bound of the consistency bias is
smaller when our normalized template t0 /kt0 k is closer to the set of fixed points.
4.4 Upper bound of the consistency bias
In this Section, we find an upper bound of the consistency bias. More precisely, we have the following proposition:
Proposition 4.5. Let X be a random variable in M such that X = t0 + σε, where σ > 0, E(ε) = 0 and E(kεk2 ) = 1. We suppose that [m? ] is a Fréchet mean of [X]. Then we have the following upper bound of the quotient distance between the orbit of the template t0 and the Fréchet mean of [X]:

dQ ([m? ], [t0 ]) ≤ σν(m? − m0 ) + √( σ2 ν(m? − m0 )2 + 2 dist(t0 , Fix(M )) σν(m? − m0 ) ),   (28)

where we denote ν(m) = E( sup_{g} hg · ε, m/kmki ) ∈ [0, 1] if m ≠ 0 and ν(0) = 0, and m0 is the orthogonal projection of t0 on Fix(M ).
Note that we made no hypothesis on the template in this proposition. We deduce from Equation (28) that dQ ([m? ], [t0 ]) ≤ σ + √( σ2 + 2σ dist(t0 , Fix(M )) ), which is O(σ) when σ → ∞, but O(√σ) when σ → 0; in particular the consistency bias can be neglected when σ is small.
Proof. First we have:

F (m? ) ≤ F (t0 ) = E( inf_g kt0 − g · (t0 + σε)k2 ) ≤ E(kσεk2 ) = σ2 .   (29)

Secondly, we have for all m ∈ M (in particular for m? ):

F (m) = E( inf_g ( km − g · t0 k2 + σ2 kεk2 − 2 hg · σε, m − g · t0 i ) ) ≥ dQ ([m], [t0 ])2 + σ2 − 2E( sup_g hσε, g · mi ).   (30)

With Inequalities (29) and (30) one gets:

dQ ([m? ], [t0 ])2 ≤ 2E( sup_g hσε, g · m? i ) = 2σν(m? )km? k;

note that at this point, if m? = 0 then E( sup_g hσε, g · m? i ) = 0 and ν(m? ) = 0, and the above inequality still holds. Moreover, with the triangle inequality applied to [m? ], [0] and [t0 ], one gets km? k ≤ kt0 k + dQ ([m? ], [t0 ]) and then:

dQ ([m? ], [t0 ])2 ≤ 2σν(m? )(dQ ([m? ], [t0 ]) + kt0 k).   (31)

We can solve inequality (31) and we get:

dQ ([m? ], [t0 ]) ≤ σν(m? ) + √( σ2 ν(m? )2 + 2kt0 kσν(m? ) ).   (32)

We now write FX instead of F for the variance in the quotient space of [X], and we want to apply inequality (32) to X − m0 . As m0 is a fixed point:

FX (m) = E( inf_{g∈G} kX − m0 − g · (m − m0 )k2 ) = FX−m0 (m − m0 ).

Then m? minimises FX if and only if m? − m0 minimises FX−m0 . We apply Equation (32) to X − m0 , with E(X − m0 ) = t0 − m0 and [m? − m0 ] a Fréchet mean of [X − m0 ]. We get:

dQ ([m? − m0 ], [t0 − m0 ]) ≤ σν(m? − m0 ) + √( σ2 ν(m? − m0 )2 + 2kt0 − m0 kσν(m? − m0 ) ).

Moreover dQ ([m? ], [t0 ]) = dQ ([m? − m0 ], [t0 − m0 ]), which concludes the proof.
4.5 Empirical Fréchet mean
In practice, we never compute the Fréchet mean in the quotient space, only the empirical Fréchet mean in the quotient space, when the size of the sample is supposed to be large enough. If the empirical Fréchet mean in the quotient space converges to the Fréchet mean in the quotient space, then we cannot use these empirical Fréchet means in order to estimate the template. In [BB08], it has been proved that the empirical Fréchet mean converges to the Fréchet mean with a 1/√n convergence speed; however the law of the random variable is supposed to be included in a ball whose radius depends on the geometry of the manifold. Here we are not in a manifold, since the quotient space contains singularities; moreover we do not suppose that the law is necessarily bounded. However, in [Zie77] the empirical Fréchet mean is proved to converge to the Fréchet mean, but no convergence rate is provided.
We now propose to prove that the quotient distance between the template and the empirical Fréchet mean in the quotient space has a lower bound which converges to the lower bound of the consistency bias found in (18).
Take X, X1 , . . . , Xn independent and identically distributed (with t0 = E(X)
not a fixed point). We define the empirical variance of [X] by:
m ∈ M ↦ Fn (m) = (1/n) Σ_{i=1}^{n} dQ ([m], [Xi ])2 = (1/n) Σ_{i=1}^{n} inf_{g∈G} km − g · Xi k2 ,

and we say that [mn? ] is an empirical Fréchet mean of [X] if mn? is a global minimiser of Fn .
Proposition 4.6. Let X, X1 , . . . , Xn be independent and identically distributed random variables, with t0 = E(X). Let [mn? ] be an empirical Fréchet mean of [X]. Then δn is a lower bound of the quotient distance between the orbit of the template and [mn? ], where δn is the unique positive solution of:

δ2 + 2( kt0 k + (1/n) Σ_{i=1}^{n} kXi k ) δ − kt0 k2 (an? − 1)2 = 0,

where an? is defined like a? in section 4.1 by:

an? = (1/n) Σ_{i=1}^{n} sup_{g∈G} hg · Xi , t0 i / kt0 k2 .
We have that δn → δ by the law of large numbers.
The proof is a direct application of theorem 4.2, but applied to the empirical
law of X given by the realization of X1 , . . . , Xn .
4.6 Examples
In this Subsection, we discuss, in some examples, the application of theorem 4.1
and see the behaviour of the constant ν. This constant intervenes in the lower bound of the consistency bias.
4.6.1 Action of translation on L2(R/Z)
We take an orbit O = [f0 ], where f0 ∈ C2(R/Z) is non-constant. One shows easily that O is a manifold of dimension 1 and that the tangent space at f0 is Rf0′ ². Therefore a sufficient condition on X with E(X) = f0 to have an inconsistency is P(X ∉ (f0′)⊥) > 0, according to proposition 4.3. Now denote by 1 the constant function on R/Z equal to 1. In this setting, the set of fixed points under the action of G is the set of constant functions, Fix(M ) = R1, and:

dist(f0 , Fix(M )) = kf0 − hf0 , 1i 1k = √( ∫_0^1 ( f0 (t) − ∫_0^1 f0 (s) ds )2 dt ).

This distance to the fixed points is used in the upper bound of the constant ν in Equation (27). Note that if f0 is not differentiable, then [f0 ] is not necessarily a manifold, and proposition 4.3 does not apply. However proposition 4.1 does: if f0 is not
a constant function, then [f0 ] \ {f0 } is dense in [f0 ]. Therefore as soon as the
support of X contains a ball around f0 , there is an inconsistency.
4.6.2 Action of discrete translation on R^(Z/NZ)
We come back to example 3.1, with D = 1 (discretised signals). For a given signal t0 , the constant ν previously defined is:

ν = (1/kt0 k) E( max_{τ∈Z/NZ} hε, τ · t0 i ).

Therefore, if we have an i.i.d. sample ε1 , . . . , εI of the noise, then:

ν = (1/kt0 k) lim_{I→+∞} (1/I) Σ_{i=1}^{I} max_{τi ∈Z/NZ} hεi , τi · t0 i .

By an exhaustive search, we can find the τi 's which maximise the dot product; with this sample and t0 we can then approximate ν. We have done this approximation for several signals t0 in fig. 6. According to the previous results, the bigger ν is, the larger the lower bound of the consistency bias. We remark that the estimated ν is small, ν ≪ 1, for the different signals.
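Here is a minimal sketch of this Monte Carlo approximation of ν (our own illustration; the test signal is arbitrary), with a Gaussian ε normalised so that E(kεk2) = 1:

```python
import numpy as np

def estimate_nu(t0, n_samples=1000, rng=None):
    """Monte Carlo estimate of nu = E(max_tau <eps, tau . t0>) / ||t0||.

    eps is Gaussian with E(eps) = 0 and E(||eps||^2) = 1; the maximum over
    the discrete translations tau is found by exhaustive search.
    """
    rng = rng or np.random.default_rng(0)
    n = len(t0)
    shifts = np.stack([np.roll(t0, tau) for tau in range(n)])   # all tau . t0
    eps = rng.standard_normal((n_samples, n)) / np.sqrt(n)      # E(||eps||^2) = 1
    best = (eps @ shifts.T).max(axis=1)                         # max over tau
    return best.mean() / np.linalg.norm(t0)

t0 = np.sin(2 * np.pi * np.arange(100) / 100)   # a test signal in R^(Z/100Z)
print(estimate_nu(t0))                          # well below 1, as on figure 6
```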
Figure 6: Different signals and their ν approximated with a sample of size 10^3 in R^(Z/100Z). Here ε is a Gaussian noise in R^(Z/100Z) such that E(ε) = 0 and E(kεk2 ) = 1. For instance, the blue signal is a signal defined randomly, and when we approximate the ν which corresponds to that t0 we find ν ≃ 0.25. (The estimated values of ν for the three plotted signals are 0.14456, 0.082143 and 0.24981.)

2 Indeed ϕ : ]−1/2, 1/2[ → O, t ↦ f0 (· − t), is a local parametrisation of O: f0 = ϕ(0), and one checks with the Taylor-Lagrange inequality at order 2 that kϕ(x) − ϕ(0) − x f0′ kL2 = o(x) as x → 0. As a conclusion, ϕ is differentiable at 0 and it is an immersion (since f0′ ≠ 0), with D0 ϕ : x ↦ x f0′; then O is a manifold of dimension 1 and the tangent space of O at f0 is Tf0 O = D0 ϕ(R) = Rf0′.

4.6.3 Action of rotations on Rn

Now we consider the action of rotations on Rn with Gaussian noise. Take X ∼ N (t0 , s2 Idn ); then the variability of X is ns2 , and X has the decomposition X = t0 + √n s ε with E(ε) = 0 and E(kεk2 ) = 1. According to proposition 4.4, denoting by δ? the lower bound of the consistency bias, we have when s → ∞:

δ? /s → √n (−1 + √(1 + ν2 )).
Now ν = E( sup_{g∈G} hg · ε, t0 i )/kt0 k = E(kεk) → 1 when n tends to infinity (expected value of the Chi distribution), so we have for n large enough:
lim_{s→∞} δ? /s ≃ √n (√2 − 1).
We compare this result with the exact computation of the consistency bias
(denoted here CB) made by Miolane et al. [MHP16], which writes, with our current notations:

lim_{s→∞} CB/s = √2 Γ((n + 1)/2) / Γ(n/2).
Using a standard Taylor expansion on the Gamma function, we have that for n
large enough:
lim_{s→∞} CB/s ≃ √n.
As a conclusion, when the dimension of the space is large enough, our lower bound and the exact computation of the bias have the same asymptotic behaviour. They differ only by the constant: √2 − 1 ≃ 0.4 in our lower bound, versus 1 in the work of Miolane et al. [MP15].
5 Fréchet means in top and quotient spaces are not consistent when the template is a fixed point
In this Section, we do not assume that the top space M is a vector space, but
rather a manifold. We then need to rewrite the generative model accordingly: let t0 ∈ M, and let X be any random variable of M such that t0 is a Fréchet mean of X. Then Y = S · X is the observed variable, where S is a random variable whose values are in G. In this Section we make the assumption that the template t0 is
a fixed point under the action of G.
5.1 Result
Let X be a random variable on M and define the variance of X as:
E(m) = E(dM (m, X)2 ).
We say that t0 is a Fréchet mean of X if t0 is a global minimiser of the variance
E. We prove the following result:
Theorem 5.1. Assume that M is a complete finite dimensional Riemannian
manifold and that dM is the geodesic distance on M . Let X be a random variable
on M , with E(d(x, X)2 ) < +∞ for some x ∈ M . We assume that t0 is a fixed
point and a Fréchet mean of X and that P(X ∈ C(t0 )) = 0 where C(t0 ) is the
cut locus of t0 . Suppose that there exists a point in the support of X which is neither a fixed point nor in the cut locus of t0 . Then [t0 ] is not a Fréchet mean of
[X].
The previous result is finite dimensional and does not cover interesting infinite dimensional settings concerning curves, for instance. However, a simple extension of the previous result can be stated when M is a Hilbert vector space, since then the space is flat and some technical problems such as the presence of cut locus points do not occur.
Theorem 5.2. Assume that M is a Hilbert space and that dM is given by the
Hilbert norm on M . Let X be a random variable on M , with E(kXk2 ) < +∞.
We assume that t0 = E(X). Suppose that there exists a point in the support of
the law of X that is not a fixed point for the action of G. Then [t0 ] is not a
Fréchet mean of [X].
Note that the converse is true: if all the points in the support of the law
of X are fixed points, then almost surely, for all m ∈ M and for all g ∈ G we
have:
dM (X, m) = dM (g · X, m) = dQ ([X], [m]).
Up to the projection on the quotient, we have that the variance of X is equal to
the variance of [X] in M/G, therefore [t0 ] is a Fréchet mean of [X] if and only
if t0 is a Fréchet mean of X. There is no inconsistency in that case.
Example 5.1. Theorem 5.2 covers the interesting case of the Fisher-Rao metric on functions:

F = {f : [0, 1] → R | f is absolutely continuous}.

Then, considering for G the group of smooth diffeomorphisms γ of [0, 1] such that γ(0) = 0 and γ(1) = 1, we have a right group action G × F → F given by γ · f = f ◦ γ. The Fisher-Rao metric is built as a pull-back metric of the L2 ([0, 1], R) space through the map Q : F → L2 given by Q(f ) = ḟ / √|ḟ|. This square root trick is often used, see for instance [KSW11]. Note that in this case Q is a bijective mapping, with inverse given by q ↦ f with f (t) = ∫_0^t q(s)|q(s)| ds. We can define a group action on M = L2 as γ · q = (q ◦ γ) √γ̇, for which one can check easily, by a change of variable, that:

kγ · q − γ · q′ k2 = k(q ◦ γ) √γ̇ − (q′ ◦ γ) √γ̇ k2 = kq − q′ k2 .

So, up to the mapping Q, the Fisher-Rao metric on curves corresponds to the situation where theorem 5.2 applies. Note that in this case the set of fixed points under the action of G corresponds in the space F to the constant functions.
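Here is a small discretised sketch of this square root trick (our own illustration, not the authors' code; the function f and the diffeomorphism γ are arbitrary): we check numerically that the reparametrisation action on q preserves the L2 norm.

```python
import numpy as np

ts = np.linspace(0.0, 1.0, 2001)                 # discretisation of [0, 1]

def srt(f):
    """Square root transform Q(f) = f' / sqrt(|f'|), by finite differences."""
    df = np.gradient(f, ts)
    return df / np.sqrt(np.abs(df) + 1e-12)

def act(gamma, q):
    """Group action gamma . q = (q o gamma) * sqrt(gamma'), discretised."""
    dgamma = np.gradient(gamma, ts)
    return np.interp(gamma, ts, q) * np.sqrt(dgamma)

def l2_norm(q):
    dt = ts[1] - ts[0]
    return np.sqrt(np.sum(q ** 2) * dt)

f = np.sin(2 * np.pi * ts) + ts                  # an absolutely continuous function
q = srt(f)
gamma = ts ** 2                                  # a diffeomorphism of [0, 1] fixing 0 and 1
print(l2_norm(q), l2_norm(act(gamma, q)))        # the norms agree up to discretisation error
```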
We can also provide a computation of the consistency bias in this setting:
Proposition 5.1. Under the assumptions of theorem 5.2, we write X = t0 + σε, where t0 is a fixed point, σ > 0, E(ε) = 0 and E(kεk2 ) = 1. If there is a Fréchet mean of [X], then the consistency bias is linear with respect to σ and is equal to:

σ sup_{kvk=1} E( sup_{g∈G} hv, g · εi ).

Proof. For λ > 0 and kvk = 1, we compute the variance F in the quotient space of [X] at the point t0 + λv. Since t0 is a fixed point we get:

F (t0 + λv) = E( inf_{g∈G} kt0 + λv − g · Xk2 ) = E(kXk2 ) − kt0 k2 − 2λ E( sup_g hv, g · (X − t0 )i ) + λ2 .
Then we minimise F with respect to λ, and after we minimise with respect to
v (with kvk = 1). Which concludes.
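The bias formula of proposition 5.1 can be estimated by a simple Monte Carlo computation. The sketch below is a hedged illustration in a toy setting that is not taken from the paper: G is the group of cyclic coordinate shifts acting isometrically on R⁴, t0 = 0 is a fixed point, ε is standard Gaussian, and the supremum over unit vectors v is only approximated by random search.

import numpy as np

rng = np.random.default_rng(0)
n, n_samples, sigma = 4, 2000, 1.0
eps = rng.standard_normal((n_samples, n))

def orbit(x):
    # all cyclic shifts of x, i.e. the orbit G.x
    return np.stack([np.roll(x, k) for k in range(len(x))])

# crude maximisation of E(sup_g <v, g.eps>) over a sample of random unit vectors v
best = 0.0
for _ in range(200):
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    val = np.mean([np.max(orbit(e) @ v) for e in eps])
    best = max(best, val)
print("estimated consistency bias ≈", sigma * best)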
5.2 Proofs of these theorems

5.2.1 Proof of theorem 5.1
We start with the following simple result, which aims to differentiate the variance of X. This classical result (see [Pen06] for instance) is proved in appendix B in order to be as self-contained as possible:
Lemma 5.1. Let X be a random variable on M such that E(d(x, X)²) < +∞ for some x ∈ M. Then the variance m ↦ E(m) = E(d_M(m, X)²) is a continuous function which is differentiable at any point m ∈ M such that P(X ∈ C(m)) = 0, where C(m) is the cut locus of m. Moreover, at such a point one has:
∇E(m) = −2 E(log_m(X)),
where log_m : M \ C(m) → T_m M is defined for any x ∈ M \ C(m) as the unique u ∈ T_m M such that exp_m(u) = x and ‖u‖_m = d_M(x, m).
We are now ready to prove theorem 5.1.
Proof. (of theorem 5.1) Let m0 be a point in the support of the law of X which is not a fixed point and not in the cut locus of t0. Then there exists g0 ∈ G such that m1 = g0 · m0 ≠ m0. Note that since x ↦ g0 · x is a symmetry (the distance is equivariant under the action of G), we have that m1 = g0 · m0 ∉ C(g0 · t0) = C(t0) (t0 is a fixed point under the action of G). Let v0 = log_{t0}(m0) and v1 = log_{t0}(m1). We have v0 ≠ v1 and, since C(t0) is closed and log_{t0} is a continuous map on M \ C(t0), we have:
lim_{ε→0} (1 / P(X ∈ B(m0, ε))) E(1_{X∈B(m0,ε)} log_{t0}(X)) = v0
(we use here the fact that, since m0 is in the support of the law of X, P(X ∈ B(m0, ε)) > 0 for any ε > 0, so that the denominator does not vanish, and the fact that, since M is a complete manifold, it is a locally compact space (the closed balls are compact) and log_{t0} is locally bounded). Similarly:
lim_{ε→0} (1 / P(X ∈ B(m0, ε))) E(1_{X∈B(m0,ε)} log_{t0}(g0 · X)) = v1.
Thus for sufficiently small ε > 0 we have (since v0 ≠ v1):
E(log_{t0}(X) 1_{X∈B(m0,ε)}) ≠ E(log_{t0}(g0 · X) 1_{X∈B(m0,ε)}).    (33)
By reductio ad absurdum, we suppose that [t0] is a Fréchet mean of [X] and we want to find a contradiction with (33). In order to do that, we introduce simple functions, such as the function x ↦ 1_{x∈B(m0,ε)} which appears in Equation (33). Let s : M → G be a simple function (i.e. a measurable function with a finite number of values in G). Then x ↦ h(x) = s(x) · x is a measurable function³. Now, let E_s(m) = E(d(m, s(X) · X)²) be the variance of the variable s(X) · X. Note that (and this is the main point), since
∀g ∈ G,   d_M(t0, x) = d_M(g · t0, g · x) = d_M(t0, g · x) = d_Q([t0], [x]),
we have E_s(t0) = E(t0). Assume now that [t0] is a Fréchet mean for [X] on the quotient space and let us show that E_s has a global minimum at t0. Indeed, for any m, we have:
E_s(m) = E(d_M(m, s(X) · X)²) ≥ E(d_Q([m], [X])²) ≥ E(d_Q([t0], [X])²) = E_s(t0).

³ Indeed, write s = Σ_{i=1}^n g_i 1_{A_i} where (A_i)_{1≤i≤n} is a partition of M (such that the sum is always defined). Then for any Borel set B ⊂ M we have h⁻¹(B) = ∪_{i=1}^n (x ↦ g_i · x)⁻¹(B) ∩ A_i, which is a measurable set since x ↦ g_i · x is a measurable function.
Now, we want to apply lemma 5.1 to the random variables s(X) · X and X at the point t0. Since we assume that X ∉ C(t0) almost surely, and X ∉ C(t0) implies s(X) · X ∉ C(t0), we get P(s(X) · X ∈ C(t0)) = 0 and lemma 5.1 applies. As t0 is a minimum, we already know that the differential of E_s (respectively E) at t0 should be zero. We get:
E(log_{t0}(X)) = E(log_{t0}(s(X) · X)) = 0.    (34)
Now we apply Equation (34) to a particular simple function defined by s(x) = g0 1_{x∈B(m0,ε)} + e_G 1_{x∉B(m0,ε)}. We split the two expected values in (34) into two parts:
E(log_{t0}(X) 1_{X∈B(m0,ε)}) + E(log_{t0}(X) 1_{X∉B(m0,ε)}) = 0,    (35)
E(log_{t0}(g0 · X) 1_{X∈B(m0,ε)}) + E(log_{t0}(X) 1_{X∉B(m0,ε)}) = 0.    (36)
By subtracting (35) from (36), one gets:
E(log_{t0}(X) 1_{X∈B(m0,ε)}) = E(log_{t0}(g0 · X) 1_{X∈B(m0,ε)}),
which contradicts (33). This concludes the proof.
5.2.2 Proof of theorem 5.2
Proof. The extension to theorem 5.2 is quite straightforward. In this setting many things are now explicit, since d(x, y) = ‖x − y‖, ∇_x d(x, y)² = 2(x − y), log_x(y) = y − x and the cut locus is always empty. It is then sufficient to go along the previous proof and to change the quantities accordingly. Note that the local compactness of the space is not true in infinite dimension. However, this was only used to prove that the log was locally bounded, and this last result is trivial in this setting.
6 Conclusion and discussion
In this article, we exhibit conditions which imply that the template estimation
with the Fréchet mean in quotient space is inconsistent. These conditions are
rather generic. As a result, without any more information, a priori there is
inconsistency. The behaviour of the consistency bias is summarized in table 1.
Future work could surely improve these lower and upper bounds.
In a more general case, when we take an infinite-dimensional vector space quotiented by a non-isometric group action, is there always an inconsistency? An important example of such an action is the action of diffeomorphisms. Can we estimate the consistency bias? In this setting, one estimates the template (or an atlas), but does not exactly compute the Fréchet mean in the quotient space, because a regularization term is added. In this setting, can we ensure that the consistency bias will be small enough to estimate the original template? Otherwise, one has to reconsider the template estimation with stochastic algorithms as in [AKT10] or develop new methods.

Table 1: Behaviour of the consistency bias (CB) with respect to σ², the variability of X = t0 + σε. The constants K_i depend on the kind of noise, on the template t0 and on the group action.

Consistency bias CB | G is any group | Supplementary properties for G a finite group
Upper bound of CB | CB ≤ σ + 2√(σ² + K1 σ) (proposition 4.5) | CB ≤ K2 σ (theorem 3.3)
Lower bound of CB for σ → ∞ when the template is not a fixed point | CB ≥ L ∼_{σ→∞} K3 σ (proposition 4.4) |
Behaviour of CB for σ → 0 when the template is not a fixed point | CB ≤ U ∼_{σ→0} K4 √σ | CB = o(σ^k), ∀k ∈ N, in the example of section 3.3; can we extend this result to finite groups?
CB when the template is a fixed point | CB = σ sup_{‖v‖=1} E(sup_{g∈G} ⟨v, g · ε⟩) (proposition 5.1) |
A Proof of theorems for finite groups’ setting

A.1 Proof of theorem 3.2: differentiation of the variance in the quotient space
In order to show theorem 3.2 we proceed in three steps. First we state some properties and definitions which will be used. Most of these properties are consequences of the fact that the group G is finite. Then we show that the integrand of F is differentiable. Finally we show that we can permute the gradient and integral signs.
1. The set of singular points in R^n is a null set (for the Lebesgue measure), since it is equal to:
∪_{g≠e_G} ker(x ↦ g · x − x),
a finite union of proper linear subspaces of R^n, thanks to the linearity and effectiveness of the action and to the finiteness of the group.
2. If m is regular, then for g, g′ two different elements of G, we set:
H(g · m, g′ · m) = {x ∈ R^n : ‖x − g · m‖ = ‖x − g′ · m‖}.
Moreover H(g · m, g′ · m) = (g · m − g′ · m)^⊥ is a hyperplane.
3. For m a regular point we define the set of points which are equally distant from two different points of the orbit of m:
A_m = ∪_{g≠g′} H(g · m, g′ · m).
Then A_m is a null set. For m regular and x ∉ A_m, the minimum in the definition of the quotient distance:
d_Q([m], [x]) = min_{g∈G} ‖m − g · x‖,    (37)
is reached at a unique g ∈ G; we call g(x, m) this unique element (a small numerical sketch of this registration step is given after this list).
4. By expansion of the squared norm: g minimises ‖m − g · x‖ if and only if g maximises ⟨m, g · x⟩.
5. If m is regular and x ∉ A_m, then:
∀g ∈ G \ {g(x, m)}, ‖m − g(x, m) · x‖ < ‖m − g · x‖;
by continuity of the norm and by the fact that G is a finite group, we can find α > 0 such that for µ ∈ B(m, α) and y ∈ B(x, α):
∀g ∈ G \ {g(x, m)}, ‖µ − g(x, m) · y‖ < ‖µ − g · y‖.    (38)
Therefore for such y and µ we have:
g(x, m) = g(y, µ).
6. For m a regular point, we define the convex cone Cone(m) of R^n:
Cone(m) = {x ∈ R^n : ∀g ∈ G, ‖x − m‖ ≤ ‖x − g · m‖}    (39)
        = {x ∈ R^n : ∀g ∈ G, ⟨m, x⟩ ≥ ⟨g · m, x⟩}.
This is the intersection of |G| − 1 half-spaces: each half-space is delimited by H(m, g · m) for g ≠ e_G (see fig. 1). Cone(m) is the set of points whose projection on [m] is m (where the projection of a point p on [m] is a point g · m which minimises the set {‖p − g · m‖, g ∈ G}).
7. Taking a regular point m allows us to see the quotient. For every point x ∈ R^n we have [x] ∩ Cone(m) ≠ ∅, and card([x] ∩ Cone(m)) ≥ 2 if and only if x ∈ A_m. The border of the cone is Cone(m) \ Int(Cone(m)) = Cone(m) ∩ A_m (we denote by Int(A) the interior of a set A). Therefore Q = R^n/G can be seen as Cone(m) whose borders have been glued together.
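The following sketch (an illustration, not part of the paper) computes the quotient distance of eq. (37) and the optimal registration g(x, m) for the finite group of cyclic shifts acting on R⁴; the chosen template and noise level are arbitrary.

import numpy as np

# Hypothetical sketch of the registration step behind eq. (37) for a finite group:
# G = cyclic shifts of R^n (|G| = n), d_Q([m],[x]) = min_g ||m - g.x||, and g(x, m) is the
# shift realising the minimum (unique whenever x lies outside the null set A_m).
def register(x, m):
    shifts = np.stack([np.roll(x, k) for k in range(len(x))])   # the orbit G.x
    dists = np.linalg.norm(m - shifts, axis=1)
    k_star = int(np.argmin(dists))
    return k_star, dists[k_star]          # g(x, m) encoded by the shift index, and d_Q([m],[x])

m = np.array([3.0, 1.0, 0.0, 0.0])        # a regular point: its cyclic shifts are pairwise distinct
x = np.roll(m, 2) + 0.05 * np.random.default_rng(1).standard_normal(4)
print(register(x, m))                     # recovers the shift by 2 up to noise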
The proof of theorem 3.2 is the consequence of the following lemmas. The first lemma studies the differentiability of the integrand, and the second allows us to permute the gradient and integral signs. Let us denote by f the integrand of F:
∀ m, x ∈ M, f(x, m) = min_{g∈G} ‖m − g · x‖².    (40)
Thus we have F(m) = E(f(X, m)). The min of differentiable functions is not necessarily differentiable; however, we prove the following result:
Lemma A.1. Let m0 be a regular point. If x ∉ A_{m0}, then m ↦ f(x, m) is differentiable at m0; besides we have:
∂f/∂m (x, m0) = 2(m0 − g(x, m0) · x).    (41)
Proof. If m0 is regular and x ∉ A_{m0}, then we know from item 5 of appendix A.1 that g(x, m0) is locally constant. Therefore, around m0 we have:
f(x, m) = ‖m − g(x, m0) · x‖²,
which we can differentiate with respect to m at m0. This proves lemma A.1.
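A small finite-difference check of formula (41) (an illustration, not from the paper), again with the cyclic-shift group acting on R⁴:

import numpy as np

# Hypothetical numerical check of (41): f(x, m) = min_g ||m - g.x||^2 for cyclic shifts.
rng = np.random.default_rng(2)
def f(x, m):
    return min(np.linalg.norm(m - np.roll(x, k)) ** 2 for k in range(len(m)))

m0 = np.array([3.0, 1.0, 0.0, -1.0])           # a regular point
x = np.roll(m0, 1) + 0.1 * rng.standard_normal(4)
k_star = np.argmin([np.linalg.norm(m0 - np.roll(x, k)) for k in range(4)])
grad_formula = 2 * (m0 - np.roll(x, k_star))    # right-hand side of (41)

eps = 1e-6
grad_fd = np.array([(f(x, m0 + eps * e) - f(x, m0 - eps * e)) / (2 * eps)
                    for e in np.eye(4)])        # central finite differences
print(np.max(np.abs(grad_formula - grad_fd)))   # small, up to numerical error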
Now we want to prove that we can permute the integral and the gradient signs. The following lemma provides us with a sufficient condition to permute integral and differentiation signs thanks to the dominated convergence theorem:
Lemma A.2. For every m0 ∈ M there exists an integrable function Φ : M → R₊ such that:
∀m ∈ B(m0, 1), ∀x ∈ M, |f(x, m0) − f(x, m)| ≤ ‖m − m0‖ Φ(x).    (42)
Proof. For all g ∈ G, m ∈ M we have:
‖g · x − m0‖² − ‖g · x − m‖² = ⟨m − m0, 2g · x − (m0 + m)⟩ ≤ ‖m − m0‖ × (‖m0 + m‖ + ‖2x‖),
hence
min_{g∈G} ‖g · x − m0‖² ≤ ‖m − m0‖ (‖m0 + m‖ + ‖2x‖) + ‖g · x − m‖²,
min_{g∈G} ‖g · x − m0‖² ≤ ‖m − m0‖ (‖m0 + m‖ + ‖2x‖) + min_{g∈G} ‖g · x − m‖²,
min_{g∈G} ‖g · x − m0‖² − min_{g∈G} ‖g · x − m‖² ≤ ‖m − m0‖ (2‖m0‖ + ‖m − m0‖ + ‖2x‖).
By symmetry we also get the same control of f(x, m) − f(x, m0), so:
|f(x, m0) − f(x, m)| ≤ ‖m0 − m‖ (2‖m0‖ + ‖m − m0‖ + ‖2x‖).    (43)
The function Φ may depend on x and m0, but not on m. That is why we take only m ∈ B(m0, 1): we can then bound ‖m − m0‖ by 1 in (43), which concludes.
A.2 Proof of theorem 3.1: the gradient is not zero at the template
To prove it, we suppose that ∇F(t0) = 0, and we take the dot product with t0:
⟨∇F(t0), t0⟩ = 2 E(⟨X, t0⟩ − ⟨g(X, t0) · X, t0⟩) = 0.    (44)
Item 4 of appendix A.1, applied to (x, m) ↦ g(x, m), leads to:
⟨X, t0⟩ − ⟨g(X, t0) · X, t0⟩ ≤ 0 almost surely.
So the expected value of a non-positive random variable is null. Then:
⟨X, t0⟩ − ⟨g(X, t0) · X, t0⟩ = 0 almost surely, i.e. ⟨X, t0⟩ = ⟨g(X, t0) · X, t0⟩ almost surely.
Then g = e_G maximizes the dot product almost surely. Therefore (as we know that g(X, t0) is unique almost surely, since t0 is regular):
g(X, t0) = e_G almost surely,
which is a contradiction with Equation (6).
A.3 Proof of theorem 3.3: upper bound of the consistency bias
In order to show this theorem, we use the following lemma:
Lemma A.3. We write X = t0 + ε where E(ε) = 0, and we make the assumption that the noise ε is a subgaussian random variable. This means that there exists c > 0 such that:
∀m ∈ M = R^n, E(exp(⟨ε, m⟩)) ≤ c exp(s²‖m‖²/2).    (45)
If for m ∈ M we have:
ρ̃ := d_Q([m], [t0]) ≥ s√(2 log(c|G|)),    (46)
then we have:
ρ̃² − ρ̃ s√(8 log(c|G|)) ≤ F(m) − E(‖ε‖²).    (47)
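For instance, if the noise is Gaussian, ε ∼ N(0, s²I_n), then E(exp(⟨ε, m⟩)) = exp(s²‖m‖²/2) for every m ∈ R^n, so assumption (45) is satisfied with c = 1; this is compatible with the choice c = 1 made in the proof of theorem 3.3 below.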
Proof. (of lemma A.3) First we expand the right member of the inequality (47):
E(‖ε‖²) − F(m) = E(max_{g∈G} (‖X − t0‖² − ‖X − g·m‖²)).
We use the formula ‖A‖² − ‖A + B‖² = −2⟨A, B⟩ − ‖B‖² with A = X − t0 and B = t0 − g·m:
E(‖ε‖²) − F(m) = E(max_{g∈G} (−2⟨X − t0, t0 − g·m⟩ − ‖t0 − g·m‖²)) = E(max_{g∈G} η_g),    (48)
with η_g = −‖t0 − g·m‖² + 2⟨ε, g·m − t0⟩. Our goal is to find a lower bound of F(m) − E(‖ε‖²); that is why we search for an upper bound of E(max_{g∈G} η_g) with Jensen's inequality. We take x > 0 and we get, by using the assumption (45):
exp(x E(max_{g∈G} η_g)) ≤ E(exp(max_{g∈G} x η_g)) ≤ E(Σ_{g∈G} exp(x η_g))
 ≤ Σ_{g∈G} exp(−x‖t0 − g·m‖²) E(exp(⟨ε, 2x(g·m − t0)⟩))
 ≤ c Σ_{g∈G} exp(−x‖t0 − g·m‖²) exp(2s²x²‖g·m − t0‖²)
 ≤ c Σ_{g∈G} exp(‖g·m − t0‖²(−x + 2x²s²)).    (49)
Now if (−x + 2s²x²) < 0, we can bound the sum in (49) by the number of summed terms times its largest term, which is reached when g minimises ‖g · m − t0‖. Moreover (−x + 2x²s²) < 0 ⟺ 0 < x < 1/(2s²). Then we have:
exp(x E(max_{g∈G} η_g)) ≤ c|G| exp(ρ̃²(−x + 2x²s²)) as soon as 0 < x < 1/(2s²).
Then, by taking the log:
E(max_{g∈G} η_g) ≤ log(c|G|)/x + (2xs² − 1)ρ̃².    (50)
Now we find the x which optimises inequality (50). By differentiation, the right member of inequality (50) is minimal for x⋆ = √(log(c|G|)/2)/(sρ̃), which is a valid choice because x⋆ ∈ (0, 1/(2s²)) by the assumption (46). With the equations (48) and (50) and x⋆ we get the result.
Proof. (of theorem 3.3) We take m⋆ ∈ argmin F, ρ̃ = d_Q([m⋆], [t0]), and ε = X − t0. We have F(m⋆) ≤ F(t0) ≤ E(‖ε‖²), so F(m⋆) − E(‖ε‖²) ≤ 0. If ρ̃ > s√(2 log|G|), then we can apply lemma A.3 with c = 1. Thus:
ρ̃² − ρ̃ s√(8 log|G|) ≤ F(m⋆) − E(‖ε‖²) ≤ 0,
which yields ρ̃ ≤ s√(8 log|G|). If ρ̃ ≤ s√(2 log|G|), we have nothing to prove.
Note that the proof of this upper bound does not use the fact that the action
is isometric, therefore this upper bound is true for every finite group action.
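As a hedged numerical illustration of this upper bound (the group, the template and the naive estimation scheme below are illustrative choices, not the paper's experiments), one can estimate a Fréchet mean in the quotient space by alternating registration and averaging, and check that the observed bias d_Q([m⋆], [t0]) stays below s√(8 log|G|):

import numpy as np

# G = cyclic shifts on R^n, X = t0 + eps with eps ~ N(0, s^2 I), hence subgaussian with c = 1.
rng = np.random.default_rng(0)
n, s, n_samples = 4, 2.0, 5000
t0 = np.array([2.0, 0.5, -1.0, 0.0])
X = t0 + s * rng.standard_normal((n_samples, n))

def best_shift(x, m):
    shifts = np.stack([np.roll(x, k) for k in range(n)])
    return shifts[np.argmin(np.linalg.norm(shifts - m, axis=1))]

m = X[0].copy()
for _ in range(50):                        # alternate registration and averaging
    m = np.mean([best_shift(x, m) for x in X], axis=0)

rho = min(np.linalg.norm(np.roll(m, k) - t0) for k in range(n))   # d_Q([m*],[t0])
print(rho, s * np.sqrt(8 * np.log(n)))     # the observed bias stays below the bound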
A.4 Proof of proposition 3.2: inconsistency in R² for the action of translation
Proof. We suppose that E(X) ∈ HP_A ∪ L. In this setting we call τ(x, m) an element of the group G = T which minimises ‖τ · x − m‖ (see (37)), instead of g(x, m). The variance in the quotient space at the point m is:
F(m) = E(min_{τ∈Z/2Z} ‖τ · X − m‖²) = E(‖τ(X, m) · X − m‖²).
As we want to minimise F and F(1 · m) = F(m), we can suppose that m ∈ HP_A ∪ L. We can write explicitly the value of τ(x, m) for x ∈ M:
• If x ∈ HP_A ∪ L, we can set τ(x, m) = 0 (because in this case x and m are on the same half-plane delimited by L, the perpendicular bisector of m and 1 · m).
• If x ∈ HP_B, then we can set τ(x, m) = 1 (because in this case x and m are not on the same half-plane delimited by L, the perpendicular bisector of m and 1 · m).
This allows us to write the variance at a point m ∈ HP_A:
F(m) = E(‖X − m‖² 1_{X∈HP_A∪L}) + E(‖1 · X − m‖² 1_{X∈HP_B}).
Then we define the random variable Z by Z = X 1_{X∈HP_A∪L} + 1 · X 1_{X∈HP_B}, such that for m ∈ HP_A we have F(m) = E(‖Z − m‖²) and F(m) = F(1 · m). Thus if m⋆ is a global minimiser of F, then m⋆ = E(Z) or m⋆ = 1 · E(Z). So the Fréchet mean of [X] is [E(Z)]. Here, instead of using theorem 3.1, we can work explicitly: indeed there is no inconsistency if and only if E(Z) = E(X) (E(Z) = 1 · E(X) would be another possibility, but by assumption E(Z), E(X) ∈ HP_A). By writing X = X 1_{X∈HP_A} + X 1_{X∈HP_B∪L}, we have:
E(Z) = E(X) ⟺ E(1 · X 1_{X∈HP_B∪L}) = E(X 1_{X∈HP_B∪L})
 ⟺ 1 · E(X 1_{X∈HP_B∪L}) = E(X 1_{X∈HP_B∪L})
 ⟺ E(X 1_{X∈HP_B∪L}) ∈ L
 ⟺ P(X ∈ HP_B) = 0.
Therefore there is an inconsistency if and only if P(X ∈ HP_B) > 0 (we recall that we made the assumption that E(X) ∈ HP_A ∪ L). If E(X) is regular (i.e. E(X) ∉ L), then there is an inconsistency if and only if X takes values in HP_B (this is exactly the condition of theorem 3.1, but in this particular case it is a necessary and sufficient condition). This proves point 1. Now we make the assumption that X follows a Gaussian distribution in order to compute E(Z) (note that we could take another noise, as long as we are able to compute E(Z)). For that we convert to polar coordinates: (u, v)^T = E(X) + (r cos θ, r sin θ)^T where r > 0 and θ ∈ [0, 2π]. We also define d = dist(E(X), L); E(X) is a regular point if and only if d > 0. We still suppose that E(X) = (α, β)^T ∈ HP_A ∪ L. First we parametrise, as a function of (r, θ), the points which are in HP_B:
v < u ⟺ β + r sin θ < α + r cos θ ⟺ (β − α)/r < √2 cos(θ + π/4)
 ⟺ d/r < cos(θ + π/4)
 ⟺ θ ∈ [−π/4 − arccos(d/r), −π/4 + arccos(d/r)] and d < r.
Then we compute E(Z):
E(Z) = E(X 1_{X∈HP_A}) + E(1 · X 1_{X∈HP_B})
 = ∫₀^d ∫₀^{2π} (α + r cos θ, β + r sin θ)^T exp(−r²/(2s²))/(2πs²) r dθ dr
 + ∫_d^{+∞} ∫_{arccos(d/r) − π/4}^{2π − π/4 − arccos(d/r)} (α + r cos θ, β + r sin θ)^T exp(−r²/(2s²))/(2πs²) r dθ dr
 + ∫_d^{+∞} ∫_{−π/4 − arccos(d/r)}^{−π/4 + arccos(d/r)} (β + r sin θ, α + r cos θ)^T exp(−r²/(2s²))/(2πs²) r dθ dr
 = E(X) + ( ∫_d^{+∞} √2 r² exp(−r²/(2s²))/(πs²) g(d/r) dr ) × (−1, 1)^T.
We compute ρ̃ = d_Q([E(X)], [E(Z)]), where d_Q is the distance in the quotient space defined in (1). As we know that E(X) and E(Z) are in the same half-plane delimited by L, we have ρ̃ = d_Q([E(Z)], [E(X)]) = ‖E(Z) − E(X)‖. This proves eq. (9); note that items 2a to 2c are direct consequences of eq. (9) and basic analysis.
B Proof of lemma 5.1: differentiation of the variance in the top space
Proof. By the triangle inequality it is easy to show that E is finite and continuous everywhere. Moreover, it is a well-known fact that x ↦ d_M(x, z)² is differentiable at any point of M \ C(z) (i.e. when z ∉ C(x)) with derivative −2 log_x(z). Now, since:
|d_M(x, z)² − d_M(y, z)²| = |d_M(x, z) − d_M(y, z)| · |d_M(x, z) + d_M(y, z)| ≤ d_M(x, y)(2d_M(x, z) + d_M(y, x)),
in a local chart φ : U → V ⊂ R^n with t = φ(m), we have locally around t that:
h ↦ d_M(φ⁻¹(t), φ⁻¹(t + h))
is smooth and |d_M(φ⁻¹(t), φ⁻¹(t + h))| ≤ C|h| for some C > 0. Hence, for sufficiently small h, |d_M(φ⁻¹(t), z)² − d_M(φ⁻¹(t + h), z)²| ≤ C|h|(2d_M(m, z) + 1). We get the result from Lebesgue's dominated convergence theorem, with E(d_M(m, X)) ≤ E(d_M(m, X)² + 1) < +∞.
References
[AAT07]
Stéphanie Allassonnière, Yali Amit, and Alain Trouvé. Towards a
coherent statistical framework for dense deformable template estimation. Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 69(1):3–29, 2007.
[ADP15]
Stéphanie Allassonnière, Loïc Devilliers, and Xavier Pennec. Estimating the template in the total space with the fréchet mean on
quotient spaces may have a bias: a case study on vector spaces quotiented by the group of translations. In Mathematical Foundations
of Computational Anatomy (MFCA’15), 2015.
[AKT10]
Stéphanie Allassonnière, Estelle Kuhn, and Alain Trouvé. Construction of bayesian deformable models via a stochastic approximation
algorithm: a convergence study. Bernoulli, 16(3):641–678, 2010.
[BB08]
Abhishek Bhattacharya and Rabi Bhattacharya. Statistics on riemannian manifolds: asymptotic distribution and curvature. Proceedings of the American Mathematical Society, 136(8):2959–2967,
2008.
[BC11]
Jérémie Bigot and Benjamin Charlier. On the consistency of fréchet
means in deformable models for curve and image analysis. Electronic
Journal of Statistics, 5:1054–1089, 2011.
[BG14]
Dominique Bontemps and Sébastien Gadat. Bayesian methods
for the shape invariant model. Electronic Journal of Statistics,
8(1):1522–1568, 2014.
[CWS16]
Jason Cleveland, Wei Wu, and Anuj Srivastava. Norm-preserving
constraint in the fisher–rao registration and its application in signal estimation. Journal of Nonparametric Statistics, 28(2):338–359,
2016.
[DPC+ 14] Stanley Durrleman, Marcel Prastawa, Nicolas Charon, Julie R Korenberg, Sarang Joshi, Guido Gerig, and Alain Trouvé. Morphometry of anatomical shape complexes with dense deformations and
sparse parameters. NeuroImage, 101:35–49, 2014.
[Fré48]
Maurice Fréchet. Les elements aléatoires de nature quelconque dans
un espace distancié. In Annales de l’institut Henri Poincaré, volume 10, pages 215–310, 1948.
[GM98]
Ulf Grenander and Michael I. Miller. Computational anatomy: An
emerging discipline. Q. Appl. Math., LVI(4):617–694, December
1998.
[HCG+ 13] Sebastian Hitziger, Maureen Clerc, Alexandre Gramfort, Sandrine
Saillet, Christian Bénar, and Théodore Papadopoulo. Jitter-adaptive
dictionary learning-application to multi-trial neuroelectric signals.
arXiv preprint arXiv:1301.3611, 2013.
[JDJG04] Sarang Joshi, Brad Davis, Mathieu Jomier, and Guido Gerig. Unbiased diffeomorphic atlas construction for computational anatomy.
Neuroimage, 23:S151–S160, 2004.
[Kar77]
Hermann Karcher. Riemannian center of mass and mollifier smoothing. Communications on pure and applied mathematics, 30(5):509–
541, 1977.
[Ken89]
David G Kendall. A survey of the statistical theory of shape. Statistical Science, pages 87–99, 1989.
[Ken90]
Wilfrid S Kendall. Probability, convexity, and harmonic maps with
small image i: uniqueness and fine existence. Proceedings of the
London Mathematical Society, 3(2):371–406, 1990.
[KSW11]
Sebastian A. Kurtek, Anuj Srivastava, and Wei Wu. Signal estimation under random time-warpings and nonlinear signal alignment. In J. Shawe-Taylor, R.S. Zemel, P.L. Bartlett, F. Pereira, and
K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 675–683. Curran Associates, Inc., 2011.
[LAJ+ 12] Sidonie Lefebvre, Stéphanie Allassonnière, Jérémie Jakubowicz,
Thomas Lasne, and Eric Moulines. Aircraft classification with a
low resolution infrared sensor. Machine Vision and Applications,
24(1):175–186, 2012.
[MHP16]
Nina Miolane, Susan Holmes, and Xavier Pennec. Template
shape estimation: correcting an asymptotic bias. arXiv preprint
arXiv:1610.01502, 2016.
[MP15]
Nina Miolane and Xavier Pennec. Biased estimators on quotient
spaces. In Geometric Science of Information. Second International
Conference, GSI 2015, Palaiseau, France, October 28-30, 2015, Proceedings, volume 9389. Springer, 2015.
[Pen06]
Xavier Pennec. Intrinsic statistics on riemannian manifolds: Basic
tools for geometric measurements. Journal of Mathematical Imaging
and Vision, 25(1):127–154, 2006.
[SBG08]
Mert Sabuncu, Serdar K. Balci, and Polina Golland. Discovering
modes of an image population through mixture modeling. Proceeding
of the MICCAI conference, LNCS(5242):381–389, 2008.
[Zie77]
Herbert Ziezold. On expected figures and a strong law of large numbers for random elements in quasi-metric spaces. In Transactions of
the Seventh Prague Conference on Information Theory, Statistical
Decision Functions, Random Processes and of the 1974 European
Meeting of Statisticians, pages 591–602. Springer, 1977.
[ZSF13]
Miaomiao Zhang, Nikhil Singh, and P. Thomas Fletcher. Bayesian estimation of regularization and atlas building in diffeomorphic image registration. In James C. Gee, Sarang Joshi, Kilian M. Pohl, William M. Wells, and Lilla Zöllei, editors, Information Processing in Medical Imaging, volume 7917 of Lecture Notes in Computer Science, pages 37–48. Springer Berlin Heidelberg, 2013.
arXiv:1802.06488v1 [] 19 Feb 2018

Tiny SSD: A Tiny Single-shot Detection Deep Convolutional Neural Network for Real-time Embedded Object Detection

Alexander Wong, Mohammad Javad Shafiee, Francis Li, Brendan Chwyl
Dept. of Systems Design Engineering
University of Waterloo, DarwinAI
{a28wong, mjshafiee}@uwaterloo.ca, {francis, brendan}@darwinai.ca
Abstract—Object detection is a major challenge in computer
vision, involving both object classification and object localization within a scene. While deep neural networks have been
shown in recent years to yield very powerful techniques for
tackling the challenge of object detection, one of the biggest
challenges with enabling such object detection networks for
widespread deployment on embedded devices is high computational and memory requirements. Recently, there has been
an increasing focus in exploring small deep neural network
architectures for object detection that are more suitable for embedded devices, such as Tiny YOLO and SqueezeDet. Inspired
by the efficiency of the Fire microarchitecture introduced in
SqueezeNet and the object detection performance of the single-shot detection macroarchitecture introduced in SSD, this paper
introduces Tiny SSD, a single-shot detection deep convolutional
neural network for real-time embedded object detection that
is composed of a highly optimized, non-uniform Fire sub-network stack and a non-uniform sub-network stack of highly
optimized SSD-based auxiliary convolutional feature layers
designed specifically to minimize model size while maintaining
object detection performance. The resulting Tiny SSD possesses
a model size of 2.3MB (∼26X smaller than Tiny YOLO) while
still achieving an mAP of 61.3% on VOC 2007 (∼4.2% higher
than Tiny YOLO). These experimental results show that very
small deep neural network architectures can be designed for
real-time object detection that are well-suited for embedded
scenarios.
Keywords-object detection; deep neural network; embedded;
real-time; single-shot
I. INTRODUCTION
Object detection can be considered a major challenge in
computer vision, as it involves a combination of object classification and object localization within a scene (see Figure 1).
The advent of modern advances in deep learning [7], [6]
has led to significant advances in object detection, with the
majority of research focusing on designing increasingly more
complex object detection networks for improved accuracy
such as SSD [9], R-CNN [1], Mask R-CNN [2], and other
extended variants of these networks [4], [8], [15]. Despite the
fact that such object detection networks have shown state-of-the-art object detection accuracies beyond what can be
achieved by previous state-of-the-art methods, such networks
are often intractable for use for embedded applications due
to computational and memory constraints. In fact, even faster
variants of these networks such as Faster R-CNN [13] are only
capable of single-digit frame rates on a high-end graphics processing unit (GPU). As such, more efficient deep neural networks for real-time embedded object detection are highly desired given the large number of operational scenarios that such networks would enable, ranging from smartphones to aerial drones.

Figure 1. Tiny SSD results on the VOC test set. The bounding boxes, categories, and confidences are shown.
Recently, there has been an increasing focus in exploring
small deep neural network architectures for object detection
that are more suitable for embedded devices. For example,
Redmon et al. introduced YOLO [11] and YOLOv2 [12],
which were designed with speed in mind and were able to
achieve real-time object detection performance on a high-end
Nvidia Titan X desktop GPU. However, the model size of
YOLO and YOLOv2 remains very large (753 MB and
193 MB, respectively), making them too large from a memory
perspective for most embedded devices. Furthermore, their
object detection speed drops considerably when running on
embedded chips [14]. To address this issue, Tiny YOLO [10]
was introduced where the network architecture was reduced
considerably to greatly reduce model size (60.5 MB) as well
as greatly reduce the number of floating point operations
required (just 6.97 billion operations) at a cost of object
detection accuracy (57.1% on the twenty-category VOC 2007
test set). Similarly, Wu et al. introduced SqueezeDet [16], a
fully convolutional neural network that leveraged the efficient
Fire microarchitecture introduced in SqueezeNet [5] within
an end-to-end object detection network architecture. Given
that the Fire microarchitecture is highly efficient, the resulting
SqueezeDet had a reduced model size specifically for the
purpose of autonomous driving. However, SqueezeDet has
only been demonstrated for object detection with a limited number of object categories (only three), and thus its ability to handle
As such, the design of highly efficient deep neural network
architectures that are well-suited for real-time embedded
object detection while achieving improved object detection
accuracy on a variety of object categories is still a challenge
worth tackling.
In an effort to achieve a fine balance between object
detection accuracy and real-time embedded requirements
(i.e., small model size and real-time embedded inference
speed), we take inspiration from both the incredible efficiency
of the Fire microarchitecture introduced in SqueezeNet [5]
and the powerful object detection performance demonstrated
by the single-shot detection macroarchitecture introduced
in SSD [9]. The resulting network architecture achieved
in this paper is Tiny SSD, a single-shot detection deep
convolutional neural network designed specifically for real-time embedded object detection. Tiny SSD is composed
of a non-uniform highly optimized Fire sub-network stack,
which feeds into a non-uniform sub-network stack of highly
optimized SSD-based auxiliary convolutional feature layers,
designed specifically to minimize model size while retaining
object detection performance.
This paper is organized as follows. Section 2 describes the
highly optimized Fire sub-network stack leveraged in the Tiny
SSD network architecture. Section 3 describes the highly
optimized sub-network stack of SSD-based convolutional
feature layers used in the Tiny SSD network architecture.
Section 4 presents experimental results that evaluate the
efficacy of Tiny SSD for real-time embedded object detection.
Finally, conclusions are drawn in Section 5.
II. OPTIMIZED FIRE SUB-NETWORK STACK
The overall network architecture of the Tiny SSD network
for real-time embedded object detection is composed of two
main sub-network stacks: i) a non-uniform Fire sub-network
stack, and ii) a non-uniform sub-network stack of highly
optimized SSD-based auxiliary convolutional feature layers,
with the first sub-network stack feeding into the second sub-network stack. In this section, let us first discuss in detail the design philosophy behind the first sub-network stack of the Tiny SSD network architecture: the optimized Fire sub-network stack.
A powerful approach to designing smaller deep neural
network architectures for embedded inference is to take a
more principled approach and leverage architectural design
strategies to achieve more efficient deep neural network
microarchitectures [3], [5]. A very illustrative example of
such a principled approach is the SqueezeNet [5] network architecture, where three key design strategies were leveraged:
1) reduce the number of 3 × 3 filters as much as possible,
2) reduce the number of input channels to 3 × 3 filters where possible, and
3) perform downsampling at a later stage in the network.
This principled design strategy led to the design of what the authors referred to as the Fire module, which consists of a squeeze convolutional layer of 1 × 1 filters (which realizes the second design strategy of effectively reducing the number of input channels to 3 × 3 filters) that feeds into an expand convolutional layer comprised of both 1 × 1 filters and 3 × 3 filters (which realizes the first design strategy of effectively reducing the number of 3 × 3 filters). An illustration of the Fire microarchitecture is shown in Figure 2.

Figure 2. An illustration of the Fire microarchitecture. The output of the previous layer is squeezed by a squeeze convolutional layer of 1 × 1 filters, which reduces the number of input channels to 3 × 3 filters. The result of the squeeze convolutional layer is passed into the expand convolutional layer, which consists of both 1 × 1 and 3 × 3 filters.
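As an illustration, a minimal Fire module can be sketched as follows. This is a hedged PyTorch sketch under stated assumptions, not the authors' Caffe implementation; the squeeze/expand filter counts follow the Fire1 row of Table I, and the input channel count is assumed to be the 57 channels produced by Conv1.

import torch
import torch.nn as nn

# Sketch of a Fire module: a 1x1 "squeeze" layer reduces the input channels, then parallel
# 1x1 and 3x3 "expand" layers are applied and concatenated, following the design strategies above.
class Fire(nn.Module):
    def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)

# e.g. a Fire module with a 15-channel squeeze and 49 + 53 expand filters (cf. Fire1 in Table I)
y = Fire(57, 15, 49, 53)(torch.randn(1, 57, 74, 74))   # output shape: (1, 102, 74, 74)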
Inspired by the elegance and simplicity of the Fire
microarchitecture design, we design the first sub-network
stack of the Tiny SSD network architecture as a standard
convolutional layer followed by a set of highly optimized
Fire modules. One of the key challenges to designing this
sub-network stack is to determine the ideal number of Fire
modules as well as the ideal microarchitecture of each of
the Fire modules to achieve a fine balance between object
detection performance and model size as well as inference
speed. First, it was determined empirically that 10 Fire
modules in the optimized Fire sub-network stack provided
strong object detection performance. In terms of the ideal
microarchitecture, the key design parameters of the Fire
microarchitecture are the number of filters of each size
(1 × 1 or 3 × 3) that form this microarchitecture. In the
SqueezeNet network architecture that first introduced the
Fire microarchitecture [5], the microarchitectures of the Fire
modules are largely uniform, with many of the modules
sharing the same microarchitecture configuration. In an effort
to achieve more optimized Fire microarchitectures on a permodule basis, the number of filters of each size in each Fire
module is optimized to have as few parameters as possible while still maintaining the overall object detection accuracy. As a result, the optimized Fire sub-network stack in the Tiny SSD network architecture is highly non-uniform in nature for an optimal sub-network architecture configuration. Table I shows the overall architecture of the highly optimized Fire sub-network stack in Tiny SSD, and the number of parameters in each layer of the sub-network stack.

Table I
THE OPTIMIZED FIRE SUB-NETWORK STACK OF THE TINY SSD NETWORK ARCHITECTURE. THE NUMBER OF FILTERS AND INPUT SIZE TO EACH LAYER ARE REPORTED FOR THE CONVOLUTIONAL LAYERS AND FIRE MODULES. EACH FIRE MODULE IS REPORTED IN ONE ROW FOR A BETTER REPRESENTATION. "x@S – y@E1 – z@E3" STANDS FOR x 1 × 1 FILTERS IN THE SQUEEZE CONVOLUTIONAL LAYER, AND y 1 × 1 FILTERS AND z 3 × 3 FILTERS IN THE EXPAND CONVOLUTIONAL LAYER.

Type / Stride | Filter Shapes | Input Size
Conv1 / s2 | 3 × 3 × 57 | 300 × 300
Pool1 / s2 | 3 × 3 | 149 × 149
Fire1 | 15@S – 49@E1 – 53@E3 (Concat1) | 74 × 74
Fire2 | 15@S – 54@E1 – 52@E3 (Concat2) | 74 × 74
Pool3 / s2 | 3 × 3 | 74 × 74
Fire3 | 29@S – 92@E1 – 94@E3 (Concat3) | 37 × 37
Fire4 | 29@S – 90@E1 – 83@E3 (Concat4) | 37 × 37
Pool5 / s2 | 3 × 3 | 37 × 37
Fire5 | 44@S – 166@E1 – 161@E3 (Concat5) | 18 × 18
Fire6 | 45@S – 155@E1 – 146@E3 (Concat6) | 18 × 18
Fire7 | 49@S – 163@E1 – 171@E3 (Concat7) | 18 × 18
Fire8 | 25@S – 29@E1 – 54@E3 (Concat8) | 18 × 18
Pool9 / s2 | 3 × 3 | 18 × 18
Fire9 | 37@S – 45@E1 – 56@E3 (Concat9) | 9 × 9
Pool10 / s2 | 3 × 3 | 9 × 9
Fire10 | 38@S – 41@E1 – 44@E3 (Concat10) | 4 × 4

III. OPTIMIZED SUB-NETWORK STACK OF SSD-BASED CONVOLUTIONAL FEATURE LAYERS

In this section, let us discuss in detail the design philosophy behind the second sub-network stack of the Tiny SSD network architecture: the sub-network stack of highly optimized SSD-based auxiliary convolutional feature layers. One of the most widely-used and effective object detection network macroarchitectures in recent years has been the single-shot multibox detection (SSD) macroarchitecture [9]. The SSD macroarchitecture augments a base feature extraction network architecture with a set of auxiliary convolutional feature layers and convolutional predictors. The auxiliary convolutional feature layers are designed such that they decrease in size in a progressive manner, thus enabling the flexibility of detecting objects within a scene across different scales. Each of the auxiliary convolutional feature layers can then be leveraged to obtain either: i) a confidence score for an object category, or ii) a shape offset relative to default bounding box coordinates [9]. As a result, a number of object detections can be obtained per object category in this way, in a powerful, end-to-end single-shot manner.

Figure 3. An illustration of the network architecture of the second sub-network stack of Tiny SSD. The output of three Fire modules and two auxiliary convolutional feature layers, all with highly optimized microarchitecture configurations, are combined together for object detection.

Inspired by the powerful object detection performance and multi-scale flexibility of the SSD macroarchitecture [9], the second sub-network stack of Tiny SSD is comprised of a set of auxiliary convolutional feature layers and convolutional predictors with highly optimized microarchitecture configurations (see Figure 3).
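The sketch below illustrates the general idea of such convolutional predictors (a hedged PyTorch sketch, not the authors' implementation): each selected feature map receives a 3 × 3 localization head predicting 4 box offsets per default box and a 3 × 3 confidence head predicting one score per class per default box, in the spirit of the *-mbox-loc and *-mbox-conf layers listed later in Table II. The feature channel counts and the numbers of default boxes used here are illustrative assumptions.

import torch
import torch.nn as nn

num_classes = 21                         # 20 VOC categories plus background (an assumption)
feature_channels = [256, 512, 128]       # channels of three feature maps picked from the backbone
boxes_per_cell = [4, 6, 4]               # default boxes per spatial location at each scale

loc_heads = nn.ModuleList(
    nn.Conv2d(c, b * 4, kernel_size=3, padding=1)
    for c, b in zip(feature_channels, boxes_per_cell))
conf_heads = nn.ModuleList(
    nn.Conv2d(c, b * num_classes, kernel_size=3, padding=1)
    for c, b in zip(feature_channels, boxes_per_cell))

def predict(feature_maps):
    # returns flattened (box offsets, class scores) gathered over all scales
    locs, confs = [], []
    for f, lh, ch in zip(feature_maps, loc_heads, conf_heads):
        locs.append(lh(f).permute(0, 2, 3, 1).reshape(f.shape[0], -1, 4))
        confs.append(ch(f).permute(0, 2, 3, 1).reshape(f.shape[0], -1, num_classes))
    return torch.cat(locs, dim=1), torch.cat(confs, dim=1)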
As with the Fire microarchitecture, a key challenge to
designing this sub-network stack is to determine the ideal
microarchitecture of each of the auxiliary convolutional
feature layers and convolutional predictors to achieve a fine
balance between object detection performance and model
size as well as inference speed. The key design parameters
of the auxiliary convolutional feature layer microarchitecture
are the number of filters that form this microarchitecture.
As such, similar to the strategy taken for constructing
the highly optimized Fire sub-network stack, the number
of filters in each auxiliary convolutional feature layer is
optimized to minimize the number of parameters while
preserving overall object detection accuracy of the full Tiny
SSD network. As a result, the optimized sub-network stack
of auxiliary convolutional feature layers in the Tiny SSD
network architecture is highly non-uniform in nature for
an optimal sub-network architecture configuration. Table II
shows the overall architecture of the optimized sub-network
stack of the auxiliary convolutional feature layers within the
Tiny SSD network architecture, along with the number of parameters in each layer.

Table II
THE OPTIMIZED SUB-NETWORK STACK OF THE AUXILIARY CONVOLUTIONAL FEATURE LAYERS WITHIN THE TINY SSD NETWORK ARCHITECTURE. THE INPUT SIZES TO EACH CONVOLUTIONAL LAYER AND KERNEL SIZES ARE REPORTED.

Type / Stride | Filter Shape | Input Size
Conv12-1 / s2 | 3 × 3 × 51 | 4 × 4
Conv12-2 | 3 × 3 × 46 | 4 × 4
Conv13-1 | 3 × 3 × 55 | 2 × 2
Conv13-2 | 3 × 3 × 85 | 2 × 2
Fire5-mbox-loc | 3 × 3 × 16 | 37 × 37
Fire5-mbox-conf | 3 × 3 × 84 | 37 × 37
Fire9-mbox-loc | 3 × 3 × 24 | 18 × 18
Fire9-mbox-conf | 3 × 3 × 126 | 18 × 18
Fire10-mbox-loc | 3 × 3 × 24 | 9 × 9
Fire10-mbox-conf | 3 × 3 × 126 | 9 × 9
Fire11-mbox-loc | 3 × 3 × 24 | 4 × 4
Fire11-mbox-conf | 3 × 3 × 126 | 4 × 4
Conv12-2-mbox-loc | 3 × 3 × 24 | 2 × 2
Conv12-2-mbox-conf | 3 × 3 × 126 | 2 × 2
Conv13-2-mbox-loc | 3 × 3 × 16 | 1 × 1
Conv13-2-mbox-conf | 3 × 3 × 84 | 1 × 1
IV. PARAMETER PRECISION OPTIMIZATION

In this section, let us discuss the parameter precision optimization strategy for Tiny SSD. For embedded scenarios where the computational requirements and memory requirements are more strict, an effective strategy for reducing the computational and memory footprint of deep neural networks is to reduce the data precision of the parameters in a deep neural network. In particular, modern CPUs and GPUs have moved towards accelerated mixed precision operations as well as better handling of reduced parameter precision, and thus the ability to take advantage of these factors can yield noticeable improvements for embedded scenarios. For Tiny SSD, the parameters are represented in half precision floating-point, thus leading to further deep neural network model size reductions while having a negligible effect on object detection accuracy.

V. EXPERIMENTAL RESULTS AND DISCUSSION

To study the utility of Tiny SSD for real-time embedded object detection, we examine the model size, object detection accuracies, and computational operations on the VOC2007/2012 datasets. For evaluation purposes, the Tiny YOLO network [10] was used as a baseline reference comparison given its popularity for embedded object detection; it was also demonstrated to possess one of the smallest model sizes in the literature for object detection on the VOC 2007/2012 datasets (only 60.5MB in size and requiring just 6.97 billion operations). The VOC2007/2012 datasets consist of natural images that have been annotated with 20 different types of objects, with illustrative examples shown in Figure 4. The tested deep neural networks were trained using the VOC2007/2012 training datasets, and the mean average precision (mAP) was computed on the VOC2007 test dataset to evaluate the object detection accuracy of the deep neural networks.

A. Training Setup

The proposed Tiny SSD network was trained for 220,000 iterations in the Caffe framework with a training batch size of 24. RMSProp was utilized as the training policy with base learning rate set to 0.00001 and γ = 0.5.

Table III
OBJECT DETECTION ACCURACY RESULTS OF TINY SSD ON VOC 2007 TEST SET. TINY YOLO RESULTS ARE PROVIDED AS A BASELINE COMPARISON.

Model Name | Model size | mAP (VOC 2007)
Tiny YOLO [10] | 60.5MB | 57.1%
Tiny SSD | 2.3MB | 61.3%

Table IV
RESOURCE USAGE OF TINY SSD.

Model Name | Total number of Parameters | Total number of MACs
Tiny SSD | 1.13M | 571.09M
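A quick arithmetic sanity check (not from the paper) connects Table IV to Table III: storing roughly 1.13 million parameters in half precision, at 2 bytes per parameter, gives about 2.3MB, which matches the reported model size.

# Pure arithmetic, no framework needed.
num_params = 1.13e6
size_fp32_mb = num_params * 4 / 1e6   # ~4.5 MB in single precision
size_fp16_mb = num_params * 2 / 1e6   # ~2.3 MB in half precision, matching Table III
print(size_fp32_mb, size_fp16_mb)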
B. Discussion
Table III shows the model size and the object detection
accuracy of the proposed Tiny SSD network on the VOC
2007 test dataset, along with the model size and the object
detection accuracy of Tiny YOLO. A number of interesting
observations can be made. First, the resulting Tiny SSD
possesses a model size of 2.3MB, which is ∼26X smaller
than Tiny YOLO. The significantly smaller model size of
Tiny SSD compared to Tiny YOLO illustrates its efficacy
for greatly reducing the memory requirements for leveraging
Tiny SSD for real-time embedded object detection purposes.
Second, it can be observed that the resulting Tiny SSD
was still able to achieve an mAP of 61.3% on the VOC
2007 test dataset, which is ∼4.2% higher than that achieved
using Tiny YOLO. Figure 5 demonstrates several example
object detection results produced by the proposed Tiny SSD
compared to Tiny YOLO. It can be observed that Tiny SSD
has comparable object detection results as Tiny YOLO in
some cases, while in some cases outperforms Tiny YOLO in
assigning more accurate category labels to detected objects.
For example, in the first image case, Tiny SSD is able to
detect the chair in the scene, while Tiny YOLO misses the
chair. In the third image case, Tiny SSD is able to identify
the dog in the scene while Tiny YOLO detects two bounding
boxes around the dog, with one of the bounding boxes
incorrectly labeling it as cat. This significant improvement
in object detection accuracy when compared to Tiny YOLO
illustrates the efficacy of Tiny SSD for providing more
reliable embedded object detection performance. Furthermore,
as seen in Table IV, Tiny SSD requires just 571.09 million
MAC operations to perform inference, making it well-suited
for real-time embedded object detection. These experimental
results show that very small deep neural network architectures
can be designed for real-time object detection that are well-suited for embedded scenarios.

Figure 4. Example images from the Pascal VOC dataset. The ground-truth bounding boxes and object categories are shown for each image.
VI. CONCLUSIONS

In this paper, a single-shot detection deep convolutional neural network called Tiny SSD is introduced for real-time embedded object detection. Composed of a highly optimized, non-uniform Fire sub-network stack and a non-uniform sub-network stack of highly optimized SSD-based auxiliary convolutional feature layers designed specifically to minimize model size while maintaining object detection performance, Tiny SSD possesses a model size that is ∼26X smaller than Tiny YOLO, requires just 571.09 million MAC operations, and still achieves an mAP that is ∼4.2% higher than Tiny YOLO on the VOC 2007 test dataset. These results demonstrate the efficacy of designing very small deep neural network architectures such as Tiny SSD for real-time object detection in embedded scenarios.
ACKNOWLEDGMENT
The authors thank Natural Sciences and Engineering Research Council of Canada, Canada Research Chairs Program,
DarwinAI, and Nvidia for hardware support.
REFERENCES
[1] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra
Malik. Rich feature hierarchies for accurate object detection
and semantic segmentation. In Proceedings of the IEEE
conference on computer vision and pattern recognition, pages
580–587, 2014.
[2] K. He, G. Gkioxari, P. Dollar, and R. Girshick. Mask r-cnn.
ICCV, 2017.
[3] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry
Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv
preprint arXiv:1704.04861, 2017.
[4] Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu,
Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna,
Yang Song, Sergio Guadarrama, et al. Speed/accuracy tradeoffs for modern convolutional object detectors. In IEEE CVPR,
2017.
[5] Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid
Ashraf, William J Dally, and Kurt Keutzer. Squeezenet:
Alexnet-level accuracy with 50x fewer parameters and< 0.5
mb model size. arXiv preprint arXiv:1602.07360, 2016.
[6] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks.
In NIPS, 2012.
[7] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep
learning. Nature, 2015.
[8] Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He,
Bharath Hariharan, and Serge Belongie. Feature pyramid
networks for object detection. In CVPR, volume 1, page 4,
2017.
[9] Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian
Szegedy, Scott Reed, Cheng-Yang Fu, and Alexander C Berg.
SSD: Single shot multibox detector. In European conference
on computer vision, pages 21–37. Springer, 2016.
[10] J. Redmon. YOLO: Real-time object detection. https://pjreddie.com/darknet/yolo/, 2016.
[11] Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali
Farhadi. You only look once: Unified, real-time object
detection. In Proceedings of the IEEE conference on computer
vision and pattern recognition, pages 779–788, 2016.
[12] Joseph Redmon and Ali Farhadi. YOLO9000: better, faster,
stronger. arXiv preprint, 1612, 2016.
[13] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun.
Faster R-CNN: Towards real-time object detection with region
proposal networks. In Advances in neural information
processing systems, pages 91–99, 2015.
[14] Mohammad Javad Shafiee, Brendan Chywl, Francis Li, and
Alexander Wong. Fast YOLO: A fast you only look once
system for real-time embedded object detection in video. arXiv
preprint arXiv:1709.05943, 2017.
Figure 5. Example object detection results produced by the proposed Tiny SSD compared to Tiny YOLO (columns: input image, Tiny YOLO, Tiny SSD). It can be observed that Tiny SSD has comparable object detection results as Tiny YOLO in some cases, while in some cases outperforms Tiny YOLO in assigning more accurate category labels to detected objects. This significant improvement in object detection accuracy when compared to Tiny YOLO illustrates the efficacy of Tiny SSD for providing more reliable embedded object detection performance.
[15] Abhinav Shrivastava, Abhinav Gupta, and Ross Girshick.
Training region-based object detectors with online hard
example mining. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, pages 761–769,
2016.
[16] Bichen Wu, Forrest Iandola, Peter H Jin, and Kurt Keutzer.
Squeezedet: Unified, small, low power fully convolutional
neural networks for real-time object detection for autonomous
driving. arXiv preprint arXiv:1612.01051, 2016.